
Reply to: Hybrid Vs SSD



Previously on "Hybrid Vs SSD"


  • sal
    replied
    Originally posted by d000hg View Post
    And surely SSDs don't lose data, they lose the ability to write data?
    The individual memory blocks in the SSD degrade with each write. The SSD controller/firmware tries to distribute the wear and tear evenly, but some blocks are just less durable than others, so they are flagged as "bad" and a new block is allocated from the hidden spare portion of the SSD in their place. Usually the firmware is capable of predicting the failure in advance and marks the "bad" blocks before they actually fail, but that's not always the case.

    I'm sure that most of us have had a dead USB stick at one point in time.
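    The remapping sal describes can be sketched as a toy model. All numbers below (block counts, endurance) are invented for illustration; real SSD firmware is vastly more sophisticated:

```python
class TinySSD:
    """Toy model of SSD bad-block remapping; not real firmware behaviour."""

    def __init__(self, visible_blocks=10, spare_blocks=2, endurance=5):
        # Pretend each physical block survives `endurance` writes.
        self.endurance = endurance
        self.wear = {b: 0 for b in range(visible_blocks + spare_blocks)}
        # Logical block -> physical block; spares start out hidden.
        self.mapping = {b: b for b in range(visible_blocks)}
        self.spares = list(range(visible_blocks, visible_blocks + spare_blocks))

    def write(self, logical_block):
        phys = self.mapping[logical_block]
        self.wear[phys] += 1
        if self.wear[phys] >= self.endurance:
            # Worn-out block: remap to a spare, invisibly to the user.
            if not self.spares:
                raise IOError("drive out of spare blocks")
            self.mapping[logical_block] = self.spares.pop(0)

ssd = TinySSD()
for _ in range(6):        # hammer logical block 0
    ssd.write(0)
print(ssd.mapping[0])     # 10: now served by a spare physical block
```

    Once the spare pool is exhausted the toy drive errors out on write, which is roughly the point where a real drive starts losing the ability to write rather than the data it already holds.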



  • d000hg
    replied
    The fact they build in failure space is fine though - they are built to last a useful lifetime. In 10 years I won't be using that disk anyway. They've carefully worked out what margin of safety to build in.

    And surely SSDs don't lose data, they lose the ability to write data?



  • sal
    replied
    Originally posted by d000hg View Post
    That just sounds similar to JIT compilers - code is slow the first time it is run but is then cached as a compiled version. I think your information on SSDs is rather behind the times. Not using your SSD in case you wear it out is like leaving the lights turned on in your house 24/7 to avoid blowing the bulb when you turn them on. They're designed to be used this way and unless you use the same disk for a decade it's not likely to be an issue.

    If you think algorithms can only work based on prayer, I worry what kind of IT contractor you can be. This is not rocket science, they don't even need to be complicated algorithms.

    When you consider data-centres rely 100% on algorithms and software to run efficiently, saying a disk can't optimise itself sensibly is crazy.
    My information on SSDs is quite current, don't worry. There is a reason why the software bundled with most SSDs advises you to turn off the indexing service, and why about 10% of the raw SSD is reserved for swapping out failed segments. So from the typical 256GB SSD you get only 220-ish GB. Segments of the SSD fail all the time; it's just that the user is oblivious to the fact, as the data is re-allocated to the spare portion.

    Yes, modern SSDs come with 3 or 5 years of warranty, but that doesn't make them immortal. Frequent writes will destroy your SSD fast, and you might lose data. Yes, the manufacturer will be happy to replace it, but you will still have to go through the trouble of restoring data / rebuilding your OS.

    And the problem with the algorithms is that they can't (for the most part) predict the future. And in most cases they don't re-arrange data on the fly but on a pre-determined schedule or when the system is idle. They analyse the data since the last optimisation and act based on what already happened, not on what files you are going to use in the future.

    So when you start working with files that are on the HDD, they might not be moved to the SSD by the time you are finished with them. The SSD portion of the Hybrid is not a substitute for cache or RAM.
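    The schedule-driven behaviour sal describes (promotion based on past access, never on future need) can be sketched like this; the slot count and promotion policy are invented purely for illustration:

```python
from collections import Counter

class HybridTier:
    """Toy hybrid-drive promoter: a periodic maintenance pass moves the
    hottest files to the SSD tier based only on accesses since the last
    run - it reacts to history, it cannot predict the future."""

    def __init__(self, ssd_slots=2):
        self.ssd_slots = ssd_slots
        self.on_ssd = set()
        self.access_log = Counter()

    def read(self, name):
        self.access_log[name] += 1
        return "SSD" if name in self.on_ssd else "HDD"  # tier served from

    def maintenance(self):
        # Runs on a schedule / when idle, not on every access.
        hottest = [f for f, _ in self.access_log.most_common(self.ssd_slots)]
        self.on_ssd = set(hottest)
        self.access_log.clear()

d = HybridTier()
for _ in range(3):
    d.read("mix.wav")
print(d.read("mix.wav"))   # still "HDD": promotion hasn't run yet
d.maintenance()
print(d.read("mix.wav"))   # now "SSD"
```

    Note the fourth read is still served from the HDD: nothing moves until the maintenance pass fires, which is exactly the lag being complained about.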



  • Scrag Meister
    replied
    Originally posted by Unix View Post
    I would go for a smaller SSD for OS/Programs/source code etc (256GB) and larger IDE (1TB+) for bigger files you don't use often (videos/pic/porn/ISO/backup etc). This is my current setup and it works very well.
    Can you still buy IDE drives?

    I have 2 striped 128GB SSDs for system and stuff I want NOW!!, a 600GB 6Gb/s SATA for slightly slower stuff and 2x 1TB on 3Gb/s SATA for other stuff that I don't mind being slower, built in 2011.

    Also have a 4TB RAID 6 NAS.



  • eek
    replied
    Originally posted by d000hg View Post
    Is that the £499 basic model or a higher spec?

    iFixit have guides IIRC but I'd always assumed a second drive was stuck in an external case and that put me off.

    The only thing that's an issue is the GPU (Intel HD4000 I think) but this doesn't matter to me. Although it is one reason I am hoping for a product refresh to use Haswell and the HD5000 which are a big step up and quite capable really.
    The £499 model (it's for the Jrs so doesn't need to be any better).

    I was hoping for the same spec jump before buying but as there was a mini refresh of the range in June I gave up and bought...



  • d000hg
    replied
    Originally posted by Sysman View Post
    I know someone who has done this. I think he got a new Mac mini delivered straight from Apple to the upgrade company and they delivered it to him once the work was done.

    This looks like the company concerned. Pick the Mac mini section, if you already have one feed in the serial number, and off you go.

    Mac Upgrades - Macintosh Upgrades in the UK
    That looks kind of cool.



  • Unix
    replied
    Originally posted by d000hg View Post
    That just sounds similar to JIT compilers - code is slow the first time it is run but is then cached as a compiled version. I think your information on SSDs is rather behind the times. Not using your SSD in case you wear it out is like leaving the lights turned on in your house 24/7 to avoid blowing the bulb when you turn them on. They're designed to be used this way and unless you use the same disk for a decade it's not likely to be an issue.

    If you think algorithms can only work based on prayer, I worry what kind of IT contractor you can be. This is not rocket science, they don't even need to be complicated algorithms.

    When you consider data-centres rely 100% on algorithms and software to run efficiently, saying a disk can't optimise itself sensibly is crazy.

    Although at the moment I prefer doing the partition myself, I can see a time where storage will be a black box to the user. You will have RAM/SSD/HD/cloud storage and the algorithms will just handle everything: you just save a file, maybe tag it with metadata, and don't specify the location. It will be brought from the cloud the first time you use it to the HD, then to the SSD if it's used a lot, and into RAM if used a hell of a lot (there you go, algorithm pseudo-code).
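    That "pseudo code" could look something like this minimal sketch, with completely made-up thresholds:

```python
def place(access_count):
    """Pick a storage tier purely from usage frequency (toy thresholds)."""
    if access_count >= 100:
        return "RAM"     # used a hell of a lot
    if access_count >= 10:
        return "SSD"     # used a lot
    if access_count >= 1:
        return "HDD"     # touched at least once
    return "Cloud"       # never used locally yet

print(place(0), place(3), place(25), place(500))  # Cloud HDD SSD RAM
```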
    Last edited by Unix; 3 July 2014, 10:58.



  • d000hg
    replied
    Originally posted by sal View Post
    And you know that the movement between the SSD/HDD parts of the hybrid doesn't happen instantaneously. Working on said audio files for some time and then leaving them alone will probably not qualify them for movement to the SSD portion. Even if it does, you will have to endure the process of some old files being read from the SSD, then written to the HDD, so your audio files can be read from the HDD and then written to the SSD. During all this you're slowed down by the HDD.
    That just sounds similar to JIT compilers - code is slow the first time it is run but is then cached as a compiled version.
    Not to mention that frequent re-writes are destroying the SSD.
    I think your information on SSDs is rather behind the times. Not using your SSD in case you wear it out is like leaving the lights turned on in your house 24/7 to avoid blowing the bulb when you turn them on. They're designed to be used this way and unless you use the same disk for a decade it's not likely to be an issue.

    In this scenario you are better off getting some more RAM and keeping the files in it while working, instead of having to pray for the algorithm to predict your needs.
    If you think algorithms can only work based on prayer, I worry what kind of IT contractor you can be. This is not rocket science, they don't even need to be complicated algorithms.

    When you consider data-centres rely 100% on algorithms and software to run efficiently, saying a disk can't optimise itself sensibly is crazy.
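    The JIT analogy is essentially memoisation: pay the compile cost once, serve from cache afterwards. A minimal sketch using Python's standard lru_cache (the sleep is a stand-in for the expensive first compile):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def compile_and_run(code_id):
    time.sleep(0.1)    # pretend compilation is expensive, first time only
    return f"result-{code_id}"

t0 = time.perf_counter(); compile_and_run(1); cold = time.perf_counter() - t0
t0 = time.perf_counter(); compile_and_run(1); warm = time.perf_counter() - t0
print(warm < cold)     # True: the cached call skips the compile step
```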
    Last edited by d000hg; 3 July 2014, 10:51.



  • sal
    replied
    Originally posted by Sysman View Post
    Not quite.

    How about that 2.5 GB of stuff in Xcode.app ? Do you really need all of it on the SSD portion?

    Ditto with the stuff in /Library and ~/Library. Both can get quite large.

    I regularly edit audio files of 1-2 GB. I'd be far happier not having to manually shunt them around to work on them and remembering to move them to slower storage when I'm done.
    And you know that the movement between the SSD/HDD parts of the hybrid doesn't happen instantaneously. Working on said audio files for some time and then leaving them alone will probably not qualify them for movement to the SSD portion. Even if it does, you will have to endure the process of some old files being read from the SSD, then written to the HDD, so your audio files can be read from the HDD and then written to the SSD. During all this you're slowed down by the HDD. Not to mention that frequent re-writes are destroying the SSD.

    In this scenario you are better off getting some more RAM and keeping the files in it while working, instead of having to pray for the algorithm to predict your needs.

    If you really need the speed you won't skimp £100 for the SSD. Hybrids are a compromise in every aspect and only justified if you don't have the physical ability to fit 2 separate disks (even then I would probably prefer the WD offering with 2 drives in the same chassis).



  • Sysman
    replied
    I know someone who has done this. I think he got a new Mac mini delivered straight from Apple to the upgrade company and they delivered it to him once the work was done.

    This looks like the company concerned. Pick the Mac mini section, if you already have one feed in the serial number, and off you go.

    Mac Upgrades - Macintosh Upgrades in the UK
    Last edited by Sysman; 3 July 2014, 09:55.



  • d000hg
    replied
    Is that the £499 basic model or a higher spec?

    iFixit have guides IIRC but I'd always assumed a second drive was stuck in an external case and that put me off.

    The only thing that's an issue is the GPU (Intel HD4000 I think) but this doesn't matter to me. Although it is one reason I am hoping for a product refresh to use Haswell and the HD5000 which are a big step up and quite capable really.



  • eek
    replied
    Originally posted by d000hg View Post
    I've seen others sell that sort of thing - amazing there's room inside the case.

    If doing a cold install of OSX isn't too awkward I might just do that... get the lowest-spec quad-core i7 model and then buy an SSD and 16Gb RAM.

    I wonder though if you could even buy the cheapest £499 dual-core i5 and replace the CPU with an i7 for less. No idea if it's the same mainboard/chipset in both although you'd think in the interest of cost saving they would use the same parts in both.
    I doubt you can upgrade the CPU, as it's a mobile part so probably soldered in.

    Supposedly it's not a difficult task. I'll be able to tell you on the 14th, after I've done it to the Mac mini I just bought (brand new i5, £415 off the bay)...



  • Sysman
    replied
    Originally posted by d000hg View Post
    It's a compromise by design, but for instance in a MacMini or most laptops you can't physically fit two drives in.
    I gather you can fit an extra SSD drive into a Mac mini but it's not straightforward; you need the right brackets and stuff.

    I've read about a UK company who supplies the mounting kit and they will do the work for you if you ship your system to them.



  • Sysman
    replied
    Originally posted by Unix View Post
    The opposite, it's the most efficient as the files are exactly where you want them; an algorithm will get it wrong and be swapping files constantly between both. I _never_ want my OS to be on the IDE and never want 2GB movies taking up SSD. I guess if you are a fanboi of Apple then you'll believe any tulip they spout though, rather than thinking for yourself.
    Not quite.

    How about that 2.5 GB of stuff in Xcode.app ? Do you really need all of it on the SSD portion?

    Ditto with the stuff in /Library and ~/Library. Both can get quite large.

    I regularly edit audio files of 1-2 GB. I'd be far happier not having to manually shunt them around to work on them and remembering to move them to slower storage when I'm done.



  • d000hg
    replied
    I've seen others sell that sort of thing - amazing there's room inside the case.

    If doing a cold install of OSX isn't too awkward I might just do that... get the lowest-spec quad-core i7 model and then buy an SSD and 16Gb RAM.

    I wonder though if you could even buy the cheapest £499 dual-core i5 and replace the CPU with an i7 for less. No idea if it's the same mainboard/chipset in both although you'd think in the interest of cost saving they would use the same parts in both.

