lucene-java-user mailing list archives

From: Toke Eskildsen
Subject: Re: SSD Experience (on developer machine)
Date: Tue, 23 Aug 2011 15:11:51 GMT
On Tue, 2011-08-23 at 16:10 +0200, Federico Fissore wrote:

[Toke: Re-writes are not a problem now]
> Maybe this is still a point, considering how easy it is today to fill your 
> local storage: for example, a "common" user will store video files.

It is only a problem if the SSD is filled to the brim (and doesn't have
hidden cells to counter the problem). If you fill it to the brim, you will
have problems working actively with the device - temporary files,
logging and whatnot tend to require that a non-trivial amount of
storage is free. If you are not working actively with the device, the
wear on the cells is not a problem. This brings us back to my initial
point: yes, you can construct cases where there will be problems, but
they tend to be artificial:

Let's say you have a drive with just 5GB left. Let's say that the cells
can handle 10,000 writes. Doing constant rewrites of the 5GB gives you
10,000 * 5GB = 50TB before the drive gives up. I asked my drive about my
daily write average some time ago. It was 13GB/day. With that scenario,
the drive would live 10+ years.

Admittedly this is just back-of-the-envelope and it ignores a lot of
factors, but it does provide an idea of the amount of punishment these
drives can take.
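
The arithmetic above can be spelled out as a quick sketch (the 10,000
writes per cell, 5GB of free space and 13GB/day figures are the
illustrative assumptions from the example, not measured values):

```python
# Rough SSD lifetime estimate under constant rewrites of the free area.
# All figures are the illustrative assumptions from the text above.
WRITES_PER_CELL = 10_000   # erase cycles each cell can take (assumed)
FREE_GB = 5                # free space being constantly rewritten
DAILY_WRITES_GB = 13       # daily write average reported by the drive

total_writes_gb = WRITES_PER_CELL * FREE_GB        # 50,000 GB = 50 TB
lifetime_years = total_writes_gb / DAILY_WRITES_GB / 365

print(f"Total endurance: {total_writes_gb / 1000:.0f} TB")
print(f"Estimated lifetime: {lifetime_years:.1f} years")
# -> Total endurance: 50 TB
# -> Estimated lifetime: 10.5 years
```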

> I don't because that one and other articles have scared me (and here 
> definitely fear = lack of information)

I suggest AnandTech. They provide some excellent articles with
in-depth analysis that cut through many of the misconceptions and much
of the hyperbole that have surrounded SSDs.

> How long past that point do you think we are? Can you give some minimum 
> "model" age? Say, OCZ Vertex since the 2 and Intel since the 320?

I consider the Intel X25 something of a turning point in the history of
SSDs. That drive provided most of the features that modern SSDs have.
Later drives added better bulk speed, better maximum latency and better
TRIMming. Nice things, but not as game-changing as the introduction of a
(relatively) cheap, reliable, wear-leveling drive with high performance
for both reads & writes.

[Toke: Use the SSD for tmp files and swap]

> OK for the swap speed, but by using the SSD with swap and temp files 
> enabled, you are saying the opposite of articles around, such as

The OCZ Onyx that they test is a pretty old drive, but ignoring that,
they do make statements such as "You can help increase the life of your
SSD by reducing how much the OS write to disk" which is technically
correct, but of no real value as I argued above.

They disable atime, which I think is a fine idea, but since the OCZ sucks
at random writes (relative to other SSDs) I guess they gain a fair
deal of performance there.

They put tmp in RAM without any explanation (although it fits well with
the atime thing), but it does not matter, since none of their tests use
tmp. For that matter, none of their tests use swap either. They might
claim that their article is based on testing, but that is only true for a
subset of their tweaks. The wear due to tmp/noatime is just a claim they
make, without any explanation or calculations.
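
For reference, the two tweaks they discuss are typically applied in
/etc/fstab; a sketch along these lines (device name and size are
placeholders, and this is not a recommendation either way):

```
# Mount the root filesystem without access-time updates
/dev/sda1  /     ext4   defaults,noatime  0  1
# Keep /tmp in RAM (tmpfs) instead of on the SSD
tmpfs      /tmp  tmpfs  defaults,size=1G  0  0
```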

> btw, if less free disk space = more destructive scenario, then the 
> bigger the safer, and here the price/size ratio suggests a conservative 
> use of SSDs. Mine is 120GB and is 60% filled, and I'd rather not go 
> beyond that point, to avoid surprises

With 10,000 writes you've got around 720 TB of writes. That is 200GB/day
for the next 10 years. I would suggest checking with a S.M.A.R.T. tool to
see if it provides you with write statistics. I would be surprised if
they were that high.
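
Many drives expose a Total_LBAs_Written S.M.A.R.T. attribute (visible
via e.g. `smartctl -A`); turning it into an average daily write rate is
simple arithmetic. The raw value and power-on hours below are made-up
examples, and the 512-byte LBA unit varies between drives:

```python
# Convert a SMART Total_LBAs_Written raw value into an average daily
# write rate. The sample numbers are invented for illustration, and
# the 512-byte LBA unit is drive-dependent.
LBA_BYTES = 512
total_lbas_written = 100_000_000_000   # raw SMART value (example)
power_on_hours = 8_760                 # one year of power-on time (example)

written_gb = total_lbas_written * LBA_BYTES / 1e9
gb_per_day = written_gb / (power_on_hours / 24)
print(f"{written_gb:.0f} GB written, ~{gb_per_day:.1f} GB/day")
# -> 51200 GB written, ~140.3 GB/day
```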

Toke Eskildsen
