lucene-solr-user mailing list archives

From: Markus Jelsma <>
Subject: RE: SSD endurance
Date: Thu, 12 Mar 2015 22:39:10 GMT
Thanks for sharing, Toke!

Reliability should not be a problem in a SolrCloud environment. A corrupted index cannot
be loaded, because the exceptions prevent the core from entering an active state. However, what
would happen if parts of the data became corrupted but could still be processed by the codec? I
don't even know whether the data has a CRC check to guard against such madness.
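For what it's worth, Lucene (4.8 and later, as far as I know) does append a CRC32 checksum footer to every index file via CodecUtil.writeFooter, and CheckIndex can verify the files against it, so silently flipped bits in the body are detectable. A minimal sketch of that idea, using plain java.util.zip.CRC32 rather than the real Lucene APIs (class and file names here are made up):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class FooterCheck {
    // Write the body followed by a CRC32-of-the-body footer, in the spirit
    // of Lucene's CodecUtil.writeFooter (this is a toy, not the real layout).
    public static void writeWithFooter(Path p, byte[] data) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(data);
        ByteBuffer buf = ByteBuffer.allocate(data.length + Long.BYTES);
        buf.put(data).putLong(crc.getValue());
        Files.write(p, buf.array());
    }

    // Recompute the checksum over the body and compare it to the stored footer.
    public static boolean verify(Path p) throws IOException {
        byte[] all = Files.readAllBytes(p);
        int bodyLen = all.length - Long.BYTES;
        CRC32 crc = new CRC32();
        crc.update(all, 0, bodyLen);
        long stored = ByteBuffer.wrap(all, bodyLen, Long.BYTES).getLong();
        return crc.getValue() == stored;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("segment", ".dat");
        writeWithFooter(p, "hello segments".getBytes());
        System.out.println("clean file verifies: " + verify(p));

        // Flip a single bit in the body, as silent flash corruption would.
        byte[] all = Files.readAllBytes(p);
        all[3] ^= 1;
        Files.write(p, all);
        System.out.println("corrupted file verifies: " + verify(p));
        Files.delete(p);
    }
}
```

The point is that corruption the codec could otherwise happily decode still fails the checksum comparison.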

-----Original message-----
> From: Toke Eskildsen <>
> Sent: Thursday, 12 March 2015 21:33
> To: solr-user <>
> Subject: SSD endurance
> For those who have not yet taken the leap to SSD goodness because they are afraid of
> flash wear, the burnout test from The Tech Report seems worth a read. The short story
> is that they wrote data to the drives until they wore out. All tested drives survived
> considerably longer than guaranteed, but 4/6 failed catastrophically when they did die.
>
> I am disappointed by the catastrophic failures. One of the promises of SSDs was a
> graceful end of life by switching to read-only mode. Some of the drives did give
> warnings before the end, but I wonder how those are communicated in a server
> environment?
> Regarding Lucene/Solr, the write pattern when updating an index is benign to SSDs:
> updates are relatively bulky, rather than the evil
> constantly-flip-random-single-bits-and-flush pattern of databases. With segments being
> immutable, the bird's-eye view is that Lucene creates and deletes large files, which
> makes it possible for the SSD's wear-leveler to select the least-used flash sectors
> for new writes. The write pattern over time is not too far from the one that The Tech
> Report tested with.
> - Toke Eskildsen
> Whose trusty old 160GB Intel X25-M reports an accumulated 36TB of writes.
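Toke's point about bulky immutable writes versus in-place rewrites can be made concrete with a toy model. Nothing below resembles real SSD firmware: the greedy "always write to the least-worn block" wear-leveler, the block counts, and the hot-set size are all made up purely to sketch the argument that large create-and-delete writes let wear spread evenly, while repeatedly rewriting a small hot set concentrates erases on a few flash cells.

```java
import java.util.Random;

public class WearSketch {
    static final int BLOCKS = 64;  // made-up number of erase blocks

    // Hypothetical greedy wear-leveler: always erase/write the least-worn block.
    public static void bumpLeastWorn(int[] wear) {
        int best = 0;
        for (int i = 1; i < wear.length; i++) {
            if (wear[i] < wear[best]) best = i;
        }
        wear[best]++;
    }

    public static int[] simulate(boolean bulkySegments, int writes, long seed) {
        int[] wear = new int[BLOCKS];
        Random rnd = new Random(seed);
        for (int i = 0; i < writes; i++) {
            if (bulkySegments) {
                // Lucene-style: whole immutable segments land wherever the
                // wear-leveler likes, so it can always pick least-worn blocks.
                for (int j = 0; j < 4; j++) bumpLeastWorn(wear);
            } else {
                // In-place-update style: a small hot set of blocks is
                // rewritten over and over, so the same cells take the erases.
                wear[rnd.nextInt(8)]++;
            }
        }
        return wear;
    }

    // Difference between the most-worn and least-worn block.
    public static int spread(int[] wear) {
        int min = wear[0], max = wear[0];
        for (int w : wear) {
            min = Math.min(min, w);
            max = Math.max(max, w);
        }
        return max - min;
    }

    public static void main(String[] args) {
        System.out.println("segment-style wear spread: " + spread(simulate(true, 1000, 42)));
        System.out.println("hot-set wear spread:       " + spread(simulate(false, 1000, 42)));
    }
}
```

Running it shows the segment-style pattern wearing all blocks almost identically, while the hot-set pattern leaves most blocks untouched and hammers a few, which is the scenario endurance ratings worry about.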
