lucene-solr-user mailing list archives

From Shawn Heisey <>
Subject Re: Pros and cons of using RAID or different RAIDS?
Date Sat, 20 Apr 2013 20:15:10 GMT
On 4/20/2013 7:36 AM, Toke Eskildsen wrote:
> Furkan KAMACI []:
>> Is there any documentation that explains pros and cons of using RAID or
>> different RAIDS?
> There's plenty for RAID in general, but I do not know of any in-depth Solr-specific guides.
> For index updates, you want high bulk read and write speed. That makes the striped
> versions, such as RAID 5 & 6, poor choices for a heavily updated index.
> For searching you want low latency and high throughput for small random-access reads.
> All the common RAID levels give you higher throughput for those.

The only RAID level I'm aware of that satisfies speed requirements for
both indexing and queries is RAID10, striping across mirror sets.  The
speed goes up with each pair of disks you add.  The only problem with
RAID10 is that you lose half of your raw disk space, just like with
RAID1.  This is the RAID level that I use for my Solr servers.  I have
six 1TB SATA drives, giving me a usable volume of 3TB.  I notice a
significant disk speed increase compared to a server with single or
mirrored disks.  It is faster on both random and contiguous reads.
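The capacity trade-off described above can be sketched as a quick back-of-the-envelope calculation (function and figures are illustrative, not from any RAID tooling):

```python
def usable_capacity(num_disks, disk_tb, level):
    """Usable space in TB for common RAID levels (simple textbook model)."""
    if level == "raid10":
        # Striping across mirror pairs: half the raw space survives.
        assert num_disks % 2 == 0, "RAID10 needs an even number of disks"
        return num_disks * disk_tb / 2
    if level == "raid5":
        # One disk's worth of space goes to parity.
        return (num_disks - 1) * disk_tb
    if level == "raid6":
        # Two disks' worth of space go to parity.
        return (num_disks - 2) * disk_tb
    raise ValueError(f"unknown level: {level}")

# The six 1TB drives from the post: RAID10 yields a 3TB usable volume.
print(usable_capacity(6, 1, "raid10"))  # 3.0
```

With the same six drives, RAID5 would give 5TB and RAID6 4TB, which is the "don't lose as much disk space" point made below.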

RAID 5 and 6 (striping with parity) don't lose as much disk space: one
disk's worth of capacity for RAID5, two for RAID6.  Read speed is very good
with these levels, but unfortunately there is a penalty for writes due
to the parity stripes, and that penalty can be quite severe.  If you
have a caching RAID controller, the write penalty is mitigated for
writes that fit in the cache (usually up to 1GB), but once you start
writing continuously, the penalty comes back.

In the event of a disk failure, all RAID levels will have lower
performance during rebuild.  RAID10 will have no performance impact
before you replace the disk, and will have a mild and short-lived
performance impact while the rebuild is happening.  RAID5/6 has a major
performance impact as soon as a disk fails, and an even higher
performance impact during the rebuild, which can take a very long time.
 Rebuilding a failed disk on a RAID6 volume that has 23 1TB disks is a
process that takes about 24 hours, and I can say that from personal
experience.
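That rebuild time is consistent with a simple estimate: the replacement disk must be rewritten end to end at whatever rate the degraded array can sustain. The 12 MB/s rate below is a hypothetical figure chosen to match the ~24-hour experience above, not a measured number:

```python
def rebuild_hours(disk_tb, rebuild_mb_per_s):
    """Hours to rewrite one replacement disk at a sustained rebuild rate."""
    disk_mb = disk_tb * 1_000_000  # decimal TB, as drive vendors count
    return disk_mb / rebuild_mb_per_s / 3600

# A 1TB disk at a sustained 12 MB/s rebuild rate (hypothetical) takes
# roughly a day, in the same ballpark as the rebuild described above.
print(round(rebuild_hours(1, 12)))  # 23
```

On a busy 23-disk RAID6 array, every rebuilt stripe requires reads from the surviving disks plus parity math, so sustained rates this low are plausible even for drives that can stream far faster when idle.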