cassandra-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Cassandra Wiki] Update of "LargeDataSetConsiderations" by PeterSchuller
Date Sat, 18 Dec 2010 16:44:41 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Cassandra Wiki" for change notification.

The "LargeDataSetConsiderations" page has been changed by PeterSchuller.
http://wiki.apache.org/cassandra/LargeDataSetConsiderations?action=diff&rev1=5&rev2=6

--------------------------------------------------

    * The operating system's page cache is affected by compaction and repair operations. If
you are relying on the page cache to keep the active set in memory, you may see significant
performance degradation as a result of compaction and repair operations.
     * There is work happening to improve this (see the fadvise sketch after this list).
TODO: link to JIRA tickets about direct i/o, fadvise, mincore() etc.
   * If you have column families with more than 143 million row keys in them, bloom filter
false positive rates are likely to go up because of implementation concerns that limit the
maximum size of a bloom filter. See [[ArchitectureInternals]] for information on how bloom
filters are used. The negative effect of hitting this limit is that reads will start taking
additional seeks to disk as the row count increases. Note that the effect you are seeing at
any given moment will depend on when compaction was last run, because the bloom filter limit
is per-sstable; it is particularly an issue after a major compaction, because the entire
column family will then be in a single sstable. (A sketch of the false positive math appears
after this list.)
-   * This will likely be addressed in the future: TODO: add JIRA links to the bigger-bf and
the limit-sstable-size issue.
+   * This will likely be addressed in the future: See [[https://issues.apache.org/jira/browse/CASSANDRA-1608|CASSANDRA-1608]]
and TODO: bigger-bf jira
   * Compaction is currently not concurrent, so only a single compaction runs at a time. This
means that sstable counts may spike during larger compactions, as several smaller sstables
are written while a large compaction is in progress. This can cause additional seeks on reads.
    * TODO: link to the parallel compaction JIRA ticket, and file another one specifically for
ensuring this issue is addressed (the pre-existing ticket only deals with using multiple cores
for throughput reasons).
   * Consider the choice of file system. Removal of large files is notoriously slow and
seek-bound on e.g. ext2/ext3; consider XFS or ext4 instead.
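
The fadvise approach mentioned in the page cache item above boils down to telling the kernel
that data streamed during compaction or repair does not need to stay cached. The sketch below
is a minimal Python illustration of that OS-level mechanism only; Cassandra itself is Java,
the actual tickets may take a different route, and the sstable path is a made-up example.

{{{
# Minimal sketch (not Cassandra code): advise the kernel to drop cached pages for a
# file, so bulk sequential I/O (compaction, repair streaming) does not evict the hot
# working set from the page cache. Requires Linux and Python 3.3+.
import os

def drop_from_page_cache(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        # offset=0, length=0 means "the whole file"; only clean pages are dropped.
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)

# Hypothetical sstable path, for illustration only.
drop_from_page_cache("/var/lib/cassandra/data/MyKeyspace/MyCF-f-1-Data.db")
}}}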
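
For the bloom filter item, the standard false positive estimate p ≈ (1 - e^(-kn/m))^k shows
why a size-capped filter degrades as the per-sstable row count grows. The sketch below uses
made-up sizing numbers (the bit cap and hash count are illustrative assumptions, not
Cassandra's actual values); only the shape of the curve matters.

{{{
# Sketch of the standard bloom filter false-positive estimate, illustrating how a
# size-capped filter degrades as the number of keys per sstable keeps growing.
# The cap and hash count are illustrative assumptions, not Cassandra's values.
import math

def false_positive_rate(m_bits, k_hashes, n_keys):
    return (1.0 - math.exp(-float(k_hashes) * n_keys / m_bits)) ** k_hashes

M_BITS = 2 ** 31   # hypothetical hard cap on the filter size, in bits
K = 10             # typical hash count for a ~1% target false positive rate

for n in (100_000_000, 143_000_000, 200_000_000, 400_000_000):
    print("%12d keys -> ~%.4f%% false positives" % (n, 100 * false_positive_rate(M_BITS, K, n)))
}}}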
