From: Alexis Rodríguez <arodriguez@inconcertcc.com>
To: user@cassandra.apache.org
Date: Fri, 12 Apr 2013 11:01:11 -0300
Subject: Re: CorruptedBlockException

Lanny,

We will try that. Thanks a lot.

On Thu, Apr 11, 2013 at 11:13 PM, Lanny Ripple <lanny@spotright.com> wrote:

> Saw this in earlier versions. Our workaround was disable; drain; snap;
> shutdown; delete; link from snap; restart.
>
>   -ljr
>
> On Apr 11, 2013, at 9:45, <moshe.kranc@barclays.com> wrote:
>
> I have formulated the following theory regarding C* 1.2.2 which may be
> relevant: whenever there is a disk error during compaction of an SSTable
> (e.g., bad block, out of disk space), that SSTable's files stick around
> forever afterwards and are not subsequently deleted by normal compaction
> (minor or major), long after all of its records have been deleted. This
> causes disk usage to rise dramatically.
> The only way to make the SSTable files disappear is to run "nodetool
> cleanup" (which takes hours to run).
>
> Just a theory so far…
>
> From: Alexis Rodríguez [mailto:arodriguez@inconcertcc.com]
> Sent: Thursday, April 11, 2013 5:31 PM
> To: user@cassandra.apache.org
> Subject: Re: CorruptedBlockException
>
> Aaron,
>
> It seems that we are in the same situation as Nury; we are storing a lot
> of files of ~5 MB each in a CF.
>
> This happens in a test cluster, with one node using cassandra 1.1.5; we
> have the commitlog on a different partition than the data directory.
> Normally our tests use nearly 13 GB of data, but when the exception on
> compaction appears, our disk usage ramps up to:
>
> # df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda1             440G  330G   89G  79% /
> tmpfs                 7.9G     0  7.9G   0% /lib/init/rw
> udev                  7.9G  160K  7.9G   1% /dev
> tmpfs                 7.9G     0  7.9G   0% /dev/shm
> /dev/sdb1             459G  257G  179G  59% /cassandra
>
> # cd /cassandra/data/Repository/
>
> # ls Files/*tmp* | wc -l
> 1671
>
> # du -ch Files | tail -1
> 257G    total
>
> # du -ch Files/*tmp* | tail -1
> 34G     total
>
> We are using cassandra 1.1.5 with one node; our schema for that keyspace
> is:
>
> [default@unknown] use Repository;
> Authenticated to keyspace: Repository
> [default@Repository] show schema;
> create keyspace Repository
>   with placement_strategy = 'NetworkTopologyStrategy'
>   and strategy_options = {datacenter1 : 1}
>   and durable_writes = true;
>
> use Repository;
>
> create column family Files
>   with column_type = 'Standard'
>   and comparator = 'UTF8Type'
>   and default_validation_class = 'BytesType'
>   and key_validation_class = 'BytesType'
>   and read_repair_chance = 0.1
>   and dclocal_read_repair_chance = 0.0
>   and gc_grace = 864000
>   and min_compaction_threshold = 4
>   and max_compaction_threshold = 32
>   and replicate_on_write = true
>   and compaction_strategy = 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'
>   and caching = 'KEYS_ONLY'
>   and compaction_strategy_options = {'sstable_size_in_mb' : '120'}
>   and compression_options = {'sstable_compression' : 'org.apache.cassandra.io.compress.SnappyCompressor'};
>
> In our logs:
>
> ERROR [CompactionExecutor:1831] 2013-04-11 09:12:41,725 AbstractCassandraDaemon.java (line 135) Exception in thread Thread[CompactionExecutor:1831,1,main]
> java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: (/cassandra/data/Repository/Files/Repository-Files-he-4533-Data.db): corruption detected, chunk at 43325354 of length 65545.
>         at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
>         at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
>         at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
>         at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
>         at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
>         at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
>         at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>         at java.lang.Thread.run(Thread.java:636)
>
> On Thu, Jul 5, 2012 at 7:42 PM, aaron morton <aaron@thelastpickle.com> wrote:
>
> > But I don't understand, how was all the available space taken away.
>
> Take a look on disk at /var/lib/cassandra/data/<your_keyspace> and
> /var/lib/cassandra/commitlog to see what is taking up a lot of space.
>
> Cassandra stores the column names as well as the values, so that can
> take up some space.
>
> > it says that while compaction a CorruptedBlockException has occurred.
>
> Are you able to reproduce this error?
> Thanks
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 6/07/2012, at 12:04 AM, Nury Redjepow wrote:
>
> > Hello to all,
> >
> > I have a cassandra instance I'm trying to use to store millions of
> > files of size ~3 MB. The data structure is simple: 1 row per file,
> > with the row key being the id of the file.
> > I loaded 1 GB of data, and the total available space is 10 GB. After a
> > few hours, all the available space was taken. The log says that while
> > compacting, a CorruptedBlockException occurred. But I don't understand
> > how all the available space was taken away.
> >
> > Data structure
> >
> > CREATE KEYSPACE largeobjects
> >   WITH placement_strategy = 'SimpleStrategy'
> >   AND strategy_options = {replication_factor:1};
> >
> > create column family content
> >   with column_type = 'Standard'
> >   and comparator = 'UTF8Type'
> >   and default_validation_class = 'BytesType'
> >   and key_validation_class = 'TimeUUIDType'
> >   and read_repair_chance = 0.1
> >   and dclocal_read_repair_chance = 0.0
> >   and gc_grace = 864000
> >   and min_compaction_threshold = 4
> >   and max_compaction_threshold = 32
> >   and replicate_on_write = true
> >   and compaction_strategy = 'SizeTieredCompactionStrategy'
> >   and caching = 'keys_only';
> >
> > Log messages
> >
> > INFO [FlushWriter:9] 2012-07-04 19:56:00,783 Memtable.java (line 266) Writing Memtable-content@240294142(3955135/49439187 serialized/live bytes, 91 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:00,814 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1608-Data.db (1991862 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=78253718)
> > INFO [OptionalTasks:1] 2012-07-04 19:56:02,784 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 46971537 bytes)
> > INFO [OptionalTasks:1] 2012-07-04 19:56:02,784 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1755783901(3757723/46971537 serialized/live bytes, 121 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:02,785 Memtable.java (line 266) Writing Memtable-content@1755783901(3757723/46971537 serialized/live bytes, 121 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:02,835 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1609-Data.db (1894897 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=82028986)
> > INFO [OptionalTasks:1] 2012-07-04 19:56:04,785 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 56971025 bytes)
> > INFO [OptionalTasks:1] 2012-07-04 19:56:04,785 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1441175031(4557682/56971025 serialized/live bytes, 124 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:04,786 Memtable.java (line 266) Writing Memtable-content@1441175031(4557682/56971025 serialized/live bytes, 124 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:04,814 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db (2287280 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=86604648)
> > INFO [CompactionExecutor:39] 2012-07-04 19:56:04,815 CompactionTask.java (line 109) Compacting [SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1608-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1609-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1607-Data.db')]
> > INFO [OptionalTasks:1] 2012-07-04 19:56:05,786 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 28300225 bytes)
> > INFO [OptionalTasks:1] 2012-07-04 19:56:05,786 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1828084851(2264018/28300225 serialized/live bytes, 38 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:05,787 Memtable.java (line 266) Writing Memtable-content@1828084851(2264018/28300225 serialized/live bytes, 38 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:56:05,823 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1612-Data.db (1134604 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=88874176)
> > ERROR [CompactionExecutor:39] 2012-07-04 19:56:06,667 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[CompactionExecutor:39,1,main]
> > java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db): corruption detected, chunk at 1573104 of length 65545.
> >         at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
> >         at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
> >         at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:164)
> >         at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
> >         at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
> >         at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
> >         at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> >         at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> >         at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
> >         at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150)
> >         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> >         at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >         at java.lang.Thread.run(Thread.java:679)
> > Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db): corruption detected, chunk at 1573104 of length 65545.
> >         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
> >         at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
> >         at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
> >         at java.io.RandomAccessFile.readFully(RandomAccessFile.java:414)
> >         at java.io.RandomAccessFile.readFully(RandomAccessFile.java:394)
> >         at org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
> >         at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
> >         at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
> >         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
> >         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
> >         at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
> >         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
> >         at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)
> >         ... 20 more
> > INFO [OptionalTasks:1] 2012-07-04 19:57:00,796 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 38711275 bytes)
> > INFO [OptionalTasks:1] 2012-07-04 19:57:00,796 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1363920595(3096902/38711275 serialized/live bytes, 74 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:57:00,796 Memtable.java (line 266) Writing Memtable-content@1363920595(3096902/38711275 serialized/live bytes, 74 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:57:00,821 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1613-Data.db (1553451 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=91981808)
> > INFO [CompactionExecutor:40] 2012-07-04 19:57:00,822 CompactionTask.java (line 109) Compacting [SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1613-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1608-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1609-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1607-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1612-Data.db')]
> > INFO [OptionalTasks:1] 2012-07-04 19:57:01,797 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 27750950 bytes)
> > INFO [OptionalTasks:1] 2012-07-04 19:57:01,797 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@289600485(2220076/27750950 serialized/live bytes, 70 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:57:01,797 Memtable.java (line 266) Writing Memtable-content@289600485(2220076/27750950 serialized/live bytes, 70 ops)
> > INFO [FlushWriter:9] 2012-07-04 19:57:01,819 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1615-Data.db (1114538 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=94212034)
> > ERROR [ReadStage:263] 2012-07-04 19:57:02,599 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[ReadStage:263,5,main]
> > java.lang.RuntimeException: java.lang.RuntimeException: error reading 1 of 1
> >         at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1254)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> >         at java.lang.Thread.run(Thread.java:679)
> > Caused by: java.lang.RuntimeException: error reading 1 of 1
> >         at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
> >         at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:39)
> >         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> >         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> >         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:108)
> >         at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
> >         at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:90)
> >         at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:47)
> >         at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:137)
> >         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:283)
> >         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
> >         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1321)
>
> _______________________________________________
> This message may contain information that is confidential or privileged.
> If you are not an intended recipient of this message, please delete it and
> any attachments, and notify the sender that you have received it in error.
> Unless specifically stated in the message or otherwise indicated, you may
> not duplicate, redistribute or forward this message or any portion thereof,
> including any attachments, by any means to any other person, including any
> retail investor or customer. This message is not a recommendation, advice,
> offer or solicitation, to buy/sell any product or service, and is not an
> official confirmation of any transaction. Any opinions presented are solely
> those of the author and do not necessarily represent those of Barclays.
> This message is subject to terms available at:
> www.barclays.com/emaildisclaimer and, if received from Barclays' Sales or
> Trading desk, the terms available at:
> www.barclays.com/salesandtradingdisclaimer/. By messaging with Barclays
> you consent to the foregoing. Barclays Bank PLC is a company registered in
> England (number 1026167) with its registered office at 1 Churchill Place,
> London, E14 5HP. This email may relate to or be sent from other members of
> the Barclays group.
> _______________________________________________
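The leftover-tmp-file check that Alexis ran above (`ls Files/*tmp* | wc -l`) can be sketched as a small standalone script. This is a minimal sketch, not part of the thread: the default `CF_DIR` path and the assumption that interrupted-compaction leftovers contain `tmp` in their file names both come from the directory listing quoted above, and should be adjusted for your own node.

```shell
#!/bin/sh
# Sketch: count the temporary SSTable files left behind by interrupted
# compactions, as checked with `ls Files/*tmp* | wc -l` in the thread.
# CF_DIR is the column-family data directory from the thread -- an
# assumption; point it at your own node's data directory.
CF_DIR="${CF_DIR:-/cassandra/data/Repository/Files}"

count_tmp_sstables() {
    # Leftover temporary SSTables have "tmp" in their file name
    # (as seen in the `ls Files/*tmp*` output quoted above).
    find "$1" -maxdepth 1 -type f -name '*tmp*' 2>/dev/null | wc -l
}

echo "leftover tmp sstable files in $CF_DIR: $(count_tmp_sstables "$CF_DIR")"
# For their total size, `du -ch "$CF_DIR"/*tmp* | tail -1` as in the thread.
```

A persistently large count across compactions, like the 1671 files / 34G reported above, is what the orphaned-tmp-file theory at the top of the thread predicts.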
Lanny,

We will try that.=A0

Thanks a lot


On Thu, Apr 1= 1, 2013 at 11:13 PM, Lanny Ripple <lanny@spotright.com> wr= ote:
Saw this in earlier v= ersions. Our workaround was disable; drain; snap; shutdown; delete; link fr= om snap; restart;

=A0 -ljr

On Apr 11, 2013, at 9:45,= <moshe.kr= anc@barclays.com> wrote:

I have formulated the fol= lowing theory regarding C* 1.2.2 which may be relevant: Whenever there is a= disk error during compaction of an SS table (e.g., bad block, out of disk = space), that SStable=92s files stick around forever after, and do not subse= quently get deleted by normal compaction (minor or major), long after all i= ts records have been deleted. This causes disk usage to rise dramatically. = The only way to make the SStable files disappear is to run =93nodetool clea= nup=94 (which takes hours to run).

=A0<= /p>

Just a theory so far= =85.

=A0<= /p>

From: Alexis R= odr=EDguez [mailto:arodriguez@inconcertcc.com]
Sent: Thursday, April 11, 2013 5:31 PM
To: user@cassandra.apache.org
Subject: Re: CorruptedBlockException

<= /div>

=A0

Aa= ron,

=A0

It seems that we are in the same situation as= Nury, we are storing a lot of files of ~5MB in a CF.<= /p>

=A0

This happens in a test cluster, with one node using cassandra = 1.1.5, we have commitlog in a different partition than the data directory. = Normally our tests use nearly 13 GB in data, but when the exception on comp= action appears our disk space ramp up to:

=A0

# df= -h

Filesystem =A0 =A0 =A0 =A0 =A0 =A0= Size =A0Used Avail Use% Mounted on

/dev/sda1 =A0 =A0 =A0 =A0 =A0 =A0 440G =A0330G =A0 89G =A079% /<= /span>

tmpfs =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 7.= 9G =A0 =A0 0 =A07.9G =A0 0% /lib/init/rw

udev =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A07.9G =A0160K =A07.9G =A0= 1% /dev

tmpfs =A0 =A0 =A0 =A0 =A0 =A0 = =A0 =A0 7.9G =A0 =A0 0 =A07.9G =A0 0% /dev/shm

/dev/sdb1 =A0 =A0 =A0 =A0 =A0 =A0 459G =A0257G =A0179G =A059% /c= assandra

=A0

# cd /cassandra/data/Repository/

<= div>

=A0

# ls Files/*tmp* | wc -= l

1671

=A0

# du -ch Files = | tail -1

257G =A0 =A0total

=A0

34G =A0 =A0 total

Authenticated to keyspace: Repository

=A0 and min_compaction_threshold =3D 4

<= /div>

=A0 and max_compaction_threshold =3D 32

=A0 and replicate_on_write =3D true

=A0 and compaction_strategy =3D 'org.apache.cassandra.db.compact= ion.LeveledCompactionStrategy'

=A0 and caching =3D 'KEYS_ONLY'

=

=A0 and compaction_strategy_options =3D {'sstable_size_in_mb= ' : '120'}

=A0 and compression_options =3D {'sstable_compression' := 'org.apache.cassandra.io.compress.SnappyCompressor'};

=A0

=

In our logs:

=A0

<= p class=3D"MsoNormal">E= RROR [CompactionExecutor:1831] 2013-04-11 09:12:41,725 AbstractCassandraDae= mon.java (line 135) Exception in thread Thread[CompactionExecutor:1831,1,ma= in]

java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlock= Exception: (/cassandra/data/Repository/Files/Repository-Files-he-4533-Data.= db): corruption detected, chunk at 43325354 of length 65545.<= u>

=A0 =A0 =A0 =A0 at org.apache.cassandra.db.compaction.Precompact= edRow.merge(PrecompactedRow.java:116)

=A0 =A0 =A0 =A0 at org.= apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow= .java:99)

=A0 =A0 =A0 =A0 at org.apache.= cassandra.db.compaction.CompactionController.getCompactedRow(CompactionCont= roller.java:176)

=A0 =A0 =A0 =A0 at org.apache.cassandra.db.compaction.Compaction= Iterable$Reducer.getReduced(CompactionIterable.java:83)

=A0 =A0 =A0 =A0 at org.apache.cassandra.db.compaction.CompactionIterab= le$Reducer.getReduced(CompactionIterable.java:68)

<= /div>

= =A0 =A0 =A0 =A0 at org.apache.cassandra.utils.MergeIterator$ManyToOne.consu= me(MergeIterator.java:118)

=A0 =A0 =A0 =A0 at org.= apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.ja= va:101)

=A0 =A0 =A0 =A0 at com.google.co= mmon.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)

=A0 =A0 =A0 =A0 at com.google.common.collect.AbstractIterator.ha= sNext(AbstractIterator.java:135)

=A0 =A0 =A0 =A0 at com.= google.common.collect.Iterators$7.computeNext(Iterators.java:614)=

=A0 =A0 =A0 =A0 at com.google.common.collect.Abst= ractIterator.tryToComputeNext(AbstractIterator.java:140)

=A0 =A0 =A0 =A0 at com.google.common.collect.AbstractIterator.ha= sNext(AbstractIterator.java:135)

=A0 =A0 =A0 =A0 at org.= apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:1= 73)

=A0 =A0 =A0 =A0 at org.apache.cass= andra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.jav= a:50)

=A0 =A0 =A0 =A0 at org.apache.cassandra.db.compaction.Compaction= Manager$1.runMayThrow(CompactionManager.java:154)

<= /div>

= =A0 =A0 =A0 =A0 at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRu= nnable.java:30)

<= span style=3D"font-family:"Courier New"">=A0 =A0 =A0 =A0 at java.= util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)

=A0 =A0 =A0 =A0 at java.util.concurrent.FutureTask$Sync.innerRun= (FutureTask.java:334)

=A0 =A0 =A0 =A0 at= java.util.concurrent.FutureTask.run(FutureTask.java:166)<= /u>

=A0 =A0 =A0 =A0 at java.util.concurrent.ThreadPoolExecutor.runWo= rker(ThreadPoolExecutor.java:1110)

=A0 =A0 =A0 =A0 at java= .util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)=

=A0 =A0 =A0 =A0 at java.lang.Thread.run= (Thread.java:636)

=A0

=A0

= =A0

<= /u>=A0

On Thu, Jul 5, 2012 at 7:42 PM= , aaron morton <aaron@thelastpickle.com> wrote:

> But I don't understand, how was all th= e available space taken away.

Take a look on disk at /var/lib/cassandra/data/<your_keyspace> and /= var/lib/cassandra/commitlog to see what is taking up a lot of space.

Cassandra stores the column names as well as the values, so that can ta= ke up some space.


> =A0= it says that while compaction a CorruptedBlockException has occured.=

Are you able to reproduce this error ?

= Thanks


-----------------
Aaron Morton
Freelance Developer<= br>@aaronmorton
http://www.thelastpickle.com


On 6/07= /2012, at 12:04 AM, Nury Redjepow wrote:

> Hello to all,
><= br>> =A0I have cassandra instance I'm trying to use to store million= s of file with size ~ 3MB. Data structure is simple, 1 row for 1 file, with= row key being the id of file.
> I'm loaded 1GB of data, and total available space is 10GB. And aft= er a few hour, all the available space was taken. In log, it says that whil= e compaction a CorruptedBlockException has occured. But I don't underst= and, how was all the available space taken away.
>
> Data structure
> CREATE KEYSPACE largeobjectsWITH placem= ent_strategy =3D 'SimpleStrategy'
> AND strategy_options=3D{r= eplication_factor:1};
>
> create column family content
> = =A0 with column_type =3D 'Standard'
> =A0 and comparator =3D 'UTF8Type'
> =A0 and default_vali= dation_class =3D 'BytesType'
> =A0 and key_validation_class = =3D 'TimeUUIDType'
> =A0 and read_repair_chance =3D 0.1
&g= t; =A0 and dclocal_read_repair_chance =3D 0.0
> =A0 and gc_grace =3D 864000
> =A0 and min_compaction_threshold = =3D 4
> =A0 and max_compaction_threshold =3D 32
> =A0 and repli= cate_on_write =3D true
> =A0 and compaction_strategy =3D 'SizeTie= redCompactionStrategy'
> =A0 and caching =3D 'keys_only';
>
>
> Log m= essages
>
> INFO [FlushWriter:9] 2012-07-04 19:56:00,783 Memtable.java (line 266) Writing Memtable-content@240294142(3955135/49439187 serialized/live bytes, 91 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:00,814 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1608-Data.db (1991862 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=78253718)
> INFO [OptionalTasks:1] 2012-07-04 19:56:02,784 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 46971537 bytes)
> INFO [OptionalTasks:1] 2012-07-04 19:56:02,784 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1755783901(3757723/46971537 serialized/live bytes, 121 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:02,785 Memtable.java (line 266) Writing Memtable-content@1755783901(3757723/46971537 serialized/live bytes, 121 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:02,835 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1609-Data.db (1894897 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=82028986)
> INFO [OptionalTasks:1] 2012-07-04 19:56:04,785 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 56971025 bytes)
> INFO [OptionalTasks:1] 2012-07-04 19:56:04,785 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1441175031(4557682/56971025 serialized/live bytes, 124 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:04,786 Memtable.java (line 266) Writing Memtable-content@1441175031(4557682/56971025 serialized/live bytes, 124 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:04,814 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db (2287280 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=86604648)
> INFO [CompactionExecutor:39] 2012-07-04 19:56:04,815 CompactionTask.java (line 109) Compacting [SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1608-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1609-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1607-Data.db')]
> INFO [OptionalTasks:1] 2012-07-04 19:56:05,786 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 28300225 bytes)
> INFO [OptionalTasks:1] 2012-07-04 19:56:05,786 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1828084851(2264018/28300225 serialized/live bytes, 38 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:05,787 Memtable.java (line 266) Writing Memtable-content@1828084851(2264018/28300225 serialized/live bytes, 38 ops)
> INFO [FlushWriter:9] 2012-07-04 19:56:05,823 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1612-Data.db (1134604 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=88874176)
> ERROR [CompactionExecutor:39] 2012-07-04 19:56:06,667 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[CompactionExecutor:39,1,main]
> java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db): corruption detected, chunk at 1573104 of length 65545.
> at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
> at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
> at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:164)
> at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
> at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
> at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:150)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db): corruption detected, chunk at 1573104 of length 65545.
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
> at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:414)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:394)
> at org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
> at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
> at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
> at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
> at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
> at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
> at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
> at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)
> ... 20 more
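For context, the inner trace above shows the failure starting in CompressedRandomAccessReader.decompressChunk: the checksum stored with a compressed chunk no longer matches the bytes on disk, so the read is refused. A rough Python sketch of that idea (illustrative only; the real SSTable chunk layout and checksum algorithm differ, and `verify_chunk` is a made-up name):

```python
import zlib

def verify_chunk(compressed: bytes, stored_checksum: int) -> bytes:
    """Recompute the chunk checksum before trusting the data, conceptually
    mirroring what decompressChunk does. A mismatch means on-disk corruption,
    not a decompression bug, so it is surfaced rather than ignored."""
    if zlib.crc32(compressed) & 0xFFFFFFFF != stored_checksum:
        raise IOError("corruption detected in chunk")
    return zlib.decompress(compressed)

# A chunk written correctly round-trips; a flipped byte is caught.
data = b"column data" * 1000
chunk = zlib.compress(data)
checksum = zlib.crc32(chunk) & 0xFFFFFFFF
assert verify_chunk(chunk, checksum) == data
```

The point is that the corruption is detected on every read or compaction that touches the bad chunk, which is why the same SSTable (hd-1610) keeps failing below.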
> INFO [OptionalTasks:1] 2012-07-04 19:57:00,796 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 38711275 bytes)
> INFO [OptionalTasks:1] 2012-07-04 19:57:00,796 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@1363920595(3096902/38711275 serialized/live bytes, 74 ops)
> INFO [FlushWriter:9] 2012-07-04 19:57:00,796 Memtable.java (line 266) Writing Memtable-content@1363920595(3096902/38711275 serialized/live bytes, 74 ops)
> INFO [FlushWriter:9] 2012-07-04 19:57:00,821 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1613-Data.db (1553451 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=91981808)
> INFO [CompactionExecutor:40] 2012-07-04 19:57:00,822 CompactionTask.java (line 109) Compacting [SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1610-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1613-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1608-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1609-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1607-Data.db'), SSTableReader(path='/var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1612-Data.db')]
> INFO [OptionalTasks:1] 2012-07-04 19:57:01,797 MeteredFlusher.java (line 62) flushing high-traffic column family CFS(Keyspace='largeobjects', ColumnFamily='content') (estimated 27750950 bytes)
> INFO [OptionalTasks:1] 2012-07-04 19:57:01,797 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-content@289600485(2220076/27750950 serialized/live bytes, 70 ops)
> INFO [FlushWriter:9] 2012-07-04 19:57:01,797 Memtable.java (line 266) Writing Memtable-content@289600485(2220076/27750950 serialized/live bytes, 70 ops)
> INFO [FlushWriter:9] 2012-07-04 19:57:01,819 Memtable.java (line 307) Completed flushing /var/lib/cassandra/data/largeobjects/content/largeobjects-content-hd-1615-Data.db (1114538 bytes) for commitlog position ReplayPosition(segmentId=24245436475633, position=94212034)
> ERROR [ReadStage:263] 2012-07-04 19:57:02,599 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[ReadStage:263,5,main]
> java.lang.RuntimeException: java.lang.RuntimeException: error reading 1 of 1
> at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1254)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> Caused by: java.lang.RuntimeException: error reading 1 of 1
> at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:83)
> at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:39)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:108)
> at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:90)
> at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:47)
> at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:137)
> at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:283)
> at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:63)
> at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1321)
>
>
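On the "where did the space go" question: when a corrupt SSTable keeps failing compaction, its generation (and the files of everything being compacted alongside it) can linger on disk. One way to watch this is to total the component file sizes per SSTable generation in the column family's data directory. A small sketch, assuming the default layout shown in the logs (`sstable_sizes` and the grouping regex are my own, not Cassandra tooling):

```python
import os
import re
from collections import defaultdict

# Component files such as largeobjects-content-hd-1610-Data.db share a
# generation number; grouping by it shows which SSTables linger on disk.
GENERATION = re.compile(r"-(\d+)-[A-Za-z]+\.db$")

def sstable_sizes(cf_dir):
    """Total on-disk bytes per SSTable generation in one column family dir."""
    totals = defaultdict(int)
    for name in os.listdir(cf_dir):
        m = GENERATION.search(name)
        if m:
            totals[int(m.group(1))] += os.path.getsize(os.path.join(cf_dir, name))
    return dict(totals)
```

Running this against a directory like /var/lib/cassandra/data/largeobjects/content between compaction attempts should make it obvious if a generation such as 1610 (the corrupt file in the logs) never goes away while new generations keep being written.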

