From: "Tyler Hobbs (JIRA)"
To: commits@cassandra.apache.org
Reply-To: dev@cassandra.apache.org
Date: Sat, 10 May 2014 22:00:24 +0000 (UTC)
Subject: [jira] [Commented] (CASSANDRA-6525) Cannot select data which using "WHERE"

    [ https://issues.apache.org/jira/browse/CASSANDRA-6525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13994056#comment-13994056 ]

Tyler Hobbs commented on CASSANDRA-6525:
----------------------------------------

Considering that a drop/recreate seems to be necessary to reproduce the issue, and that a disk_access_mode of "standard" with no compression seems to fix it, I believe the problem is that old FileCacheService entries are being reused with new SSTables. The FileCacheService is only used for PoolingSegmentedFiles, which are used when compression is enabled or the disk access mode is mmap. Since FileCacheService uses (String) file paths as keys, new SSTables with the same filename can look up old entries. The only remaining question is why the old FileCacheService entries are not being invalidated; that basically means SSTableReader.close() is not being called in some cases.
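To make that failure mode concrete, here is a minimal Java sketch of a cache keyed only by file path. The names (StaleCacheDemo, PooledReader, acquire) are hypothetical and heavily simplified, not Cassandra's actual FileCacheService API; the sketch only shows how a recreated SSTable that reuses a filename can be handed a stale cached reader whose recorded length no longer matches the file on disk, which would surface as the EOFException in the quoted report below.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified model of a reader cache keyed by file path.
// This is NOT Cassandra's actual FileCacheService API; it only illustrates
// why keying on the path alone is unsafe across a DROP/CREATE cycle.
public class StaleCacheDemo {

    // Stand-in for a pooled segment reader that snapshots the file length
    // at open time (a stale snapshot is what leads to reads past EOF).
    static class PooledReader {
        final String path;
        final long lengthAtOpen;

        PooledReader(String path, long lengthAtOpen) {
            this.path = path;
            this.lengthAtOpen = lengthAtOpen;
        }
    }

    // Cache keyed by path only, mirroring the scenario in the comment above.
    static final Map<String, PooledReader> cache = new HashMap<>();

    static PooledReader acquire(String path, long currentLength) {
        // Bug: a cache hit is returned even if the file at this path has
        // since been replaced by a brand-new file with the same name.
        return cache.computeIfAbsent(path, p -> new PooledReader(p, currentLength));
    }

    public static void main(String[] args) {
        String path = "/var/lib/cassandra/data/ks/tbl/ks-tbl-jb-1-Data.db";

        // First generation of the SSTable: a 1000-byte data file.
        acquire(path, 1000);

        // The table is dropped and recreated; the new SSTable reuses the
        // same filename but is only 100 bytes long. Without an explicit
        // invalidation (the SSTableReader.close() path mentioned above),
        // the cache hands back the old reader.
        PooledReader reader = acquire(path, 100);

        // Prints 1000, not 100: reads based on the stale length would run
        // past the new file's EOF, matching the EOFException in the report.
        System.out.println("cached length = " + reader.lengthAtOpen
                + ", actual file length = 100");
    }
}
{code}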
> Cannot select data which using "WHERE"
> --------------------------------------
>
>                 Key: CASSANDRA-6525
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6525
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux RHEL5
>                      RAM: 1GB
>                      Cassandra 2.0.3
>                      CQL spec 3.1.1
>                      Thrift protocol 19.38.0
>            Reporter: Silence Chow
>            Assignee: Tyler Hobbs
>             Fix For: 2.0.8
>
>         Attachments: 6981_test.py
>
>
> I am developing a system on a single machine using VMware Player with 1GB RAM and a 1GB HDD. When I select all data, I don't have any problems, but queries using "WHERE" fail even though the table has fewer than 10 records.
> I get this error in the system log:
> {noformat}
> ERROR [ReadStage:41] 2013-12-25 18:52:11,913 CassandraDaemon.java (line 187) Exception in thread Thread[ReadStage:41,5,main]
> java.io.IOError: java.io.EOFException
> 	at org.apache.cassandra.db.Column$1.computeNext(Column.java:79)
> 	at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> 	at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
> 	at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
> 	at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
> 	at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
> 	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
> 	at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
> 	at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
> 	at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:144)
> 	at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:87)
> 	at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:46)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:120)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
> 	at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:72)
> 	at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:297)
> 	at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1487)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1306)
> 	at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:332)
> 	at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
> 	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1401)
> 	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1936)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> 	at java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException
> 	at java.io.RandomAccessFile.readFully(Unknown Source)
> 	at java.io.RandomAccessFile.readFully(Unknown Source)
> 	at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:348)
> 	at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
> 	at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
> 	at org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:74)
> 	at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
> 	... 27 more
> {noformat}
> E.g.:
> {{SELECT * FROM table;}}
> It's fine.
> {{SELECT * FROM table WHERE field = 'N';}}
> (field is the partition key.)
> It says "Request did not complete within rpc_timeout." in cqlsh.

--
This message was sent by Atlassian JIRA
(v6.2#6252)