incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: problems with many columns on a row
Date Sun, 05 Jun 2011 04:45:18 GMT
It is rarely a good idea to let the data disk get too far over 50% utilisation. With so little
free space the compaction process will have trouble running, because it must write the merged
SSTable out in full before it can delete the input files: http://wiki.apache.org/cassandra/MemtableSSTable
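
A quick sanity check is to look at how much headroom the data volume actually has (the path below is just an example; substitute wherever your data directory lives):

# free space on the volume holding the Cassandra data directory
df -h /var/lib/cassandra/data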

As you are on the RC1 I would just drop the data and start again. If you need to keep it, you
can use multiple data directories as specified in cassandra.yaml; see the data_file_directories
setting. (The general recommendation, though, is to use a single data directory.)
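
For example, the relevant section of cassandra.yaml might look like this (the paths are placeholders for illustration, not a recommendation):

data_file_directories:
    - /var/lib/cassandra/data
    - /mnt/cassandra2/data

With more than one entry, Cassandra will place new SSTables across the listed directories rather than filling a single volume.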

The exception looks pretty odd; something is wacky with the column family definition. Have you
been changing the schema?

For the delete problem, something looks odd about the timestamps you are using. How was the
data inserted?

This is your data sample...

[default@TestKS] get CFTest['44656661756c747c65333332356231342d373937392d313165302d613663382d3132333133633033336163347c5461626c65737c5765625369746573'];
=> (column=count, value=3331353030, timestamp=1464439894)
=> (column=split, value=3334, timestamp=1464439894)
 
Timestamps are normally microseconds since the Unix epoch: http://wiki.apache.org/cassandra/DataModel?highlight=%28timestamp%29
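
If your client builds timestamps itself, the usual idiom is just the wall clock scaled to microseconds. A minimal sketch in Java (the class name is mine, for illustration):

public class MicrosTimestamp {
    public static void main(String[] args) {
        // Microseconds since the Unix epoch, the convention Cassandra
        // clients use for column timestamps.
        long timestamp = System.currentTimeMillis() * 1000L;
        System.out.println(timestamp); // e.g. 1307248484615000
    }
}

A value like 1464439894 is several orders of magnitude too small to be microseconds since the epoch, which is why the sample above looks suspicious.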

This is what the CLI will use, e.g. 

[default@dev] set data[ascii('foo')]['bar'] = 'baz';
Value inserted.
[default@dev] get data['foo'];                                                    
=> (column=bar, value=62617a, timestamp=1307248484615000)
Returned 1 results.
[default@dev] del data['foo'];
row removed.
[default@dev] get data['foo'];                
Returned 0 results.
[default@dev] 


A delete only masks columns whose timestamps are less than or equal to the delete's own timestamp, so deletes issued with the higher, microsecond-scale numbers should still work, but I would look into this first.
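
As an illustration of that rule (reusing the two timestamp values above): a column written with timestamp 1307248484615000 would survive a delete issued with timestamp 1464439894, since the tombstone's timestamp is the lower of the two; a delete stamped with the current time in microseconds would remove it.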


Cheers


-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 5 Jun 2011, at 10:09, Mario Micklisch wrote:

> Yes, checked the log file, no errors there.
> 
> With debug logging enabled it confirms that it receives the write, and it is also in the commitlog.
> 
> DEBUG 22:00:14,057 insert writing local RowMutation(keyspace='TestKS', key='44656661756c747c65333332356231342d373937392d313165302d613663382d3132333133633033336163347c5461626c65737c5765625369746573', modifications=[CFTest])
> DEBUG 22:00:14,057 applying mutation of row 44656661756c747c65333332356231342d373937392d313165302d613663382d3132333133633033336163347c5461626c65737c5765625369746573
> 
> 
> But doing compact with the nodetool triggered an error:
> 
> ERROR [CompactionExecutor:8] 2011-06-04 21:47:44,021 CompactionManager.java (line 510) insufficient space to compact even the two smallest files, aborting
> ERROR [CompactionExecutor:8] 2011-06-04 21:47:44,024 CompactionManager.java (line 510) insufficient space to compact even the two smallest files, aborting
> 
> The data folder currently has a size of about 1GB. There are 150GB of free disk space on the volume where I pointed all the Cassandra directories, but only 3.5GB of free disk space on the operating system disk.
> 
> Could this be the reason? How can I set the environment variables so that it only uses the dedicated volume?
> 
> 
> Trying to use sstable2json did not work (it throws an exception; am I using the wrong parameter?):
> 
> # sstable2json ./CFTest-g-40-Data.db  
> log4j:WARN No appenders could be found for logger (org.apache.cassandra.config.DatabaseDescriptor).
> log4j:WARN Please initialize the log4j system properly.
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
> {
> Exception in thread "main" java.lang.NullPointerException
> 	at org.apache.cassandra.db.ColumnFamily.<init>(ColumnFamily.java:82)
> 	at org.apache.cassandra.db.ColumnFamily.create(ColumnFamily.java:70)
> 	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:142)
> 	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:90)
> 	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:74)
> 	at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:179)
> 	at org.apache.cassandra.io.sstable.SSTableScanner$KeyScanningIterator.next(SSTableScanner.java:144)
> 	at org.apache.cassandra.io.sstable.SSTableScanner.next(SSTableScanner.java:136)
> 	at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:313)
> 	at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344)
> 	at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:357)
> 	at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:415)
> 
> 
> 
> Cheers,
>  Mario
> 
> 2011/6/4 Jonathan Ellis <jbellis@gmail.com>
> Did you check the server log for errors?
> 
> See if the problem persists after running nodetool compact. If it
> does, use sstable2json to export the row in question.

