cassandra-user mailing list archives

From Kiril Menshikov <kmenshi...@gmail.com>
Subject Re: Extract big data to file
Date Wed, 08 Feb 2017 19:39:49 GMT
Did you try fetching the data through code? cqlsh is probably not the right tool to fetch 360G.
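
For example, with the DataStax Python driver you can stream the result set in pages and write it straight to disk, so nothing close to 360G ever sits in memory at once. A minimal sketch, assuming the contact point and query from your mail (the keyspace name, fetch_size, and output path are placeholders to adjust):

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['100.100.221.146'])      # contact point from your mail
session = cluster.connect('my_keyspace')    # keyspace name is a placeholder

# fetch_size makes the driver page through the result set,
# pulling 1000 rows at a time instead of the whole result.
query = SimpleStatement(
    "SELECT kafka FROM red "
    "WHERE datetimestamp >= '2017-02-02 00:00:00' "
    "AND datetimestamp < '2017-02-02 15:00:01'",
    fetch_size=1000)

with open('result.txt', 'w') as out:
    for row in session.execute(query):      # iteration fetches new pages transparently
        out.write(str(row.kafka) + '\n')

cluster.shutdown()

Since each page is written out before the next is fetched, memory stays flat regardless of the total result size.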


> On Feb 8, 2017, at 12:34, Cogumelos Maravilha <cogumelosmaravilha@sapo.pt> wrote:
> 
> Hi list,
> 
> My database stores data from Kafka. I'm using C* 3.0.10.
> 
> In my cluster I'm using:
> AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
> 
> Extracting one day of data uncompressed comes to around 360G.
> 
> I've found these approaches:
> 
> echo "SELECT kafka from red where datetimestamp >= '2017-02-02 00:00:00' and datetimestamp < '2017-02-02 15:00:01';" | cqlsh 100.100.221.146 9042 > result.txt
> 
> With this, by default I only get 100 rows.
> 
> Using CAPTURE result.csv with paging off, I always get an out-of-memory error. With paging on, I'd need to put something heavy on top of the Enter key to keep confirming each page. It's odd that I have to enable paging just to get rid of the out-of-memory error! I've taken a look at the result file and it's empty; perhaps cqlsh builds the whole result in memory and only writes it to disk at the end.
> 
> Is there an approach like this one from ACID databases:
> 
> copy (select kafka from red where datetimestamp >= '2017-02-02 00:00:00' and datetimestamp < '2017-02-02 15:00:01') to 'result.csv' WITH CSV HEADER;
> 
> Thanks in advance.
> 
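
On the last question: cqlsh does have a COPY ... TO command, but unlike the Postgres-style copy (select ...) it takes a table name and an optional column list, not an arbitrary query, so your time-range filter can't be pushed into it. For a whole-table dump it would look roughly like this (the PAGESIZE value is an assumption):

COPY red (kafka) TO 'result.csv' WITH HEADER = TRUE AND PAGESIZE = 1000;

For the filtered one-day extract itself, driver-side paging as sketched above is probably the way to go.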

