cassandra-user mailing list archives

From "qihuang.zheng"<>
Subject Re: C* Table Changed and Data Migration with new primary key
Date Tue, 27 Oct 2015 01:58:41 GMT
Thanks Doan! We will try Spark, as our production environment already runs Spark 1.4.1, but our C* version is 2.0.15.
Since Spark 1.4 needs 2.1.5, when I query sizes I get the error "Failed to fetch size estimates for
system.size_estimates", but that is not a serious problem.
I tried Spark 1.4.1 with Cassandra 2.1.5 in a test environment, and it is much faster than using only the Java driver.
But we may hit problems in the production environment, as our Spark nodes are deployed on entirely different
machines from the Cassandra nodes.


Sent: Thursday, 22 Oct 2015, 19:50
主题:Re: C* Table Changed and Data Migration with new primary key

Use Spark to distribute the copy job across the whole cluster and help accelerate
the migration. The Spark connector does automatic paging in the background with the Java driver.
On 22 Oct 2015 at 11:03, "qihuang.zheng" wrote:

I tried using the Java driver with an auto-paging query (setFetchSize) instead of the token function,
since Cassandra already has this feature.
ref from here:

But when I tried it in a test environment with only 1 million rows, reading and then inserting into 3 tables, it is
too slow.
After running 20 minutes, an exception like NoHostAvailableException occurred, and of course the data did not
sync completely.
And our production environment has nearly 25 billion rows, so this approach is unacceptable. Is
there another way?
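A NoHostAvailableException under sustained bulk-write load is often the cluster pushing back rather than a hard failure. One generic mitigation (a sketch only, not part of the DataStax driver API; in a real job `fn` would execute one page of reads plus the corresponding inserts) is to wrap each unit of work in a retry with exponential backoff and jitter:

```python
import random
import time

def with_retry(fn, max_attempts=5, base_delay=0.5):
    """Call fn(), retrying with exponential backoff and jitter on failure.

    Generic sketch: in a migration job, fn would process one page of
    source rows and write them to the target tables.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Sleep base * 2^attempt seconds, with jitter so parallel
            # workers do not retry in lockstep and re-overload a node.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

This does not fix undersized clusters, but it lets a long-running copy survive transient coordinator overload instead of aborting after 20 minutes.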

Thanks  Regards,

Sent: Thursday, 22 Oct 2015, 13:52
主题:Re: C* Table Changed and Data Migration with new primary key

Because the data format has changed, you’ll need to read it out and write it back in again.

This means using either a driver (java, python, c++, etc), or something like spark.

In either case, split up the token range so you can parallelize it for significant speed improvements.
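Splitting the token range can be sketched as follows (assuming the Murmur3Partitioner, whose tokens span -2^63 .. 2^63-1; each resulting (start, end] pair would feed one worker running SELECT ... WHERE token(attribute) > ? AND token(attribute) <= ?):

```python
MIN_TOKEN = -(2 ** 63)      # Murmur3Partitioner lower bound
MAX_TOKEN = 2 ** 63 - 1     # Murmur3Partitioner upper bound

def split_token_range(n_splits, lo=MIN_TOKEN, hi=MAX_TOKEN):
    """Divide the token range (lo, hi] into n_splits contiguous subranges.

    Each (start, end] subrange can be scanned independently, so the
    read-and-reinsert job parallelizes across workers.
    """
    width = (hi - lo) // n_splits
    bounds = [lo + i * width for i in range(n_splits)] + [hi]
    return list(zip(bounds[:-1], bounds[1:]))
```

With vnodes you would ideally split along the actual token ownership boundaries so each worker reads from a local replica, but even this naive even split gives most of the speedup.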

From: "qihuang.zheng"
Reply-To: ""
Date: Wednesday, October 21, 2015 at 6:18 PM
To: user
Subject: C* Table Changed and Data Migration with new primary key

Hi All:
 We have a table defined with only one partition key and some clustering keys.
 attribute text,   
 partner text,  
 app text,    
 "timestamp" bigint, 
 event text,     
 PRIMARY KEY ((attribute), partner, app, "timestamp")
Now we want to split the original test1 table into 3 tables like this:
test_global : PRIMARY KEY ((attribute), "timestamp")
test_partner: PRIMARY KEY ((attribute, partner), "timestamp")
test_app:    PRIMARY KEY ((attribute, partner, app), "timestamp")
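The migration itself is a fan-out of each source row into the three target tables. A minimal sketch of that routing step (pure logic; rows are shown as plain dicts here, whereas a real job would bind the values to prepared INSERT statements):

```python
def fan_out(row):
    """Map one test1 row to the insert payload for each target table.

    `row` is assumed to be a dict with the source columns
    (attribute, partner, app, timestamp, event).
    """
    common = {
        "attribute": row["attribute"],
        "timestamp": row["timestamp"],
        "event": row["event"],
    }
    return {
        "test_global": dict(common),
        "test_partner": dict(common, partner=row["partner"]),
        "test_app": dict(common, partner=row["partner"], app=row["app"]),
    }
```

Which non-key columns each target table keeps (here only `event` is assumed) depends on the final schemas.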

We split the original table because querying global data ordered by timestamp desc, like this:
select * from test1 where attribute=? order by timestamp desc
is not supported in Cassandra, since ORDER BY must follow the clustering key order. A query like:
select * from test1 where attribute=? order by partner desc, app desc, timestamp desc
works, but cannot return global data correctly ordered by timestamp desc.
After splitting the table we can query global data correctly: select * from test_global where attribute=?
order by timestamp desc.

Now we have a problem of data migration.
As I know, sstableloader is the easiest way, but it cannot load into a table with a different name.
(Am I right?)
And the COPY command in cqlsh does not fit our situation because our data is too large (10 nodes, one node
has 400 GB of data).
I also tried the Java API, querying the original table and then inserting into the 3 split tables, but it
seems too slow.

Any solution for quick data migration?

PS: Cass version: 2.0.15

Thanks  Regards,