I am trying to do the same thing: in our project we want to load data from Cassandra into a Hadoop cluster, and SSTables are one obvious option, since you can get the data changed since the last batch load directly from the SSTable incremental backup files.
But based on my research so far (I may be wrong, as I have only looked into SSTables briefly, and I hope someone on this forum can tell me that I am wrong), it may NOT be a good option:
1) sstable2json does NOT look like a scalable way to get data out of Cassandra, and it needs access to the "data" directory to read metadata from the system keyspace for the column family being dumped, which may not be an option in your MR environment.
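Just to illustrate what I mean by "not scalable": the tool is a standalone JVM that you run once per local SSTable file, something like the hypothetical wrapper below (the file path is illustrative, and sstable2json must be on the PATH of a machine that also has the Cassandra data directory):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Illustration only: shelling out to sstable2json for ONE Data.db file.
// One JVM per SSTable, local files only, and the tool still wants the
// system keyspace metadata from the local "data" directory.
public class SSTableDumpRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "sstable2json",
                "/var/lib/cassandra/data/myks/mycf/myks-mycf-ic-1-Data.db"); // illustrative path
        pb.redirectErrorStream(true);
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line); // JSON output, one row per partition
            }
        }
        int exit = p.waitFor();
        if (exit != 0) {
            throw new IOException("sstable2json exited with code " + exit);
        }
    }
}
```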
2) So far I am thinking of reusing the same API that sstable2json uses, but then I have to provide that metadata through the API myself: the validator types, the partitioner, etc. I am surprised that, as a backup, the column family SSTable dump files DON'T contain this information themselves. Shouldn't the API be able to figure this out from the SSTable files ONLY?
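To make the point concrete, here is a rough sketch of the metadata the caller has to know up front. Descriptor.fromFilename() is a real API in the 1.2/2.0-era internals; the commented-out openReaderWith() call is a HYPOTHETICAL placeholder for however you wire this metadata into SSTableReader in your version:

```java
import java.io.File;

import org.apache.cassandra.db.marshal.AbstractType;
import org.apache.cassandra.db.marshal.UTF8Type;
import org.apache.cassandra.dht.IPartitioner;
import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.Descriptor;

// Sketch only: none of the values below are recorded in the SSTable itself,
// so the caller has to supply them all.
public class SSTableMetadataExample {
    public static void main(String[] args) {
        // Parses keyspace/cf/generation out of the file name -- real API.
        Descriptor desc = Descriptor.fromFilename(new File(args[0]).getAbsolutePath());

        // All of this must come from YOU, not from the backup file:
        IPartitioner partitioner = new Murmur3Partitioner(); // your cluster's partitioner
        AbstractType<?> keyValidator = UTF8Type.instance;     // key validator
        AbstractType<?> comparator = UTF8Type.instance;       // column name comparator
        AbstractType<?> defaultValidator = UTF8Type.instance; // column value validator

        // HYPOTHETICAL helper, standing in for SSTableReader.open(...) plus the
        // CFMetaData you would have to build from the values above:
        // SSTableReader reader = openReaderWith(desc, partitioner, keyValidator,
        //                                       comparator, defaultValidator);
    }
}
```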
3) The big trouble comes if you want to parse the SSTables in your MR code. Internally, the API loads the index and compression metadata from the Index/CompressionInfo files, which it assumes are located in the same place as the data file, and it opens them with local FileStreams. So if these data files are in a DFS (Distributed File System), I have so far found no way to tell the API to read from a DFS stream instead of a local file input stream. So basically you have 2 options:
a) Copy these files from the DFS to the local file system, the same as what the Knewton guys did at https://github.com/Knewton/KassandraMRHelper (a sketch of this follows below).
b) Develop your own API to access the SSTable files directly. My guess is that the Netflix guys did it this way; they have a project called "Aegisthus" (see here: http://techblog.netflix.com/2012/02/aegisthus-bulk-data-pipeline-out-of.html), but it is not open source.
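A minimal sketch of option a), using only standard Hadoop FileSystem calls: in the mapper's setup phase, pull the SSTable's component files out of HDFS onto local disk, so Cassandra's reader (which opens plain local file streams) finds them side by side. The class and paths are my own invention for illustration:

```java
import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of the "copy to local" workaround. An SSTable is a set of sibling
// files (...-Data.db, ...-Index.db, ...-CompressionInfo.db, ...), and the
// Cassandra reader assumes they all live in the same local directory.
public class SSTableLocalizer {

    /** Copies one SSTable's component files from HDFS into a local scratch dir. */
    public static File localize(Configuration conf, Path hdfsDataFile, File localDir)
            throws IOException {
        FileSystem fs = hdfsDataFile.getFileSystem(conf);
        String prefix = hdfsDataFile.getName().replace("-Data.db", "");
        for (FileStatus st : fs.listStatus(hdfsDataFile.getParent())) {
            if (st.getPath().getName().startsWith(prefix)) {
                fs.copyToLocalFile(st.getPath(),
                        new Path(new File(localDir, st.getPath().getName()).getAbsolutePath()));
            }
        }
        // Return the local Data.db path for the Cassandra reader to open.
        return new File(localDir, hdfsDataFile.getName());
    }
}
```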
4) About the performance, I am not sure, as sstable2json uses the same Cassandra API underneath, but running in MR gives us scalability, as we can reuse the Hadoop framework for the many benefits it brings (a skeleton job driver follows below).
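The scalability win is simply that each SSTable (or row range) becomes an input split. Here is a bare job-driver skeleton; SSTableInputFormat and SSTableMapper are placeholders for whatever reader wrapper you end up with (e.g. the classes in KassandraMRHelper), NOT stock Hadoop or Cassandra classes:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Skeleton driver for a map-only SSTable export job.
public class SSTableExportJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "sstable-export");
        job.setJarByClass(SSTableExportJob.class);

        // Placeholders -- plug in your SSTable reader wrapper here:
        // job.setInputFormatClass(SSTableInputFormat.class);
        // job.setMapperClass(SSTableMapper.class);
        job.setNumReduceTasks(0); // map-only dump, like sstable2json per file

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // SSTable backups in DFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // exported text/JSON
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```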