I have actually.
I tested on 10 small nodes on Amazon EC2, each with 1 EBS disk. I've been avoiding large instances for now since they cost 4x as much as a small, so our 10 small nodes would translate to only 2.5 large ones. We figured it's better to slice things into more nodes, since with only 2 or 3 nodes, a single node failure would mean moving large chunks of data around.
Under pure write loads with a fairly default config and 3x replication, we sustained 1,000 writes per second and could probably have pushed it somewhat higher (perhaps 2,000 per second). Write speed barely slowed even as we pushed past 50 million keys. Keys were 255 bytes, each with a single column containing 768 bytes.
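For reference, the write phase looked roughly like the sketch below. The client object and its `insert(key, column, value)` call are placeholders for whatever driver you're using, not a real API; the point is just the key/value sizes and the single column per key, with 3x replication configured server-side.

```python
import os

KEY_SIZE = 255           # bytes per key
VALUE_SIZE = 768         # bytes in the single column per key
TOTAL_KEYS = 50_000_000  # we pushed past this without write speed dropping

def make_key(i: int) -> bytes:
    # Pad the sequence number out to 255 bytes so every key matches the test size.
    return str(i).encode().ljust(KEY_SIZE, b"x")

def run_write_phase(client):
    # `client.insert(key, column, value)` is a stand-in for the driver's write call.
    payload = os.urandom(VALUE_SIZE)
    for i in range(TOTAL_KEYS):
        client.insert(make_key(i), b"data", payload)
```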
Things got much worse when we introduced reads, however. We did a 50/50 read/write split. IO went up, and nodes failed a couple of hours into the test with out-of-memory errors. My theory is that the reads caused much more IO, which caused writes to get backed up in memory.
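The mixed phase was essentially a coin flip per operation, something like the hypothetical loop below (reusing `make_key` from the sketch above; `client.get` and `client.insert` are again stand-ins for the driver's calls):

```python
import random

def run_mixed_phase(client, ops: int, payload: bytes):
    # 50/50 read/write split: each operation is either a read of a
    # previously written key or a fresh write, chosen at random.
    written = 0
    for _ in range(ops):
        if written and random.random() < 0.5:
            client.get(make_key(random.randrange(written)), b"data")
        else:
            client.insert(make_key(written), b"data", payload)
            written += 1
```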