Short version - what's the ideal column size in the real world?

Long version - I'm working on a prototype. The application is a data store holding blobs ranging from a couple of KB to hundreds of MB, close to 1 GB in the worst case. The data model is really simple: the key is a string (a UUID-like thing), the value is the blob, and the only operations are "set", "get", and "delete".
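To make the interface concrete, here is a minimal sketch of the three operations described above. An in-memory dict stands in for the actual backend, and all names are illustrative, not from any real driver:

```python
class BlobStore:
    """Sketch of the store's interface: set/get/delete on string keys."""

    def __init__(self):
        # key (UUID-like string) -> blob (bytes); a dict stands in
        # for whatever backend (Cassandra, NFS, ...) ends up underneath
        self._data = {}

    def set(self, key: str, blob: bytes) -> None:
        self._data[key] = blob

    def get(self, key: str) -> bytes:
        return self._data[key]

    def delete(self, key: str) -> None:
        self._data.pop(key, None)
```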

The reason I picked Cassandra is its high availability and dynamic growth; high write throughput is also a great advantage, since the read/write ratio is about 1:100. Another idea is to use a simple key-value store to keep the UUID-to-location mapping and store the blob data as files on an NFS server, but managing growth there is not as straightforward.

If the blobs are too big to fit into Cassandra, what's the ideal size? If that's the case, I will try to cut each blob into slices but still keep everything in Cassandra; is that better than the NFS solution?

Thanks,

CB

P.S. The real reason I want to try Cassandra is that I want to play with something new.