A couple of ideas. One is to multiplex the event log stream (using Flume or Kafka) and feed it straight into your secondary system; a sketch of that is below. The event system should also let you rate-limit inserts if that is a concern.
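As a minimal sketch of that first option using kafka-python: the topic name ("user-events"), the JSON event shape, and the insert_into_secondary stub are assumptions for illustration, not anything from your setup.

    import json
    import time

    from kafka import KafkaConsumer, KafkaProducer

    # Producer side: every log entry is published to a topic so both the
    # primary store and the secondary system can consume the same stream.
    producer = KafkaProducer(
        bootstrap_servers=["localhost:9092"],
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def publish_event(user_id, payload):
        producer.send("user-events", {"user_id": user_id, **payload})

    # Secondary-system side: consume at its own pace; this is where you can
    # throttle inserts if write pressure on the secondary store is a concern.
    def insert_into_secondary(event):
        # Stand-in for whatever write path your secondary system exposes.
        print("would insert", event)

    consumer = KafkaConsumer(
        "user-events",
        bootstrap_servers=["localhost:9092"],
        group_id="secondary-loader",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    for message in consumer:
        insert_into_secondary(message.value)
        time.sleep(0.01)  # crude rate limiting between inserts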
The other is to use partitioning.
Group the log entries per user into some sensible partition, e.g. per day or per week, so your row key becomes "user_id : partition_start".
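Something like the following shows that layout via the Python driver, assuming weekly buckets, a keyspace called "logs" and a table called user_log; all names are illustrative.

    from datetime import datetime, timedelta

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("logs")

    # Composite partition key (user_id, partition_start) gives the
    # "user_id : partition_start" row key described above.
    session.execute("""
        CREATE TABLE IF NOT EXISTS user_log (
            user_id          text,
            partition_start  timestamp,
            event_time       timeuuid,
            payload          text,
            PRIMARY KEY ((user_id, partition_start), event_time)
        )
    """)

    def partition_start(ts: datetime) -> datetime:
        # Weekly buckets: truncate to midnight on the Monday of ts's week.
        monday = ts.date() - timedelta(days=ts.weekday())
        return datetime(monday.year, monday.month, monday.day)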
You can then keep a record of dirty partitions; this can be tricky depending on scale. It could be a row for each user, with a column for each dirty partition. Loading the delta then requires a range scan over the dirty-partitions CF to read all rows, and then reading the dirty partition for each user. You would want to look at a low gc_grace_seconds and leveled compaction (LCS) for the dirty-partitions CF.
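A rough sketch of that tracking table and the delta load, continuing from the user_log table above; the dirty_partitions name, the gc_grace_seconds value and the push_to_secondary stub are assumptions for illustration.

    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect("logs")

    # One row per user, one clustering column per dirty partition. Writers
    # insert into this table whenever they write an event. Low gc_grace_seconds
    # and leveled compaction keep it lean despite the insert/delete churn.
    session.execute("""
        CREATE TABLE IF NOT EXISTS dirty_partitions (
            user_id          text,
            partition_start  timestamp,
            PRIMARY KEY (user_id, partition_start)
        ) WITH gc_grace_seconds = 3600
          AND compaction = {'class': 'LeveledCompactionStrategy'}
    """)

    def push_to_secondary(user_id, events):
        # Stand-in for the load into your secondary system.
        print("loading", user_id, len(list(events)), "events")

    def load_delta():
        # Range scan over the whole dirty-partitions table to find what changed...
        for row in session.execute("SELECT user_id, partition_start FROM dirty_partitions"):
            # ...then read only the dirty bucket from the main log table.
            events = session.execute(
                "SELECT * FROM user_log WHERE user_id = %s AND partition_start = %s",
                (row.user_id, row.partition_start),
            )
            push_to_secondary(row.user_id, events)
            # Mark the partition clean once it has been shipped.
            session.execute(
                "DELETE FROM dirty_partitions WHERE user_id = %s AND partition_start = %s",
                (row.user_id, row.partition_start),
            )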
Hope that helps.
Freelance Cassandra Developer