Is there a way to skip corrupted rows when doing compaction?
We are currently running 2 nodes with replication_factor=2, but one node reports lots of exceptions like java.io.UTFDataFormatException: malformed input around byte 72. My guess is that some of the data in an SSTable is corrupted, but probably not all of it, since I can still read data from the affected CF for some keys.
It's OK for us to throw away a small portion of the data to get the nodes working normally again.
If there is no way to skip corrupted rows, can I instead wipe all the data on the corrupted node and then add it back to the cluster? Will it automatically stream the data back from the other node?
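To make the second question concrete, this is roughly the wipe-and-rejoin procedure we have in mind; the data paths are assumptions on our side (they depend on data_file_directories in cassandra.yaml), so please correct us if any step is wrong:

```shell
# On the corrupted node, with the Cassandra process stopped:

# 1. Remove the node's data files and commit logs.
#    (Paths are assumptions; adjust to your cassandra.yaml settings.)
rm -rf /var/lib/cassandra/data/*
rm -rf /var/lib/cassandra/commitlog/*

# 2. Restart the node so it re-joins the ring and streams data from the
#    surviving replica (with replication_factor=2, every row should still
#    exist on the other node).

# 3. Once it is back up, run a repair to make sure the replicas agree.
nodetool repair
```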