incubator-cassandra-user mailing list archives

From: hive13 Wong <hiv...@gmail.com>
Subject: Re: Skipping corrupted rows when doing compaction
Date: Tue, 01 Jun 2010 13:46:59 GMT
Thanks, Jonathan

I'm using 0.6.1.
Another thing is that I'm getting lots of zero-sized tmp files in the data
directory. When I restart Cassandra those tmp files are deleted, but new
empty tmp files gradually appear again, and system.log still shows lots of
UTFDataFormatException errors.

Will using 0.6.2 with DiskAccessMode=standard skip the corrupted rows?
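
In case it helps, this is the change I'm planning to try in
conf/storage-conf.xml (a sketch only, assuming the 0.6 setting is still
called DiskAccessMode and accepts "standard" as a value):

    <!-- "standard" reads sstables through regular buffered I/O instead
         of mmap, so a corrupt region should fail an individual read
         rather than affecting the whole mapped file. -->
    <DiskAccessMode>standard</DiskAccessMode>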

On Tue, Jun 1, 2010 at 9:08 PM, Jonathan Ellis <jbellis@gmail.com> wrote:

> If you're on a version earlier than 0.6.1, you might be running into
> https://issues.apache.org/jira/browse/CASSANDRA-866.  Upgrading will
> fix it; you don't need to reload data.
>
> It's also worth trying 0.6.2 and DiskAccessMode=standard, in case
> you've found another similar bug.
>
> On Tue, Jun 1, 2010 at 7:37 AM, hive13 Wong <hive13@gmail.com> wrote:
> > Hi,
> > Is there a way to skip corrupted rows when doing compaction?
> > We are currently deploying 2 nodes with ReplicationFactor=2, but one node
> > reports lots of exceptions like java.io.UTFDataFormatException: malformed
> > input around byte 72. My guess is that some of the data in the SSTable is
> > corrupted, but not all of it, because I can still read data out of the
> > related CF, just not for some keys.
> > It's OK for us to throw away a small portion of the data to get the nodes
> > working normally.
> > If there is no such way to skip corrupted rows, can I just clean all the
> > data in the corrupted node and then add it back to the cluster?
> > Will it automatically migrate data from the other node?
> > Thanks.
> > Ivan
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of Riptano, the source for professional Cassandra support
> http://riptano.com
>
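
P.S. On my original question (quoted above) about wiping the corrupted
node and adding it back: if it comes to that, my rough plan, assuming
0.6's AutoBootstrap flag in storage-conf.xml does what its name suggests,
is to stop the node, move its data directories aside, and restart with
bootstrap enabled so it streams its ranges back from the other replica:

    <!-- With ReplicationFactor=2 on a 2-node cluster, the surviving node
         holds a full copy of the data, so the rebuilt node should be
         able to stream everything back from it. -->
    <AutoBootstrap>true</AutoBootstrap>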
