hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: corrupt .logs block
Date Mon, 08 Aug 2011 23:24:40 GMT
Well, if it's a log that is no longer used, then you could just delete it.
That'll get rid of the fsck complaint. (True, logs are not per table, so
to be safe you'd need to flush all tables first -- that would get any
edits the log could be carrying out to the filesystem as hfiles.)
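
For what it's worth, here is a minimal sketch of flushing every table
programmatically with the Java client API of that era (HBaseAdmin,
listTables, flush). The class name is made up for illustration and exact
signatures vary across HBase versions, so treat it as a sketch rather
than a drop-in tool:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    // Hypothetical helper: flush every table so any edits still held only
    // in the WAL are persisted as hfiles before the corrupt log is deleted.
    public class FlushAllTables {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        for (HTableDescriptor table : admin.listTables()) {
          System.out.println("Flushing " + table.getNameAsString());
          admin.flush(table.getNameAsString());
        }
      }
    }

The same thing can be done from the hbase shell by issuing flush '<table>'
for each table; the loop just saves typing when there are many tables. A
similar sketch for removing the stale log itself follows the quoted thread
below.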

St.Ack

On Mon, Aug 8, 2011 at 4:20 PM, Geoff Hendrey <ghendrey@decarta.com> wrote:
> Ah. Thanks for that. No, I don't need the log anymore. I am aware of how
> to flush a table from the hbase shell. But since "fsck /" tells me a log
> file is corrupt, not which table the corruption pertains to, does this
> mean I have to flush all my tables? (I have a lot of tables.)
>
> -geoff
>
> -----Original Message-----
> From: saint.ack@gmail.com [mailto:saint.ack@gmail.com] On Behalf Of
> Stack
> Sent: Monday, August 08, 2011 4:09 PM
> To: user@hbase.apache.org
> Subject: Re: corrupt .logs block
>
> On Sat, Aug 6, 2011 at 12:12 PM, Geoff Hendrey <ghendrey@decarta.com>
> wrote:
>> I've got a corrupt HDFS block in a region server's ".logs" directory.
>
> You see this when you run hdfs fsck?  Is the log still needed?  You
> could do a flush across the cluster, and that should remove your
> dependency on this log.
>
> St.Ack
>
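
Once all the flushes have completed, the delete Stack suggests could look
roughly like the following, using the Hadoop FileSystem API. The class
name and the path argument are illustrative; take the actual log path from
the fsck output:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical helper: delete the corrupt log reported by "hdfs fsck /".
    // Run this only after every table has been flushed, so no edits remain
    // solely in this log.
    public class DeleteStaleLog {
      public static void main(String[] args) throws Exception {
        Path staleLog = new Path(args[0]); // a file under the .logs directory
        FileSystem fs = FileSystem.get(new Configuration());
        if (fs.exists(staleLog)) {
          fs.delete(staleLog, false); // non-recursive: it is a single file
        }
      }
    }

Equivalently, hadoop fs -rm <path> from the command line removes the file.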
