hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Broken HBASE ( Help Needed)
Date Mon, 02 Apr 2012 15:44:09 GMT
Can you run 'bin/hbase hbck' and see if there is any inconsistency?
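
For reference, a minimal run from the HBase install directory (-details and -fix should be the stock hbck switches in 0.92; start with the plain report):

  $ bin/hbase hbck           # report-only: lists any table/region inconsistencies
  $ bin/hbase hbck -details  # the same report with per-region detail
  $ bin/hbase hbck -fix      # attempts to repair assignment problems it finds

Paste the output of the plain report first; -fix changes region assignments, so it is worth seeing what hbck complains about before letting it repair anything.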

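Also, from the log below, the master hits a ChecksumException while splitting one of the WALs and never finishes initialization, which would explain why no server can be found for the tsdb-uid region. One option, assuming 0.92 still honors hbase.hlog.split.skip.errors and that you can afford to lose the edits in that one unreadable log: set it in hbase-site.xml so the splitter moves the bad file aside (into the .corrupt directory) instead of failing the whole split, then restart the master. A minimal sketch:

  <!-- hbase-site.xml; trade-off: the edits in the corrupt WAL are skipped, not recovered -->
  <property>
    <name>hbase.hlog.split.skip.errors</name>
    <value>true</value>
  </property>

Alternatively, move the offending file out of the -splitting directory by hand (keeping a copy) and let the master retry the split.
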
Thanks

On Mon, Apr 2, 2012 at 7:07 AM, Toni Moreno <toni.moreno@gmail.com> wrote:

> When I try to count data rows, I get this output after a while:
>
> hbase(main):001:0> list
> TABLE
> tsdb
> tsdb-uid
> 2 row(s) in 0.7600 seconds
>
> hbase(main):002:0> count 'tsdb-uid'
>
> ERROR: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for tsdb-uid,,99999999999999 after 7 tries.
>
>
>
> 2012/4/2 Toni Moreno <toni.moreno@gmail.com>
>
> >
> > Hi guys.
> >
> > I have a working HBase 0.92.0 (with OpenTSDB 1.1.0). A problem happened
> > some days ago and now I cannot access my data; it looks like data
> > corruption in HBase.
> >
> > How can I fix this corruption with HBase tools/commands?
> >
> >
> >
> > The HBase log shows:
> >
> > 2012-04-02 14:06:12,379 INFO org.apache.hadoop.fs.FSInputChecker: Found checksum error: b[630, 630]=
> > org.apache.hadoop.fs.ChecksumException: Checksum error: file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423 at 3668992
> >         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:219)
> >         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
> >         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
> >         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
> >         at java.io.DataInputStream.read(DataInputStream.java:132)
> >         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> >         at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> >         at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> >         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1988)
> >         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1888)
> >         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1934)
> >         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:206)
> >         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:180)
> >         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:789)
> >         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:407)
> >         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2012-04-02 14:06:12,380 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/c332a6033e280b786219866513f45fe1/recovered.edits/0000000000000181211.temp
> > 2012-04-02 14:06:12,381 WARN org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Found existing old edits file. It could be the result of a previous failed split attempt. Deleting file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/c332a6033e280b786219866513f45fe1/recovered.edits/0000000000000181211, length=1837832
> > 2012-04-02 14:06:12,383 DEBUG org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/f989c6d3d2e9a385913300b72499c21e/recovered.edits/0000000000000181210.temp
> > 2012-04-02 14:06:12,383 WARN org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Found existing old edits file. It could be the result of a previous failed split attempt. Deleting file:/opt/hbase/data/splitlog/dwilyast02,48204,1333368305163_file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423/tsdb/f989c6d3d2e9a385913300b72499c21e/recovered.edits/0000000000000181210, length=1830526
> > 2012-04-02 14:06:12,386 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Processed 424 edits across 2 regions threw away edits for 0 regions; log file=file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423 is corrupted=false progress failed=false
> > 2012-04-02 14:06:12,386 WARN org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423 failed, returning error
> > org.apache.hadoop.fs.ChecksumException: Checksum error: file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting/dwilyast02%2C55897%2C1332401896263.1332650381423 at 3668992
> >         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.readChunk(ChecksumFileSystem.java:219)
> >         at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:237)
> >         at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
> >         at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
> >         at java.io.DataInputStream.read(DataInputStream.java:132)
> >         at java.io.DataInputStream.readFully(DataInputStream.java:178)
> >         at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> >         at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> >         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1988)
> >         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1888)
> >         at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1934)
> >         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:206)
> >         at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:180)
> >         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:789)
> >         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:407)
> >         at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFileToTemp(HLogSplitter.java:351)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:113)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:266)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:197)
> >         at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:165)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2012-04-02 14:06:12,399 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: successfully transitioned task /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423 to final state err
> > 2012-04-02 14:06:12,399 INFO org.apache.hadoop.hbase.regionserver.SplitLogWorker: worker dwilyast02,48204,1333368305163 done with task /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423 in 127ms
> > 2012-04-02 14:06:12,399 INFO org.apache.hadoop.hbase.master.SplitLogManager: task /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423 entered state err dwilyast02,48204,1333368305163
> > 2012-04-02 14:06:12,400 WARN org.apache.hadoop.hbase.master.SplitLogManager: Error splitting /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> > 2012-04-02 14:06:12,400 WARN org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs in [file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting, file:/opt/hbase/data/.logs/dwilyast02,64391,1332830608263-splitting] installed = 1 but only 0 done
> > 2012-04-02 14:06:12,400 WARN org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting of [dwilyast02,55897,1332401896263, dwilyast02,64391,1332830608263]
> > java.io.IOException: error or interrupt while splitting logs in [file:/opt/hbase/data/.logs/dwilyast02,55897,1332401896263-splitting, file:/opt/hbase/data/.logs/dwilyast02,64391,1332830608263-splitting] Task = installed = 1 done = 0 error = 1
> >         at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:268)
> >         at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:276)
> >         at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:216)
> >         at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:487)
> >         at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:326)
> >         at org.apache.hadoop.hbase.master.HMasterCommandLine$LocalHMaster.run(HMasterCommandLine.java:218)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2012-04-02 14:06:12,410 DEBUG org.apache.hadoop.hbase.master.SplitLogManager$DeleteAsyncCallback: deleted /hbase/splitlog/file%3A%2Fopt%2Fhbase%2Fdata%2F.logs%2Fdwilyast02%2C55897%2C1332401896263-splitting%2Fdwilyast02%252C55897%252C1332401896263.1332650381423
> > 2012-04-02 14:06:12,410 DEBUG org.apache.hadoop.hbase.regionserver.SplitLogWorker: tasks arrived or departed
> >
> > --
> >
> > Regards,
> >
> > Toni Moreno
> >
> > 699706656
> >
> >
> >
> > *If you would not be forgotten as soon as you are dead and rotten, *
> >
> > *either write things worth reading, or do things worth the writing.*
> >
> >
> >
> > *Benjamin Franklin*
> >
> >
>
>
> --
>
> Regards,
>
> Toni Moreno
>
> 699706656
>
>
>
> *If you would not be forgotten as soon as you are dead and rotten, *
>
> *either write things worth reading, or do things worth the writing.*
>
>
>
> *Benjamin Franklin*
>
