hbase-user mailing list archives

From: David Koch <ogd...@googlemail.com>
Subject: Re: How to remove all traces of a dropped table.
Date: Sun, 28 Apr 2013 18:24:25 GMT
Hello,

Thank you for your responses, JM and Kevin. I am pretty sure we deleted the ZK
info - however, I am not sure whether there was stale information in .META. I
have to check.
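
If it helps others: the check I have in mind is a scan of .META. from the
HBase shell for row keys that start with the table name (just a sketch - the
start row and LIMIT below are simply what I would try first):

  scan '.META.', {STARTROW => 'my_table,', LIMIT => 10}

If rows for the dropped table still show up, I would expect to either remove
them with deleteall '.META.', '<row key>' for each returned row, or let HBCK
rebuild .META. as Kevin described.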

/David


On Thu, Apr 25, 2013 at 3:55 PM, Kevin O'dell <kevin.odell@cloudera.com> wrote:

> David,
>
>   I have only seen this once before, and I actually had to drop the META
> table and rebuild it with HBCK. After that, the import worked. I am pretty
> sure I cleaned up ZK as well. It was very strange indeed. If you can
> reproduce this, can you open a JIRA, as this is no longer a one-off scenario?
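>
> In case it helps, what I ran was roughly along these lines (a sketch from
> memory, so treat the exact flags as assumptions - the set of -fix options
> depends on which hbck version you have):
>
>   # report inconsistencies only
>   hbase hbck
>
>   # offline rebuild of .META. from the region directories in HDFS
>   # (stop HBase first)
>   hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
>
>   # newer hbck builds also offer targeted fixes, e.g.
>   hbase hbck -fixMeta -fixAssignments
>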
> On Apr 25, 2013 9:28 AM, "Jean-Marc Spaggiari" <jean-marc@spaggiari.org>
> wrote:
>
> > Hi David,
> >
> > After you dropped your table, did you look into the ZK server to see
> > if all nodes related to this table got removed too?
> >
> > Also, have you tried running HBCK after the drop to see if your system is
> > fine?
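> >
> > For the ZK side, something like this should show whether any znodes for
> > the table are left behind (a sketch - the znode layout varies by version,
> > so take the paths below as assumptions for 0.92):
> >
> >   hbase zkcli
> >   ls /hbase
> >   ls /hbase/table
> >   ls /hbase/unassigned
> >
> > If a znode for the dropped table is still listed, it can be removed with
> > rmr, e.g. rmr /hbase/table/my_table (with the cluster quiesced).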
> >
> > JM
> >
> > 2013/4/16 David Koch <ogdude@googlemail.com>:
> > > Hello,
> > >
> > > We had problems with not being able to scan over a large (~8k regions)
> > > table so we disabled and dropped it and decided to re-import data from
> > > scratch into a table with the SAME name. This never worked and I list
> > > some log extracts below.
> > >
> > > The only way to make the import go through was to import into a table
> > > with a different name. Hence my question:
> > >
> > > How do I remove all traces of a table which was dropped? Our cluster
> > > consists of 30 machines, running CDH4.0.1 with HBase 0.92.1.
> > >
> > > Thank you,
> > >
> > > /David
> > >
> > > Log stuff:
> > >
> > > The Mapper job reads text and outputs Puts. A couple of minutes into
> > > the job, it fails with the following message in the task log:
> > >
> > > 2013-04-16 17:11:16,918 WARN
> > > org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> > > Encountered problems when prefetch META table:
> > > java.io.IOException: HRegionInfo was null or empty in Meta for my_table,
> > > row=my_table,\xC1\xE7T\x01a8OM\xB0\xCE/\x97\x88"\xB7y,99999999999999
> > >
> > > <repeat 9 times>
> > >
> > > 2013-04-16 17:11:16,924 INFO org.apache.hadoop.mapred.TaskLogsTruncater:
> > > Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> > > 2013-04-16 17:11:16,926 ERROR org.apache.hadoop.security.UserGroupInformation:
> > > PriviledgedActionException as:jenkins (auth:SIMPLE) cause:java.io.IOException:
> > > HRegionInfo was null or empty in .META.,
> > > row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22,
> > > my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
> > > 2013-04-16 17:11:16,926 WARN org.apache.hadoop.mapred.Child: Error running child
> > > java.io.IOException: HRegionInfo was null or empty in .META.,
> > > row=keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22,
> > > my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:957)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:1524)
> > >     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatch(HConnectionManager.java:1409)
> > >     at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:943)
> > >     at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:820)
> > >     at org.apache.hadoop.hbase.client.HTable.put(HTable.java:795)
> > >     at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:121)
> > >     at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:82)
> > >     at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:533)
> > >     at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:88)
> > >     at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:106)
> > >     at com.mycompany.data.tools.export.Export2HBase$JsonImporterMapper.map(Export2HBase.java:81)
> > >     at com.mycompany.data.tools.export.Export2HBase$JsonImporterMapper.map(Export2HBase.java:50)
> > >     at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:140)
> > >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
> > >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
> > >     at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
> > >     at java.security.AccessController.doPrivileged(Native Method)
> > >     at javax.security.auth.Subject.doAs(Subject.java:396)
> > >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
> > >     at org.apache.hadoop.mapred.Child.main(Child.java:264)
> > > 2013-04-16 17:11:16,929 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
> > >
> > > The master server log contains entries like this:
> > >
> > > WARN org.apache.hadoop.hbase.master.CatalogJanitor: REGIONINFO_QUALIFIER is
> > > empty in
> > > keyvalues={my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:server/1366035344492/Put/vlen=22,
> > > my_table,\xA4\xDC\x82\x84OAB\xC1\xBA\xE9\xE7\xA9\xE8\x81\x16\x09,1365996567593.50bb0cbde855cbdc4006051531dba162./info:serverstartcode/1366035344492/Put/vlen=8}
> > >
> > >
> > > We tried pre-splitting the table, same outcome. We deleted all Zookeeper
> > > info in /hbase using zkcli, no help.
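> > >
> > > (For the record, the zkcli deletion amounted to roughly the following,
> > > run with HBase stopped - the exact session is from memory:
> > >
> > >   hbase zkcli
> > >   rmr /hbase
> > >
> > > HBase recreates its base znodes when the master starts back up.)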
> >
>
