hbase-issues mailing list archives

From "Deepak Sharma (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
Date Wed, 09 Apr 2014 10:32:18 GMT

    [ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964003#comment-13964003 ]

Deepak Sharma commented on HBASE-10933:
---------------------------------------

We hit this issue only when the table has exactly one region.

If the table has more than one region, the tablesInfo sorted map still ends up with a
(tableName, modTInfo) entry for the table that has the HDFS orphan dir, because the
following loop keeps iterating over the other, normal regions:

for (HbckInfo hbi: hbckInfos) {
  if (hbi.getHdfsHRI() == null) {
    // was an orphan
    continue;
  }

  // get table name from hdfs, populate various HBaseFsck tables.
  String tableName = Bytes.toString(hbi.getTableName());
  if (tableName == null) {
    // There was an entry in META not in the HDFS?
    LOG.warn("tableName was null for: " + hbi);
    continue;
  }

  TableInfo modTInfo = tablesInfo.get(tableName);
  if (modTInfo == null) {
    // only executed once per table.
    modTInfo = new TableInfo(tableName);
    Path hbaseRoot = FSUtils.getRootDir(getConf());
    tablesInfo.put(tableName, modTInfo);
    try {
      HTableDescriptor htd =
          FSTableDescriptors.getTableDescriptor(hbaseRoot.getFileSystem(getConf()),
          hbaseRoot, tableName);
      modTInfo.htds.add(htd);
    } catch (IOException ioe) {
      if (!orphanTableDirs.containsKey(tableName)) {
        LOG.warn("Unable to read .tableinfo from " + hbaseRoot, ioe);
        // should only report once for each table
        errors.reportError(ERROR_CODE.NO_TABLEINFO_FILE,
            "Unable to read .tableinfo from " + hbaseRoot + "/" + tableName);
        Set<String> columns = new HashSet<String>();
        orphanTableDirs.put(tableName, getColumnFamilyList(columns, hbi));
      }
    }
  }
  if (!hbi.isSkipChecks()) {
    modTInfo.addRegionInfo(hbi);
  }
}
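
To make the failure mode concrete, here is a minimal, self-contained sketch (with
hypothetical stand-in names; this is not the actual HBaseFsck code) of why
adoptHdfsOrphan() then throws: the map lookup returns null and the very next
dereference NPEs, matching the stack trace quoted below.

import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanLookupDemo {
  // Hypothetical stand-in for HBaseFsck.TableInfo, for illustration only.
  static class TableInfo {
    final String tableName;
    TableInfo(String tableName) { this.tableName = tableName; }
    String getHTD() { return "descriptor-for-" + tableName; }
  }

  public static void main(String[] args) {
    SortedMap<String, TableInfo> tablesInfo = new TreeMap<String, TableInfo>();

    // loadHdfsRegionInfos() skipped the table's only region because it was
    // an orphan (getHdfsHRI() == null), so no entry was ever put in the map.

    // adoptHdfsOrphan() then does the equivalent of:
    TableInfo tableInfo = tablesInfo.get("TestHdfsOrphans1");

    // tableInfo is null here, so the next line throws NullPointerException,
    // like the trace at HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497).
    System.out.println(tableInfo.getHTD());
  }
}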

> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-10933
>                 URL: https://issues.apache.org/jira/browse/HBASE-10933
>             Project: HBase
>          Issue Type: Bug
>          Components: hbck
>    Affects Versions: 0.94.16
>            Reporter: Deepak Sharma
>            Assignee: Deepak Sharma
>            Priority: Critical
>
> If the .regioninfo file does not exist for an HBase region, then running hbck repair or hbck
> -fixHdfsOrphans does not resolve the problem; it throws a null pointer exception:
> 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470))
- Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
> 	at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
> 	at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
> 	at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
> 	at junit.framework.TestCase.runBare(TestCase.java:132)
> 	at junit.framework.TestResult$1.protect(TestResult.java:110)
> 	at junit.framework.TestResult.runProtected(TestResult.java:128)
> 	at junit.framework.TestResult.run(TestResult.java:113)
> 	at junit.framework.TestCase.run(TestCase.java:124)
> 	at junit.framework.TestSuite.runTest(TestSuite.java:243)
> 	at junit.framework.TestSuite.run(TestSuite.java:238)
> 	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> 	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> 	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> The problem occurs because in the HBaseFsck class, in
>  private void adoptHdfsOrphan(HbckInfo hi)
> we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo object:
> TableInfo tableInfo = tablesInfo.get(tableName);
> but in private SortedMap<String, TableInfo> loadHdfsRegionInfos() the loop
>  for (HbckInfo hbi: hbckInfos) {
>       if (hbi.getHdfsHRI() == null) {
>         // was an orphan
>         continue;
>       }
> skips orphan regions, so a table whose only region is an orphan is never added to
> SortedMap<String, TableInfo> tablesInfo, and the later lookup returns null, which
> causes the null pointer exception.
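
A possible direction for a fix (an illustrative sketch only, not the committed patch;
a real repair would also have to load the HTableDescriptor from the .tableinfo file,
as loadHdfsRegionInfos() does) is to guard the lookup and register the missing table
instead of dereferencing null:

import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanLookupGuardDemo {
  // Hypothetical stand-in for HBaseFsck.TableInfo, for illustration only.
  static class TableInfo {
    final String tableName;
    TableInfo(String tableName) { this.tableName = tableName; }
  }

  // Guarded lookup: create and register the missing entry instead of
  // returning null when the table's only region was an orphan.
  static TableInfo getOrCreateTableInfo(SortedMap<String, TableInfo> tablesInfo,
                                        String tableName) {
    TableInfo tableInfo = tablesInfo.get(tableName);
    if (tableInfo == null) {
      tableInfo = new TableInfo(tableName);
      tablesInfo.put(tableName, tableInfo);
    }
    return tableInfo;
  }

  public static void main(String[] args) {
    SortedMap<String, TableInfo> tablesInfo = new TreeMap<String, TableInfo>();
    TableInfo ti = getOrCreateTableInfo(tablesInfo, "TestHdfsOrphans1");
    System.out.println(ti.tableName);  // prints "TestHdfsOrphans1", no NPE
  }
}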



--
This message was sent by Atlassian JIRA
(v6.2#6252)
