hbase-issues mailing list archives

From "Jonathan Hsieh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
Date Mon, 16 Jun 2014 17:26:02 GMT

    [ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14032661#comment-14032661 ]

Jonathan Hsieh commented on HBASE-10933:
----------------------------------------

[~kashifjs@huawei.com], Here's a quick review:  Nice tests.  I have a few style suggestions/fixes.

This specific method isn't correct.  If we have a key "abc", the next key is actually "abc\x0",
not "abd".  Rename the method or fix the code, and please add a unit test to demonstrate the
expected behavior.
{code}
+
+  public static boolean nextKey(byte[] key, int offset, int length) {
+    if (length == 0) {
+      return false;
+    }
+    int i = offset + length - 1;
+    while (key[i] == -1) {
+      key[i] = 0;
+      i--;
+      if (i < offset) {
+        return false;
+      }
+    }
+    key[i] = (byte) (key[i] + 1);
+    return true;
+  }
+
{code}
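
For contrast, here is a minimal sketch of a true lexicographic successor (the helper name lexicographicNextKey is hypothetical, not part of the patch): appending a zero byte yields the smallest key strictly greater than the input, which is what the "abc\x0" example above refers to.
{code}
// Hypothetical helper, not patch code: the smallest byte[] strictly greater
// than 'key' in unsigned lexicographic order is 'key' with a trailing 0x00.
public static byte[] lexicographicNextKey(byte[] key) {
  byte[] next = new byte[key.length + 1];        // new arrays are zero-filled,
  System.arraycopy(key, 0, next, 0, key.length); // so next[key.length] is already 0x00
  return next;
}
{code}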

This is added but only used once in the code.  Why write all the generic code when you only
need the more specific version?  Please remove.

{code}
+  /**
+   * Increment the key to the next key
+   * @param key the key to increment
+   * @return a new byte array with the next key or null if the key could not be incremented
+   *         because it's already at its max value.
+   */
+  public static byte[] nextKey(byte[] key) {
+    byte[] nextStartRow = new byte[key.length];
+    System.arraycopy(key, 0, nextStartRow, 0, key.length);
+    if (!nextKey(nextStartRow, nextStartRow.length)) {
+      return null;
+    }
+    return nextStartRow;
+  }
+
+  /**
+   * Increment the key in-place to the next key
+   * @param key the key to increment
+   * @param length the length of the key
+   * @return true if the key can be incremented and false otherwise if the key is at its
+   *         max value.
+   */
+  public static boolean nextKey(byte[] key, int length) {
+    return nextKey(key, 0, length);
+  }
{code}
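
Along the lines of the unit test requested above, a rough sketch of what it could assert (JUnit 4 assumed; the enclosing class of nextKey isn't visible in the hunks, so the bare calls assume the test sits alongside it):
{code}
@Test
public void testNextKey() {
  // In-place increment of the last byte: "abc" becomes "abd" under the patch's semantics.
  byte[] key = new byte[] { 'a', 'b', 'c' };
  assertTrue(nextKey(key, key.length));
  assertArrayEquals(new byte[] { 'a', 'b', 'd' }, key);

  // Every byte already 0xFF: nothing left to increment, so false is returned.
  byte[] max = new byte[] { (byte) 0xFF, (byte) 0xFF };
  assertFalse(nextKey(max, max.length));
}
{code}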

Nice catch!
{code}
===================================================================
--- src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java	(revision 1596187)
+++ src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java	(working copy)
@@ -597,14 +597,14 @@
     FileSystem fs = p.getFileSystem(getConf());
     FileStatus[] dirs = fs.listStatus(p);
     if (dirs == null) {
-      LOG.warn("Attempt to adopt ophan hdfs region skipped becuase no files present in "
+
+      LOG.warn("Attempt to adopt orphan hdfs region skipped because no files present in "
+
           p + ". This dir could probably be deleted.");
       return ;
     }
 
     String tableName = Bytes.toString(hi.getTableName());
     TableInfo tableInfo = tablesInfo.get(tableName);
-    Preconditions.checkNotNull("Table " + tableName + "' not present!", tableInfo);
+    Preconditions.checkNotNull(tableInfo, "Table " + tableName + "' not present!");
     HTableDescriptor template = tableInfo.getHTD();
{code}
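
To spell out the catch: Guava's checkNotNull(T reference, Object errorMessage) null-checks only its first argument, so the original call checked the never-null message String and let a null tableInfo sail through.  A minimal illustration (not patch code):
{code}
TableInfo tableInfo = null;
// Swapped arguments: the String literal is the checked reference, so this
// never throws and the null tableInfo propagates until it blows up later.
Preconditions.checkNotNull("Table foo' not present!", tableInfo);
// Correct order: throws NullPointerException with that message right here.
Preconditions.checkNotNull(tableInfo, "Table foo' not present!");
{code}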

Style issue. Just pass in the list of split keys instead of having the boolean withSplits
arg?  Then if the splits are null or an empty list, it is the base case with one region (withSplits=false).
 Remove the boolean withSplits and add a rowkeys null/empty-list check instead.
{code}
  * @throws InterruptedException
    * @throws KeeperException
    */
-  HTable setupTable(String tablename) throws Exception {
+  HTable setupTable(String tablename, boolean withSplits, byte[][] rowkeys) throws Exception {
     HTableDescriptor desc = new HTableDescriptor(tablename);
     HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toString(FAM));
     desc.addFamily(hcd); // If a table has no CF's it doesn't get checked
-    TEST_UTIL.getHBaseAdmin().createTable(desc, SPLITS);
+    if (withSplits) {
+      TEST_UTIL.getHBaseAdmin().createTable(desc, SPLITS);
+    } else {
+      TEST_UTIL.getHBaseAdmin().createTable(desc);
+    }
     tbl = new HTable(TEST_UTIL.getConfiguration(), tablename);
 
     List<Put> puts = new ArrayList<Put>();
-    for (byte[] row : ROWKEYS) {
+    for (byte[] row : rowkeys) {
       Put p = new Put(row);
       p.add(FAM, Bytes.toBytes("val"), row);
       puts.add(p);
@@ -871,7 +882,7 @@
{code} 
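
A sketch of the suggested shape (the splitKeys parameter name and the trailing return are assumptions; the null/empty handling is the suggestion above, not code from the patch):
{code}
HTable setupTable(String tablename, byte[][] splitKeys, byte[][] rowkeys) throws Exception {
  HTableDescriptor desc = new HTableDescriptor(tablename);
  desc.addFamily(new HColumnDescriptor(Bytes.toString(FAM))); // no CFs means the table isn't checked
  if (splitKeys == null || splitKeys.length == 0) {
    TEST_UTIL.getHBaseAdmin().createTable(desc);              // base case: one region
  } else {
    TEST_UTIL.getHBaseAdmin().createTable(desc, splitKeys);   // pre-split table
  }
  tbl = new HTable(TEST_UTIL.getConfiguration(), tablename);
  // ... insert the rowkeys as in the patch above ...
  return tbl;
}
{code}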


> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-10933
>                 URL: https://issues.apache.org/jira/browse/HBASE-10933
>             Project: HBase
>          Issue Type: Bug
>          Components: hbck
>    Affects Versions: 0.94.16, 0.98.2
>            Reporter: Deepak Sharma
>            Assignee: Kashif J S
>            Priority: Critical
>             Fix For: 0.99.0, 0.94.21
>
>         Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt, TestResults-trunk.txt
>
>
> If the regioninfo file does not exist for an hbase region, then running hbck repair or
> hbck -fixHdfsOrphans is not able to resolve the problem; it throws a null pointer exception:
> {code}
> 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
> 	at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
> 	at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
> 	at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
> 	at junit.framework.TestCase.runBare(TestCase.java:132)
> 	at junit.framework.TestResult$1.protect(TestResult.java:110)
> 	at junit.framework.TestResult.runProtected(TestResult.java:128)
> 	at junit.framework.TestResult.run(TestResult.java:113)
> 	at junit.framework.TestCase.run(TestCase.java:124)
> 	at junit.framework.TestSuite.runTest(TestSuite.java:243)
> 	at junit.framework.TestSuite.run(TestSuite.java:238)
> 	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> 	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> 	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
> The problem I found occurs because in the HBaseFsck class, in
> {code}
>  private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo object:
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but  in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
> {code}
>  for (HbckInfo hbi: hbckInfos) {
>       if (hbi.getHdfsHRI() == null) {
>         // was an orphan
>         continue;
>       }
> {code}
> there is a check that skips orphan regions, so that table cannot be added to the SortedMap<String,
> TableInfo> tablesInfo, and later, when this entry is used, we get the null pointer exception.
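
For illustration only, a hypothetical guard in adoptHdfsOrphan that would report this condition instead of hitting the NPE (this is not the fix from the attached patches):
{code}
TableInfo tableInfo = tablesInfo.get(tableName);
if (tableInfo == null) {
  // loadHdfsRegionInfos() skipped the region as an orphan, so tablesInfo has
  // no entry for this table; warn and bail out instead of dereferencing null.
  LOG.warn("Table " + tableName + " not found in tablesInfo; skipping orphan adoption");
  return;
}
{code}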



--
This message was sent by Atlassian JIRA
(v6.2#6252)
