Message-ID: <974966539.1255934071732.JavaMail.jira@brutus>
Date: Sun, 18 Oct 2009 23:34:31 -0700 (PDT)
From: "Andrew Purtell (JIRA)"
To: hbase-dev@hadoop.apache.org
Subject: [jira] Commented: (HBASE-1916) FindBugs and javac warnings cleanup
In-Reply-To: <108158824.1255877551515.JavaMail.jira@brutus>
Mailing-List: contact hbase-dev-help@hadoop.apache.org; run by ezmlm
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

    [ https://issues.apache.org/jira/browse/HBASE-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12767224#action_12767224 ]

Andrew Purtell
commented on HBASE-1916:
---------------------------------------

Not all tests on branch succeed currently. See HBASE-1917. Holding off on committing this to branch until all tests are working again.

> FindBugs and javac warnings cleanup
> -----------------------------------
>
>                 Key: HBASE-1916
>                 URL: https://issues.apache.org/jira/browse/HBASE-1916
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>            Priority: Minor
>             Fix For: 0.20.2, 0.21.0
>
>         Attachments: HBASE-1916-branch-shouldfix.patch, omnibus-cleanup-branch.patch, omnibus-cleanup-trunk.patch
>
>
> FindBugs was run on the 0.20 branch code recently and produced ~400 warnings, which were presented to me in a meeting. So I installed the Eclipse FindBugs plugin, analyzed both trunk and 0.20 branch, and systematically went through and addressed the warnings. About a quarter of them were incorrect (spurious, pointless, etc.). Of the remainder, most were stylistic. A few were real bugs. Attached are big patches against trunk and branch which roll up all of the changes. They are too numerous to separate out and mostly do not warrant special attention.
> All tests pass with patches applied.
> There are a few real bugs fixed:
> Would-be null dereference (fall through from null test):
> {code}
> Index: src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
> ===================================================================
> @@ -1222,12 +1222,13 @@
>        // queue at time iterator was taken out. Apparently goes from oldest.
>        for (ToDoEntry e: this.toDo) {
>          HMsg msg = e.msg;
> -        if (msg == null) {
> +        if (msg != null) {
> +          if (msg.isType(HMsg.Type.MSG_REGION_OPEN)) {
> +            addProcessingMessage(msg.getRegionInfo());
> +          }
> +        } else {
>            LOG.warn("Message is empty: " + e);
>          }
> -        if (e.msg.isType(HMsg.Type.MSG_REGION_OPEN)) {
> -          addProcessingMessage(e.msg.getRegionInfo());
> -        }
>        }
>      }
> {code}
> Invalid attempt to remove scanner from string keyed map using an integer key instead of scanner name:
> {code}
> Index: src/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
> ===================================================================
> @@ -1900,7 +1902,8 @@
>          null: results.toArray(new Result[0]);
>      } catch (Throwable t) {
>        if (t instanceof NotServingRegionException) {
> -        this.scanners.remove(scannerId);
> +        String scannerName = String.valueOf(scannerId);
> +        this.scanners.remove(scannerName);
>        }
>        throw convertThrowableToIOE(cleanup(t));
>      }
> {code}
> Invalid equality test:
> {code}
> Index: src/java/org/apache/hadoop/hbase/util/FSUtils.java
> ===================================================================
> @@ -301,11 +301,15 @@
>        // should be regions. Then in each region, should only be family
>        // directories. Under each of these, should be one file only.
>        Path d = tableDirs[i].getPath();
> -      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) continue;
> +      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) {
> +        continue;
> +      }
>        FileStatus [] regionDirs = fs.listStatus(d, new DirFilter(fs));
>        for (int j = 0; j < regionDirs.length; j++) {
>          Path dd = regionDirs[j].getPath();
> -        if (dd.equals(HConstants.HREGION_COMPACTIONDIR_NAME)) continue;
> +        if (dd.getName().equals(HConstants.HREGION_COMPACTIONDIR_NAME)) {
> +          continue;
> +        }
>          // Else its a region name. Now look in region for families.
>          FileStatus [] familyDirs = fs.listStatus(dd, new DirFilter(fs));
>          for (int k = 0; k < familyDirs.length; k++) {
> @@ -360,11 +364,15 @@
>        // only be family directories. Under each of these, should be a mapfile
>        // and info directory and in these only one file.
>        Path d = tableDirs[i].getPath();
> -      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) continue;
> +      if (d.getName().equals(HConstants.HREGION_LOGDIR_NAME)) {
> +        continue;
> +      }
>        FileStatus [] regionDirs = fs.listStatus(d, new DirFilter(fs));
>        for (int j = 0; j < regionDirs.length; j++) {
>          Path dd = regionDirs[j].getPath();
> -        if (dd.equals(HConstants.HREGION_COMPACTIONDIR_NAME)) continue;
> +        if (dd.getName().equals(HConstants.HREGION_COMPACTIONDIR_NAME)) {
> +          continue;
> +        }
>          // Else its a region name. Now look in region for families.
>          FileStatus [] familyDirs = fs.listStatus(dd, new DirFilter(fs));
>          for (int k = 0; k < familyDirs.length; k++) {
> {code}
> Minor race:
> {code}
> Index: src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
> ===================================================================
> @@ -197,7 +197,7 @@
>       * Get this watcher's ZKW, instanciate it if necessary.
>       * @return ZKW
>       */
> -    public ZooKeeperWrapper getZooKeeperWrapper() throws IOException {
> +    public synchronized ZooKeeperWrapper getZooKeeperWrapper() throws IOException {
>        if(zooKeeperWrapper == null) {
>          zooKeeperWrapper = new ZooKeeperWrapper(conf, this);
>        }
> {code}
> Sleep within lock. Rewritten to sleep using lock instead. Also, notify when sleep is no longer necessary. Saves 20-30 seconds in TestZooKeeper:
> {code}
> Index: src/java/org/apache/hadoop/hbase/client/HConnectionManager.java
> ===================================================================
> @@ -323,6 +323,7 @@
>
>          if (tryMaster.isMasterRunning()) {
>            this.master = tryMaster;
> +          this.masterLock.notifyAll();
>            break;
>          }
>
> @@ -340,7 +341,7 @@
>
>          // Cannot connect to master or it is not running. Sleep & retry
>          try {
> -          Thread.sleep(getPauseTime(tries));
> +          this.masterLock.wait(getPauseTime(tries));
>          } catch (InterruptedException e) {
>            // continue
>          }
> {code}
> 'column' and 'newColumn' are not guaranteed to not be null along some code paths. Protect against possible null dereference and make the static tool happy.
> {code}
> Index: src/java/org/apache/hadoop/hbase/regionserver/WildcardColumnTracker.java
> ===================================================================
> @@ -165,74 +165,80 @@
>        this.newColumn = newColumns.get(newIndex);
>        return MatchCode.INCLUDE;
>      }
> -
> -
> -    // There are new and old, figure which to check first
> -    int ret = Bytes.compareTo(column.getBuffer(), column.getOffset(),
> +
> +    if (column != null && newColumn != null) {
> +      // There are new and old, figure which to check first
> +      int ret = Bytes.compareTo(column.getBuffer(), column.getOffset(),
>          column.getLength(), newColumn.getBuffer(), newColumn.getOffset(),
>          newColumn.getLength());
>
> -    // Old is smaller than new, compare against old
> -    if(ret <= -1) {
> -      ret = Bytes.compareTo(column.getBuffer(), column.getOffset(),
> +      // Old is smaller than new, compare against old
> +      if(ret <= -1) {
> +        ret = Bytes.compareTo(column.getBuffer(), column.getOffset(),
>          column.getLength(), bytes, offset, length);
>
> +        // Same column
> +        if(ret == 0) {
> +          if(column.increment() > this.maxVersions) {
> +            return MatchCode.SKIP;
> +          }
> +          return MatchCode.INCLUDE;
> +        }
> +
> +        // Specified column is bigger than current column
> +        // Move down current column and check again
> +        if(ret <= -1) {
> +          if(++index == columns.size()) {
> +            this.column = null;
> +          } else {
> +            this.column = columns.get(index);
> +          }
> +          return checkColumn(bytes, offset, length);
> +        }
> +
> +        // ret >= 1
> +        // Specified column is smaller than current column
> +        // Nothing to match against, add to new and include
> +        newColumns.add(new ColumnCount(bytes, offset, length, 1));
> +        return MatchCode.INCLUDE;
> +      }
> +    }
> +
> +    if (newColumn != null) {
> +      // Cannot be equal, so ret >= 1
> +      // New is smaller than old, compare against new
> +      int ret = Bytes.compareTo(newColumn.getBuffer(), newColumn.getOffset(),
> +        newColumn.getLength(), bytes, offset, length);
> +
>        // Same column
>        if(ret == 0) {
> -        if(column.increment() > this.maxVersions) {
> +        if(newColumn.increment() > this.maxVersions) {
>            return MatchCode.SKIP;
>          }
>          return MatchCode.INCLUDE;
>        }
> -
> +
>        // Specified column is bigger than current column
>        // Move down current column and check again
>        if(ret <= -1) {
> -        if(++index == columns.size()) {
> -          this.column = null;
> +        if(++newIndex == newColumns.size()) {
> +          this.newColumn = null;
>          } else {
> -          this.column = columns.get(index);
> +          this.newColumn = newColumns.get(newIndex);
>          }
>          return checkColumn(bytes, offset, length);
>        }
> -
> +
>        // ret >= 1
>        // Specified column is smaller than current column
>        // Nothing to match against, add to new and include
>        newColumns.add(new ColumnCount(bytes, offset, length, 1));
>        return MatchCode.INCLUDE;
>      }
> -
> -    // Cannot be equal, so ret >= 1
> -    // New is smaller than old, compare against new
> -
> -    ret = Bytes.compareTo(newColumn.getBuffer(), newColumn.getOffset(),
> -      newColumn.getLength(), bytes, offset, length);
> -
> -    // Same column
> -    if(ret == 0) {
> -      if(newColumn.increment() > this.maxVersions) {
> -        return MatchCode.SKIP;
> -      }
> -      return MatchCode.INCLUDE;
> -    }
> -
> -    // Specified column is bigger than current column
> -    // Move down current column and check again
> -    if(ret <= -1) {
> -      if(++newIndex == newColumns.size()) {
> -        this.newColumn = null;
> -      } else {
> -        this.newColumn = newColumns.get(newIndex);
> -      }
> -      return checkColumn(bytes, offset, length);
> -    }
> -
> -    // ret >= 1
> -    // Specified column is smaller than current column
> -    // Nothing to match against, add to new and include
> +
> +    // No match happened, add to new and include
>      newColumns.add(new ColumnCount(bytes, offset, length, 1));
> -    return MatchCode.INCLUDE;
> +    return MatchCode.INCLUDE;
>    }
> {code}

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
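A general note on the scanner-map fix: `Map.remove(Object)` is not generic in the key type, so passing a numeric id where a `String` key is expected compiles cleanly and silently removes nothing. This is exactly the FindBugs pattern behind that change. A minimal, self-contained sketch of the pitfall (the names are illustrative, not HBase code):

```java
import java.util.HashMap;
import java.util.Map;

public class MapKeyTypeDemo {
    // Map size after attempting removal with a wrong-typed (long) key.
    static int removeWithLongKey() {
        Map<String, Object> scanners = new HashMap<>();
        long scannerId = 42L;
        scanners.put(String.valueOf(scannerId), new Object());
        // Compiles: remove(Object) accepts the autoboxed Long, but no
        // String key ever equals a Long, so this is a silent no-op.
        scanners.remove(scannerId);
        return scanners.size();
    }

    // Map size after removal with the key converted to the map's key type.
    static int removeWithStringKey() {
        Map<String, Object> scanners = new HashMap<>();
        long scannerId = 42L;
        scanners.put(String.valueOf(scannerId), new Object());
        scanners.remove(String.valueOf(scannerId));
        return scanners.size();
    }

    public static void main(String[] args) {
        System.out.println("after wrong-typed remove: " + removeWithLongKey());
        System.out.println("after string-keyed remove: " + removeWithStringKey());
    }
}
```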
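The "invalid equality test" in FSUtils is the same shape of bug: `equals()` between unrelated types (here a `Path` and a `String` constant) can never be true, so the guarded `continue` was dead code. A sketch of the pattern using `java.io.File` standing in for Hadoop's `Path` (illustrative only):

```java
import java.io.File;

public class EqualsTypeDemo {
    public static void main(String[] args) {
        File dd = new File("/hbase/table/compaction.dir");
        // Bug pattern: equals() across unrelated types is always false,
        // so a check like this silently never matches.
        System.out.println(dd.equals("compaction.dir"));
        // Fix pattern: extract the name component and compare String to String.
        System.out.println(dd.getName().equals("compaction.dir"));
    }
}
```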
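The masterLock change is worth a word too: `Object.wait(timeout)` releases the monitor while pausing and returns as soon as `notifyAll` fires, whereas `Thread.sleep` inside a `synchronized` block holds the lock for the whole pause and can never be woken early. That is where the 20-30 second saving in TestZooKeeper comes from. A minimal sketch of the pattern (class and method names made up for illustration):

```java
public class WaitVsSleepDemo {
    private final Object masterLock = new Object();
    private boolean masterFound = false;

    // Retry pause: waiting on the lock lets a notifyAll end it early,
    // where Thread.sleep would hold the monitor and burn the full timeout.
    long waitForMaster(long pauseMillis) throws InterruptedException {
        long start = System.currentTimeMillis();
        synchronized (masterLock) {
            while (!masterFound) {
                masterLock.wait(pauseMillis);
            }
        }
        return System.currentTimeMillis() - start;
    }

    void masterUp() {
        synchronized (masterLock) {
            masterFound = true;
            masterLock.notifyAll();  // wake any waiter immediately
        }
    }

    public static void main(String[] args) throws Exception {
        WaitVsSleepDemo demo = new WaitVsSleepDemo();
        Thread waiter = new Thread(() -> {
            try {
                long waited = demo.waitForMaster(30_000);
                System.out.println("woke after ~" + waited + " ms, not 30000");
            } catch (InterruptedException ignored) {
            }
        });
        waiter.start();
        Thread.sleep(200);  // simulate the master appearing shortly after
        demo.masterUp();
        waiter.join(5_000);
    }
}
```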