Date: Fri, 16 Jan 2015 08:11:35 +0000 (UTC)
From: "Walter Su (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Updated] (HDFS-7633) When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException

     [ https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Walter Su updated HDFS-7633:
----------------------------
    Status: Open  (was: Patch Available)

> When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7633
>                 URL: https://issues.apache.org/jira/browse/HDFS-7633
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Walter Su
>            Assignee: Walter Su
>            Priority: Minor
>
> issue:
> When the total number of blocks on one of my DataNodes reaches 33554432, the DataNode refuses to accept more blocks and logs the following error:
> 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client at /172.1.1.8:50490 [Receiving block BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] | datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation  src: /172.1.1.8:50490 dst: /172.1.1.11:25009 | org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> java.lang.IllegalArgumentException: n must be positive
>         at java.util.Random.nextInt(Random.java:300)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
>         at java.lang.Thread.run(Thread.java:745)
>
> analysis:
> In org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(), when blockMap.size() is too big:
> Math.max(blockMap.size(), 1) * 600 is evaluated in int arithmetic and overflows to a negative value.
> Math.max(blockMap.size(), 1) * 600 * 1000L is then widened to long, but the value is already negative.
> (int) period is Integer.MIN_VALUE.
> Math.abs((int) period) is still Integer.MIN_VALUE, which is negative.
> DFSUtil.getRandom().nextInt(periodInt) therefore throws IllegalArgumentException.
> I use Java HotSpot (build 1.7.0_05-b05).

(A standalone sketch of the overflow chain above follows the message footer.)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
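Below is a minimal, standalone sketch of the overflow chain described in the analysis. It is not the BlockPoolSliceScanner source; the block count of 33554432 comes from the issue report, and the 600 * 1000L arithmetic and the names period/periodInt mirror the expressions quoted in the analysis.

import java.util.Random;

public class Hdfs7633OverflowDemo {
    public static void main(String[] args) {
        int blockCount = 33554432;  // block count reported in the issue

        // int * int overflows before the widening to long ever happens
        int intProduct = Math.max(blockCount, 1) * 600;        // -1342177280
        long period = Math.max(blockCount, 1) * 600 * 1000L;   // -1342177280000

        int periodInt = (int) period;        // narrows to Integer.MIN_VALUE
        int absPeriod = Math.abs(periodInt); // Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE

        System.out.println("int product  : " + intProduct);   // -1342177280
        System.out.println("long period  : " + period);       // -1342177280000
        System.out.println("(int) period : " + periodInt);    // -2147483648
        System.out.println("abs(period)  : " + absPeriod);    // -2147483648

        try {
            new Random().nextInt(absPeriod);
        } catch (IllegalArgumentException e) {
            // JDK 7 reports "n must be positive", matching the stack trace above
            System.out.println("nextInt failed: " + e.getMessage());
        }
    }
}

Running this on JDK 7 prints the negative intermediate values and then the same "n must be positive" message seen in the DataNode log. For this particular block count the truncated period is exactly Integer.MIN_VALUE, so Math.abs cannot turn it positive and Random.nextInt rejects it.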