hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7633) When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
Date Fri, 16 Jan 2015 08:20:35 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14279964#comment-14279964 ]

Hadoop QA commented on HDFS-7633:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12692716/h7633_20150116.patch
  against trunk revision 5d1cca3.

    {color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9246//console

This message is automatically generated.

> When Datanode has too many blocks, BlockPoolSliceScanner.getNewBlockScanTime throws IllegalArgumentException
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7633
>                 URL: https://issues.apache.org/jira/browse/HDFS-7633
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Walter Su
>            Assignee: Walter Su
>            Priority: Minor
>         Attachments: h7633_20150116.patch
>
>
> issue:
> When the total number of blocks on one of my DNs reaches 33554432, it refuses to accept more blocks; this is the error:
> 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client  at /172.1.1.8:50490 [Receiving block BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] | datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation  src: /172.1.1.8:50490 dst: /172.1.1.11:25009 | org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> java.lang.IllegalArgumentException: n must be positive
>         at java.util.Random.nextInt(Random.java:300)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
>         at java.lang.Thread.run(Thread.java:745)
> analysis:
> In org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(),
> when blockMap.size() is too large:
> Math.max(blockMap.size(), 1) * 600 is evaluated in int arithmetic (33554432 * 600 = 20,132,659,200 overflows a 32-bit int) and becomes negative;
> Math.max(blockMap.size(), 1) * 600 * 1000L is only then widened to long, so period is a negative long;
> (int) period is Integer.MIN_VALUE;
> Math.abs((int) period) is still Integer.MIN_VALUE, which is negative;
> DFSUtil.getRandom().nextInt(periodInt) therefore throws IllegalArgumentException.
> I use Java HotSpot (build 1.7.0_05-b05).
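
The overflow chain in the analysis above can be reproduced standalone. The following is a minimal sketch, not the actual BlockPoolSliceScanner source: the class name and variable names are invented for illustration, while the 600 and 1000L factors follow the expression quoted in the analysis.

{code}
import java.util.Random;

// Minimal reproduction of the int-overflow chain described in the analysis
// (illustrative only; not the actual BlockPoolSliceScanner code).
public class ScanTimeOverflowRepro {
    public static void main(String[] args) {
        int blockMapSize = 33554432; // 2^25 blocks, as reported in the issue

        // 33554432 * 600 is evaluated in 32-bit int arithmetic and overflows
        // to a negative value BEFORE the widening multiplication by 1000L:
        long period = Math.max(blockMapSize, 1) * 600 * 1000L;
        System.out.println("period    = " + period);      // negative long

        // (int) period happens to be Integer.MIN_VALUE here, and
        // Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE:
        int periodInt = Math.abs((int) period);
        System.out.println("periodInt = " + periodInt);   // Integer.MIN_VALUE

        // Throws java.lang.IllegalArgumentException ("n must be positive" on Java 7):
        new Random().nextInt(periodInt);
    }
}
{code}

Making the first factor a long, e.g. Math.max(blockMapSize, 1) * 600L * 1000L, keeps the whole expression in long arithmetic and avoids the overflow; whether h7633_20150116.patch takes this approach cannot be confirmed from this message.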



