hadoop-hdfs-issues mailing list archives

From "Yong Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (HDFS-7633) BlockPoolSliceScanner fails when Datanode has too many blocks
Date Mon, 25 May 2015 03:36:18 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yong Zhang reassigned HDFS-7633:
--------------------------------

    Assignee: Yong Zhang  (was: Walter Su)

> BlockPoolSliceScanner fails when Datanode has too many blocks
> -------------------------------------------------------------
>
>                 Key: HDFS-7633
>                 URL: https://issues.apache.org/jira/browse/HDFS-7633
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Walter Su
>            Assignee: Yong Zhang
>            Priority: Minor
>             Fix For: 2.6.1
>
>         Attachments: HDFS-7633.patch
>
>
> issue:
> When the total block count on one of my DNs reaches 33554432, it refuses to accept more blocks. This is the error:
> 2015-01-16 15:21:44,571 | ERROR | DataXceiver for client  at /172.1.1.8:50490 [Receiving block BP-1976278848-172.1.1.2-1419846518085:blk_1221043436_147936990] | datasight-198:25009:DataXceiver error processing WRITE_BLOCK operation  src: /172.1.1.8:50490 dst: /172.1.1.11:25009 | org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:250)
> java.lang.IllegalArgumentException: n must be positive
>         at java.util.Random.nextInt(Random.java:300)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(BlockPoolSliceScanner.java:263)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.addBlock(BlockPoolSliceScanner.java:276)
>         at org.apache.hadoop.hdfs.server.datanode.DataBlockScanner.addBlock(DataBlockScanner.java:193)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.closeBlock(DataNode.java:1733)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:765)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
>         at java.lang.Thread.run(Thread.java:745)
> analysis:
> In org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.getNewBlockScanTime(),
> when blockMap.size() is too big:
> Math.max(blockMap.size(),1) * 600 is evaluated in int arithmetic, overflows, and becomes negative
> Math.max(blockMap.size(),1) * 600 * 1000L is then widened to long and stays negative
> (int)period is Integer.MIN_VALUE
> Math.abs((int)period) is still Integer.MIN_VALUE, which is negative (Integer.MIN_VALUE has no positive counterpart in two's complement)
> DFSUtil.getRandom().nextInt(periodInt) therefore throws IllegalArgumentException
> I am using Java HotSpot (build 1.7.0_05-b05)
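
For reference, here is a minimal, self-contained Java sketch of the overflow chain
described in the analysis above. It is not the actual BlockPoolSliceScanner source;
the block count and the 600 * 1000L constants simply mirror the quoted report:

    import java.util.Random;

    public class ScanTimeOverflowDemo {
        public static void main(String[] args) {
            int blockCount = 33554432; // block count from the report (2^25)

            // int * int overflows first: 33554432 * 600 = 20132659200, which
            // wraps to -1342177280 as an int before being widened by * 1000L
            long period = Math.max(blockCount, 1) * 600 * 1000L;
            System.out.println(period); // -1342177280000, still negative as a long

            // (int)period keeps only the low 32 bits: exactly Integer.MIN_VALUE here,
            // and Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE, because
            // two's complement has no positive counterpart for it
            int periodInt = Math.abs((int) period);
            System.out.println(periodInt); // -2147483648

            // Random.nextInt(n) requires n > 0, so this throws
            // java.lang.IllegalArgumentException: n must be positive
            new Random().nextInt(periodInt);
        }
    }

A fix along the lines of forcing long arithmetic from the start (e.g. 600L) and
guarding against a non-positive nextInt argument would avoid the exception; whether
the attached HDFS-7633.patch takes exactly that approach is not shown in this message.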



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
