hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7868) Use proper blocksize to choose target for blocks
Date Thu, 05 Mar 2015 10:11:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348524#comment-14348524 ]

Hadoop QA commented on HDFS-7868:

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment against trunk revision 5e9b814.

    {color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9746//console

This message is automatically generated.

> Use proper blocksize to choose target for blocks
> ------------------------------------------------
>                 Key: HDFS-7868
>                 URL: https://issues.apache.org/jira/browse/HDFS-7868
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: zhouyingchao
>            Assignee: zhouyingchao
>         Attachments: HDFS-7868-001.patch, HDFS-7868-002.patch
> In BlockPlacementPolicyDefault.java:isGoodTarget, the passed-in blockSize is used to determine whether there is enough room for a new block on a data node. However, in two conditions the blockSize might not be appropriate for this purpose: (a) the passed-in block size is just the size of the last block of a file, which might be very small (e.g., when called from BlockManager.ReplicationWork.chooseTargets); (b) the file might have been created with a smaller blocksize than the default.
> In these conditions, the calculated scheduledSize might be smaller than the actual value, which might ultimately lead to subsequent write or replication failures.
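The underestimation described above can be sketched as follows. This is a simplified illustration of the space check, not the actual Hadoop source; the method shape, the `remaining`/`scheduledBlocks` parameters, and the constants are hypothetical stand-ins for the real fields consulted by isGoodTarget:

```java
// Simplified sketch of the space check in
// BlockPlacementPolicyDefault.isGoodTarget (illustrative, not Hadoop code).
public class IsGoodTargetSketch {
    static final long DEFAULT_BLOCK_SIZE = 128L * 1024 * 1024; // 128 MB

    // The node has 'remaining' free bytes and 'scheduledBlocks' writes
    // already scheduled against it. Each scheduled write is assumed to
    // eventually occupy one full 'blockSize' worth of space.
    static boolean isGoodTarget(long remaining, int scheduledBlocks, long blockSize) {
        long scheduledSize = blockSize * scheduledBlocks;
        return remaining - scheduledSize >= blockSize;
    }

    public static void main(String[] args) {
        long remaining = 200L * 1024 * 1024; // node has 200 MB free
        int scheduled = 1;                   // one write already scheduled

        // If the caller passes the last block's size (say 1 MB), the
        // scheduledSize is computed as 1 MB and the node looks fine,
        // even though the scheduled write may consume a full block.
        System.out.println(isGoodTarget(remaining, scheduled, 1L * 1024 * 1024));   // true

        // With the file's real block size, the same node is rejected.
        System.out.println(isGoodTarget(remaining, scheduled, DEFAULT_BLOCK_SIZE)); // false
    }
}
```

The point of the patch, as the description suggests, is to make the check use the block size that scheduled writes will actually occupy rather than whatever size the caller happens to pass in.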

This message was sent by Atlassian JIRA
