Date: Wed, 20 Aug 2014 23:35:31 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-6758) block writer should pass the expected block size to DataXceiverServer

[ https://issues.apache.org/jira/browse/HDFS-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14104796#comment-14104796 ]

Hadoop QA commented on HDFS-6758:
---------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12663155/HDFS-6758.02.patch
  against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.

    {color:green}+1 javac{color}.
The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}. There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:red}-1 release audit{color}. The applied patch generated 3 release audit warnings.

    {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS
                  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
                  org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
                  org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport

    {color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/7698//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/7698//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7698//console

This message is automatically generated.

> block writer should pass the expected block size to DataXceiverServer
> ---------------------------------------------------------------------
>
>                 Key: HDFS-6758
>                 URL: https://issues.apache.org/jira/browse/HDFS-6758
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, hdfs-client
>    Affects Versions: 2.4.1
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>        Attachments: HDFS-6758.01.patch, HDFS-6758.02.patch
>
>
> DataXceiver initializes the block size to the default block size for the cluster. This size is later used by FsDatasetImpl when applying the VolumeChoosingPolicy.
> {code}
> block.setNumBytes(dataXceiverServer.estimateBlockSize);
> {code}
> where
> {code}
>   /**
>    * We need an estimate for block size to check if the disk partition has
>    * enough space. For now we set it to be the default block size set
>    * in the server side configuration, which is not ideal because the
>    * default block size should be a client-side configuration.
>    * A better solution is to include in the header the estimated block size,
>    * i.e. either the actual block size or the default block size.
>    */
>   final long estimateBlockSize;
> {code}
> In most cases the writer can simply pass the maximum expected block size to the DN instead of having to use the cluster default.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
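To see why the estimate matters, here is a minimal, hypothetical Java sketch (not actual Hadoop code; the `Volume` class and `chooseVolume` method are invented for illustration). A free-space check along the lines of what a VolumeChoosingPolicy performs must skip any volume with less available space than the expected block size, so an inflated cluster-default estimate can reject a volume that has plenty of room for the block actually being written.

```java
import java.util.Arrays;
import java.util.List;

public class ChooseVolumeDemo {

    /** Hypothetical stand-in for a DataNode storage volume. */
    static class Volume {
        final String name;
        final long availableBytes;
        Volume(String name, long availableBytes) {
            this.name = name;
            this.availableBytes = availableBytes;
        }
    }

    /**
     * Return the first volume with room for the expected block, or null.
     * This mirrors the kind of capacity check a volume-choosing policy
     * makes using the block-size estimate.
     */
    static Volume chooseVolume(List<Volume> volumes, long expectedBlockSize) {
        for (Volume v : volumes) {
            if (v.availableBytes >= expectedBlockSize) {
                return v;
            }
        }
        return null; // no volume can hold a block of the expected size
    }

    public static void main(String[] args) {
        long defaultBlockSize = 128L * 1024 * 1024; // cluster default: 128 MB
        long actualBlockSize  =   4L * 1024 * 1024; // writer's real block: 4 MB
        List<Volume> volumes = Arrays.asList(new Volume("disk0", 16L * 1024 * 1024));

        // With the server-side default, the 16 MB volume is wrongly rejected.
        System.out.println(chooseVolume(volumes, defaultBlockSize) == null);
        // With a writer-supplied expected size, the same volume is accepted.
        System.out.println(chooseVolume(volumes, actualBlockSize).name);
    }
}
```

This is exactly the situation the patch targets: once the writer passes its expected block size in the write header, the check above operates on the real number instead of the cluster default.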