Date: Mon, 6 Jun 2011 22:53:00 +0000 (UTC)
From: "Hadoop QA (JIRA)"
To: hdfs-issues@hadoop.apache.org
Message-ID: <1913470562.2348.1307400780345.JavaMail.tomcat@hel.zones.apache.org>
In-Reply-To: <493746506.18997.1305637249918.JavaMail.tomcat@hel.zones.apache.org>
Subject: [jira] [Commented] (HDFS-1950) Blocks that are under construction are not getting read if the blocks are more than 10. Only complete blocks are read properly.

    [ https://issues.apache.org/jira/browse/HDFS-1950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045190#comment-13045190 ]

Hadoop QA commented on HDFS-1950:
---------------------------------

-1 overall. Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12481631/hdfs-1950-trunk-test.txt
  against trunk revision 1132779.

    +1 @author. The patch does not contain any @author tags.
    +1 tests included. The patch appears to include 2 new or modified tests.
    +1 javadoc. The javadoc tool did not generate any warning messages.
    +1 javac. The applied patch does not increase the total number of javac compiler warnings.
    +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
    +1 release audit. The applied patch does not increase the total number of release audit warnings.
    -1 core tests. The patch failed these core unit tests:
                   org.apache.hadoop.cli.TestHDFSCLI
    +1 contrib tests. The patch passed contrib unit tests.
    +1 system test framework. The patch passed system test framework compile.

Test results: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/729//testReport/
Findbugs warnings: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/729//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/729//console

This message is automatically generated.
> Blocks that are under construction are not getting read if the blocks are more than 10. Only complete blocks are read properly.
> --------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1950
>                 URL: https://issues.apache.org/jira/browse/HDFS-1950
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client, name-node
>    Affects Versions: 0.20-append
>            Reporter: ramkrishna.s.vasudevan
>             Fix For: 0.20-append
>
>         Attachments: HDFS-1950-2.patch, hdfs-1950-0.20-append-tests.txt, hdfs-1950-trunk-test.txt, hdfs-1950-trunk-test.txt
>
>
> Before getting to the root cause, let's look at the read behavior for a file with more than 10 blocks in the append case.
>
> Logic:
> ====
> The DFSInputStream has a prefetch size, dfs.read.prefetch.size, which defaults to 10 blocks' worth of data.
> This prefetch size is the number of blocks whose locations the client fetches from the namenode at a time while reading a file.
> For example, assume a file X with 22 blocks resides in HDFS.
> The reader first fetches the first 10 block locations from the namenode and starts reading.
> It then fetches the next 10 block locations from the NN and continues reading.
> Finally, it fetches the remaining 2 block locations from the NN and completes the read.
> (A sketch of this batched fetch loop appears after this message.)
>
> Cause:
> =======
> Now let's look at the cause of the issue.
> The failing scenario is: "A writer wrote 10+ full blocks plus a partial block and called sync. A reader trying to read the file does not get the last partial block."
> The client first gets 10 block locations from the NN. It then checks whether the file is under construction; if so, it gets the size of the last partial block from a datanode and can read the complete file.
> However, when the file has more than 10 blocks, the last block is not in the first fetch; it arrives in a later one (the (number of blocks / 10)th fetch).
> The problem is that the DFSClient has no logic to get the size of the last partial block for any fetch other than the first, so the reader cannot read all of the data that was synced. (See the second sketch after this message.)
> Also, the InputStream.available API iterates using the size obtained from the first fetch; ideally this size should be increased as later fetches arrive.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
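[Editor's note] To make the Logic section above concrete, here is a minimal Java sketch (not the actual DFSClient code) of a client paging through a file's block locations in prefetch-sized batches. getBlockLocations(src, offset, length) matches the real ClientProtocol RPC; the surrounding class, its names, and the read loop are illustrative assumptions. For the 22-block file X from the description, the loop performs three fetches of 10 + 10 + 2 blocks.

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

/** Illustrative only: pages through block locations in prefetch-sized batches. */
public class BatchedBlockLocationReader {
  private final ClientProtocol namenode; // NN RPC proxy (assumed already created)
  private final long prefetchSize;       // e.g. 10 * block size, in bytes

  public BatchedBlockLocationReader(ClientProtocol namenode, long prefetchSize) {
    this.namenode = namenode;
    this.prefetchSize = prefetchSize;
  }

  public void readAll(String src, long fileLength) throws IOException {
    long offset = 0;
    while (offset < fileLength) {
      // Each RPC returns only the blocks covering [offset, offset + prefetchSize).
      LocatedBlocks batch = namenode.getBlockLocations(src, offset, prefetchSize);
      List<LocatedBlock> blocks = batch.getLocatedBlocks();
      if (blocks.isEmpty()) {
        break; // no more locations available
      }
      for (LocatedBlock blk : blocks) {
        // ... read blk's bytes from one of its datanodes here ...
        offset = blk.getStartOffset() + blk.getBlockSize();
      }
    }
  }
}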
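And a hedged sketch of the check the Cause section says is missing for fetches after the first: whenever a batch of locations for an under-construction file contains the file's last block, the client must ask a datanode for that block's current length rather than trusting the namenode's stale size. getVisibleLengthFromDatanode below is a hypothetical stand-in for that client/DN round-trip, not a real HDFS API.

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

/** Illustrative only: the per-fetch last-block-length check the report implies. */
public class LastBlockLengthCheck {

  /**
   * If this batch belongs to a file under construction and contains the
   * file's last block, the NN's size for that block is stale; ask a DN.
   * HDFS-1950 is exactly that this branch effectively runs only for the
   * first fetch, so later batches report a stale size.
   */
  static long lastBlockLength(LocatedBlocks batch, boolean batchHasLastBlock)
      throws IOException {
    int n = batch.getLocatedBlocks().size();
    LocatedBlock last = batch.getLocatedBlocks().get(n - 1);
    if (batch.isUnderConstruction() && batchHasLastBlock) {
      // Must happen on EVERY fetch that can contain the last block,
      // not just the first one.
      return getVisibleLengthFromDatanode(last);
    }
    return last.getBlockSize();
  }

  // Hypothetical stand-in for the round-trip that reads the replica's
  // visible length from a datanode over the data transfer protocol.
  static long getVisibleLengthFromDatanode(LocatedBlock blk) throws IOException {
    throw new UnsupportedOperationException("illustrative only");
  }
}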