Date: Sat, 22 Feb 2014 12:54:19 +0000 (UTC)
From: "haosdent (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-5996) hadoop 1.1.2. hdfs write bug

    [ https://issues.apache.org/jira/browse/HDFS-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909359#comment-13909359 ]

haosdent commented on HDFS-5996:
--------------------------------

https://issues.apache.org/jira/browse/HDFS-744

> hadoop 1.1.2. hdfs write bug
> -----------------------------
>
>                 Key: HDFS-5996
>                 URL: https://issues.apache.org/jira/browse/HDFS-5996
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs
>    Affects Versions: 1.1.2
>         Environment: one master and three slaves, all of them healthy
>            Reporter: wangmeng
>             Fix For: 1.1.2
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> I am a student from China; my research topic is Hive data storage on Hadoop. There is an HDFS write bug when I run the SQL statement: insert overwrite table wangmeng select * from testTable. (This SQL is translated into N map-only jobs, each map corresponding to one HDFS file written to disk.) No matter what value N takes, there are always some DFSDataOutputStream buffers that never reach disk. For example, with N=160 files, about 5 files fail to be written. Each failed HDFS file is always 0 bytes on disk, rather than some size between 0 and the correct size. No exception is thrown, and the HDFS bytes-written statistics are completely correct.
> When I debug, I find that the failed DFS output buffers hold exactly the right data, but the buffers are never written to disk even though I call DFSDataOutputStream.flush() and DFSDataOutputStream.close().
> I cannot find the reason these DFS buffers fail to be written. For now I avoid the problem with a temporary file: when a DFS buffer is destined for its final location FINAL, I first write it to a temporary file TEM, and then move the data from TEM to the destination simply by renaming the HDFS path (see the sketch after this message). This method avoids the write failures. Now I want to fix the problem at its root: how should I patch the code, and is there anything I can do? Many thanks.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
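
For reference, a minimal sketch of the temporary-file workaround described in the report, assuming the Hadoop 1.x FileSystem API. The class name TempFileWriteWorkaround, the method writeAtomically, and the paths in main() are hypothetical illustrations, not taken from the reporter's job.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TempFileWriteWorkaround {

        // Write data to a temporary sibling path, then rename it into place.
        // The final path is therefore either absent or a fully written file.
        public static void writeAtomically(FileSystem fs, Path finalPath, byte[] data)
                throws IOException {
            Path tmpPath = new Path(finalPath.getParent(), "." + finalPath.getName() + ".tmp");
            FSDataOutputStream out = fs.create(tmpPath, true); // overwrite any stale temp file
            try {
                out.write(data);
                out.sync(); // Hadoop 1.x API: push buffered bytes to the datanodes
            } finally {
                out.close();
            }
            // rename() is a metadata-only operation in HDFS, so the move is cheap.
            if (!fs.rename(tmpPath, finalPath)) {
                throw new IOException("Failed to rename " + tmpPath + " to " + finalPath);
            }
        }

        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // Hypothetical destination path, for illustration only.
            writeAtomically(fs, new Path("/user/demo/final-output"),
                    "example data".getBytes("UTF-8"));
            fs.close();
        }
    }

The rename step is what makes this a workaround rather than a fix: readers never observe a partially written final file, but the underlying reason flush() and close() sometimes leave a 0-byte file remains undiagnosed.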