From: "Devaraj Das"
Mailing list: hadoop-dev@lucene.apache.org
Subject: RE: [jira] Commented: (HADOOP-1396) FileNotFound exception on DFS block
Date: Sat, 2 Jun 2007 01:53:25 +0530
I don't think it is that critical though.

-----Original Message-----
From: Nigel Daley [mailto:ndaley@yahoo-inc.com]
Sent: Saturday, June 02, 2007 1:41 AM
To: hadoop-dev@lucene.apache.org
Subject: Re: [jira] Commented: (HADOOP-1396) FileNotFound exception on DFS block

Should this go into 0.13?

On Jun 1, 2007, at 12:23 PM, Devaraj Das (JIRA) wrote:

>
> [ https://issues.apache.org/jira/browse/HADOOP-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500816 ]
>
> Devaraj Das commented on HADOOP-1396:
> -------------------------------------
>
> +1
>
>> FileNotFound exception on DFS block
>> -----------------------------------
>>
>>                 Key: HADOOP-1396
>>                 URL: https://issues.apache.org/jira/browse/HADOOP-1396
>>             Project: Hadoop
>>          Issue Type: Bug
>>          Components: dfs
>>    Affects Versions: 0.12.3
>>            Reporter: Devaraj Das
>>            Assignee: dhruba borthakur
>>             Fix For: 0.14.0
>>
>>         Attachments: tempBakcupFile.patch
>>
>>
>> Got a couple of exceptions of the form illustrated below. This was
>> for a randomwriter run (and every node in the cluster has multiple
>> disks).
>> java.io.FileNotFoundException: /tmp/dfs/data/tmp/client-8395631522349067878 (No such file or directory)
>>         at java.io.FileInputStream.open(Native Method)
>>         at java.io.FileInputStream.<init>(FileInputStream.java:106)
>>         at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1323)
>>         at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1274)
>>         at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1256)
>>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
>>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
>>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>         at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
>>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
>>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>         at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
>>         at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:158)
>>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
>>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
>>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)
>>
>> So it seems like the bug reported in HADOOP-758 still exists.
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
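The trace above comes down to DFSClient trying to reopen its local tmp block file after something has removed it. A minimal sketch of that failure mode in plain Java (this is not Hadoop code; the class and method names here are made up for illustration):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class TmpFileRace {
    // Returns true if opening the file for read fails because it no
    // longer exists -- the same FileNotFoundException pattern seen in
    // the DFSClient stack trace above.
    static boolean reopenFails(File tmp) {
        try (FileInputStream in = new FileInputStream(tmp)) {
            return false; // file is still there; the read side is fine
        } catch (FileNotFoundException e) {
            return true;  // file vanished between write and reopen
        } catch (IOException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("client-", null);
        System.out.println(reopenFails(tmp)); // prints "false": file exists
        tmp.delete(); // simulate cleanup deleting the tmp file under the writer
        System.out.println(reopenFails(tmp)); // prints "true": same failure mode
    }
}
```

The point of the sketch is only that the exception is raised at reopen time, not at delete time, which is why the error surfaces inside endBlock/flush rather than wherever the file was actually removed.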