hadoop-common-dev mailing list archives

From "Dhruba Borthakur" <dhr...@yahoo-inc.com>
Subject RE: [jira] Commented: (HADOOP-1396) FileNotFound exception on DFS block
Date Sat, 02 Jun 2007 04:47:32 GMT
This bug is *not* a regression. I vote for not including it in 0.13.

Thanks,
dhruba

-----Original Message-----
From: Nigel Daley [mailto:ndaley@yahoo-inc.com] 
Sent: Friday, June 01, 2007 1:11 PM
To: hadoop-dev@lucene.apache.org
Subject: Re: [jira] Commented: (HADOOP-1396) FileNotFound exception on DFS block

Should this go into 0.13?

On Jun 1, 2007, at 12:23 PM, Devaraj Das (JIRA) wrote:

>
>     [ https://issues.apache.org/jira/browse/HADOOP-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500816 ]
>
> Devaraj Das commented on HADOOP-1396:
> -------------------------------------
>
> +1
>
>> FileNotFound exception on DFS block
>> -----------------------------------
>>
>>                 Key: HADOOP-1396
>>                 URL: https://issues.apache.org/jira/browse/HADOOP-1396
>>             Project: Hadoop
>>          Issue Type: Bug
>>          Components: dfs
>>    Affects Versions: 0.12.3
>>            Reporter: Devaraj Das
>>            Assignee: dhruba borthakur
>>             Fix For: 0.14.0
>>
>>         Attachments: tempBakcupFile.patch
>>
>>
>> Got a couple of exceptions of the form illustrated below. This was  
>> for a randomwriter run (and every node in the cluster has multiple  
>> disks).
>> java.io.FileNotFoundException: /tmp/dfs/data/tmp/client-8395631522349067878 (No such file or directory)
>> 	at java.io.FileInputStream.open(Native Method)
>> 	at java.io.FileInputStream.<init>(FileInputStream.java:106)
>> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1323)
>> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1274)
>> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1256)
>> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
>> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
>> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
>> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
>> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
>> 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
>> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
>> 	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
>> 	at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:158)
>> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
>> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
>> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)
>> So it seems like the bug reported in HADOOP-758 still exists.
>
> -- 
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
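
For readers following the trace above: endBlock() is failing while reopening the client-side temp file that buffered the block data. What follows is a minimal, self-contained sketch of that write-to-local-temp-then-reopen pattern, not the actual DFSClient code; the class name, paths, and the explicit delete() are hypothetical stand-ins used only to show how the same FileNotFoundException appears when the temp file vanishes before the reopen.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class TempBlockFileSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical stand-in for the client-side temp block file,
        // e.g. /tmp/dfs/data/tmp/client-<random>.
        File backup = File.createTempFile("client-", null);

        try (FileOutputStream out = new FileOutputStream(backup)) {
            out.write(new byte[] {1, 2, 3});  // buffer block data locally first
        }

        // Simulate the temp file disappearing (tmp cleanup, a competing
        // code path, etc.) before the client reopens it to ship the block.
        backup.delete();

        try (FileInputStream in = new FileInputStream(backup)) {
            in.read();  // endBlock()-style reopen of the buffered block data
        } catch (FileNotFoundException e) {
            // Same symptom as in the trace above:
            // java.io.FileNotFoundException: ... (No such file or directory)
            System.err.println("re-open failed: " + e.getMessage());
        }
    }
}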


