hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1396) FileNotFound exception on DFS block
Date Sat, 02 Jun 2007 19:49:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500966 ]

Hadoop QA commented on HADOOP-1396:
-----------------------------------

+1

http://issues.apache.org/jira/secure/attachment/12358649/tempBakcupFile.patch applied and
successfully tested against trunk revision r543622.

Test results:   http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/237/testReport/
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/237/console

> FileNotFound exception on DFS block
> -----------------------------------
>
>                 Key: HADOOP-1396
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1396
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.3
>            Reporter: Devaraj Das
>            Assignee: dhruba borthakur
>             Fix For: 0.14.0
>
>         Attachments: tempBakcupFile.patch
>
>
> Got a couple of exceptions of the form illustrated below during a randomwriter run (every node in the cluster has multiple disks).
> java.io.FileNotFoundException: /tmp/dfs/data/tmp/client-8395631522349067878 (No such file or directory)
> 	at java.io.FileInputStream.open(Native Method)
> 	at java.io.FileInputStream.<init>(FileInputStream.java:106)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1323)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1274)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1256)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
> 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
> 	at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:158)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)
> So it seems like the bug reported in HADOOP-758 still exists.
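The failure mode in the trace above is DFSClient reopening its per-block temporary backup file (the client-<id> file under /tmp/dfs/data/tmp) and finding the path gone. A minimal, self-contained sketch of that race (class and method names here are hypothetical, not Hadoop code): reopening a path that has been removed out from under the writer throws exactly this FileNotFoundException.

```java
import java.io.*;

public class TempFileRace {
    // Returns true if reopening a deleted temp file throws
    // FileNotFoundException, mirroring DFSOutputStream.endBlock above.
    static boolean reopenAfterDelete() throws IOException {
        // Create a temp "backup" file, much as DFSClient names client-<id> files
        File backup = File.createTempFile("client-", null);
        try (FileOutputStream out = new FileOutputStream(backup)) {
            out.write(new byte[]{1, 2, 3});
        }
        // Simulate the file vanishing (e.g. /tmp cleanup, or two writers
        // sharing one temp path) before the client reads it back
        if (!backup.delete()) {
            throw new IOException("could not delete " + backup);
        }
        try {
            new FileInputStream(backup).close();
            return false; // open unexpectedly succeeded
        } catch (FileNotFoundException e) {
            return true; // same "(No such file or directory)" failure
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("FileNotFoundException thrown: " + reopenAfterDelete());
    }
}
```

If the real cause matches, the fix is to make the temp-file name unique per stream or tolerate the missing file, which is what the attached patch presumably addresses.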

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

