hadoop-common-dev mailing list archives

From "Johan Oskarson (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-643) failure closing block of file
Date Thu, 26 Oct 2006 13:25:18 GMT
failure closing block of file
-----------------------------

                 Key: HADOOP-643
                 URL: http://issues.apache.org/jira/browse/HADOOP-643
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.7.2
            Reporter: Johan Oskarson
            Priority: Critical


I've been getting "failure closing block of file" errors on random files.
Both the datanode and the tasktracker are running on node7, and the node responds to ping without problems.
My guess is that the datanode got stuck after the NPE in DataNode.

Job cannot start because of:

java.io.IOException: failure closing block of file /home/hadoop/mapred/system/submit_99u9cd/.job.jar.crc
to node node7:50010
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.internalClose(DFSClient.java:1199)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1163)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1241)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at org.apache.hadoop.fs.FSDataOutputStream$Summer.close(FSDataOutputStream.java:96)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at org.apache.hadoop.fs.FileUtil.copyContent(FileUtil.java:205)
        at org.apache.hadoop.fs.FileUtil.copyContent(FileUtil.java:190)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:77)
        at org.apache.hadoop.dfs.DistributedFileSystem.copyFromLocalFile(DistributedFileSystem.java:186)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:289)
        at org.apache.hadoop.mapred.jobcontrol.Job.submit(Job.java:314)
        at org.apache.hadoop.mapred.jobcontrol.JobControl.startReadyJobs(JobControl.java:248)
        at org.apache.hadoop.mapred.jobcontrol.JobControl.run(JobControl.java:280)
        at java.lang.Thread.run(Thread.java:595)
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:313)
        at java.io.DataInputStream.readFully(DataInputStream.java:176)
        at java.io.DataInputStream.readLong(DataInputStream.java:380)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.internalClose(DFSClient.java:1193)
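The failure mode in the trace above is a read timeout while the client waits for the datanode's reply during close. A minimal standalone sketch (not Hadoop code; the server that accepts a connection but never writes anything stands in for the stuck datanode) reproduces the same SocketTimeoutException out of DataInputStream.readLong:

```java
import java.io.DataInputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SoTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Server that accepts the connection but never sends a reply,
        // simulating a datanode that has stopped responding.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort())) {
            server.accept();          // accept the connection, then stay silent
            client.setSoTimeout(500); // read timeout in milliseconds
            DataInputStream in = new DataInputStream(client.getInputStream());
            try {
                in.readLong();        // blocks: no data ever arrives
                System.out.println("read succeeded (unexpected)");
            } catch (SocketTimeoutException e) {
                System.out.println("Read timed out");
            }
        }
    }
}
```

The 500 ms timeout is arbitrary for the demo; the real client-side read timeout is whatever the DFSClient socket is configured with.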


Exception in datanode.out on node7:
Exception in thread "org.apache.hadoop.dfs.DataNode$DataXceiveServer@1c86be5" java.lang.NullPointerException
        at org.apache.hadoop.dfs.FSDataset$FSDir.checkDirTree(FSDataset.java:162)
        at org.apache.hadoop.dfs.FSDataset$FSDir.checkDirTree(FSDataset.java:162)
        at org.apache.hadoop.dfs.FSDataset$FSVolume.checkDirs(FSDataset.java:238)
        at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.checkDirs(FSDataset.java:326)
        at org.apache.hadoop.dfs.FSDataset.checkDataDir(FSDataset.java:522)
        at org.apache.hadoop.dfs.DataNode$DataXceiveServer.run(DataNode.java:480)
        at java.lang.Thread.run(Thread.java:595)
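An NPE inside a recursive directory check like the one above is consistent with File.listFiles() returning null (it does so when the path is not a directory or an I/O error occurs) and the caller iterating the result without a null check. A hypothetical sketch of that pattern, not the actual FSDataset code:

```java
import java.io.File;

public class ListFilesNpeDemo {
    // Hypothetical recursive walk, loosely modeled on what a
    // checkDirTree-style method might do; NOT the real Hadoop code.
    static void checkDirTree(File dir) {
        File[] children = dir.listFiles(); // returns null on error or non-directory
        for (File child : children) {      // NPE when children == null
            if (child.isDirectory()) {
                checkDirTree(child);
            }
        }
    }

    public static void main(String[] args) {
        try {
            // A nonexistent path makes listFiles() return null.
            checkDirTree(new File("/no/such/dir"));
        } catch (NullPointerException e) {
            System.out.println("NullPointerException");
        }
    }
}
```

If that is what happens here, a disk hiccup or a block file appearing where a directory was expected would kill the DataXceiveServer thread, which would explain the datanode getting stuck afterwards.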


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
