hadoop-common-user mailing list archives

From "Jonathan Cao" <jonath...@rockyou.com>
Subject File append corrupts file w/ small size file
Date Wed, 10 Dec 2008 00:41:25 GMT
We are evaluating the file append feature in 0.19.0. I got the following
error while trying to append one small file to another (both are smaller
than one block). Although the file size check shows the new, larger size,
the file itself is apparently corrupted and can no longer be read. The
same append works fine for larger files. This issue could be related to
bug https://issues.apache.org/jira/browse/HADOOP-4423.
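For reference, here is what I expect the append to produce, modeled on the
local filesystem with plain java.nio (class and file names are hypothetical,
just for illustration): appending test1's bytes to test2 should yield a
simple concatenation, with both halves still readable.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LocalAppendDemo {
    // Expected append semantics, sketched on the local filesystem:
    // append src's bytes to dst, then read dst back in full.
    static String appendAndRead(Path src, Path dst) throws IOException {
        byte[] data = Files.readAllBytes(src);
        Files.write(dst, data, StandardOpenOption.APPEND);
        return new String(Files.readAllBytes(dst), "UTF-8");
    }

    public static void main(String[] args) throws IOException {
        Path test1 = Files.createTempFile("test1", ".txt");
        Path test2 = Files.createTempFile("test2", ".txt");
        Files.write(test1, "abcd\n".getBytes("UTF-8"));
        Files.write(test2, "efgh\n".getBytes("UTF-8"));
        // Both the original contents and the appended bytes survive.
        System.out.println(appendAndRead(test1, test2));
    }
}
```

On HDFS 0.19 the equivalent would go through FileSystem.append() (with
dfs.support.append enabled), but there the result above is what breaks for
sub-block files.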

[hadoop@cloud-1 ~]$ hadoop dfs -cat hdfs:///user/hadoop/test1.txt
abcd
[hadoop@cloud-1 ~]$ hadoop dfs -cat hdfs:///user/hadoop/test2.txt
08/12/09 16:35:05 INFO hdfs.DFSClient: Could not obtain block
blk_-7713572143166377177_1010 from any node:  java.io.IOException: No live
nodes contain current block
08/12/09 16:35:08 INFO hdfs.DFSClient: Could not obtain block
blk_-7713572143166377177_1010 from any node:  java.io.IOException: No live
nodes contain current block

---------------------------------------------------------------------------
08/12/09 16:28:50 WARN hdfs.DFSClient: Error Recovery for block
blk_-7713572143166377177_1009 bad datanode[0] 192.168.1.10:50010
08/12/09 16:28:50 WARN hdfs.DFSClient: Error Recovery for block
blk_-7713572143166377177_1009 in pipeline 192.168.1.10:50010,
192.168.1.8:50010: bad datanode 192.168.1.10:50010
08/12/09 16:28:50 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor
exception  for block blk_-7713572143166377177_1010java.io.EOFException
        at java.io.DataInputStream.readFully(DataInputStream.java:180)
        at java.io.DataInputStream.readLong(DataInputStream.java:399)
        at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$ResponseProcessor.run(DFSClient.java:2318)

08/12/09 16:28:50 WARN hdfs.DFSClient: Error Recovery for block
blk_-7713572143166377177_1010 bad datanode[0] 192.168.1.8:50010
Exception in thread "main" java.io.IOException: All datanodes
192.168.1.8:50010 are bad. Aborting...
        at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2442)
        at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:1997)
        at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2160)
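
For what it's worth, the EOFException in the ResponseProcessor trace above
is the standard java.io behavior when readLong() hits end-of-stream: the
client reads an 8-byte ack sequence number from the datanode reply stream,
and if the datanode drops the connection mid-ack, readFully() throws
EOFException. A minimal sketch of that failure mode (hypothetical class and
method names, plain stdlib, no HDFS involved):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class AckReadDemo {
    // Mimics the read the ResponseProcessor performs on the ack stream:
    // readLong() needs 8 bytes; a truncated stream raises EOFException.
    // Returns the sequence number, or null when the stream ends early.
    static Long readAckSeqno(byte[] ackBytes) throws IOException {
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(ackBytes));
        try {
            return in.readLong();  // readFully() underneath; wants 8 bytes
        } catch (EOFException e) {
            return null;  // peer closed the connection mid-ack
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] fullAck = {0, 0, 0, 0, 0, 0, 0, 42};  // complete 8-byte ack
        byte[] truncated = {0, 0, 42};               // only 3 of 8 bytes
        System.out.println("full ack  : " + readAckSeqno(fullAck));
        System.out.println("truncated : " + readAckSeqno(truncated));
    }
}
```

So the EOFException itself is just the symptom of the datanode side giving
up on the pipeline; the interesting question is why the append to a
sub-block file makes the datanode abort.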
