hadoop-common-user mailing list archives

From "Emmanuel JOKE" <joke...@gmail.com>
Subject DataXCeiver error ?
Date Sun, 24 Jun 2007 12:30:38 GMT
Hi Guys,

I run a cluster of 2 machines on Linux 2.6 and Java 1.6, and I keep seeing
this kind of error on the slave datanode.

FIRST ERROR:
2007-06-24 08:25:22,688 ERROR dfs.DataNode - DataXCeiver
java.io.IOException: Block blk_674889550290164539 has already been started
(though not completed), and thus cannot be created.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:507)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:767)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:596)
        at java.lang.Thread.run(Thread.java:619)

SECOND ERROR:
2007-06-24 08:25:34,227 ERROR dfs.DataNode - DataXCeiver
java.io.IOException: Block blk_674889550290164539 is valid, and cannot be
written to.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:491)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:767)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:596)
        at java.lang.Thread.run(Thread.java:619)

It doesn't seem to affect my crawler, but I'm wondering if it could affect
performance.
Is this normal, or have I done something wrong?

E
