hadoop-common-dev mailing list archives

From: Thomas Friol <tho...@anyware-tech.com>
Subject: Re: still getting "is valid, and cannot be written to"
Date: Wed, 05 Sep 2007 17:11:24 GMT
Our cluster has 4 nodes and I set the mapred.submit.replication
parameter to 2 on all nodes and the master. Everything has been restarted.
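
For reference, a sketch of how such a property is declared in
hadoop-site.xml on each node (standard Hadoop XML config format; the
default shipped in hadoop-default.xml is 10):

    <property>
      <name>mapred.submit.replication</name>
      <value>2</value>
      <description>Replication level for submitted job files
      (job.xml and job.jar).</description>
    </property>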
Unfortunately, we still have the same exception:

2007-09-05 17:01:59,623 ERROR org.apache.hadoop.dfs.DataNode: DataXceiver: java.io.IOException: Block blk_-5969983648201186681 is valid, and cannot be written to.
        at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
        at java.lang.Thread.run(Thread.java:595)
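
Judging by the trace, FSDataset.writeToBlock refuses to write a block
the datanode already holds a finalized ("valid") replica of. A minimal
self-contained sketch of that failure mode (class and method names here
are illustrative, not Hadoop's actual internals):

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    class BlockStoreSketch {
        private final Set<Long> finalizedBlocks = new HashSet<Long>();

        // Mirrors the guard the datanode trips over: once a replica is
        // finalized, a second write attempt to the same block is rejected.
        void writeToBlock(long blockId) throws IOException {
            if (finalizedBlocks.contains(blockId)) {
                throw new IOException("Block blk_" + blockId
                    + " is valid, and cannot be written to.");
            }
            finalizedBlocks.add(blockId); // accept and finalize the new replica
        }
    }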


Any ideas for another workaround?

Cheers,
Thomas.

Doug Cutting wrote:
> Torsten Curdt wrote:
>> So far we never increased the replication.
>
> JobClient increases the replication of the submitted job.xml and
> job.jar files, to give them higher availability within the cluster.
> So that may be what triggers this.  You could try setting
> mapred.submit.replication to three to effectively disable this.
>
> Doug
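
For context, the bump Doug describes happens while JobClient ships the
job files into DFS at submit time. A rough paraphrase, assuming
mapred.submit.replication defaults to 10 (method and variable names are
illustrative, not the exact 0.14 source):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class SubmitSketch {
        static void shipJobJar(Configuration conf, FileSystem fs,
                               Path localJar, Path submitJar) throws IOException {
            // A default of 10 exceeds what a 4-node cluster can hold, which
            // is why lowering mapred.submit.replication is the usual fix.
            short replication = (short) conf.getInt("mapred.submit.replication", 10);
            fs.copyFromLocalFile(localJar, submitJar); // copy job.jar into DFS
            fs.setReplication(submitJar, replication); // then raise its replication
        }
    }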

-- 
Thomas FRIOL
Joost
ANYWARE TECHNOLOGIES
Tél      : +33 (0)561 000 653
Portable : +33 (0)609 704 810
Fax      : +33 (0)561 005 146
www.anyware-tech.com

