hadoop-common-dev mailing list archives

From AaRon <aww...@gmail.com>
Subject Replication warning
Date Thu, 23 Nov 2006 04:23:15 GMT
Hi,

I just began exploring Hadoop using standalone operation. My indexing job
ended with the following error, and the output directory was not created:

[11:25:29#aaron@melon034] ~/hadoop-0.8.0 >bin/hadoop jar
/home/NGPP/Aaron/nutchwax-0.6.1/nutchwax-0.6.1.jar all inputs outputs test
06/11/23 11:25:38 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/hadoop-default.xml
06/11/23 11:25:38 INFO conf.Configuration: parsing
file:/tmp/hadoop-unjar11027/nutch-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/tmp/hadoop-unjar11027/wax-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/conf/mapred-default.xml
06/11/23 11:25:39 INFO ipc.Client:
org.apache.hadoop.io.ObjectWritableConnection culler maxidletime= 1000ms
06/11/23 11:25:39 INFO ipc.Client:
org.apache.hadoop.io.ObjectWritableConnection Culler: starting
061123 112539 importing arcs in inputs to outputs/segments/20061123112539
061123 112539 ImportArcs segment: outputs/segments/20061123112539, src:
inputs, collection: test
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/hadoop-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/tmp/hadoop-unjar11027/nutch-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/tmp/hadoop-unjar11027/wax-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/conf/mapred-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/conf/mapred-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/conf/mapred-default.xml
06/11/23 11:25:39 INFO conf.Configuration: parsing
file:/home/NGPP/Aaron/hadoop-0.8.0/hadoop-default.xml
06/11/23 11:25:41 INFO mapred.JobClient: Running job: job_0001
06/11/23 11:25:42 INFO mapred.JobClient:  map 0% reduce 0%
06/11/23 11:25:51 INFO mapred.JobClient:  map 50% reduce 0%
06/11/23 11:25:59 INFO mapred.JobClient:  map 50% reduce 8%
06/11/23 11:26:08 INFO mapred.JobClient:  map 100% reduce 100%
Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:399)
        at org.archive.access.nutch.ImportArcs.importArcs(ImportArcs.java:519)
        at org.archive.access.nutch.IndexArcs.doImport(IndexArcs.java:154)
        at org.archive.access.nutch.IndexArcs.doAll(IndexArcs.java:139)
        at org.archive.access.nutch.IndexArcs.doJob(IndexArcs.java:246)
        at org.archive.access.nutch.IndexArcs.main(IndexArcs.java:439)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:149)


The namenode log shows a huge number of these warnings, even though I have
dfs.replication set to 1 in conf/hadoop-site.xml (see the sketch after the
warnings below):

WARN org.apache.hadoop.fs.FSNamesystem: Zero targets found,
forbidden1.size=1 forbidden2.size()=0
WARN org.apache.hadoop.fs.FSNamesystem: Replication requested of 2 is larger
than cluster size (1). Using cluster size.
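
For reference, this is roughly how that property is set in my conf/hadoop-site.xml
(a minimal sketch; the description text is my own wording, not copied verbatim
from my actual file):

  <configuration>
    <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication; kept at 1 for this single-node setup.</description>
    </property>
  </configuration>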

Can anyone advise? Thanks.

-Ron
