hadoop-common-user mailing list archives

From Manikandan Saravanan <manikan...@thesocialpeople.net>
Subject Re: Hadoop doesn't find the input file
Date Sat, 04 Jan 2014 17:39:37 GMT
Hmm.. I just removed the “crawl” directory (output directory) from the command and it works!
I’m storing the output in a Cassandra cluster using Gora anyway. So I don’t think I want
to store that on HDFS :)
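
For reference, this is a sketch of the change (the same invocation from my earlier mail, just with the `-dir crawl` output option dropped — paths and arguments as in my setup, so adjust for yours):

```shell
# Failing invocation -- includes the "-dir crawl" HDFS output option:
# $HADOOP_HOME/bin/hadoop jar /nutch/apache-nutch-2.2.1.job \
#     org.apache.nutch.crawl.Crawler urls -dir crawl -depth 3 -topN 5

# Working invocation -- output directory option removed, since the
# crawl data goes to Cassandra via Gora rather than HDFS:
$HADOOP_HOME/bin/hadoop jar /nutch/apache-nutch-2.2.1.job \
    org.apache.nutch.crawl.Crawler urls -depth 3 -topN 5
```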
Manikandan Saravanan
Architect - Technology

On 4 January 2014 at 11:06:56 pm, Ted Yu (yuzhihong@gmail.com) wrote:

Can you pastebin the stack trace involving the NPE ?


On Jan 4, 2014, at 9:25 AM, Manikandan Saravanan <manikandan@thesocialpeople.net> wrote:


I’m trying to run Nutch 2.2.1 on a Hadoop 2-node cluster. My Hadoop cluster is running fine,
and I’ve successfully added the input and output directories to HDFS. But when I run

$HADOOP_HOME/bin/hadoop jar /nutch/apache-nutch-2.2.1.job org.apache.nutch.crawl.Crawler urls
-dir crawl -depth 3 -topN 5

I’m getting something like:

INFO input.FileInputFormat: Total input paths to process : 0

which, I understand, means that Hadoop cannot locate the input files. The job then fails,
citing a NullPointerException. Can someone help me out?

Manikandan Saravanan
Architect - Technology