Message-ID: <12456626.post@talk.nabble.com>
Date: Sun, 2 Sep 2007 22:38:01 -0700 (PDT)
From: chsanthosh <chsanthosh@hotmail.com>
To: hadoop-user@lucene.apache.org
Subject: Re: exception during Dedup on multiple nodes
In-Reply-To: <8551555f0708080615n10b147bew728cadca5e4a7d10@mail.gmail.com>

Hi,

I'm in the same situation. What steps did you follow after that to get things working? Please help me out.

Thanks & Regards,
Santhosh.Ch

prem kumar-4 wrote:
>
> Hello,
> I am running Nutch 0.9 on three nodes on an NFS-mounted drive. For more
> information on my setup, please refer to:
> http://joey.mazzarelli.com/2007/07/25/nutch-and-hadoop-as-user-with-nfs/
>
> A simple Nutch crawl fails during the dedup phase. The stack trace of
> the problem I am facing is as follows:
>
> task_0037_m_000001_3: log4j:ERROR setFile(null,true) call failed.
> task_0037_m_000001_3: java.io.FileNotFoundException:
>   /home/pl162331/opt/nutch/crawler/logs/mishti (Is a directory)
> task_0037_m_000001_3:   at java.io.FileOutputStream.openAppend(Native Method)
> task_0037_m_000001_3:   at java.io.FileOutputStream.<init>(FileOutputStream.java:177)
> task_0037_m_000001_3:   at java.io.FileOutputStream.<init>(FileOutputStream.java:102)
> task_0037_m_000001_3:   at org.apache.log4j.FileAppender.setFile(FileAppender.java:289)
> task_0037_m_000001_3:   at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163)
> task_0037_m_000001_3:   at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:215)
> task_0037_m_000001_3:   at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256)
> task_0037_m_000001_3:   at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132)
> task_0037_m_000001_3:   at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96)
> task_0037_m_000001_3:   at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654)
> task_0037_m_000001_3:   at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612)
> task_0037_m_000001_3:   at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509)
> task_0037_m_000001_3:   at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415)
> task_0037_m_000001_3:   at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441)
> task_0037_m_000001_3:   at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:468)
> task_0037_m_000001_3:   at org.apache.log4j.LogManager.<clinit>(LogManager.java:122)
> task_0037_m_000001_3:   at org.apache.log4j.Logger.getLogger(Logger.java:104)
> task_0037_m_000001_3:   at org.apache.commons.logging.impl.Log4JLogger.getLogger(Log4JLogger.java:229)
> task_0037_m_000001_3:   at org.apache.commons.logging.impl.Log4JLogger.<init>(Log4JLogger.java:65)
> task_0037_m_000001_3:   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> task_0037_m_000001_3:   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> task_0037_m_000001_3:   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> task_0037_m_000001_3:   at java.lang.reflect.Constructor.newInstance(Constructor.java:494)
> task_0037_m_000001_3:   at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:529)
> task_0037_m_000001_3:   at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:235)
> task_0037_m_000001_3:   at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:370)
> task_0037_m_000001_3:   at org.apache.hadoop.mapred.TaskTracker.<clinit>(TaskTracker.java:82)
> task_0037_m_000001_3:   at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1423)
> task_0037_m_000001_3: log4j:ERROR Either File or DatePattern options are not
> set for appender [DRFA].
>
> Exception in thread "main" java.io.IOException: Job failed!
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:604)
>   at org.apache.nutch.indexer.DeleteDuplicates.dedup(DeleteDuplicates.java:439)
>   at org.apache.nutch.crawl.Crawl.main(Crawl.java:135)
>
> The log folders have sufficient permissions, too. I am unable to proceed
> further. Any help would be appreciated.
>
> Cheers!
> Prem
>

--
View this message in context: http://www.nabble.com/exception-during-Dedup-on-multiple-nodes-tf4236301.html#a12456626
Sent from the Hadoop Users mailing list archive at Nabble.com.
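For anyone hitting the same trace: the two log4j errors point at the appender configuration rather than at Nutch itself. `FileAppender.setFile` is being handed `/home/pl162331/opt/nutch/crawler/logs/mishti`, which is a directory, so the `[DRFA]` appender ends up with no usable `File` option. As a hedged sketch (not a verified fix for this setup), the `DailyRollingFileAppender` section of a Hadoop-style `conf/log4j.properties` normally looks like the fragment below; the `${hadoop.log.dir}` and `${hadoop.log.file}` system properties are assumed to be set by the launch scripts, and if `hadoop.log.file` resolves to empty on a worker node, `File` collapses to a bare directory path and fails exactly as in the trace above:

```properties
# Sketch of a Hadoop/Nutch DRFA appender block (log4j.properties).
# Assumes the JVM is started with -Dhadoop.log.dir=... and
# -Dhadoop.log.file=... (hypothetical values shown in comments).
log4j.rootLogger=INFO,DRFA

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
# Must resolve to a regular file, e.g. .../logs/hadoop-user-tasktracker-host.log,
# never to a directory such as .../logs/mishti
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Roll the log file over at midnight each day
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

On an NFS-shared install like the one described above, it is worth checking that each node's environment (e.g. `HADOOP_LOG_DIR` in `hadoop-env.sh`) yields a per-node log *file* name, since all nodes otherwise contend over the same shared `logs` path.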