Subject: Re: exceptions copying files into HDFS
From: rahul patodi <patodirahul@gmail.com>
To: common-user@hadoop.apache.org
Date: Sun, 12 Dec 2010 15:54:24 +0530

You can follow these tutorials:

http://hadoop-tutorial.blogspot.com/2010/11/running-hadoop-in-distributed-mode.html
http://cloudera-tutorial.blogspot.com/2010/11/running-cloudera-in-distributed-mode.html

Also, before running any job, please ensure all the required processes are running on the correct node:

on master: NameNode, JobTracker, SecondaryNameNode (unless you run the secondary namenode on another machine)
on slave: DataNode, TaskTracker

On Sun, Dec 12, 2010 at 2:46 PM, Varadharajan Mukundan wrote:

> Hi,
>
> > jps reports DataNode, NameNode, and SecondaryNameNode as running:
> >
> > rock@ritter:/tmp/hadoop-rock> jps
> > 31177 Jps
> > 29909 DataNode
> > 29751 NameNode
> > 30052 SecondaryNameNode
>
> On the master node, the output of "jps" will contain a TaskTracker,
> JobTracker, NameNode, SecondaryNameNode, and optionally a DataNode
> (depending on your config); your slaves will have a TaskTracker and a
> DataNode in their jps output.
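A quick way to make that jps comparison mechanical is to diff the running daemon list against what is expected for the node's role. This is only a sketch: `check_daemons` is a hypothetical helper (not part of Hadoop), and the daemon names are the ones listed earlier in the thread.

```shell
#!/bin/sh
# check_daemons: read jps output on stdin and report any daemon from the
# expected list ($1, space-separated) that is not running.
check_daemons() {
  running=$(cat)
  for d in $1; do
    echo "$running" | grep -q "$d" || echo "MISSING: $d"
  done
}

# Feed it the jps output quoted above; for a master node we would also
# expect a JobTracker, which is indeed absent here:
printf '29909 DataNode\n29751 NameNode\n30052 SecondaryNameNode\n' |
  check_daemons "NameNode JobTracker SecondaryNameNode"
# → MISSING: JobTracker
```

On a live node you would pipe `jps` into the check directly, with the expected list chosen for that node's role (master or slave).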
> If you need more help configuring Hadoop, I recommend you take a look at
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> > Here are the contents of the Hadoop node tree. The only things that look
> > like log files are the dncp_block_verification.log.curr files, and those
> > are empty.
> > Note the presence of the in_use.lock files, which suggests that this node
> > is indeed being used.
>
> The logs will be in the "logs" directory under $HADOOP_HOME (the Hadoop
> home directory); are you looking for logs in this directory?
>
> --
> Thanks,
> M. Varadharajan
>
> ------------------------------------------------
>
> "Experience is what you get when you didn't get what you wanted"
> -By Prof. Randy Pausch in "The Last Lecture"
>
> My Journal :- www.thinkasgeek.wordpress.com

--
-Thanks and Regards,
Rahul Patodi
Associate Software Engineer,
Impetus Infotech (India) Private Limited,
www.impetus.com
Mob: 09907074413
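One footnote on the log location mentioned above: each daemon writes its own file under $HADOOP_HOME/logs, conventionally named hadoop-<user>-<daemon>-<hostname>.log. A small sketch, assuming the default naming and borrowing the "rock" user and "ritter" host from the jps output quoted in this thread:

```shell
#!/bin/sh
# Build the expected log file names for the daemons on one node; the
# hadoop-<user>-<daemon>-<hostname>.log pattern is Hadoop's default.
user=rock
host=ritter
for daemon in namenode datanode secondarynamenode; do
  echo "hadoop-$user-$daemon-$host.log"
done
# → hadoop-rock-namenode-ritter.log
#   hadoop-rock-datanode-ritter.log
#   hadoop-rock-secondarynamenode-ritter.log

# On the cluster itself you would then, for example:
#   tail -n 50 "$HADOOP_HOME/logs/hadoop-$user-namenode-$host.log"
```

If those files are missing entirely (as in the node tree described above), the daemons were likely started with a different $HADOOP_HOME or log directory than the one being inspected.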