From: Harsh J
Date: Fri, 12 Apr 2013 15:53:27 +0530
Subject: Re: Map tasks never getting started
To: user@hadoop.apache.org, sandeep reddy

The default mapping comes from the rack awareness script, not the
topology data file it uses. Also note that if you start a TT or DN
before you update the topology data and refresh it, the default rack
value gets assigned instead.

On Fri, Apr 12, 2013 at 3:42 PM, sandeep reddy wrote:
> Harsh,
>
> My topology file looks like the following:
>
> 192.168.2.20 host1.example.com host1 /tc/rack1
> 192.168.2.21 host2.example.com host2 /tc/rack1
> 192.168.2.22 host3.example.com host3 /tc/rack1
> 192.168.2.23 host4.example.com host4 /tc/rack2
> 192.168.2.24 host5.example.com host5 /tc/rack2
>
> If I am not wrong, all hosts have the same levels.
>
> Can you please let me know how to include the default rack value used
> upon a mismatch?
>
> Thanks,
> Sandeep.
> ________________________________
> From: Harsh J
> To: user@hadoop.apache.org; sandeep reddy
> Sent: Friday, April 12, 2013 3:22 PM
> Subject: Re: Map tasks never getting started
>
> Your rack topology configuration is broken (mismatching levels) and
> you're therefore hitting MAPREDUCE-1740.
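The rack awareness script Harsh refers to is any executable named by `topology.script.file.name`; the framework passes it IPs or hostnames as arguments and reads one rack path per input from its stdout. A minimal sketch in Python, assuming a mapping file in the format Sandeep posted (the `/etc/hadoop/topology.data` path and the `/tc/default-rack` value are illustrative, not from the thread):

```python
#!/usr/bin/env python
# Hypothetical topology script (not from the thread): maps each host/IP
# argument to a rack path, with a default rack at the SAME depth as the
# mapped racks -- the property Harsh says must hold.
import os
import sys

DEFAULT_RACK = "/tc/default-rack"  # same number of levels as /tc/rack1 etc.

def parse_topology(lines):
    """Build {name: rack} from lines like
    '192.168.2.20 host1.example.com host1 /tc/rack1'."""
    racks = {}
    for line in lines:
        if line.lstrip().startswith("#"):
            continue
        parts = line.split()
        if len(parts) < 2:
            continue
        # Last field is the rack; every earlier field is a name for the host.
        for name in parts[:-1]:
            racks[name] = parts[-1]
    return racks

def resolve(racks, hosts):
    """Map each host to its rack, falling back to the default rack."""
    return [racks.get(h, DEFAULT_RACK) for h in hosts]

if __name__ == "__main__":
    path = "/etc/hadoop/topology.data"  # illustrative location
    if os.path.exists(path):
        with open(path) as f:
            racks = parse_topology(f)
        # Hadoop passes the addresses as arguments and reads stdout.
        print(" ".join(resolve(racks, sys.argv[1:])))
```

The key detail is the fallback: the default rack must have the same number of path components as every mapped rack, which is exactly the "including the default rack value used upon a mismatch" condition Harsh asks for.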
> Either upgrade to a more current release, or fix your rack
> configuration to be of the same levels for all mapped nodes
> (including the default rack value used upon a mismatch) and restart
> the JobTracker.
>
> On Fri, Apr 12, 2013 at 3:15 PM, sandeep reddy wrote:
>> Hi,
>>
>> While running a MapReduce job, the map tasks never get assigned. The
>> message for a map in the web interface is "No Task Attempts found".
>> When I check the logs there are a couple of errors like the following:
>>
>> 2013-04-12 02:29:04,454 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:jobs cause:java.io.IOException: java.lang.NullPointerException
>> 2013-04-12 02:29:04,455 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 4571, call heartbeat(org.apache.hadoop.mapred.TaskTrackerStatus@8f9c09e, false, false, true, 4) from 192.168.2.20:59968: error: java.io.IOException: java.lang.NullPointerException
>> java.io.IOException: java.lang.NullPointerException
>>     at org.apache.hadoop.mapred.JobInProgress.getMatchingLevelForNodes(JobInProgress.java:1699)
>>     at org.apache.hadoop.mapred.JobInProgress.addRunningTaskToTIP(JobInProgress.java:1784)
>>     at org.apache.hadoop.mapred.JobInProgress.obtainNewNonLocalMapTask(JobInProgress.java:1440)
>>     at org.apache.hadoop.mapred.JobQueueTaskScheduler.assignTasks(JobQueueTaskScheduler.java:189)
>>     at org.apache.hadoop.mapred.JobTracker.heartbeat(JobTracker.java:3398)
>>     at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:601)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
>> 2013-04-12 02:29:04,463 INFO org.apache.hadoop.mapred.JobTracker: Removing task 'attempt_201304120228_0001_m_000002_0'
>>
>> Can someone please help me resolve this issue?
>> I am using hadoop-1.0.2.
>>
>> Thanks,
>> Sandeep.
>
> --
> Harsh J

--
Harsh J
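As to why mismatched rack depths surface as a NullPointerException rather than a clean error: `getMatchingLevelForNodes` walks the ancestor chains of two nodes in lockstep, so when one path is shallower than the other, that side reaches a null parent first. A simplified sketch of that walk in Python (this is an illustration of the failure mode, not the actual Hadoop source):

```python
# Simplified sketch of computing the "matching level" between two nodes
# in a rack hierarchy: how many hops up the tree until they share an
# ancestor. The lockstep walk assumes both paths have the same length,
# which is why mixed depths broke hadoop-1.0.2 (MAPREDUCE-1740).

def matching_level(path_a, path_b):
    """Paths are rack strings like '/tc/rack1/host1'."""
    a = path_a.strip("/").split("/")
    b = path_b.strip("/").split("/")
    if len(a) != len(b):
        # In the real code this case dereferenced a null parent; the
        # MAPREDUCE-1740 fix falls back to a maximum level instead.
        raise ValueError("mismatched topology depths: %r vs %r"
                         % (path_a, path_b))
    level = 0
    while a and a != b:
        # Step both nodes up to their parents together.
        a, b = a[:-1], b[:-1]
        level += 1
    return level

# Same-depth paths resolve fine:
# matching_level("/tc/rack1/host1", "/tc/rack1/host2") -> 1 (same rack)
# matching_level("/tc/rack1/host1", "/tc/rack2/host4") -> 2 (common /tc)
```

Upgrading past the MAPREDUCE-1740 fix makes the real code degrade gracefully instead of throwing, but keeping every rack path (including the default) at the same depth avoids the situation entirely, as Harsh advises.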