From: Sameer Joshi
To: general@hadoop.apache.org
Date: Mon, 2 Aug 2010 20:45:52 -0400
Subject: Re: java.lang.NullPointerException at java.util.concurrent.ConcurrentHashMap

Thanks Sonal,

I tried it, and it seemed to make some progress: it hung instead of giving the
error. How do you refer to the hostname in the Hadoop conf files? For example,
in mapred-site.xml I have mapred.job.tracker set to
173.1.86.194.reverse.gogrid.com:54311. Am I doing this correctly?

Thanks and regards,
Sameer

On Sun, Aug 1, 2010 at 10:59 AM, Sonal Goyal wrote:

> 173.1.45.202  173.1.45.202.reverse.gogrid.com  28684_1_85017_358406
>
> Change that to whatever your IPs/hostnames are.
>
> Thanks and Regards,
> Sonal
> www.meghsoft.com
> http://in.linkedin.com/in/sonalgoyal
>
> On Sun, Aug 1, 2010 at 7:35 AM, Sameer Joshi <sameer.joshi@serenesoftware.com> wrote:
>
> > Hi again Sonal,
> > Just checking: I'm running a pseudo-cluster on one node, so the master
> > and slaves are all localhost. What should /etc/hosts look like in this
> > case?
> >
> > Thanks,
> > Sameer
> >
> > On Sat, Jul 31, 2010 at 3:37 AM, Sameer Joshi <sameer.joshi@serenesoftware.com> wrote:
> >
> > > Hi Sonal,
> > > Sorry for the late reply, I was traveling. The logs, unfortunately,
> > > did not yield much information. I'll try some more configurations
> > > with /etc/hosts edits and see if I can get it to behave.
> > > Thanks so much again for your help.
> > > Sameer
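(A side note on the two configuration questions above, not an answer given in
the thread: sketched below are the usual form of the mapred-site.xml jobtracker
property, using the address quoted above, and a minimal single-node /etc/hosts.
The hostname "my-node" is a placeholder, not taken from the thread; substitute
the output of `hostname`.)

  <!-- mapred-site.xml: standard form of the jobtracker property -->
  <property>
    <name>mapred.job.tracker</name>
    <value>173.1.86.194.reverse.gogrid.com:54311</value>
  </property>

  # /etc/hosts -- minimal sketch for a pseudo-distributed (single-node) setup;
  # "my-node" is a placeholder for this machine's actual hostname
  127.0.0.1   localhost localhost.localdomain
  127.0.0.1   my-node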
> > >
> > > On Wed, Jul 28, 2010 at 1:20 PM, Sonal Goyal wrote:
> > >
> > > > Hi Sameer,
> > > >
> > > > Do you see all processes up? Can you check the logs and see what is
> > > > going wrong? I suspect you will need to make the entries in /etc/hosts
> > > > anyway. Somewhere in the Hadoop code the IP-hostname mapping is done;
> > > > I forget the exact location.
> > > >
> > > > Thanks and Regards,
> > > > Sonal
> > > > www.meghsoft.com
> > > > http://in.linkedin.com/in/sonalgoyal
> > > >
> > > > On Wed, Jul 28, 2010 at 10:38 PM, Sameer Joshi <sameer.joshi@serenesoftware.com> wrote:
> > > >
> > > > > Hi Sonal,
> > > > > Thank you for replying.
> > > > >
> > > > > I'm trying to run it as a single node, so I had localhost in all the
> > > > > conf files and in /etc/hosts. I tried replacing it with 127.0.0.1 in
> > > > > all Hadoop conf files (I read to do that on some forum), but with no
> > > > > change.
> > > > >
> > > > > I figured I didn't need a hostname since I was trying to run it
> > > > > locally, but still tried replacing localhost with 173.1.86.194.
> > > > > However, as expected, it could not connect to the server.
> > > > >
> > > > > Any thoughts?
> > > > >
> > > > > Thanks for your help, I'm new at this and feel much obliged.
> > > > >
> > > > > Regards,
> > > > > Sameer
> > > > >
> > > > > On Wed, Jul 28, 2010 at 12:12 AM, Sonal Goyal <sonalgoyal4@gmail.com> wrote:
> > > > >
> > > > > > Hi Sameer,
> > > > > >
> > > > > > For GoGrid, you will have to configure /etc/hosts with the IP and
> > > > > > hostname for the master and slaves. If you run hostname and
> > > > > > nslookup -sil, you will get the details for your machines. Make
> > > > > > sure /etc/hosts has an entry like:
> > > > > >
> > > > > > 173.1.45.202  173.1.45.202.reverse.gogrid.com  28684_1_85017_358406
> > > > > >
> > > > > > Thanks and Regards,
> > > > > > Sonal
> > > > > > www.meghsoft.com
> > > > > > http://in.linkedin.com/in/sonalgoyal
> > > > > >
> > > > > > On Wed, Jul 28, 2010 at 6:42 AM, Sameer Joshi <sameer.joshi@serenesoftware.com> wrote:
> > > > > >
> > > > > > > I installed Java and Hadoop on a GoGrid cloud server running Red
> > > > > > > Hat Enterprise Linux Server release 5.1 (Tikanga). Hadoop
> > > > > > > installed fine and starts fine; however, I get an error
> > > > > > > (java.lang.NullPointerException at
> > > > > > > java.util.concurrent.ConcurrentHashMap) while running the Hadoop
> > > > > > > wordcount example. My guess was that this is a localhost or IPv6
> > > > > > > issue.
> > > > > > >
> > > > > > > * I have tested replacing 'localhost' with both the local IP and
> > > > > > >   the server IP address (when out of options) in the Hadoop conf
> > > > > > > * I have disabled IPv6 both in sysctl.conf and hadoop-env.sh (the
> > > > > > >   former followed by a server restart)
> > > > > > >
> > > > > > > Any thoughts?
> > > > > > > Thank you.
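(On the hadoop-env.sh point above: the setting commonly used with Hadoop 0.20
to make the JVM prefer IPv4 is sketched below. This is an assumption about what
was edited, not something stated in the thread; the Hadoop daemons need to be
restarted after changing it, and it is independent of the sysctl.conf change.)

  # conf/hadoop-env.sh -- ask the JVM to prefer the IPv4 stack
  export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true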
> > > > > > >
> > > > > > > The output is given below:
> > > > > > >
> > > > > > > # bin/hadoop jar hadoop-0.20.2-examples.jar wordcount datasets tests/out7
> > > > > > > 10/07/27 05:44:59 INFO input.FileInputFormat: Total input paths to process : 1
> > > > > > > 10/07/27 05:44:59 INFO mapred.JobClient: Running job: job_201007270544_0001
> > > > > > > 10/07/27 05:45:00 INFO mapred.JobClient:  map 0% reduce 0%
> > > > > > > 10/07/27 05:45:12 INFO mapred.JobClient:  map 100% reduce 0%
> > > > > > > 10/07/27 05:45:17 INFO mapred.JobClient: Task Id : attempt_201007270544_0001_r_000000_0, Status : FAILED
> > > > > > > Error: java.lang.NullPointerException
> > > > > > >   at java.util.concurrent.ConcurrentHashMap.get(Unknown Source)
> > > > > > >   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> > > > > > >   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
> > > > > > >
> > > > > > > 10/07/27 05:45:24 INFO mapred.JobClient: Task Id : attempt_201007270544_0001_r_000000_1, Status : FAILED
> > > > > > > Error: java.lang.NullPointerException
> > > > > > >   at java.util.concurrent.ConcurrentHashMap.get(Unknown Source)
> > > > > > >   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> > > > > > >   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
> > > > > > >
> > > > > > > 10/07/27 05:45:30 INFO mapred.JobClient: Task Id : attempt_201007270544_0001_r_000000_2, Status : FAILED
> > > > > > > Error: java.lang.NullPointerException
> > > > > > >   at java.util.concurrent.ConcurrentHashMap.get(Unknown Source)
> > > > > > >   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.getMapCompletionEvents(ReduceTask.java:2683)
> > > > > > >   at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$GetMapEventsThread.run(ReduceTask.java:2605)
> > > > > > >
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient: Job complete: job_201007270544_0001
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient: Counters: 12
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:   Job Counters
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Launched reduce tasks=4
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Launched map tasks=1
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Data-local map tasks=1
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Failed reduce tasks=1
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:   FileSystemCounters
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     HDFS_BYTES_READ=15319
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=12847
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:   Map-Reduce Framework
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Combine output records=934
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Map input records=149
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Spilled Records=934
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Map output bytes=25346
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Combine input records=2541
> > > > > > > 10/07/27 05:45:39 INFO mapred.JobClient:     Map output records=2541
> > > > > > >
> > > > > > > --
> > > > > > > Dr. Sameer Joshi, Ph.D.
> > > > > > > Senior computer scientist,
> > > > > > > Serene Software.
> > > > >
> > > > > --
> > > > > Dr. Sameer Joshi, Ph.D.
> > > > > Senior computer scientist,
> > > > > Serene Software.
> > >
> > > --
> > > Dr. Sameer Joshi, Ph.D.
> > > Senior computer scientist,
> > > Serene Software.
> >
> > --
> > Dr. Sameer Joshi, Ph.D.
> > Senior computer scientist,
> > Serene Software.

--
Dr. Sameer Joshi, Ph.D.
Senior computer scientist,
Serene Software.