Subject: Re: Problem in Hadoop(0.20.2) with hive
From: Edward Capriolo <edlinuxguru@gmail.com>
To: user@hive.apache.org
Date: Tue, 19 Jul 2011 09:50:55 -0400

On Tue, Jul 19, 2011 at 9:46 AM, Vikas Srivastava <vikas.srivastava@one97.net> wrote:
> Hey Edward,
>
> Thanks for responding, but I tried to ping all the data nodes from the
> name node, and they all respond.
>
> I have not been able to figure out where the problem lies.
>
> The query runs fine when it does not use any map/reduce, but as soon as
> it uses map tasks it gets stuck.
>
> Regards
> Vikas Srivastava
> 9560885900
>
> On Tue, Jul 19, 2011 at 7:03 PM, Edward Capriolo <edlinuxguru@gmail.com> wrote:
>> It must be a hostname or DNS problem. Use dig and ping to find out what
>> is wrong.
>>
>> On Tue, Jul 19, 2011 at 9:05 AM, Vikas Srivastava <vikas.srivastava@one97.net> wrote:
>>> On Tue, Jul 19, 2011 at 6:29 PM, Vikas Srivastava <vikas.srivastava@one97.net> wrote:
>>>>
>>>>> Hi team,
>>>>>
>>>>> We are using 1 name node with 11 data nodes, each with 16 GB RAM and
>>>>> 1.4 TB HDD.
>>>>>
>>>>> I am getting the error below while running any query; simply put, it
>>>>> does not work whenever map tasks are involved.
>>>>>
>>>>> We are using Hive on Hadoop.
>>>>>
>>>>> Total MapReduce jobs = 1
>>>>> Launching Job 1 out of 1
>>>>> Number of reduce tasks not specified. Estimated from input data size: 120
>>>>> In order to change the average load for a reducer (in bytes):
>>>>>   set hive.exec.reducers.bytes.per.reducer=<number>
>>>>> In order to limit the maximum number of reducers:
>>>>>   set hive.exec.reducers.max=<number>
>>>>> In order to set a constant number of reducers:
>>>>>   set mapred.reduce.tasks=<number>
>>>>> Starting Job = job_201107191711_0013, Tracking URL =
>>>>> http://hadoopname:50030/jobdetails.jsp?jobid=job_201107191711_0013
>>>>> Kill Command = /home/hadoop/hadoop/bin/../bin/hadoop job
>>>>> -Dmapred.job.tracker=10.0.3.28:9001 -kill job_201107191711_0013
>>>>> 2011-07-19 18:06:34,973 Stage-1 map = 100%,  reduce = 100%
>>>>> Ended Job = job_201107191711_0013 with errors
>>>>> java.lang.RuntimeException: Error while reading from task log url
>>>>>         at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
>>>>>         at org.apache.hadoop.hive.ql.exec.ExecDriver.showJobFailDebugInfo(ExecDriver.java:889)
>>>>>         at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:680)
>>>>>         at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:123)
>>>>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
>>>>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:47)
>>>>> Caused by: java.net.UnknownHostException: hadoopdata3
>>>>>         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:177)
>>>>>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>>>>>         at java.net.Socket.connect(Socket.java:519)
>>>>>         at java.net.Socket.connect(Socket.java:469)
>>>>>         at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
>>>>>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
>>>>>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
>>>>>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
>>>>>         at sun.net.www.http.HttpClient.New(HttpClient.java:306)
>>>>>         at sun.net.www.http.HttpClient.New(HttpClient.java:323)
>>>>>         at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:837)
>>>>>         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:778)
>>>>>         at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:703)
>>>>>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1026)
>>>>>         at java.net.URL.openStream(URL.java:1009)
>>>>>         at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
>>>>>         ... 6 more
>>>>> Ended Job = job_201107191711_0013 with exception
>>>>> 'java.lang.RuntimeException(Error while reading from task log url)'
>>>>> FAILED: Execution Error, return code 1 from
>>>>> org.apache.hadoop.hive.ql.exec.MapRedTask
>>>>>
>>>>> --
>>>>> With Regards
>>>>> Vikas Srivastava
>>>>>
>>>>> DWH & Analytics Team
>>>>> Mob: +91 9560885900
>>>>> One97 | Let's get talking !
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob: +91 9560885900
> One97 | Let's get talking !

Try again.

Caused by: java.net.UnknownHostException: hadoopdata3

This clearly indicates that some of your nodes are not able to reach each
other. Check your DNS, and check each machine's hostname to make sure it
matches what DNS returns. Check your hosts file and your resolver settings,
including your search domain. One machine is trying to contact hadoopdata3
and is not finding it in a DNS lookup.
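
For example, a minimal round of checks you could run from the node that is
throwing the exception (a sketch only; hadoopdata3 is taken from the trace
above, substitute your own node names and paths):

  # What does this machine think its own name is?
  hostname
  hostname -f

  # Can the resolver find the missing node?
  # (getent consults /etc/hosts as well as DNS; dig queries DNS only)
  getent hosts hadoopdata3
  dig +short hadoopdata3
  ping -c 3 hadoopdata3

  # Is the node listed in the hosts file, and does the search domain look right?
  grep hadoopdata3 /etc/hosts
  cat /etc/resolv.conf

If getent resolves the name but dig returns nothing, the entry is coming
from /etc/hosts; in that case make sure every node in the cluster carries
the same mapping, or fix DNS so all machines agree.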