hive-user mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Viral Bajaria <viral.baja...@gmail.com>
Subject Re: Problem in Hadoop (0.20.2) with Hive
Date Tue, 19 Jul 2011 17:23:16 GMT
Vikas,

I don't think the ping from the name-node is the issue here; you should
run a ping command from each data-node to all the other data-nodes and the
name-node.
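
For instance, something along these lines run on each data-node (the host
list is an assumed example; substitute your actual node names):

  # run on every data-node; hostnames below are placeholders
  for host in hadoopname hadoopdata1 hadoopdata2 hadoopdata3; do
    ping -c 1 "$host" > /dev/null && echo "$host ok" || echo "$host FAILED"
  done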

Thanks,
Viral

On Tue, Jul 19, 2011 at 6:50 AM, Edward Capriolo <edlinuxguru@gmail.com> wrote:

>
>
> On Tue, Jul 19, 2011 at 9:46 AM, Vikas Srivastava <
> vikas.srivastava@one97.net> wrote:
>
>> Hey Edward,
>>
>> Thanks for responding. I tried pinging all the *data-nodes* from the
>> *name-node*, and they all respond.
>>
>> I haven't been able to figure out where the problem lies.
>>
>> The query runs fine when it doesn't use any map-reduce, but as soon as it
>> launches map tasks it gets stuck.
>>
>> Regards
>> Vikas Srivastava
>> 9560885900
>>
>>
>> On Tue, Jul 19, 2011 at 7:03 PM, Edward Capriolo <edlinuxguru@gmail.com> wrote:
>>
>>> It must be a hostname or DNS problem. Use dig and ping to find out what
>>> is wrong.
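>>>
>>> For example (a sketch; hadoopdata3 is taken from the error below,
>>> substitute whichever name fails for you):
>>>
>>>   dig hadoopdata3          # does DNS return an address for the name?
>>>   ping -c 3 hadoopdata3    # does the name resolve and answer?
>>>   hostname -f              # does this machine's own name match DNS?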
>>>
>>> On Tue, Jul 19, 2011 at 9:05 AM, Vikas Srivastava <
>>> vikas.srivastava@one97.net> wrote:
>>>
>>>>
>>>>
>>>> On Tue, Jul 19, 2011 at 6:29 PM, Vikas Srivastava <
>>>> vikas.srivastava@one97.net> wrote:
>>>>
>>>>>
>>>>> Hi Team,
>>>>>>
>>>>>> We are using 1 name-node and 11 data-nodes, each with 16 GB RAM and
>>>>>> 1.4 TB HDD.
>>>>>>
>>>>>> I am getting the error below while running any query; put simply, it
>>>>>> stops working whenever the query uses map tasks.
>>>>>>
>>>>>> We are using Hive on Hadoop.
>>>>>>
>>>>>> Total MapReduce jobs = 1
>>>>>> Launching Job 1 out of 1
>>>>>> Number of reduce tasks not specified. Estimated from input data size: 120
>>>>>> In order to change the average load for a reducer (in bytes):
>>>>>>   set hive.exec.reducers.bytes.per.reducer=<number>
>>>>>> In order to limit the maximum number of reducers:
>>>>>>   set hive.exec.reducers.max=<number>
>>>>>> In order to set a constant number of reducers:
>>>>>>   set mapred.reduce.tasks=<number>
>>>>>> Starting Job = job_201107191711_0013, Tracking URL = http://hadoopname:50030/jobdetails.jsp?jobid=job_201107191711_0013
>>>>>> Kill Command = /home/hadoop/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=10.0.3.28:9001 -kill job_201107191711_0013
>>>>>> 2011-07-19 18:06:34,973 Stage-1 map = 100%,  reduce = 100%
>>>>>> Ended Job = job_201107191711_0013 with errors
>>>>>> java.lang.RuntimeException: Error while reading from task log url
>>>>>>         at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
>>>>>>         at org.apache.hadoop.hive.ql.exec.ExecDriver.showJobFailDebugInfo(ExecDriver.java:889)
>>>>>>         at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:680)
>>>>>>         at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:123)
>>>>>>         at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
>>>>>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>>>>>         at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:47)
>>>>>> Caused by: java.net.UnknownHostException: hadoopdata3
>>>>>>         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:177)
>>>>>>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>>>>>>         at java.net.Socket.connect(Socket.java:519)
>>>>>>         at java.net.Socket.connect(Socket.java:469)
>>>>>>         at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
>>>>>>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
>>>>>>         at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
>>>>>>         at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
>>>>>>         at sun.net.www.http.HttpClient.New(HttpClient.java:306)
>>>>>>         at sun.net.www.http.HttpClient.New(HttpClient.java:323)
>>>>>>         at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:837)
>>>>>>         at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:778)
>>>>>>         at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:703)
>>>>>>         at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1026)
>>>>>>         at java.net.URL.openStream(URL.java:1009)
>>>>>>         at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
>>>>>>         ... 6 more
>>>>>> Ended Job = job_201107191711_0013 with exception 'java.lang.RuntimeException(Error while reading from task log url)'
>>>>>> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
>>>>>>
>>>>>> --
>>>>>> With Regards
>>>>>> Vikas Srivastava
>>>>>>
>>>>>> DWH & Analytics Team
>>>>>> Mob:+91 9560885900
>>>>>> One97 | Let's get talking !
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> With Regards
>>>>> Vikas Srivastava
>>>>>
>>>>> DWH & Analytics Team
>>>>> Mob:+91 9560885900
>>>>> One97 | Let's get talking !
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> With Regards
>>>> Vikas Srivastava
>>>>
>>>> DWH & Analytics Team
>>>> Mob:+91 9560885900
>>>> One97 | Let's get talking !
>>>>
>>>>
>>>
>>
>>
>> --
>> With Regards
>> Vikas Srivastava
>>
>> DWH & Analytics Team
>> Mob:+91 9560885900
>> One97 | Let's get talking !
>>
>>
> Try again.
>
>
> Caused by: java.net.UnknownHostException: hadoopdata3
>
> This clearly indicates that some of your nodes are not able to reach each
> other. Check your DNS; check each system's hostname and make sure it matches
> DNS; check your hosts file; and check your resolver settings, including your
> search domain. One machine is trying to contact hadoopdata3 and is not
> finding it in a DNS lookup.
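>
> As a concrete sketch, on each node (standard Linux paths assumed):
>
>   hostname -f                    # the machine's own fully qualified name
>   grep hadoopdata3 /etc/hosts    # is the unresolvable node in the hosts file?
>   cat /etc/resolv.conf           # nameservers and search domain
>   getent hosts hadoopdata3       # what the resolver actually returns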
