hive-user mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From Amlan Mandal <am...@fourint.com>
Subject Re: hive on mutinode hadoop
Date Tue, 22 Feb 2011 06:25:29 GMT
My moutinode hadoop is running fine but my hive is throwing

java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on
connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.
>
> wrapException(Client.java:767)
>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at $Proxy4.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>     at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
> ...


error.

Can somebody please  tell which setting I need to modify?

Amlan


On Tue, Feb 22, 2011 at 10:21 AM, Amlan Mandal <amlan@fourint.com> wrote:

> Thanks Sangeetha.
>
> If I can check http://<amlan-laptop>:50070/dfsnodelist.jsp?whatNodes=LIVE
>
> amlan-laptop is my master .
>
> If it shows ALL data nodes that mean my multi node hadoop is working
> fine.(I think)
>
> Now in hive CLI I am getting
>
> java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on
> connection exception: java.net.ConnectException: Connection refused
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at $Proxy4.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>     at
> org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
> ...
>
> Which setting tells hive about HDFS URI?
> I think I need to change that setting.
>
>
> On Tue, Feb 22, 2011 at 9:41 AM, sangeetha s <sangee.sha@gmail.com> wrote:
>
>> Ya,What Jeff said is correct.
>> You should not name different ip's in a common name. Map the Ip's and host
>> name correctly and try again.
>> Cheers!
>>
>> On Mon, Feb 21, 2011 at 7:43 PM, Jeff Bean <jwfbean@cloudera.com> wrote:
>>
>>> One thing i notice is that /etc/hosts is different on each host:
>>> amlan-laptop is bound to localhost on the master and its bound to a
>>> different ip on the slave. Make the files on both macnes the same.
>>>
>>> Sent from my iPad
>>>
>>> On Feb 21, 2011, at 2:06, Amlan Mandal <amlan@fourint.com> wrote:
>>>
>>> Thanks MIS.
>>>
>>> Can somebody please tell me what is wrong with this.
>>>
>>>
>>> cat /etc/hosts (on master)
>>>
>>> 127.0.0.1   localhost   amlan-laptop
>>> 192.168.1.11    dhan
>>>
>>>
>>> cat /etc/hosts (on slave)
>>> 127.0.0.1       localhost       dhan
>>> 192.168.1.22    amlan-laptop
>>>
>>> cat conf/masters (on master)
>>> amlan-laptop
>>>
>>> cat conf/slaves
>>> amlan-laptop
>>> dhan
>>>
>>>
>>> cat conf/core-site.xml on BOTH machines
>>> <name>fs.default.name</name>
>>>                 <value>hdfs://amlan-laptop:54310</value>
>>>
>>> cat conf/mapred-site.xml on BOTH machines
>>> <name>mapred.job.tracker</name>
>>>         <value>amlan-laptop:54311</value>
>>>
>>>
>>> hostname (on master)
>>> amlan-laptop
>>>
>>> hostname (on slave)
>>> dhan
>>>
>>>
>>> passwordless SSH from master TO master works fine (ssh amlan-laptop)
>>> passwordless SSH from master TO slave works fine (ssh dhan)
>>>
>>>
>>> I see on slave datanode log when start-dfs.sh
>>>
>>>
>>> 2011-02-21 15:25:31,304 INFO org.apache.hadoop.ipc.RPC: Server at
>>> amlan-laptop/192.168.1.22:54310 not available yet, Zzzzz...
>>> 2011-02-21 15:25:33,312 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: amlan-laptop/192.168.1.22:54310. Already tried 0
>>> time(s).
>>>
>>>
>>> Why on the earth amlan-laptop/192.168.1.22:54310  ?????
>>>
>>>
>>> should NOT it be amlan-laptop:54310 ???
>>>
>>> why does it concatenate hostname/ip ???
>>>
>>> Can somebody PLEASE help me out.
>>>
>>>
>>> On Mon, Feb 21, 2011 at 2:12 PM, MIS < <misapache@gmail.com>
>>> misapache@gmail.com> wrote:
>>>
>>>> Please have the host-name and ip address mapping in the /etc/hosts file
>>>> on both the nodes that are running hadoop cluster.
>>>>
>>>> One more thing  : I hope secondary namenode is also running along
>>>> namenode but you may have forgot to mention it.
>>>>
>>>> Thanks,
>>>> MIS
>>>>
>>>>
>>>> On Mon, Feb 21, 2011 at 12:47 PM, Amlan Mandal < <amlan@fourint.com>
>>>> amlan@fourint.com> wrote:
>>>>
>>>>> Thanks Mafish.
>>>>> Can you please point me which config need to be set correctly?
>>>>>
>>>>> Amlan
>>>>>
>>>>>
>>>>> On Mon, Feb 21, 2011 at 12:45 PM, Mafish Liu < <mafish@gmail.com>
>>>>> mafish@gmail.com> wrote:
>>>>>
>>>>>> It seem you did not config your HDFS properly.
>>>>>>
>>>>>> "Caused by: java.lang.IllegalArgumentException: Wrong FS:
>>>>>> hdfs://
>>>>>> 192.168.1.22:54310/tmp/hive-hadoop/hive_2011-02-21_12-09-42_678_6107747797061030113
>>>>>> ,
>>>>>> expected: hdfs://amlan-laptop.local:54310 "
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2011/2/21 Amlan Mandal < <amlan@fourint.com>amlan@fourint.com>:
>>>>>> > To give more context my multinode hadoop is working fine.
>>>>>> fs.default.name,
>>>>>> > mapred.job.tracker settings are correct.
>>>>>> > I can submit job to my multinode hadoop and see output.  (One
of the
>>>>>> node
>>>>>> > running namenode,datanode,job tracker , task tracker other running
>>>>>> task
>>>>>> > tracker and datanode)
>>>>>> >
>>>>>> > On Mon, Feb 21, 2011 at 12:24 PM, Amlan Mandal <<amlan@fourint.com>
>>>>>> amlan@fourint.com> wrote:
>>>>>> >>
>>>>>> >> Earlier I had hive running on single node hadoop which was
working
>>>>>> fine.
>>>>>> >> Now I made it 2 node hadoop cluster. When I run hive from
cli I am
>>>>>> getting
>>>>>> >> following error
>>>>>> >>
>>>>>> >>
>>>>>> >> java.lang.RuntimeException: Error while making MR scratch
directory
>>>>>> -
>>>>>> >> check filesystem config (null)
>>>>>> >>     at
>>>>>> org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:216)
>>>>>> >>     at
>>>>>> org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:292)
>>>>>> >>     at
>>>>>> >>
>>>>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:825)
>>>>>> >>     at
>>>>>> >>
>>>>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6093)
>>>>>> >>     at
>>>>>> >>
>>>>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:125)
>>>>>> >>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:304)
>>>>>> >>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:379)
>>>>>> >>     at
>>>>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
>>>>>> >>     at
>>>>>> >>
>>>>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
>>>>>> >>     at
>>>>>> org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:302)
>>>>>> >>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
Method)
>>>>>> >>     at
>>>>>> >>
>>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>> >>     at
>>>>>> >>
>>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>> >>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>> >>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>>>> >> Caused by: java.lang.IllegalArgumentException: Wrong FS:
>>>>>> >> hdfs://
>>>>>> 192.168.1.22:54310/tmp/hive-hadoop/hive_2011-02-21_12-09-42_678_6107747797061030113
>>>>>> ,
>>>>>> >> expected: hdfs://amlan-laptop.local:54310
>>>>>> >> ...
>>>>>> >>
>>>>>> >>
>>>>>> >> I can guess I need to change some config variable for hive
, can
>>>>>> somebody
>>>>>> >> please help me out?
>>>>>> >>
>>>>>> >>
>>>>>> >> Amlan
>>>>>> >
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>>
>> --
>>
>>
>>
>> Regards,
>> Sangita
>>
>
>

Mime
View raw message