hive-user mailing list archives

From Gaurav Chandalia <chandal...@gmail.com>
Subject Re: Connection Refused error on using hive on Amazon EC2 cluster
Date Tue, 21 Jul 2009 02:33:24 GMT
Hi Tom,

Thanks for the reply. I had figured out what the problem was. When you
create a table, Hive stores the table's full location (e.g.
hdfs://ip:port/user/root/...) in the SDS and DBS tables in the
metastore. So when I bring up a new cluster, the master has a new IP,
but Hive's metastore still points to locations on the old cluster. I
could update the metastore with the new IP every time I bring up a
cluster, but the easier and simpler solution was to just use an
elastic IP for the master. :-)
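For anyone who can't use an elastic IP, the metastore fix I mentioned can be sketched as a plain URI rewrite: swap the namenode authority in each stored location while keeping the path. This is just a hypothetical illustration of the transformation, not something Hive provides; which metastore columns actually hold the URIs (SDS, DBS, etc.) depends on your schema, so check before updating anything.

```python
from urllib.parse import urlparse, urlunparse

def rewrite_location(location, new_host, new_port=9000):
    """Point a stored HDFS location at a new namenode.

    Only the authority (host:port) changes; the path is preserved,
    e.g. hdfs://old-ip:9000/user/root/t1 -> hdfs://new-host:9000/user/root/t1.
    Non-HDFS URIs are returned untouched.
    """
    parts = urlparse(location)
    if parts.scheme != "hdfs":
        return location
    return urlunparse(parts._replace(netloc=f"{new_host}:{new_port}"))
```

You would run this over every stored location (for example by selecting, rewriting, and updating the rows in the MySQL metastore) each time the master's IP changes, which is exactly the bookkeeping the elastic IP avoids.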

--
gaurav.

On 7/20/09, Tom White <tom@cloudera.com> wrote:
> Hi Gaurav,
>
> I'm not sure where Hive is picking up the old settings from (are they
> stored in the metastore?), but I wonder if you are hitting this bug:
> https://issues.apache.org/jira/browse/HIVE-453. You could try building
> Hive trunk to see if that helps.
>
> Cheers,
> Tom
>
> On Sat, Jul 18, 2009 at 1:49 AM, Gaurav Chandalia<chandaliag@gmail.com>
> wrote:
>> Hi,
>>
>> I am using EC2 to start a Hadoop cluster (Cloudera's distribution) and set
>> up Hive on it (specifically, the Hive client is on the master/jobtracker).
>> I am using the latest version of Hive (with Hadoop 0.18.3) and have set up
>> a MySQL metastore on an EBS mount for persistence. HDFS is also persisted
>> on several EBS mounts. Everything ran fine the very first time I set up
>> Hive and added a couple of tables. Then I shut down the cluster, started a
>> new one, and mounted the metastore EBS volume on the new cluster's master.
>> After that, any Hive query I try to run fails with the following error:
>>
>> ----
>> Job Submission failed with exception 'java.net.ConnectException(Call to
>> domU-amazon-internal-ip.compute-1.internal/amazon-internal-ip:9000 failed
>> on
>> connection exception: java.net.ConnectException: Connection refused)'
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.ExecDriver
>> ----
>>
>> The problem is that amazon-internal-ip refers to the master of the
>> previous cluster I had shut down. For some reason, Hive isn't picking up
>> the new cluster's configuration.
>>
>> I tried setting mapred.job.tracker and fs.default.name to the new IPs,
>> but it didn't work. Similarly, adding these parameters to the
>> hive-site.xml config file did not work. I also checked the Hadoop config
>> files, and the parameters are correctly set to the new IPs. Also, running
>> a normal Hadoop job works fine.
>>
>> Does anyone know where Hive is picking up these parameters from?
>>
>> Any help would be appreciated. Thanks.
>>
>> --
>> gaurav.
>>
>> ----
>> For more info, here is the detailed error message from hive.log; it says
>> pretty much the same thing:
>>
>> 2009-07-17 18:40:59,081 ERROR ql.Driver
>> (SessionState.java:printError(279))
>> - FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.ExecDriver
>> 2009-07-17 18:47:01,530 ERROR exec.ExecDriver
>> (SessionState.java:printError(279)) - Job Submission failed with exception
>> 'java.net.ConnectException(Call to
>> domU-12-31-38-00-C8-32.compute-1.internal/10.252.207.192:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused)'
>> java.net.ConnectException: Call to
>> domU-12-31-38-00-C8-32.compute-1.internal/10.252.207.192:9000 failed on
>> connection exception: java.net.ConnectException: Connection refused
>>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:743)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:719)
>>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>>     at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:348)
>>     at
>> org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:103)
>>     at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:172)
>>     at
>> org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:67)
>>     at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1343)
>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:213)
>>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>>     at
>> org.apache.hadoop.hive.ql.exec.ExecDriver.addInputPaths(ExecDriver.java:685)
>>     at
>> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:378)
>>     at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:335)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:241)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:122)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:165)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:258)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>>     at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>>     at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
>> Caused by: java.net.ConnectException: Connection refused
>>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>>     at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
>>     at
>> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:402)
>>     at
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:301)
>>     at
>> org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:178)
>>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:820)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:705)
>>     ... 27 more
>> ----
>>
>
