accumulo-user mailing list archives

From "Smith, Joshua D." <Joshua.Sm...@gd-ais.com>
Subject RE: Accumulo with NameNode HA: UnknownHostException for dfs.nameservices
Date Tue, 03 Sep 2013 18:36:12 GMT
That got me further.

I had to drop the :9000 port:
<property>
     <name>instance.dfs.uri</name>
     <value>hdfs://namenodehostname.domain</value>
</property>

And then I was able to successfully run "accumulo init"

But when I ran bin/start-here.sh I got a similar error:

INFO: Zookeeper connected and initialized, attempting to talk to HDFS
Thread "org.apache.accumulo.server.master.state.SetGoalState" died null
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.accumulo.start.Main$1.run(Main.java:101)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
at org.apache.accumulo.server.Accumulo.isInSafeMode(Accumulo.java:218)
at org.apache.accumulo.server.Accumulo.waitForZookeeperAndHdfs(Accumulo.java:202)
at org.apache.accumulo.server.master.state.SetGoalState.main(SetGoalState.java:45)
... 6 more
Caused by: java.net.UnknownHostException: mycluster
... 21 more
Starting master on hostnameofmaster


From: Eric Newton [mailto:eric.newton@gmail.com]
Sent: Tuesday, September 03, 2013 1:53 PM
To: user@accumulo.apache.org
Subject: Re: Accumulo with NameNode HA: UnknownHostException for dfs.nameservices

Try:

<property>
     <name>instance.dfs.uri</name>
     <value>hdfs://namenodehostname.domain:9000</value>
</property>

Use the port number for your configuration, of course.

-Eric

On Tue, Sep 3, 2013 at 1:33 PM, Smith, Joshua D. <Joshua.Smith@gd-ais.com> wrote:
I tried adding the following property to the accumulo-site.xml file, but got the same results.
<property>
     <name>instance.dfs.uri</name>
     <value>namenodehostname.domain</value>
</property>

Josh

From: Eric Newton [mailto:eric.newton@gmail.com]
Sent: Tuesday, September 03, 2013 1:22 PM

To: user@accumulo.apache.org
Subject: Re: Accumulo with NameNode HA: UnknownHostException for dfs.nameservices

Accumulo generally uses the settings in the HDFS configuration files, via FileSystem.get(new Configuration()).
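For context, in a NameNode HA setup the client-side core-site.xml typically points fs.defaultFS at the nameservice ID rather than at a single host, and that is what FileSystem.get(new Configuration()) would pick up. A minimal sketch, using the nameservice name from this thread:

```xml
<!-- core-site.xml (sketch): with NameNode HA, fs.defaultFS names the
     HA nameservice ID, not an individual NameNode host. -->
<property>
     <name>fs.defaultFS</name>
     <value>hdfs://mycluster</value>
</property>
```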

In 1.5 you can configure instance.dfs.uri to specify a NameNode URI.

In 1.6 you can set instance.volumes to multiple URIs, but this is not the same as HA.
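For illustration, instance.volumes takes a comma-separated list of URIs in accumulo-site.xml. A sketch, with hypothetical hostnames and paths:

```xml
<!-- accumulo-site.xml (sketch): hostnames and paths are hypothetical. -->
<property>
     <name>instance.volumes</name>
     <value>hdfs://nn1.example.com:9000/accumulo,hdfs://nn2.example.com:9000/accumulo</value>
</property>
```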

-Eric

On Tue, Sep 3, 2013 at 12:43 PM, Smith, Joshua D. <Joshua.Smith@gd-ais.com> wrote:
Eric-

The link you sent is directly relevant, but unfortunately it didn't resolve the issue.

I already had the following property set:
<property>
     <name>dfs.client.failover.proxy.provider.mycluster</name>
     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

The discussion at the link you sent was in a CDH forum, and it looked like HA required some changes before the hdfs command could resolve the active NameNode. That leads me to two questions:


1) Does Accumulo know how to resolve the active NameNode?

2) If it doesn't, is there a way to explicitly specify it like the user did for the hdfs command as a work-around?

Josh

From: Eric Newton [mailto:eric.newton@gmail.com]
Sent: Tuesday, September 03, 2013 12:24 PM
To: user@accumulo.apache.org
Subject: Re: Accumulo with NameNode HA: UnknownHostException for dfs.nameservices

This discussion seems to provide some insight:

https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/I_OmKdZOjVE

Please let us know if you get it working; I would like to test this for the 1.6.0 release.

-Eric


On Tue, Sep 3, 2013 at 12:06 PM, Smith, Joshua D. <Joshua.Smith@gd-ais.com> wrote:
All-

I'm installing Accumulo 1.5 on CDH 4.3. I'm running Hadoop 2.0 (YARN) with High Availability (HA) for the NameNode. When I try to initialize Accumulo I get the following error message:

>sudo -u accumulo accumulo init

FATAL: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:414)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:448)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:410)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:128)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2308)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:87)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2342)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2324)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:163)
at org.apache.accumulo.core.file.FileUtil.getFileSystem(FileUtil.java:550)
at org.apache.accumulo.server.util.Initialize.main(Initialize.java:485)
...

"mycluster" is from my hdfs-site.xml and is part of the HA configuration:
<property>
     <name>dfs.nameservices</name>
     <value>mycluster</value>
</property>

It's not a hostname, and I'm not sure why Accumulo would try to resolve it as if it were one.
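For reference, a nameservice ID like this is normally resolved client-side by the rest of the HA block in hdfs-site.xml. A sketch of the standard Hadoop 2 HA client properties, with hypothetical NameNode hostnames:

```xml
<!-- hdfs-site.xml (sketch): the client-side HA properties that let clients
     resolve the logical name "mycluster". Hostnames are hypothetical. -->
<property>
     <name>dfs.ha.namenodes.mycluster</name>
     <value>nn1,nn2</value>
</property>
<property>
     <name>dfs.namenode.rpc-address.mycluster.nn1</name>
     <value>namenode1.domain:8020</value>
</property>
<property>
     <name>dfs.namenode.rpc-address.mycluster.nn2</name>
     <value>namenode2.domain:8020</value>
</property>
<property>
     <name>dfs.client.failover.proxy.provider.mycluster</name>
     <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If these properties are present but the error persists, it suggests the process raising the UnknownHostException is not reading this hdfs-site.xml at all.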

Any idea why I would get this error or why Accumulo would have trouble running on Hadoop 2.0
with HA?

Thanks,
Josh






