hadoop-mapreduce-user mailing list archives

From rongzheng yan <rongzheng....@oracle.com>
Subject Re: Apache Hadoop tests fail with UnknownHostException
Date Mon, 08 Jun 2015 21:10:52 GMT
Hi Sandeep,

Thanks for your instructions.

However, since I run the build and tests inside a Docker container, and I
am not the "root" user of the container, I cannot edit /etc/hosts
directly. But I can change /etc/hosts by adding some options to the docker
command before starting the build environment. The following is what
I changed:

In hadoop/start-build-env.sh, the last command is

         "docker run --rm=true -t -i ...".

I added two options for this command:

          "docker run --hostname=rzhadoop 
--add-host="rzhadoop:127.0.0.1" --add-host="localhost:172.17.0.22" 
-rm=true -t -i ...".

"--hostname" option specifies the hostname of the docker container, and 
"--add-host" option adds hostname:ip entry to /etc/hosts of the container.

Then I saved and ran hadoop/start-build-env.sh, and was dropped into
the container. There I ran "cat /etc/hosts":

royan@rzhadoop:~/hadoop$ cat /etc/hosts
172.17.0.22    rzhadoop
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
127.0.0.1    rzhadoop
172.17.0.22    localhost

So instead of appending "rzhadoop" directly to the "127.0.0.1    localhost"
line, Docker appended the new entries at the end of /etc/hosts. I am not sure
whether this is equivalent to your instructions, but this approach does not
work: I still get the same test failures due to UnknownHostException.
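
One thing I noticed while digging (my own observation, so treat it as a
guess): the resolver reads /etc/hosts top-down and uses the first match, so
the entries Docker appended at the bottom may never take effect. This can be
checked inside the container without root:

         getent hosts rzhadoop     # still prints 172.17.0.22, the first match
         getent hosts localhost    # still prints 127.0.0.1, not 172.17.0.22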

Do you have any ideas?

Thanks,

Rongzheng

On 6/6/2015 1:22 AM, sandeep vura wrote:
> You can try the following steps:
>
> Step 1: Go to /etc/hosts.
> Step 2: Edit the "hosts" file with the entry: 127.0.0.1 [space/tab]
> localhost [space/tab] HostName (e.g. static.98.35.ebonenet.com).
> Step 3: Save the file and try again.
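>
> For example, the resulting line would look like this:
>
>     127.0.0.1    localhost    static.98.35.ebonenet.com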
>
>
>
> On Sat, Jun 6, 2015 at 2:56 AM, rongzheng yan
> <rongzheng.yan@oracle.com> wrote:
>
>     Hello,
>
>     Has anyone encountered the UnknownHostException test failure when
>     building Apache Hadoop and running the tests on Linux? How did you
>     deal with it?
>
>     Any suggestion is greatly appreciated!
>
>     Rongzheng
>
>
>     -------- Original Message --------
>     Subject: 	Apache Hadoop tests fail with UnknownHostException
>     Date: 	Thu, 28 May 2015 11:19:11 -0400
>     From: 	rongzheng yan <rongzheng.yan@oracle.com>
>     Organization: 	Oracle Corporation
>     To: 	user@hadoop.apache.org
>
>
>
>     Hi experts,
>
>     I tried to build the Apache Hadoop MapReduce project on my Linux host, but
>     got some test failures in the hadoop-mapreduce-client-jobclient subproject.
>     Most of these test errors are caused by UnknownHostException. The following
>     is one of the stack traces:
>
>     -------------------------------------------------------------------------------
>     Tests run: 12, Failures: 0, Errors: 11, Skipped: 0, Time elapsed: 26.543 sec <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.TestUberAM
>     testFailingMapper(org.apache.hadoop.mapreduce.v2.TestUberAM)  Time elapsed: 0.154 sec  <<< ERROR!
>     java.io.IOException: java.util.concurrent.ExecutionException: java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "43d96e22e846":47575; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
>              at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:408)
>              at org.apache.hadoop.ipc.Client$1.call(Client.java:1483)
>              at org.apache.hadoop.ipc.Client$1.call(Client.java:1480)
>              at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
>              at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>              at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>              at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>              at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>              at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>              at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
>              at org.apache.hadoop.ipc.Client.getConnection(Client.java:1480)
>              at org.apache.hadoop.ipc.Client.call(Client.java:1410)
>              at org.apache.hadoop.ipc.Client.call(Client.java:1371)
>              at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>              at com.sun.proxy.$Proxy92.getNewApplication(Unknown Source)
>              at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:221)
>              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>              at java.lang.reflect.Method.invoke(Method.java:606)
>              at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
>              at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
>              at com.sun.proxy.$Proxy93.getNewApplication(Unknown Source)
>              at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:220)
>              at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:228)
>              at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:188)
>              at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:231)
>              at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:153)
>              at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
>              at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
>              at java.security.AccessController.doPrivileged(Native Method)
>              at javax.security.auth.Subject.doAs(Subject.java:415)
>              at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1666)
>              at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
>              at org.apache.hadoop.mapreduce.v2.TestMRJobs.runFailingMapperJob(TestMRJobs.java:564)
>              at org.apache.hadoop.mapreduce.v2.TestUberAM.testFailingMapper(TestUberAM.java:110)
>
>     I followed the steps to set up the dev environment described in
>     BUILDING.txt: first install Docker, then run ./start-build-env.sh to start
>     the build container. After that, I was dropped into host 43d96e22e846,
>     which is the Docker container. I suspect the tests fail because the Docker
>     container's hostname is not recognized. The following is the content of
>     /etc/hosts in the Docker container:
>
>          172.XX.X.XX    43d96e22e846
>          127.0.0.1    localhost
>          ::1    localhost ip6-localhost ip6-loopback
>          fe00::0    ip6-localnet
>          ff00::0    ip6-mcastprefix
>          ff02::1    ip6-allnodes
>          ff02::2    ip6-allrouters
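>
>     As a rough check (my own sketch, not something from BUILDING.txt), the
>     hostname lookup that Hadoop's RPC client performs can be approximated
>     without root privileges:
>
>          getent hosts "$(hostname)"   # no output and a non-zero exit code
>                                       # means the hostname cannot be resolved,
>                                       # which is what raises UnknownHostException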
>
>     I saw some suggestions online saying that we can edit /etc/hosts to
>     associate "43d96e22e846" with "localhost". But I cannot edit this file,
>     because it is owned by "root" and I am not the "root" user of the Docker
>     container. I cannot use "sudo" to edit it either: my password does not
>     work in this container. And I do not think editing it is the right fix
>     anyway, because this /etc/hosts file is generated by Docker.
>
>     Have you encountered a similar test failure before? Did I miss any steps
>     when configuring Docker or the tests?
>
>     Thanks,
>
>     Rongzheng
>

