hadoop-common-dev mailing list archives

From "Bill Habermaas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5191) After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
Date Fri, 27 Mar 2009 13:26:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12689892#action_12689892 ]

Bill Habermaas commented on HADOOP-5191:
----------------------------------------

This issue probably needs to be reopened. I have discovered that map/reduce also depends on
how HDFS is connected (hostname as opposed to IP address). I don't think this should be
reported as a separate JIRA, but what do you think? Guys - there has to be a cleaner way to
handle hostname/IP usage that works across the board.

2009-03-27 04:15:45,045 WARN  [Thread-145] org.apache.hadoop.mapred.LocalJobRunner: job_local_0002
java.io.IOException: Can not get the relative path: base = hdfs://10.120.16.68:9000/mydata/2009/03/27/0bab100a-1bf1-499a-935d-bc4b4e94f44c/_temporary/_attempt_local_0002_r_000000_0
child = hdfs://p520aix61.mydomain.com:9000/mydata/2009/03/27/0bab100a-1bf1-499a-935d-bc4b4e94f44c/_temporary/_attempt_local_0002_r_000000_0/part-00000
	at org.apache.hadoop.mapred.Task.getFinalPath(Task.java:586)
	at org.apache.hadoop.mapred.Task.moveTaskOutputs(Task.java:599)
	at org.apache.hadoop.mapred.Task.moveTaskOutputs(Task.java:617)
	at org.apache.hadoop.mapred.Task.saveTaskOutput(Task.java:561)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:202)


> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5191
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.1
>         Environment: AIX 6.1 or Solaris
>            Reporter: Bill Habermaas
>            Assignee: Raghu Angadi
>            Priority: Minor
>             Fix For: 0.21.0
>
>         Attachments: 5191-1.patch, HADOOP-5191.patch, HADOOP-5191.patch, hadoop-5191.patch, TestHadoopHDFS.java
>
>
> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> fs.default.name=hdfs://p520aix61.mydomain.com:9000
> The hostname for the box is p520aix61 and the IP is 10.120.16.68
> If you use the following URL, "hdfs://10.120.16.68", to connect to the namenode, the exception that appears below occurs. You can only connect successfully if "hdfs://p520aix61.mydomain.com:9000" is used.
> Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
> 	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
> 	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
> 	at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
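A minimal sketch of the comparison behind the "Wrong FS" error, using plain java.net.URI rather than Hadoop's FileSystem.checkPath: the client compares the path's scheme and authority against the filesystem's configured URI as strings, so "10.120.16.68:9000" never matches "p520aix61.mydomain.com:9000" even though both name the same machine. The sameFileSystem helper is hypothetical, written here only to mirror that check.

```java
import java.net.URI;

// Sketch of the scheme/authority string comparison that produces the
// "Wrong FS" IllegalArgumentException; not the actual Hadoop code.
public class WrongFsDemo {
    // Hypothetical helper mirroring the check: no DNS resolution is done,
    // so an IP authority and a hostname authority never compare equal.
    static boolean sameFileSystem(URI fsUri, URI pathUri) {
        return fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme())
            && fsUri.getAuthority().equalsIgnoreCase(pathUri.getAuthority());
    }

    public static void main(String[] args) {
        URI configured = URI.create("hdfs://p520aix61.mydomain.com:9000");
        URI byIp = URI.create("hdfs://10.120.16.68:9000/testdata");
        URI byHost = URI.create("hdfs://p520aix61.mydomain.com:9000/testdata");

        System.out.println(sameFileSystem(configured, byIp));   // false -> "Wrong FS"
        System.out.println(sameFileSystem(configured, byHost)); // true
    }
}
```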

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

