hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5191) After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
Date Wed, 04 Mar 2009 23:11:56 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12678978#action_12678978 ]

Raghu Angadi commented on HADOOP-5191:
--------------------------------------

> Accesses to a namenode with different addresses and/or hostnames should result in different DistributedFileSystem instances.

Yes. I see two problems:
 
* HDFS should not change the host name in the URI based on resolution. So the following should result in an error: {{getFS("hdfs://host/").getFileStatus("hdfs://host.domain/file")}} (see the sketch after this list).
   ** But if {{getFS("hdfs://host/").getFileStatus("hdfs://host/file")}} currently results in an error, then that is a bug HDFS should fix.

* TestHadoopHDFS.java might essentially be making the same mistake: {{getFS("hdfs://hostname/").getFileStatus("hdfs://ip/file");}} It should instead do {{getFS("hdfs://ip")...}}
    ** Where is this file located?
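
To make the point about authorities concrete, here is a minimal sketch against the 0.19-era FileSystem API. The class name, host names, port, and paths are illustrative, and {{getFS}} in the comment above is assumed to stand for {{FileSystem.get}}:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Illustrative only: "host" stands in for a real namenode hostname.
public class FsAuthorityExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // getFS("hdfs://host/") from the comment above.
    FileSystem fs = FileSystem.get(URI.create("hdfs://host:9000/"), conf);

    // Consistent authority: the path matches the FS it is handed to.
    fs.getFileStatus(new Path("hdfs://host:9000/file"));

    // Mixing authorities on one instance is what should fail fast
    // (rather than be silently rewritten via DNS resolution):
    // fs.getFileStatus(new Path("hdfs://host.domain:9000/file"));
  }
}
{code}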

> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5191
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.1
>         Environment: AIX 6.1 or Solaris
>            Reporter: Bill Habermaas
>            Assignee: Bill Habermaas
>            Priority: Minor
>         Attachments: 5191-1.patch, hadoop-5191.patch
>
>
> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> fs.default.name=hdfs://p520aix61.mydomain.com:9000
> Hostname for box is p520aix and the IP is 10.120.16.68
> If you use the following URL, "hdfs://10.120.16.68", to connect to the namenode, the exception that appears below occurs. You can only connect successfully if "hdfs://p520aix61.mydomain.com:9000" is used.
> Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
> 	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
> 	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
> 	at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
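
For reference, a minimal sketch of the failing access pattern from the report, together with the one-instance-per-authority pattern the comment above argues should work. The class name is illustrative; the hostname, IP, port, and path are taken from the report:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Default FS comes from fs.default.name (hostname-based).
    FileSystem byHost = FileSystem.get(conf);
    byHost.exists(new Path("hdfs://p520aix61.mydomain.com:9000/testdata")); // ok

    // On AIX/Solaris this throws the IllegalArgumentException shown above:
    // byHost.exists(new Path("hdfs://10.120.16.68:9000/testdata"));

    // Obtaining a distinct FileSystem instance for the IP authority avoids
    // mixing authorities on a single DistributedFileSystem.
    FileSystem byIp = FileSystem.get(URI.create("hdfs://10.120.16.68:9000/"), conf);
    byIp.exists(new Path("hdfs://10.120.16.68:9000/testdata"));
  }
}
{code}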

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

