hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5191) After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
Date Wed, 04 Mar 2009 22:35:56 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12678968#action_12678968 ]

Raghu Angadi commented on HADOOP-5191:
--------------------------------------


This does not seem like an AIX or Solaris issue. The fix should work for IPs as well as for aliases, if there are any.

This goes to the basics of what "fs.default.name" means. If a canonical form for comparison makes sense according to its definition, then we should do it properly (e.g. how are multiple IPs handled, or aliases, as Bo Shi mentioned).

Do we have a definition or meaning of "fs.default.name"?

This issue has come up multiple times and deserves either a fix or a clarification.

Regarding the patch: please avoid referring to jiras in the code as much as possible; it is ok to 'waste space' with slightly longer justifications in the code.
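As a hypothetical sketch of the canonicalization idea discussed above (not the attached patch, and not Hadoop's actual checkPath logic): resolve the host in both the requested path and fs.default.name to a canonical host name before comparing, so a hostname and its IP would be treated as the same filesystem. The class and method names here are invented for illustration.

```java
import java.net.InetAddress;
import java.net.URI;
import java.net.UnknownHostException;

// Hypothetical sketch: compare a path's authority against fs.default.name
// after canonicalizing both hosts, so hdfs://10.120.16.68:9000 and
// hdfs://p520aix61.mydomain.com:9000 could match when they resolve to the
// same canonical name. Real code would also need to decide how multi-homed
// hosts (multiple IPs) and aliases should behave.
public class CanonicalFsCheck {

    // Resolve a host (name or IP literal) to its canonical host name;
    // fall back to the original string if resolution fails.
    static String canonicalHost(String host) {
        try {
            return InetAddress.getByName(host).getCanonicalHostName();
        } catch (UnknownHostException e) {
            return host;
        }
    }

    // True when scheme, canonical host, and port all agree.
    static boolean sameFileSystem(URI path, URI defaultFs) {
        if (!path.getScheme().equalsIgnoreCase(defaultFs.getScheme())) {
            return false;
        }
        if (path.getPort() != defaultFs.getPort()) {
            return false;
        }
        return canonicalHost(path.getHost())
                .equalsIgnoreCase(canonicalHost(defaultFs.getHost()));
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://nn.example.com:9000");
        // Same authority: matches regardless of DNS.
        System.out.println(sameFileSystem(
                URI.create("hdfs://nn.example.com:9000/testdata"), defaultFs));
        // Port mismatch: rejected.
        System.out.println(sameFileSystem(
                URI.create("hdfs://nn.example.com:9001/testdata"), defaultFs));
    }
}
```

Note that leaning on reverse DNS like this is itself a design choice; a strict definition of "fs.default.name" might instead require an exact authority match, which is the ambiguity the comment asks to have clarified.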

> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5191
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.1
>         Environment: AIX 6.1 or Solaris
>            Reporter: Bill Habermaas
>            Assignee: Bill Habermaas
>            Priority: Minor
>         Attachments: 5191-1.patch, hadoop-5191.patch
>
>
> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> fs.default.name=hdfs://p520aix61.mydomain.com:9000
> Hostname for box is p520aix and the IP is 10.120.16.68
> If you use the following url, "hdfs://10.120.16.68", to connect to the namenode, the exception that appears below occurs. You can only connect successfully if "hdfs://p520aix61.mydomain.com:9000" is used.
> Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
> 	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
> 	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
> 	at TestHadoopHDFS.run(TestHadoopHDFS.java:116)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

