hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5191) After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
Date Mon, 09 Mar 2009 20:13:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12680254#action_12680254 ]

Raghu Angadi commented on HADOOP-5191:
--------------------------------------

The above should work as you expect. How do I run this test?

e.g., the following works:
{{$ bin/hadoop fs -Dfs.default.name="hdfs://hostname:7020/" -ls hdfs://ipaddress:7020/user/rangadi/5Mb-2}}

Is this essentially what you are doing?

Earlier I said:
bq. [...] But currently getFS("hdfs://host/").getFileStatus("hdfs://host/file") might result in an error, then HDFS should fix it. [...]

I don't think that is the case. This works as expected: {{getFS("hdfs://alias1/")}}, {{getFS("hdfs://alias2")}}, and {{getFS("hdfs://ip")}} all return different HDFS instances, and each works correctly even though they all point to the same physical namenode.
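
To illustrate, a minimal sketch of what I mean (the host name "alias1", the IP, the port, and the path are placeholders; the only real assumption is that the {{FileSystem}} cache keys on scheme and authority):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsAliasCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // "alias1" and "10.0.0.1" are placeholders; assume both resolve to the
    // same physical namenode listening on port 7020.
    FileSystem byHost = FileSystem.get(URI.create("hdfs://alias1:7020/"), conf);
    FileSystem byIp   = FileSystem.get(URI.create("hdfs://10.0.0.1:7020/"), conf);

    // The FileSystem cache keys on scheme + authority, so these are two
    // distinct client instances even though they talk to the same namenode.
    System.out.println(byHost == byIp);  // false

    // Each instance accepts fully qualified paths carrying its own authority.
    System.out.println(byHost.exists(new Path("hdfs://alias1:7020/user/rangadi/5Mb-2")));
    System.out.println(byIp.exists(new Path("hdfs://10.0.0.1:7020/user/rangadi/5Mb-2")));
  }
}
{code}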

There is one odd thing inside filesystem initialization: it invokes {{NetUtils.getStaticResolution()}} on the hosts, which seems to return null in my tests. But by default there are no static resolutions set, so null is the expected result.
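
To make that concrete, a small sketch, assuming the {{NetUtils}} static-resolution helpers ({{getStaticResolution()}} plus {{addStaticResolution()}}, which is what a test would use to register a mapping):

{code:java}
import org.apache.hadoop.net.NetUtils;

public class StaticResolutionCheck {
  public static void main(String[] args) {
    // By default nothing is registered, so the lookup returns null:
    System.out.println(NetUtils.getStaticResolution("somehost"));  // null

    // A mapping has to be registered explicitly before the lookup
    // returns anything:
    NetUtils.addStaticResolution("somehost", "localhost");
    System.out.println(NetUtils.getStaticResolution("somehost"));  // localhost
  }
}
{code}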


> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> -------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-5191
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5191
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.1
>         Environment: AIX 6.1 or Solaris
>            Reporter: Bill Habermaas
>            Assignee: Bill Habermaas
>            Priority: Minor
>         Attachments: 5191-1.patch, hadoop-5191.patch, TestHadoopHDFS.java
>
>
> After creation and startup of the hadoop namenode on AIX or Solaris, you will only be allowed to connect to the namenode via hostname but not IP.
> fs.default.name=hdfs://p520aix61.mydomain.com:9000
> Hostname for box is p520aix and the IP is 10.120.16.68
> If you use the following URL, "hdfs://10.120.16.68", to connect to the namenode, the exception that appears below occurs. You can only connect successfully if "hdfs://p520aix61.mydomain.com:9000" is used.
> Exception in thread "Thread-0" java.lang.IllegalArgumentException: Wrong FS: hdfs://10.120.16.68:9000/testdata, expected: hdfs://p520aix61.mydomain.com:9000
> 	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:320)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:84)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:122)
> 	at org.apache.hadoop.dfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:390)
> 	at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
> 	at TestHadoopHDFS.run(TestHadoopHDFS.java:116)
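
For context on the trace above: the "Wrong FS" error is raised by {{FileSystem.checkPath()}}, which only compares URI strings, so an IP-form authority never matches a hostname-form one. A rough sketch of that check (simplified; not the exact 0.19 source):

{code:java}
import java.net.URI;

public class CheckPathSketch {
  // Rough, simplified paraphrase of the comparison in FileSystem.checkPath():
  // the path's scheme and authority must match the filesystem's own URI,
  // compared as strings, with no name-service lookup in between.
  static void checkPath(URI fsUri, URI pathUri) {
    if (pathUri.getScheme() == null) {
      return;  // relative path: always accepted
    }
    boolean schemeOk = fsUri.getScheme().equalsIgnoreCase(pathUri.getScheme());
    boolean authorityOk = pathUri.getAuthority() == null
        || fsUri.getAuthority().equalsIgnoreCase(pathUri.getAuthority());
    if (schemeOk && authorityOk) {
      return;
    }
    throw new IllegalArgumentException(
        "Wrong FS: " + pathUri + ", expected: " + fsUri);
  }

  public static void main(String[] args) {
    URI fs = URI.create("hdfs://p520aix61.mydomain.com:9000");
    // Same namenode, but "10.120.16.68:9000" is a different authority string
    // than "p520aix61.mydomain.com:9000", so this throws the exception above:
    checkPath(fs, URI.create("hdfs://10.120.16.68:9000/testdata"));
  }
}
{code}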

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

