hadoop-hdfs-user mailing list archives

From Ayon Sinha <ayonsi...@yahoo.com>
Subject Re: Configure NameNode to accept connection from external ips
Date Tue, 01 Feb 2011 20:22:08 GMT
//imports this snippet needs
import java.io.InputStreamReader;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.URI;

import javax.net.SocketFactory;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

//static class variables
private static final String HDFS_URI = "hdfs://sjcyyyyy.sjc.xxxxx.com:9000";
private static final String INPUT_PATH =
        "hdfs://sjcyyyyy.sjc.xxxxx.com:9000/<HDFS path>/blahfoo.txt";

//method content (fis is an FSDataInputStream field; logger, rankCacheEntries
//and the Blah* classes are defined elsewhere)
try {
    //probe the namenode port with a plain socket and a 5 sec timeout, so we
    //fail fast instead of entering the IPC client's long retry loop
    InetAddress addr = InetAddress.getByName("sjcyyyyy01.sjc.xxxxx.com");
    Socket socket = SocketFactory.getDefault().createSocket();
    InetSocketAddress sockAddr = new InetSocketAddress(addr, 9000);
    boolean canConnectToHDFS = false;
    try {
        socket.connect(sockAddr, 5000);
        canConnectToHDFS = true;
    } catch (Exception e) {
        logger.log(LogLevel.WARN, e);
    } finally {
        socket.close(); //don't leak the probe socket
    }

    if (canConnectToHDFS) {
        JobConf job = new JobConf(new Configuration(), BlahHDFSReader.class);
        job.set("dfs.permissions", "false");
        job.setUser("yyyyy");
        job.set("hadoop.job.ugi", "yyyyy,yyyyy");
        job.set("dfs.web.ugi", "webuser,webgroup");
        job.set("ipc.client.connect.max.retries", "5"); //DOES NOT WORK; there is an HDFS bug filed for this
        FileSystem fs = FileSystem.get(URI.create(HDFS_URI), job);
        Path filePath = new Path(INPUT_PATH);
        fis = fs.open(filePath);
        BlahStorageServiceHelper helper = new BlahStorageServiceHelper();
        helper.parseStorageStreamReader(new InputStreamReader(fis, "UTF-8"), rankCacheEntries);
    }
} catch (Exception e) {
    logger.log(LogLevel.ERROR, e);
    throw e; //TODO: exception handling
} finally {
    if (fis != null) {
        fis.close();
    }
}
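
For reference, the pre-connect probe can be pulled out into a tiny standalone
helper that needs nothing from Hadoop. A minimal sketch (the HdfsProbe and
canConnect names are made up for illustration):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class HdfsProbe {
    //returns true if a plain TCP connection to host:port succeeds within
    //timeoutMs; failing fast here avoids handing control to the Hadoop IPC
    //client, which can sit in a long retry loop when the namenode is down
    public static boolean canConnect(String host, int port, int timeoutMs) {
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        } finally {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}

e.g. if (HdfsProbe.canConnect("sjcyyyyy01.sjc.xxxxx.com", 9000, 5000)) { ...open the file... }
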
 -Ayon

________________________________
From: felix gao <gre1600@gmail.com>
To: hdfs-user@hadoop.apache.org
Sent: Tue, February 1, 2011 9:42:23 AM
Subject: Re: Configure NameNode to accept connection from external ips

Ayon,

Thanks for your information. Would you be able to share your test connection code
with me? Also, the problem I described does not seem to occur if I run the same
code inside the cluster, so I think there must be a configuration parameter or
some knob I can turn to make the namenode serve files to a client on a different
network.

Felix


On Mon, Jan 31, 2011 at 2:03 PM, Ayon Sinha <ayonsinha@yahoo.com> wrote:

Also, be careful when you try to connect to HDFS and it doesn't respond. There
was a place in the code where it was hard-coded to retry 45 times, once every 15
secs, on a socket-connect exception. It was not (at least in the 0.18 version
code I looked at) honoring the configured max connect retries.
>
>
>My workaround was to wrap the call in a test-connection check before actually
>handing control to HDFS to connect.
> -Ayon
>
________________________________
From: felix gao  <gre1600@gmail.com>
>To: hdfs-user@hadoop.apache.org
>Sent: Mon, January 31, 2011 1:54:41 PM
>Subject: Re: Configure NameNode to accept connection from external ips
>
>
>I am trying to create a client that talks to HDFS and I am hitting the following
>problem:
>
>ipc.Client: Retrying connect to server: hm01.xxx.xxx.com/xx.xxx.xxx.176:50001.
>Already tried 0 time(s).
>
>hm01 is running the namenode and tasktracker, and connecting to it works with
>internal IPs in the range 192.168.100.1 to 192.168.100.255. However, my client
>sits on a completely different network. What do I need to configure to make the
>namenode serve my client, which initiates requests from a different network?
>
>Here is how core-site.xml is configured for the namenode on my client:
>
><property>
>    <name>fs.default.name</name>
>    <value>hdfs://hm01.xxx.xxx.com:50001</value>
></property>
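
As a hedged aside: later Hadoop releases (2.1+) added a dedicated namenode-side
setting for exactly this; it does not exist in the 0.20-era releases discussed
in this thread. Put in hdfs-site.xml on the namenode, it makes the namenode RPC
server listen on all interfaces, after which access via the external NIC should
be restricted with a firewall or whitelist:

<property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
</property>
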
>
>Thanks,
>
>Felix
>
>
>
>
>On Tue, Jan 25, 2011 at 2:24 PM, felix gao <gre1600@gmail.com> wrote:
>
>Hi guys,
>>
>>I have a small cluster where each machine has two NICs: one configured with an
>>external IP and the other with an internal IP. Right now all the machines are
>>communicating with each other via the internal IPs. I want to configure the
>>namenode to also accept connections via its external IP (from whitelisted IPs),
>>but I am not sure how to do that. I have a copy of the slaves' conf files on my
>>local computer, which sits outside the cluster network, and when I do
>>hadoop fs -ls /user it does not connect to HDFS.
>>
>>Thanks,
>>
>>Felix
>>
>
>


