hadoop-common-user mailing list archives

From Raghu Angadi <rang...@yahoo-inc.com>
Subject Re: Hadoop behind a Firewall
Date Tue, 11 Sep 2007 21:50:53 GMT

The Namenode does not initiate any connections.

The IP address for a Datanode that the Namenode gives to a client is the 
IP address that the Datanode uses to connect to the Namenode (i.e. the 
Namenode just calls getRemoteAddress() on the connection from the 
Datanode). There is no option to change this.
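A minimal sketch of that behavior (not Hadoop code; class and variable names are made up): a server only ever sees the source address of the TCP connection a peer made to it, which is all getRemoteAddress() returns.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: the "namenode" side records whatever peer address the
// "datanode" connected from -- analogous to getRemoteAddress().
public class RemoteAddrDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket namenode = new ServerSocket(0)) { // ephemeral port
            // Simulated datanode: connect to the server over loopback.
            try (Socket datanode = new Socket("127.0.0.1", namenode.getLocalPort());
                 Socket accepted = namenode.accept()) {
                // The accepting side sees only the connection's source address;
                // this is the address it would later hand out to clients.
                System.out.println(accepted.getInetAddress().getHostAddress());
            }
        }
    }
}
```

Since the datanode here connects over loopback, the server prints 127.0.0.1; a NATed datanode would likewise be recorded under whatever address its packets arrive from.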

If you just want the clients outside the firewall, do you think 
https://issues.apache.org/jira/browse/HADOOP-1822 does the job?


Stu Hood wrote:
> Hey gang,
> We're getting ready to deploy our first cluster, and while deciding on the node layout,
> we ran into an interesting question.
> The cluster will be behind a firewall, and a few clients will be on the outside. We'd
> like to minimize the number of external IPs we use, and provide a single IP address
> with forwarded ports for each node (using iptables).
> We've used this method before with simpler "client -> server" protocols, but because
> of Hadoop's "client -> namenode -> client -> datanode" protocol, I'm assuming this
> will not work by default.
> Is it possible to configure the namenode to send clients a different external IP/port
> for the datanodes than the one it uses when it communicates directly?
> Thanks a lot!
> Stu Hood
> Webmail.us
> "You manage your business. We'll manage your email."®
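For reference, the per-node forwarding Stu describes might look like the following (a hypothetical sketch; all addresses and ports are made up, 50010 being the default datanode data-transfer port):

```shell
# One external IP (203.0.113.10), one forwarded external port per datanode:
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 51010 \
         -j DNAT --to-destination 10.0.0.1:50010
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 51011 \
         -j DNAT --to-destination 10.0.0.2:50010
# The catch, per the reply above: the namenode still hands external clients
# the internal 10.0.0.x:50010 addresses, so this mapping alone is not enough.
```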
