hadoop-common-user mailing list archives

From John Martyniak <j...@avum.com>
Subject Re: Multiple NIC Cards
Date Wed, 10 Jun 2009 01:24:45 GMT

So I set up a DNS server for the internal network, changed all of the
names to duey.local, and created a master zone for .local on the DNS
server. I put that server first in the /etc/resolv.conf file and added
it to the interface. I changed the hostname of the machine it is
running on from duey.xxxx.com to duey.local. I checked that DNS
resolves, and it does: nslookup returns the name of the machine given
the IP address.
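
For reference, the resolver setup looks roughly like this (the DNS
server address 192.168.1.1 is assumed for illustration; 192.168.1.102
is this machine's internal address):

    # /etc/resolv.conf -- internal DNS server listed first
    nameserver 192.168.1.1     # assumed address of the .local DNS server
    search local

    # forward lookup returns the internal address
    $ nslookup duey.local
    # reverse lookup returns the name
    $ nslookup 192.168.1.102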

I changed all of the names from IP addresses to duey.local in my
hadoop-site.xml, and changed the names in the masters and slaves files.
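
The relevant entries now look roughly like this (the ports are
assumed; the property names match the hadoop-site.xml style of config
used here):

    <!-- hadoop-site.xml (sketch; ports assumed) -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://duey.local:9000</value>
    </property>
    <property>
      <name>mapred.job.tracker</name>
      <value>duey.local:9001</value>
    </property>

    # conf/masters and conf/slaves now list only internal names
    duey.local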

Deleted all of the logs and cleared the Hadoop data out of the /tmp
directory.

Then I restarted Hadoop. And much to my surprise... it still didn't
work.

I really thought that this would work, since the consensus seems to be
that the issue is name resolution.

Any other thoughts would be greatly appreciated.

-John
On Jun 9, 2009, at 3:17 PM, Raghu Angadi wrote:

>
> I still need to go through the whole thread, but we feel your pain.
>
> First, please try setting fs.default.name to the NameNode's internal
> IP on the datanodes. This should make the NN associate the internal
> IPs with the datanodes (assuming your routing is correct). The
> NameNode web UI should then list internal IPs for the datanodes. You
> might have to temporarily change the NameNode code to listen on
> 0.0.0.0.
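
Something like this, I take it, on each datanode (the NameNode's
internal address 192.168.1.101 and port 9000 are assumed here):

    <!-- datanode hadoop-site.xml; NN internal address assumed -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://192.168.1.101:9000</value>
    </property>
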
>
> That said, the issues you are facing are pretty unfortunate. As
> Steve mentioned, Hadoop is all confused about hostname/IP, and there
> is unnecessary reliance on hostnames and reverse DNS lookups in many,
> many places.
>
> At least fairly straightforward setups with multiple NICs should be
> handled well.
>
> dfs.datanode.dns.interface should work like you expected (though I am
> not very surprised that it didn't).
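
That setting, for reference (en0 is the internal interface here, per
Steve's note below):

    <property>
      <name>dfs.datanode.dns.interface</name>
      <value>en0</value>
    </property>
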
>
> Another thing you could try is setting dfs.datanode.address to the
> internal IP address (this might already have been discussed in the
> thread). This should at least make all the bulk data transfers happen
> over the internal NICs. One way to make sure is to hover over a
> datanode entry on the NameNode web UI; it shows the IP address.
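
Presumably something like this (50010 is the stock datanode port;
192.168.1.102 is the internal address from earlier in the thread):

    <property>
      <name>dfs.datanode.address</name>
      <value>192.168.1.102:50010</value>
    </property>
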
>
> good luck.
>
> It might be better to document your pains and findings in a Jira
> (with most of the details in one or more comments rather than in the
> description).
>
> Raghu.
>
> John Martyniak wrote:
>> So I changed all of the 0.0.0.0 addresses on one machine to point to
>> the 192.168.1.102 address. And it still picks up the hostname and IP
>> address of the external network.
>> I am kind of at my wits' end with this, as I am not seeing a
>> solution yet, except to take the machines off the external network
>> and leave them on the internal network, which isn't an option.
>> Has anybody had this problem before?  What was the solution?
>> -John
>> On Jun 9, 2009, at 10:17 AM, Steve Loughran wrote:
>>> One thing to consider is that some of the various Hadoop services
>>> are bound to 0.0.0.0, which means every IPv4 address. You really
>>> want to bring up everything, including the Jetty services, on the
>>> en0 network adapter by binding them to 192.168.1.102; this will
>>> cause anyone trying to talk to them over the other network to fail,
>>> which at least surfaces the problem sooner rather than later.
>
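
For reference, binding the web/Jetty services to the internal NIC the
way Steve describes would look roughly like this in hadoop-site.xml
(default ports shown; the property names are assumed to match this
cluster's pre-0.20 config):

    <property>
      <name>dfs.http.address</name>
      <value>192.168.1.102:50070</value>   <!-- NameNode web UI -->
    </property>
    <property>
      <name>dfs.datanode.http.address</name>
      <value>192.168.1.102:50075</value>   <!-- DataNode web UI -->
    </property>
    <property>
      <name>mapred.job.tracker.http.address</name>
      <value>192.168.1.102:50030</value>   <!-- JobTracker web UI -->
    </property>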

