hadoop-mapreduce-user mailing list archives

From Paul Wilkinson <paul.m.wilkin...@gmail.com>
Subject Re: Using NFS mounted volume for Hadoop installation/configuration
Date Mon, 18 Feb 2013 18:44:19 GMT
That requirement for 100% availability is the issue. If NFS goes down, you lose all sorts of
things that are critical. This will work for a dev cluster, but it is strongly discouraged
for production. 

As a first step, consider rsync - that way everything is local, so fewer external dependencies.
After that, consider not managing boxes by hand :)

Paul


On 18 Feb 2013, at 18:09, Chris Embree <cembree@gmail.com> wrote:

> I'm doing that currently.  No problems to report so far.   
> 
> The only pitfall I've found is around NFS stability.  If your NAS is 100% solid, no problems.
 I've seen mtab get messed up and refuse to remount if NFS has any hiccups. 
> 
> If you want to get really crazy, consider NFS for your datanode root fs.  See the oneSIS
project for details.  http://onesis.sourceforge.net
> 
> Enjoy.
> 
> On Mon, Feb 18, 2013 at 1:00 PM, Mehmet Belgin <mehmet.belgin@oit.gatech.edu> wrote:
>> Hi Everyone,
>> 
>> Will there be any problem if I put the Hadoop executables and configuration on an NFS
volume shared by all masters and slaves? This way, configuration changes will be available
to all nodes without the need to sync any files. While this looks almost like a no-brainer,
I am wondering if there are any pitfalls I need to be aware of.
>> 
>> On a related note, is there a best-practices (do's and don'ts) document you can
suggest, other than the regular documentation from Apache?
>> 
>> Thanks!
>> -Mehmet
> 
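Chris's caveat about mtab going stale can be guarded against by asking the kernel rather than /etc/mtab. A minimal sketch, assuming Linux (where /proc/mounts reflects the kernel's actual mount table); the mount point path is hypothetical:

```shell
#!/bin/sh
# Sketch (assumes Linux): check whether a path is really mounted according to
# the kernel's view in /proc/mounts, rather than trusting /etc/mtab, which
# can go stale after an NFS hiccup.
is_mounted() {
  # field 2 of /proc/mounts is the mount point; exit 0 if found, 1 if not
  awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

# Hypothetical usage: remount a shared volume if it has dropped off
#   is_mounted /shared/hadoop || mount /shared/hadoop
```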
