From: Chris Embree
Reply-To: chris@embree.us
To: user@hadoop.apache.org
Date: Mon, 18 Feb 2013 14:31:17 -0500
Subject: Re: Using NFS mounted volume for Hadoop installation/configuration

Just for clarification, we only use NFS for binaries and config files. HDFS and MapReduce write to local disk. We just don't install an OS there. :)
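Roughly, the layout looks like this. The export name, mount point, and local data paths below are placeholders for illustration only, and the exact property names depend on your Hadoop version:

    # /etc/fstab on every node: the Hadoop install and conf tree live on NFS.
    # "nfs-server:/export/hadoop" and "/opt/hadoop" are made-up names.
    nfs-server:/export/hadoop  /opt/hadoop  nfs  ro,hard  0 0

    # The data paths in hdfs-site.xml / mapred-site.xml still point at local disks, e.g.:
    #   dfs.data.dir      -> /data/1/dfs,/data/2/dfs        (local disk)
    #   mapred.local.dir  -> /data/1/mapred,/data/2/mapred  (local disk)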
On Mon, Feb 18, 2013 at 1:44 PM, Paul Wilkinson <paul.m.wilkinson@gmail.com> wrote:

> That requirement for 100% availability is the issue. If NFS goes down, you lose all sorts of things that are critical. This will work for a dev cluster, but it is strongly discouraged for production.
>
> As a first step, consider rsync - that way everything is local, so fewer external dependencies. After that, consider not managing boxes by hand :)
>
> Paul
>
>
> On 18 Feb 2013, at 18:09, Chris Embree <cembree@gmail.com> wrote:
>
> I'm doing that currently. No problems to report so far.
>
> The only pitfall I've found is around NFS stability. If your NAS is 100% solid, no problems. I've seen mtab get messed up and refuse to remount if NFS has any hiccups.
>
> If you want to get really crazy, consider NFS for your datanode root fs. See the oneSIS project for details: http://onesis.sourceforge.net
>
> Enjoy.
>
> On Mon, Feb 18, 2013 at 1:00 PM, Mehmet Belgin <mehmet.belgin@oit.gatech.edu> wrote:
>
>> Hi Everyone,
>>
>> Will it be a problem if I put the Hadoop executables and configuration on an NFS volume that is shared by all masters and slaves? This way configuration changes would be available to all nodes without the need to sync any files. While this looks almost like a no-brainer, I am wondering if there are any pitfalls I need to be aware of.
>>
>> On a related question, is there a best-practices (do's and don'ts) document you can suggest other than the regular documentation by Apache?
>>
>> Thanks!
>> -Mehmet
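For anyone weighing the rsync route Paul mentions above, a minimal sketch of pushing the config tree out from one admin box. The paths and the use of the conf/slaves file as the host list are assumptions for illustration, not anything from this thread:

    # Push the Hadoop config directory from an admin node to every worker.
    # /opt/hadoop/conf and the slaves file are placeholder locations.
    for host in $(cat /opt/hadoop/conf/slaves); do
        rsync -az --delete /opt/hadoop/conf/ "${host}:/opt/hadoop/conf/"
    done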