From: "Michael Di Domenico" <mdidomenico4@gmail.com>
Date: Mon, 2 Jun 2008 15:24:31 -0400
To: core-user@hadoop.apache.org
Cc: hadoop-user@lucene.apache.org
Subject: Re: Hadoop installation folders in multiple nodes

Oops, missed the part where you already tried that.

On Mon, Jun 2, 2008 at 3:23 PM, Michael Di Domenico wrote:

> Depending on your Windows version, there is a DOS command called "subst"
> that you could use to virtualize a drive letter on your third machine.
>
> On Fri, May 30, 2008 at 4:35 AM, Sridhar Raman wrote:
>
>> Should the installation paths be the same on all the nodes? Most
>> documentation seems to suggest that it is _*recommended*_ to have the
>> _*same*_ paths on all the nodes. But what is the workaround if, for some
>> reason, one isn't able to use the same path?
>>
>> That's the problem we are facing right now. After getting Hadoop to work
>> perfectly in a 2-node cluster, we tried to accommodate a 3rd machine and
>> realised that this machine doesn't have an E: drive, which is where Hadoop
>> is installed on the other 2 nodes. All our machines run Windows. The
>> possible solutions are:
>>
>> 1) Move the installations on M1 & M2 to a drive that is present on M3.
>> We will keep this as the last option.
>>
>> 2) Map a folder on M3's D: to E:. We used the "subst" command to do this,
>> but when we tried to start DFS, it wasn't able to find the Hadoop
>> installation. Just to verify, we ssh'ed to localhost and were unable to
>> see the mapped drive; it is only visible as a folder on D:. In the basic
>> Cygwin prompt, however, we are able to see E:.
>>
>> 3) Partition M3's D: drive and create an E:. This carries the risk of
>> data loss.
>>
>> So, what should we do? Is there any way we can tell the NameNode the
>> installation paths of Hadoop on each of the remaining nodes? Or is there
>> some environment variable that can be set to make the Hadoop installation
>> path specific to each machine?
>>
>> Thanks,
>> Sridhar
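On the `subst` behaviour in option 2: a `subst` mapping exists only in the logon session that created it, which would explain why a shell reached through sshd (a different session) cannot see E: while an interactively started Cygwin prompt can. A hedged sketch of the workaround follows; the `D:\hadoop` path is purely illustrative, and the registry approach (a session-independent global DOS device mapping) requires administrator rights and takes effect after a reboot.

```bat
:: Per-session mapping: visible only to the session that ran it,
:: so daemons started over ssh will generally NOT see E:.
subst E: D:\hadoop

:: Session-independent alternative (assumption: D:\hadoop is the real
:: install path): define a global DOS device mapping in the registry.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices" ^
    /v E: /t REG_SZ /d \??\D:\hadoop
```

After a reboot the E: mapping should be visible system-wide, including to service-launched sessions, which a plain `subst` is not.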
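On the final question: `conf/hadoop-env.sh` is sourced locally on whichever node a daemon starts, so values set there can legitimately differ from machine to machine. A minimal sketch, assuming Cygwin-style paths (the exact paths below are examples, not your real layout):

```sh
# conf/hadoop-env.sh -- sourced on the node where the daemon runs,
# so each machine can keep its own copy with its own local paths.
export JAVA_HOME=/cygdrive/c/jdk1.6.0          # example path, set per node
export HADOOP_LOG_DIR=/cygdrive/d/hadoop/logs  # example path, set per node
```

The caveat, as far as I know, is that the stock `start-dfs.sh` script ssh'es into each slave and invokes `bin/hadoop-daemon.sh` under the same install path it was started from, so if the installation directory itself differs between nodes you would have to start each daemon manually on its own node, e.g. `bin/hadoop-daemon.sh start datanode`.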