From: Sameer Paranjpye
Date: Wed, 01 Nov 2006 09:28:33 -0800
To: hadoop-user@lucene.apache.org
Subject: Re: Some queries on Master node

Jagadeesh wrote:
> Hi,
>
> I have a cluster of more than 25 servers which basically serves the
> purpose of storage. However, I was wondering what will happen if the
> master node runs out of disk space. If I add more nodes to the cluster,
>
> 1. Will Hadoop move the blocks in the master node to the newly added
> nodes?

No, HDFS will not move blocks to new nodes automatically. New files that
are added will likely have their blocks placed on the new nodes. Also,
removing old files will remove some blocks that are on your older nodes.

One way to re-balance your cluster would be to:

- Select a subset of files that take up a good percentage of your disk
  space
- Copy them to new locations in your HDFS
- Remove the *old* copies of the files
- Rename the new copies to their previous names
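A minimal sketch of those four steps for a single file, using the
org.apache.hadoop.fs API. The signatures here follow later Hadoop
releases and may differ slightly in the 0.x line discussed in this
thread, and the paths are hypothetical:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class ManualRebalance {
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path orig = new Path("/data/logs/part-00000");        // hypothetical file to move
            Path temp = new Path("/data/logs/part-00000.rebal");  // hypothetical temporary name

            // Copy within HDFS; the copy writes fresh blocks, which will
            // likely land on the newly added datanodes.
            FileUtil.copy(fs, orig, fs, temp, false /* keep source */, conf);

            // Remove the old copy, freeing its blocks on the older nodes.
            fs.delete(orig, false);

            // Rename the new copy back to the original name.
            fs.rename(temp, orig);
        }
    }

The same three calls would be repeated for each file selected in the
first step; checking that the copy succeeded before deleting the
original makes the procedure safer.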
> 2. Is there any parameter where I can specify not to write file chunks /
> blocks in the master node and rather always use other nodes in the
> cluster?

Do you need to keep data on your master node? The simple way to avoid
this is to remove the master node from the 'slaves' file.

> Since there is always a possibility of a single point of failure, I
> would like to keep the master node as secure as possible.

To keep your filesystem metadata safe, you might consider one of the
following options:

- Set up a cron job to periodically copy the 'image' and 'edits' files
  from under the namenode to a different location. If your master node
  loses its disk you can bring up the namenode with the copies.
- AFAIK, the upcoming 0.8 release will include a mechanism by which the
  namenode writes the image and edits files to multiple locations. You
  can then specify multiple directories in the 'dfs.name.dir' config
  variable. One or more of these can be NFS mounted volumes on another
  node. Again, if you lose the master node, you can bring up the
  namenode with one of the remaining copies of the image and edits.

> I am planning to go live with this setup by the end of this week and I
> would really appreciate it if you can send me a reply for the above
> queries.

Good luck!

> Thanks
>
> Jugs
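A rough sketch of the first option above, the periodic copy of the
namenode's metadata. The name-directory and backup paths are
hypothetical, and the exact layout under 'dfs.name.dir' varies between
releases, so the sketch simply mirrors the whole directory; it is meant
to be run periodically, e.g. from cron on the master node:

    import java.io.IOException;
    import java.nio.file.FileVisitResult;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.SimpleFileVisitor;
    import java.nio.file.StandardCopyOption;
    import java.nio.file.attribute.BasicFileAttributes;

    /**
     * Mirrors the namenode's metadata directory to a backup location.
     * Both paths are hypothetical; point NAME_DIR at your dfs.name.dir
     * and BACKUP_DIR at an NFS mount or a disk on another machine.
     */
    public class NameNodeMetadataBackup {
        private static final Path NAME_DIR = Paths.get("/hadoop/dfs/name");
        private static final Path BACKUP_DIR = Paths.get("/backup/dfs/name");

        public static void main(String[] args) throws IOException {
            Files.walkFileTree(NAME_DIR, new SimpleFileVisitor<Path>() {
                @Override
                public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
                        throws IOException {
                    // Recreate the directory structure under the backup location.
                    Files.createDirectories(BACKUP_DIR.resolve(NAME_DIR.relativize(dir)));
                    return FileVisitResult.CONTINUE;
                }

                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                        throws IOException {
                    // Copy each file (e.g. the image and edits files),
                    // overwriting the previous backup.
                    Files.copy(file, BACKUP_DIR.resolve(NAME_DIR.relativize(file)),
                            StandardCopyOption.REPLACE_EXISTING);
                    return FileVisitResult.CONTINUE;
                }
            });
        }
    }

Since a running namenode keeps appending to the edits file, a copy
taken this way is only a best-effort snapshot; keeping a few
timestamped copies rather than overwriting a single one is a
reasonable extra precaution.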