Subject: Re: Is it necessary to run secondary namenode when starting HDFS?
From: Ivan Ryndin <iryndin@gmail.com>
To: user@hadoop.apache.org
Cc: harsh@cloudera.com
Date: Mon, 17 Dec 2012 21:22:41 +0400

Thank you very much!

It is now clear to me that in development mode I will not start the secondary namenode, but in production it is better to have it.

Thanks!

Regards,
Ivan

2012/12/17 Harsh J <harsh@cloudera.com>:
> The SecondaryNameNode is necessary for automatic maintenance in
> long-running clusters (read: production), but is not necessary for,
> nor tied into, the basic functions/operations of HDFS.
>
> On 1.x, you can remove the script's startup of the SNN by removing its
> host entry from the conf/masters file.
> On 2.x, you can selectively start the NN and DNs by using the
> hadoop-daemon.sh script commands.
>
> On Mon, Dec 17, 2012 at 10:34 PM, Ivan Ryndin <iryndin@gmail.com> wrote:
> > Hi all,
> >
> > Is it necessary to run the secondary namenode when starting HDFS?
> > I am dealing with Hadoop 1.1.1.
> > Looking at the script $HADOOP_HOME/bin/start-dfs.sh, it contains the
> > following lines:
> >
> > # start dfs daemons
> > # start namenode after datanodes, to minimize time namenode is up w/o data
> > # note: datanodes will log connection errors until namenode starts
> > "$bin"/hadoop-daemon.sh --config $HADOOP_CONF_DIR start namenode $nameStartOpt
> > "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode $dataStartOpt
> > "$bin"/hadoop-daemons.sh --config $HADOOP_CONF_DIR --hosts masters start secondarynamenode
> >
> > So, will HDFS work if I turn off the startup of the secondarynamenode?
> >
> > I ask this because I am playing with Hadoop on a two-node cluster only
> > (and the machines in the cluster do not have much RAM or disk space),
> > and thus I don't want to run unnecessary processes.
> >
> > --
> > Best regards,
> > Ivan P. Ryndin
>
> --
> Harsh J

--
Best regards,
Ivan P. Ryndin
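
A minimal sketch of the selective startup Harsh describes above, using the same daemon scripts that start-dfs.sh itself calls (this assumes the default Hadoop 1.x tarball layout from the question; adjust $HADOOP_HOME and $HADOOP_CONF_DIR for your install):

  # Start only the NameNode and the DataNodes, skipping the SecondaryNameNode.
  # (On 1.x, also remove the SNN host from conf/masters so start-dfs.sh
  #  will not launch it either.)
  "$HADOOP_HOME"/bin/hadoop-daemon.sh  --config "$HADOOP_CONF_DIR" start namenode
  "$HADOOP_HOME"/bin/hadoop-daemons.sh --config "$HADOOP_CONF_DIR" start datanode

  # Stop them again in reverse order.
  "$HADOOP_HOME"/bin/hadoop-daemons.sh --config "$HADOOP_CONF_DIR" stop datanode
  "$HADOOP_HOME"/bin/hadoop-daemon.sh  --config "$HADOOP_CONF_DIR" stop namenode

The singular hadoop-daemon.sh starts a daemon on the local machine, while the plural hadoop-daemons.sh runs the same command over SSH on every host listed in conf/slaves, which is why the NameNode uses the former and the DataNodes the latter in the snippet quoted from start-dfs.sh.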