From: Mohit Vadhera
To: user@hadoop.apache.org
Date: Thu, 28 Feb 2013 13:28:48 +0530
Subject: namenode is failing
Hi guys,

The NameNode switches into safemode when disk space is low on the root filesystem (/), and I have to run a command manually to make it leave safemode. I do have space on another partition. Can I change the path for the cache files to that other partition? I have the properties below; would changing them resolve the issue? When I change the paths to other directories and restart the services, the namenode service fails to start with the error shown below. I haven't found anything in the logs so far. Can you please suggest something?

  hadoop.tmp.dir               /var/lib/hadoop-hdfs/cache/${user.name}
  dfs.namenode.name.dir        /var/lib/hadoop-hdfs/cache/${user.name}/dfs/name
  dfs.namenode.checkpoint.dir  /var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary

The namenode service is failing:

  # for service in /etc/init.d/hadoop-hdfs-* ; do sudo $service status; done
  Hadoop datanode is running                              [  OK  ]
  Hadoop namenode is dead and pid file exists             [FAILED]
  Hadoop secondarynamenode is running                     [  OK  ]

Thanks,
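[Editor's note] On relocating the directories: these paths can point at another partition. A minimal hdfs-site.xml sketch, where `/data` is a hypothetical mount point on the larger partition (not a path from the original post):

```xml
<!-- Sketch only: /data is an assumed mount point for the larger partition. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop-hdfs/cache/${user.name}/dfs/name</value>
</property>
<property>
  <name>dfs.namenode.checkpoint.dir</name>
  <value>/data/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
</property>
```

Note that the NameNode will not start against an empty name directory, which is consistent with the "dead and pid file exists" failure reported above: with services stopped, the existing contents of the old directory (the `current/` subtree holding the fsimage and edit logs) must be copied to the new location with ownership and permissions preserved (e.g. `cp -a`) before restarting.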
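[Editor's note] The safemode behaviour described above is typically the NameNode's resource checker reacting to low free space on the volume holding its storage directories. A hedged sketch of the commands involved, assuming a CDH-style install where HDFS daemons run as the `hdfs` user (that user name is an assumption, not from the original message):

```shell
# Check whether safemode is on, and force the NameNode out of it.
# "sudo -u hdfs" assumes the hdfs system user owns the cluster on this install.
sudo -u hdfs hdfs dfsadmin -safemode get    # reports "Safe mode is ON/OFF"
sudo -u hdfs hdfs dfsadmin -safemode leave  # the manual override described above

# The NameNode re-enters safemode while the volume holding its storage
# directories has less free space than dfs.namenode.resource.du.reserved
# (100 MB by default), so verify free space on the root filesystem:
df -h /
```

Leaving safemode manually is only a stopgap; freeing space on /, or moving the storage directories to the larger partition as the question asks, is the durable fix.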