Subject: Re: Unable to start Hive
From: Vikas Parashar <para.vikas@gmail.com>
To: user@hadoop.apache.org, Anand Murali
Date: Fri, 15 May 2015 16:04:17 +0530

Please give me the o/p of the commands below:

#hadoop fs -mkdir /tmp/abc
#hadoop fs -ls /tmp/abc
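If the mkdir fails with the same SafeModeException as in your trace, the
namenode is still in safe mode. Assuming the stock Hadoop 2.x client on your
PATH, you can check its state and, if you do not want to wait for it to exit
on its own, leave safe mode manually:

#hdfs dfsadmin -safemode get
#hdfs dfsadmin -safemode leave

(Also note the DEPRECATED warning in your dfsadmin output below: on 2.6 the
preferred form is "hdfs dfsadmin -report" rather than "hadoop dfsadmin
-report".)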
On Fri, May 15, 2015 at 3:08 PM, Anand Murali <anand_vihar@yahoo.com> wrote:

> Vikas:
>
> Find below
>
> anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
> Error: Could not find or load main class dfsamin
> anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> Configured Capacity: 179431981056 (167.11 GB)
> Present Capacity: 142666625024 (132.87 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used: 946176 (924 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Live datanodes (1):
>
> Name: 127.0.0.1:50010 (localhost)
> Hostname: Latitude-E5540
> Decommission Status : Normal
> Configured Capacity: 179431981056 (167.11 GB)
> DFS Used: 946176 (924 KB)
> Non DFS Used: 36765356032 (34.24 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 79.51%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Fri May 15 15:07:53 IST 2015
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
> On Friday, May 15, 2015 2:52 PM, Vikas Parashar <para.vikas@gmail.com> wrote:
>
> Please send me the o/p of the command below:
>
> # hadoop dfsadmin -report
>
> On Fri, May 15, 2015 at 2:43 PM, Anand Murali <anand_vihar@yahoo.com> wrote:
>
> Vikas
>
> Can you be more specific? What should I check for in the Hive logs?
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <para.vikas@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode"-related
> entries in your namenode log file. Kindly check the Hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <anand_vihar@yahoo.com> wrote:
>
> Vikas:
>
> Please find attached. At this point I should mention that, with the
> current installation, I am able to run MapReduce jobs and Pig scripts
> without any errors. So please make sure any suggested change does not
> break the other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
> On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <kirandkumar2013@gmail.com> wrote:
>
> Anand,
> Sometimes it errors out because some resources are not available, so stop
> and start the Hadoop cluster and see.
>
> On May 15, 2015 12:24 PM, "Anand Murali" <anand_vihar@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop 2.6 (pseudo-distributed mode) on Ubuntu 15.04 and am
> trying to connect Hive to it after installation. On start-up I source a
> script, . .hadoop, which contains the environment variable settings.
> Find below.
>
> *. .hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in
> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
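Re the quoted trace above: the namenode itself reports that safe mode "will
be turned off automatically in 6 seconds", so this looks like the normal
safe-mode extension during startup rather than a misconfiguration. If you
script your startup, blocking until HDFS leaves safe mode before launching
Hive should avoid the race. A minimal sketch, assuming the standard 2.x
client is on the PATH:

#hdfs dfsadmin -safemode wait
#hive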