Date: Fri, 31 Oct 2014 16:40:34 +0000 (UTC)
From: "Kihwal Lee (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-7299) Hadoop Namenode failing because of negative value in fsimage

    [ https://issues.apache.org/jira/browse/HDFS-7299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14192030#comment-14192030 ]

Kihwal Lee commented on HDFS-7299:
----------------------------------

The file has one block and the maximum block size is 128 MB. What is the actual size of the block on the datanode?
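[Editor's note: one way to answer that question is to look up the block file directly on each datanode that holds a replica. Below is a minimal Java sketch, not an official Hadoop tool; the data-directory path is hypothetical and must be replaced with the datanode's configured dfs.data.dir. It relies only on the fact that finalized HDFS block files are named blk_<blockId> on disk.]

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FindBlockSize {
    public static void main(String[] args) throws IOException {
        Path dataDir = Paths.get(args[0]); // e.g. /data/1/dfs/dn (hypothetical datanode data dir)
        String blockId = args[1];          // numeric block id of the affected file's block
        String fileName = "blk_" + blockId; // finalized block files are named blk_<blockId>

        // Walk the datanode storage tree and print the on-disk length of any
        // matching block file. A healthy replica should be at most the block
        // size (128 MB here), and of course never negative.
        try (Stream<Path> files = Files.walk(dataDir)) {
            files.filter(p -> p.getFileName().toString().equals(fileName))
                 .forEach(p -> System.out.println(p + " : " + p.toFile().length() + " bytes"));
        }
    }
}
{code}

[Run it once per datanode replica; comparing the on-disk lengths against the length recorded in the fsimage shows whether the corruption is confined to the image.]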
> Hadoop Namenode failing because of negative value in fsimage
> ------------------------------------------------------------
>
>                 Key: HDFS-7299
>                 URL: https://issues.apache.org/jira/browse/HDFS-7299
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Vishnu Ganth
>
> The Hadoop NameNode fails to start because of an unexpected block size value in the fsimage.
> Stack trace:
> {code}
> 2014-10-27 16:22:12,107 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG: host = /
> STARTUP_MSG: args = []
> STARTUP_MSG: version = 2.0.0-cdh4.4.0
> STARTUP_MSG: classpath = /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/hue-plugins-2.5.0-cdh4.4.0.jar:... [several hundred CDH-4.4.0 parcel jars elided]
> STARTUP_MSG: build = file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.4.0/src/hadoop-common-project/hadoop-common -r c0eba6cd38c984557e96a16ccd7356b7de835e79; compiled by 'jenkins' on Tue Sep 3 19:33:17 PDT 2013
> STARTUP_MSG: java = 1.7.0_45
> ************************************************************/
> 2014-10-27 16:22:12,129 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2014-10-27 16:22:12,695 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2014-10-27 16:22:12,725 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
> 2014-10-27 16:22:12,823 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2014-10-27 16:22:12,823 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> [the line above repeats eight times, once per included host; hostnames are redacted]
> 2014-10-27 16:22:13,116 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes:
> HostSet(
> 	->Entry{, port=0, ipAddress=}
> 	[eight redacted entries in total]
> )
> 2014-10-27 16:22:13,116 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes:
> HostSet(
> )
> 2014-10-27 16:22:13,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
> 2014-10-27 16:22:13,186 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = true
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
> 2014-10-27 16:22:13,192 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
> 2014-10-27 16:22:13,192 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
> 2014-10-27 16:22:13,192 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
> 2014-10-27 16:22:13,193 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2014-10-27 16:22:13,197 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2014-10-27 16:22:13,421 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
> 2014-10-27 16:22:13,422 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2014-10-27 16:22:13,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
> 2014-10-27 16:22:13,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
> 2014-10-27 16:22:13,675 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt1/dfs/nn/in_use.lock acquired by nodename 13026@
> 2014-10-27 16:22:14,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/dfs/nn/in_use.lock acquired by nodename 13026@
> 2014-10-27 16:22:14,268 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt2/dfs/nn/in_use.lock acquired by nodename 13026@
> 2014-10-27 16:22:14,361 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /opt1/dfs/nn/current
> 2014-10-27 16:22:14,440 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /data/dfs/nn/current
> 2014-10-27 16:22:14,475 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /opt2/dfs/nn/current
> 2014-10-27 16:22:14,854 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /opt1/dfs/nn/current/fsimage_0000000000023479779 using no compression
> 2014-10-27 16:22:14,854 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 247160
> 2014-10-27 16:22:16,428 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/opt1/dfs/nn/current/fsimage_0000000000023479779, cpktTxId=0000000000023479779)
> java.io.IOException: Unexpected block size: -1945969516689645797
> 	at org.apache.hadoop.hdfs.protocol.Block.readHelper(Block.java:187)
> 	at org.apache.hadoop.hdfs.protocol.Block.readFields(Block.java:173)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadINode(FSImageFormat.java:379)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadDirectory(FSImageFormat.java:310)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadLocalNameINodes(FSImageFormat.java:283)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:224)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:786)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:775)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:677)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:647)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:274)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
> 2014-10-27 16:22:16,442 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /data/dfs/nn/current/fsimage_0000000000023479779 using no compression
> 2014-10-27 16:22:16,442 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 247160
> 2014-10-27 16:22:16,945 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/data/dfs/nn/current/fsimage_0000000000023479779, cpktTxId=0000000000023479779)
> java.io.IOException: Unexpected block size: -1945969516689645797
> 	[identical stack trace to the one above]
> 2014-10-27 16:22:16,949 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /opt2/dfs/nn/current/fsimage_0000000000023479779 using no compression
> 2014-10-27 16:22:16,949 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 247160
> 2014-10-27 16:22:17,407 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/opt2/dfs/nn/current/fsimage_0000000000023479779, cpktTxId=0000000000023479779)
> java.io.IOException: Unexpected block size: -1945969516689645797
> 	[identical stack trace to the one above]
> 2014-10-27 16:22:17,410 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> 2014-10-27 16:22:17,411 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: ganglia thread interrupted.
> 2014-10-27 16:22:17,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> 2014-10-27 16:22:17,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> 2014-10-27 16:22:17,411 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.io.IOException: Failed to load an FSImage file!
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:658)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:274)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
> 2014-10-27 16:22:17,413 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2014-10-27 16:22:17,415 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at /
> ************************************************************/
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
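[Editor's note on where the exception originates: the stack trace shows Block.readFields/readHelper throwing while the loader parses an INode's block list, i.e. the image stores each block as three consecutive longs (id, length, generation stamp) and rejects a negative length. The sketch below illustrates that shape only; it is not the verbatim Block.readHelper source, and the field names are illustrative.]

{code}
import java.io.DataInput;
import java.io.IOException;

public class BlockRecordCheck {
    // Reads one block record the way the fsimage loader does: three
    // consecutive longs per block, with a sanity check on the length.
    static long[] readBlockRecord(DataInput in) throws IOException {
        long blockId = in.readLong();          // block id
        long numBytes = in.readLong();         // block length in bytes
        long generationStamp = in.readLong();  // generation stamp
        if (numBytes < 0) {
            // A single corrupted 8-byte field is enough: the value in this
            // report, -1945969516689645797, fails this check and aborts the
            // entire image load, taking the NameNode down with it.
            throw new IOException("Unexpected block size: " + numBytes);
        }
        return new long[] { blockId, numBytes, generationStamp };
    }
}
{code}

[Because all three storage directories hold byte-identical copies of fsimage_0000000000023479779, the same record fails in each, which points at corruption introduced before or during checkpointing rather than a bad disk on one volume.]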