hadoop-hdfs-issues mailing list archives

From "Hu Liu, (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7299) Hadoop Namenode failing because of negative value in fsimage
Date Tue, 28 Oct 2014 12:28:33 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186756#comment-14186756
] 

Hu Liu, commented on HDFS-7299:
-------------------------------

If you can get the correct directory structure without any errors, the fsimage should be OK.
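
For context on why the NameNode cannot come up: the loader validates each block record as it reads inodes (Block.readFields, then Block.readHelper in the stack trace below), and a negative block length aborts the entire image load, which is why all three copies of fsimage_0000000000023479779 fail at the same record. The sketch below is a minimal, self-contained illustration of that kind of check. It is not the actual Hadoop source, and the field order and framing here are assumptions; only the negative-length rejection mirrors the logged message.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

// Simplified illustration of the block-record check implied by the stack trace.
// NOT the actual Hadoop source; the field order and framing are assumptions.
public class BlockRecordCheck {

    static void readBlockRecord(DataInputStream in) throws IOException {
        long blockId = in.readLong();          // assumed: block id
        long numBytes = in.readLong();         // the "block size" in the error message
        long generationStamp = in.readLong();  // assumed: generation stamp
        if (numBytes < 0) {
            // Mirrors the message seen in the NameNode log.
            throw new IOException("Unexpected block size: " + numBytes);
        }
        System.out.printf("blockId=%d len=%d genStamp=%d%n", blockId, numBytes, generationStamp);
    }

    public static void main(String[] args) {
        // 24 bytes standing in for one block record, with a corrupted length field.
        byte[] record = ByteBuffer.allocate(24)
                .putLong(42L)                      // block id (made up)
                .putLong(-1945969516689645797L)    // the corrupted length from the log
                .putLong(1001L)                    // generation stamp (made up)
                .array();
        try {
            readBlockRecord(new DataInputStream(new ByteArrayInputStream(record)));
        } catch (IOException e) {
            System.out.println("Image load would abort here: " + e.getMessage());
        }
    }
}
{code}

Dumping the full directory tree from the image (for example with the offline image viewer, hdfs oiv) walks the same on-disk inode records, which is why a clean, complete dump is a reasonable indication that the image itself is intact.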

> Hadoop Namenode failing because of negative value in fsimage
> ------------------------------------------------------------
>
>                 Key: HDFS-7299
>                 URL: https://issues.apache.org/jira/browse/HDFS-7299
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Vishnu Ganth
>
> The Hadoop NameNode is failing because of an unexpected block size value in the fsimage.
> Stack trace:
> {code}
> 2014-10-27 16:22:12,107 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = <mastermachine-hostname>/<ip>
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 2.0.0-cdh4.4.0
> STARTUP_MSG:   classpath = /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/hue-plugins-2.5.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/zookeeper-3.4.5-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/slf4j-api-1.6.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/stax-api-1.0.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-logging-1.1.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-math-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jets3t-0.6.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/avro-1.7.4.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-lang-2.5.jar:/opt/cloudera/parcels/C
DH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jersey-json-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/kfs-0.3.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/junit-4.8.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-common.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-annotations.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hive-serdes-1.0-SNAPSHOT.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-common-2.0.0-cdh4.4.0-tests.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-auth.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop/.//hadoop-common-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/./:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.
0.p0.39/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.4.0-tests.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/netty-3.2.4.Final.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/jersey-guice-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/avro-1.7.4.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/protobuf-java-2.4.0a.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.0.0-cdh4.4.0-tests.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-site.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//
hadoop-yarn-site-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-client-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-common-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-api-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/netty-3.2.4.Final.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/jersey-guice-1.8.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/avro-1.7.4.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/protobuf-java-2.4.0a.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-distcp.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/had
oop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-archives.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-archives-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-gridmix-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-distcp-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-extras-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-rumen-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-streaming-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.0-cdh4.4.0-tests.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-datajoin-2.0.0-cdh4.4.0.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop-mapreduce/.//hadoop-extras.jar:/usr/share/cmf/lib/plugins/event-publish-4.7.2-shaded.jar:/usr/share/cmf/lib/plugins/navigator-plugin-4.7.2-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-4.7.2.jar
> STARTUP_MSG:   build = file:///data/1/jenkins/workspace/generic-package-rhel64-6-0/topdir/BUILD/hadoop-2.0.0-cdh4.4.0/src/hadoop-common-project/hadoop-common -r c0eba6cd38c984557e96a16ccd7356b7de835e79; compiled by 'jenkins' on Tue Sep  3 19:33:17 PDT 2013
> STARTUP_MSG:   java = 1.7.0_45
> ************************************************************/
> 2014-10-27 16:22:12,129 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2014-10-27 16:22:12,695 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2014-10-27 16:22:12,725 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Sink ganglia started
> 2014-10-27 16:22:12,823 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2014-10-27 16:22:12,823 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP1> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP2> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP3> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP4> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP5> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,114 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP6> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,115 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP7> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,115 INFO org.apache.hadoop.util.HostsFileReader: Adding <IP8> to the list of included hosts from /var/run/cloudera-scm-agent/process/12726-hdfs-NAMENODE/dfs_hosts_allow.txt
> 2014-10-27 16:22:13,116 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes:
> HostSet(
> 	<IP1>->Entry{<IP1>, port=0, ipAddress=<IP1>}
> 	<IP2>->Entry{<IP2>, port=0, ipAddress=<IP2>}
> 	<IP3>->Entry{<IP3>, port=0, ipAddress=<IP3>}
> 	<IP4>->Entry{<IP4>, port=0, ipAddress=<IP4>}
> 	<IP5>->Entry{<IP5>, port=0, ipAddress=<IP5>}
> 	<IP6>->Entry{<IP6>, port=0, ipAddress=<IP6>}
> 	<IP7>->Entry{<IP7>, port=0, ipAddress=<IP7>}
> 	<IP8>->Entry{<IP8>, port=0, ipAddress=<IP8>}
> )
> 2014-10-27 16:22:13,116 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes:
> HostSet(
> )
> 2014-10-27 16:22:13,144 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
> 2014-10-27 16:22:13,186 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = true
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
> 2014-10-27 16:22:13,187 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
> 2014-10-27 16:22:13,192 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hdfs (auth:SIMPLE)
> 2014-10-27 16:22:13,192 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
> 2014-10-27 16:22:13,192 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
> 2014-10-27 16:22:13,193 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2014-10-27 16:22:13,197 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2014-10-27 16:22:13,421 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
> 2014-10-27 16:22:13,422 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2014-10-27 16:22:13,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
> 2014-10-27 16:22:13,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
> 2014-10-27 16:22:13,675 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt1/dfs/nn/in_use.lock acquired by nodename 13026@<mastermachine-hostname>
> 2014-10-27 16:22:14,134 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data/dfs/nn/in_use.lock acquired by nodename 13026@<mastermachine-hostname>
> 2014-10-27 16:22:14,268 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt2/dfs/nn/in_use.lock acquired by nodename 13026@<mastermachine-hostname>
> 2014-10-27 16:22:14,361 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /opt1/dfs/nn/current
> 2014-10-27 16:22:14,440 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /data/dfs/nn/current
> 2014-10-27 16:22:14,475 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /opt2/dfs/nn/current
> 2014-10-27 16:22:14,854 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /opt1/dfs/nn/current/fsimage_0000000000023479779 using no compression
> 2014-10-27 16:22:14,854 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 247160
> 2014-10-27 16:22:16,428 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/opt1/dfs/nn/current/fsimage_0000000000023479779, cpktTxId=0000000000023479779)
> java.io.IOException: Unexpected block size: -1945969516689645797
> 	at org.apache.hadoop.hdfs.protocol.Block.readHelper(Block.java:187)
> 	at org.apache.hadoop.hdfs.protocol.Block.readFields(Block.java:173)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadINode(FSImageFormat.java:379)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadDirectory(FSImageFormat.java:310)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadLocalNameINodes(FSImageFormat.java:283)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:224)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:786)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:775)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:677)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:647)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:274)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
> 2014-10-27 16:22:16,442 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /data/dfs/nn/current/fsimage_0000000000023479779 using no compression
> 2014-10-27 16:22:16,442 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 247160
> 2014-10-27 16:22:16,945 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/data/dfs/nn/current/fsimage_0000000000023479779, cpktTxId=0000000000023479779)
> java.io.IOException: Unexpected block size: -1945969516689645797
> 	at org.apache.hadoop.hdfs.protocol.Block.readHelper(Block.java:187)
> 	at org.apache.hadoop.hdfs.protocol.Block.readFields(Block.java:173)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadINode(FSImageFormat.java:379)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadDirectory(FSImageFormat.java:310)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadLocalNameINodes(FSImageFormat.java:283)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:224)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:786)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:775)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:677)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:647)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:274)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
> 2014-10-27 16:22:16,949 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loading image file /opt2/dfs/nn/current/fsimage_0000000000023479779 using no compression
> 2014-10-27 16:22:16,949 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Number of files = 247160
> 2014-10-27 16:22:17,407 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/opt2/dfs/nn/current/fsimage_0000000000023479779, cpktTxId=0000000000023479779)
> java.io.IOException: Unexpected block size: -1945969516689645797
> 	at org.apache.hadoop.hdfs.protocol.Block.readHelper(Block.java:187)
> 	at org.apache.hadoop.hdfs.protocol.Block.readFields(Block.java:173)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadINode(FSImageFormat.java:379)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadDirectory(FSImageFormat.java:310)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.loadLocalNameINodes(FSImageFormat.java:283)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImageFormat$Loader.load(FSImageFormat.java:224)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:786)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:775)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:677)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:647)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:274)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
> 2014-10-27 16:22:17,410 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> 2014-10-27 16:22:17,411 INFO org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: ganglia thread interrupted.
> 2014-10-27 16:22:17,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> 2014-10-27 16:22:17,411 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> 2014-10-27 16:22:17,411 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
> java.io.IOException: Failed to load an FSImage file!
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:658)
> 	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:274)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:403)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:437)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:613)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:598)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1169)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1233)
> 2014-10-27 16:22:17,413 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2014-10-27 16:22:17,415 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at <mastermachine-hostname>/<IP>
> ************************************************************/
> {code}
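
A small follow-up on the logged value itself: the length field is read as a raw signed 64-bit integer, so corrupted bytes at that point in the file simply show up as an arbitrary, here hugely negative, long. Printing it as hex, as in the hypothetical snippet below, gives the byte pattern to look for when comparing the three fsimage copies or a hex dump of the failing region:

{code:java}
// Show the corrupted length from the log as two's-complement hex,
// i.e. the raw eight bytes the loader read at that offset.
public class BadBlockSize {
    public static void main(String[] args) {
        long badSize = -1945969516689645797L;  // value from the NameNode log
        System.out.printf("decimal: %d%nhex: %016x%n", badSize, badSize);
    }
}
{code}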



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
