Date: Mon, 18 Feb 2013 18:58:55 +0800
From: YouPeng Yang
To: user@hadoop.apache.org
Subject: ERROR raised during BackupNode startup

Hi all,

I am testing my BackupNode (CDH4.1.2). After applying the necessary settings, I started the backup node. Everything went well, except that an ERROR was raised [1].

What is the problem, and what is lib.MethodMetric used for? I also noticed the message "Bad state: UNINITIALIZED"; under what conditions can the UNINITIALIZED state occur?

Any help will be appreciated.

[1]
13/02/18 18:11:28 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
13/02/18 18:11:28 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
13/02/18 18:11:28 INFO impl.MetricsSystemImpl: BackupNode metrics system started
13/02/18 18:11:29 WARN common.Util: Path /home/hadoop/namedir should be specified as a URI in configuration files. Please update hdfs configuration.
13/02/18 18:11:29 WARN common.Util: Path /home/hadoop/namedir should be specified as a URI in configuration files. Please update hdfs configuration.
13/02/18 18:11:29 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
13/02/18 18:11:29 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/02/18 18:11:29 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/02/18 18:11:29 INFO blockmanagement.BlockManager: defaultReplication = 3
13/02/18 18:11:29 INFO blockmanagement.BlockManager: maxReplication = 512
13/02/18 18:11:29 INFO blockmanagement.BlockManager: minReplication = 1
13/02/18 18:11:29 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
13/02/18 18:11:29 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
13/02/18 18:11:29 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/02/18 18:11:29 INFO blockmanagement.BlockManager: encryptDataTransfer = false
13/02/18 18:11:29 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
13/02/18 18:11:29 INFO namenode.FSNamesystem: supergroup = supergroup
13/02/18 18:11:29 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/02/18 18:11:29 INFO namenode.FSNamesystem: HA Enabled: false
13/02/18 18:11:29 INFO namenode.FSNamesystem: Append Enabled: true
13/02/18 18:11:29 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/02/18 18:11:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/02/18 18:11:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/02/18 18:11:29 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
13/02/18 18:11:29 INFO common.Storage: Lock on /home/hadoop/namedir/in_use.lock acquired by nodename 19850@Hadoop-database
13/02/18 18:11:29 INFO ipc.Server: Starting Socket Reader #1 for port 50100
13/02/18 18:11:29 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
13/02/18 18:11:29 WARN common.Util: Path /home/hadoop/namedir should be specified as a URI in configuration files. Please update hdfs configuration.
13/02/18 18:11:29 INFO namenode.FSNamesystem: Number of blocks under construction: 0
13/02/18 18:11:29 INFO namenode.FSNamesystem: initializing replication queues
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Total number of blocks = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of invalid blocks = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0
13/02/18 18:11:29 INFO blockmanagement.BlockManager: Number of blocks being written = 0
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 8 msec
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
13/02/18 18:11:29 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
13/02/18 18:11:29 ERROR lib.MethodMetric: Error invoking method getTransactionsSinceLastLogRoll
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
        at org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
        at org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:387)
        at org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:78)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:171)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:150)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getNewMBeanClassName(DefaultMBeanServerInterceptor.java:321)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:307)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
        at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:57)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.startMBeans(MetricsSourceAdapter.java:220)
        at org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.start(MetricsSourceAdapter.java:95)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSource(MetricsSystemImpl.java:244)
        at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:222)
        at org.apache.hadoop.metrics2.MetricsSystem.register(MetricsSystem.java:54)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:601)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:443)
        at org.apache.hadoop.hdfs.server.namenode.BackupNode.initialize(BackupNode.java:144)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
        at org.apache.hadoop.hdfs.server.namenode.BackupNode.<init>(BackupNode.java:84)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1132)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
Caused by: java.lang.IllegalStateException: Bad state: UNINITIALIZED
        at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.getCurSegmentTxId(FSEditLog.java:452)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getTransactionsSinceLastLogRoll(FSNamesystem.java:3521)
        ... 28 more

Regards
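P.S. Regarding the repeated common.Util WARN lines: they suggest the storage directory is given as a bare path. A sketch of the URI form, assuming /home/hadoop/namedir is configured via dfs.namenode.name.dir in hdfs-site.xml (the property name is an assumption; the same directory may be set elsewhere in my config):

```xml
<!-- hdfs-site.xml: give the directory as a file: URI rather than a bare path -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hadoop/namedir</value>
</property>
```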
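For context on the "Caused by" line: per the trace, Guava's Preconditions.checkState inside FSEditLog.getCurSegmentTxId rejects the call while the edit log is still UNINITIALIZED, and the metrics system reaches it via reflection, which is why it surfaces wrapped in an InvocationTargetException. A minimal self-contained sketch of that guard pattern (EditLogSketch and its states are illustrative stand-ins, not Hadoop's actual code, and the checkState call is inlined to avoid the Guava dependency):

```java
// Illustrative stand-in for the FSEditLog state guard; not Hadoop's actual class.
class EditLogSketch {
    enum State { UNINITIALIZED, IN_SEGMENT, CLOSED }

    private State state = State.UNINITIALIZED;
    private long curSegmentTxId = -1;

    // Mirrors Preconditions.checkState(state == State.IN_SEGMENT, "Bad state: %s", state)
    public long getCurSegmentTxId() {
        if (state != State.IN_SEGMENT) {
            throw new IllegalStateException("Bad state: " + state);
        }
        return curSegmentTxId;
    }

    // Opening a log segment moves the log out of UNINITIALIZED.
    public void openSegment(long txId) {
        state = State.IN_SEGMENT;
        curSegmentTxId = txId;
    }

    public static void main(String[] args) {
        EditLogSketch log = new EditLogSketch();
        try {
            log.getCurSegmentTxId(); // called before any segment is open
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // prints "Bad state: UNINITIALIZED"
        }
        log.openSegment(42);
        System.out.println(log.getCurSegmentTxId()); // prints "42"
    }
}
```

So the question boils down to whether a metric getter may legitimately be polled before the edit log opens its first segment.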