hadoop-hdfs-user mailing list archives

From Himanshu Vashishtha <hvash...@cs.ualberta.ca>
Subject Re: Namenode restart giving IllegalArgumentException
Date Wed, 04 May 2011 20:53:13 GMT
Not that much, I believe!

dfsadmin -report:
Configured Capacity: 972726308864 (905.92 GB)
Present Capacity: 565613993984 (526.77 GB)
DFS Remaining: 557784580096 (519.48 GB)
DFS Used: 7829413888 (7.29 GB)
DFS Used%: 1.38%
Under replicated blocks: 185
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 129.128.184.45:50010
Decommission Status : Normal
Configured Capacity: 972726308864 (905.92 GB)
DFS Used: 7829413888 (7.29 GB)
Non DFS Used: 407112314880 (379.15 GB)
DFS Remaining: 557784580096(519.48 GB)
DFS Used%: 0.8%
DFS Remaining%: 57.34%
Last contact: Wed May 04 14:51:26 MDT 2011
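
For context on the delay itself, I assume the relevant knobs are the safe-mode
settings in hdfs-site.xml. A minimal sketch, using the 0.20.x-era property
names; the values below are just the documented defaults, not my settings:

  <!-- hdfs-site.xml: when the namenode may leave safe mode after startup -->
  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0.999</value>   <!-- fraction of blocks that must be reported by DNs -->
  </property>
  <property>
    <name>dfs.safemode.extension</name>
    <value>30000</value>   <!-- ms to remain in safe mode after the threshold is met -->
  </property>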

Thanks,
Himanshu

On Wed, May 4, 2011 at 2:13 PM, Joey Echeverria <joey@cloudera.com> wrote:

> How much data do you have? It takes some time for all of the datanodes
> to report that all blocks are accounted for.
>
> -Joey
>
> On Wed, May 4, 2011 at 4:05 PM, Himanshu Vashishtha
> <hvashish@cs.ualberta.ca> wrote:
> > Hey,
> > Everything comes up fine in the end.
> > Why this delay of 6 minutes, I wonder? And I see that this delay has
> > nothing to do with the accidental switch-off.
> > Do I need to edit my conf? I am using the machine name assigned to this
> > machine for its entry in the conf/slaves file, and the host command also
> > confirms that the name-IP pair resolves correctly.
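> >
> > Roughly the kind of check I mean (the hostname below is just a
> > placeholder; the address is the one from the report above):
> >
> >   $ cat conf/slaves
> >   <datanode-hostname>
> >   $ host <datanode-hostname>
> >   <datanode-hostname> has address 129.128.184.45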
> >
> > Thanks,
> > Himanshu
> >
> > On Wed, May 4, 2011 at 12:59 PM, Harsh J <harsh@cloudera.com> wrote:
> >>
> >> Hello,
> >>
> >> On Thu, May 5, 2011 at 12:04 AM, Himanshu Vashishtha
> >> <hv.csuoa@gmail.com> wrote:
> >> > Hello all,
> >> > I was running HDFS in standalone mode when the machine got accidentally
> >> > turned off. On bringing it back up, I got this exception:
> >> > ===================================================
> >> > 2011-05-04 11:27:54,441 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 54310: starting
> >> > 2011-05-04 11:27:54,470 INFO org.apache.hadoop.ipc.Server: Error register getProtocolVersion
> >> > java.lang.IllegalArgumentException: Duplicate metricsName:getProtocolVersion
> >> >     at org.apache.hadoop.metrics.util.MetricsRegistry.add(MetricsRegistry.java:53)
> >> >     at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:89)
> >> >     at org.apache.hadoop.metrics.util.MetricsTimeVaryingRate.<init>(MetricsTimeVaryingRate.java:99)
> >> >     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
> >> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:961)
> >> >     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:957)
> >> >     at java.security.AccessController.doPrivileged(Native Method)
> >> >     at javax.security.auth.Subject.doAs(Subject.java:396)
> >> >     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:955)
> >> >
> >>
> >> Is something not working with your Hadoop cluster? It would seem that
> >> your NN came back up just fine, which is a good thing (it is just
> >> waiting now for the DNs to come back, as per the remaining part of the
> >> log -- ensure that all DNs are up as well).
> >>
> >> The logged exception is an INFO-level one about something trying to
> >> add a duplicate metric name. It should be investigated (it has cropped
> >> up for many before), but it is pretty harmless, I'd think.
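> >>
> >> If you want to watch that wait explicitly, something along these lines
> >> should do (a small sketch; paths assume the usual tarball layout):
> >>
> >>   # ask the NN whether it is still in safe mode
> >>   bin/hadoop dfsadmin -safemode get
> >>   # or block until it has left safe mode
> >>   bin/hadoop dfsadmin -safemode wait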
> >>
> >> --
> >> Harsh J
> >
> >
>
>
>
> --
> Joseph Echeverria
> Cloudera, Inc.
> 443.305.9434
>
