hadoop-mapreduce-user mailing list archives

From baran cakici <barancak...@gmail.com>
Subject Re: Lost Task Tracker because of no heartbeat
Date Wed, 16 Mar 2011 20:09:48 GMT
Hello,

Thank you for your rapid answers, Marcos and Harsh.

@Marcos

This is a notice that you are doing something wrong with HDFS.
Can you provide the output of:
    hadoop dfsadmin -report
on the NameNode?
report:
Configured Capacity: 139118768128 (129.56 GB)
Present Capacity: 57004627290 (53.09 GB)
DFS Remaining: 52573196288 (48.96 GB)
DFS Used: 4431431002 (4.13 GB)
DFS Used%: 7.77%
Under replicated blocks: 659
Blocks with corrupt replicas: 0
Missing blocks: 0
---------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 139118768128 (129.56 GB)
DFS Used: 4431431002 (4.13 GB)
Non DFS Used: 82114140838 (76.47 GB)
DFS Remaining: 52573196288 (48.96 GB)
DFS Used%: 3.19%
DFS Remaining%: 37.79%
Last contact: Wed Mar 16 21:00:11 CET 2011

That seems OK, actually...
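
Since the exception in the JobTracker log talks about safe mode rather than capacity, I guess the thing to check is whether the NameNode is still in safe mode at the moment the JobTracker comes up. If I read the dfsadmin options correctly, this should show the current state:

    hadoop dfsadmin -safemode get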

@Harsh

I start the daemons with start-dfs.sh and then start-mapred-dfs.sh. Do you mean
this exception (org.apache.hadoop.ipc.RemoteException) is normal?
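
If the only problem is that the NameNode is still in safe mode when the JobTracker tries to clean mapred.system.dir, maybe I should just wait for it explicitly before starting the MapReduce daemons, something like this (assuming the standard start-mapred.sh script is what I should be calling):

    start-dfs.sh
    hadoop dfsadmin -safemode wait    # blocks until the NameNode leaves safe mode
    start-mapred.sh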

Thanks,

Baran


2011/3/16 Marcos Ortiz <mlortiz@uci.cu>

>  On Thu, 2011-03-17 at 00:19 +0530, Harsh J wrote:
> > On Thu, Mar 17, 2011 at 12:42 AM, Marcos Ortiz <mlortiz@uci.cu> wrote:
> > > 2011-03-15 01:18:44,468 INFO org.apache.hadoop.mapred.JobTracker:
> > > problem cleaning system directory:
> > >
> > > hdfs://localhost:9000/cygwin/usr/local/hadoop-datastore/hadoop-Baran/mapred/system
> > > org.apache.hadoop.ipc.RemoteException:
> > > org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot
> > > delete /cygwin/usr/local/hadoop-datastore/hadoop-Baran/mapred/system.
> > > Name node is in safe mode.
> >
> > Marcos, the JT keeps attempting to clear the mapred.system.dir on the
> > DFS at startup, and fails because the NameNode wasn't ready when it
> > tried (and thereby reattempts after a time, and passes later when NN
> > is ready for some editing action). This is mostly because Baran is
> > issuing a start-all/stop-all instead of a simple start/stop of mapred
> > components.
> >
> Thanks a lot, Harsh for the response.
> I think that's a good entry to add to the Problems/Solutions section on
> the Hadoop Wiki.
>
> Regards
> --
>  Marcos Luís Ortíz Valmaseda
>  Software Engineer
>  Centro de Tecnologías de Gestión de Datos (DATEC)
>  Universidad de las Ciencias Informáticas
>  http://uncubanitolinuxero.blogspot.com
>  http://www.linkedin.com/in/marcosluis2186
>
>
>
