hadoop-mapreduce-user mailing list archives

From 周俊清 <2ho...@163.com>
Subject Re:RE: Re: what happen in my hadoop cluster?
Date Thu, 28 Jul 2011 02:18:31 GMT
Hello Devaraj K,
   Thank you for your help. I have fixed the problem: I had set the dfs.data.dir option to a
new location without also keeping the value ${hadoop.tmp.dir}/dfs/data, so the DNs failed to
report their block information.
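For reference, the kind of dfs.data.dir entry described above would look like this in hdfs-site.xml (a sketch only; the second path is illustrative, adjust it to your own cluster layout):

```xml
<!-- hdfs-site.xml: when adding a new data directory, keep the default
     ${hadoop.tmp.dir}/dfs/data in the comma-separated list, otherwise
     existing DataNodes can no longer find and report their blocks. -->
<property>
  <name>dfs.data.dir</name>
  <value>${hadoop.tmp.dir}/dfs/data,/data/new-disk/dfs/data</value>
</property>
```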

Thanks again.
----------------------------
周俊清
2houjq@163.com

At 2011-07-27 16:26:14,"Devaraj K" <devaraj.k@huawei.com> wrote:


Can you check the name node logs to see what is going on with the name node?

 

When we start the name node, it will be in safe mode while initializing, and after some time
safe mode will be turned off automatically. If it is staying in safe mode for any other
reason, we can find out from the name node logs.
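The safe-mode condition shown in the logs (ratio 0.2915 below threshold 0.9990) can be sketched as a simple threshold check. This is a hypothetical illustration of the rule, not Hadoop's actual NameNode code; the function and parameter names are made up:

```python
# Sketch of the NameNode safe-mode exit rule described in the logs:
# safe mode stays on while the fraction of blocks reported by DataNodes
# is below dfs.safemode.threshold.pct (0.9990 here).
def safe_mode_active(reported_blocks, total_blocks, threshold=0.9990):
    """Return True while the ratio of reported blocks is below the threshold."""
    if total_blocks == 0:
        return False  # no blocks to report, nothing keeps safe mode on
    return reported_blocks / total_blocks < threshold

# With only ~29% of blocks reported, as in the logs above, safe mode stays on:
print(safe_mode_active(2915, 10000))  # True
```

This is why restarting does not help: until enough DataNodes come up and report their blocks, the ratio stays low and every delete call fails with SafeModeException.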

 

Devaraj K 

From: 周俊清 [mailto:2houjq@163.com]
Sent: Wednesday, July 27, 2011 1:08 PM
To: mapreduce-user@hadoop.apache.org
Subject: Re:Re: what happen in my hadoop cluster?

 

Yes, I can see all the data nodes on the web page: http://dn224.pengyun.org:50070/dfsnodelist.jsp?

--
----------------------------

周俊清

2houjq@163.com

 


On 2011-07-27 15:30:37, "Harsh J" <harsh@cloudera.com> wrote:
>Are all your DataNodes up?
> 
>2011/7/27 周俊清 <2houjq@163.com>:
>> Hello everyone,
>>     I got an exception in my jobtracker's log file, as follows:
>> 2011-07-27 01:58:04,197 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
>> up the system directory
>> 2011-07-27 01:58:04,230 INFO org.apache.hadoop.mapred.JobTracker: problem
>> cleaning system directory:
>> hdfs://dn224.pengyun.org:56900/home/hadoop/hadoop-tmp203/mapred/system
>> org.apache.hadoop.ipc.RemoteException:
>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
>> /home/hadoop/hadoop-tmp203/mapred/system. Name node is in safe mode.
>> The ratio of reported blocks 0.2915 has not reached the threshold 0.9990.
>> Safe mode will be turned off automatically.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1851)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1831)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:691)
>>     ……
>> and
>>    the log message of namenode:
>> 2011-07-27 00:00:00,219 INFO org.apache.hadoop.ipc.Server: IPC Server
>> handler 1 on 56900, call delete(/home/hadoop/hadoop-tmp203/mapred/system,
>> true) from 192.168.1.224:5131
>> 2: error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot
>> delete /home/hadoop/hadoop-tmp203/mapred/system. Name node is in safe mode.
>> The ratio of reported blocks 0.2915 has not reached the threshold 0.9990.
>> Safe mode will be turned off automatically.
>> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete
>> /home/hadoop/hadoop-tmp203/mapred/system. Name node is in safe mode.
>> The ratio of reported blocks 0.2915 has not reached the threshold 0.9990.
>> Safe mode will be turned off automatically.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:1851)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:1831)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.delete(NameNode.java:691)
>>     at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:523)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1383)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1379)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:396)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1377)
>> 
>>  It means, I think, that the namenode is always in safe mode. What can I do
>> about these exceptions? Can anyone tell me why? I cannot find the directory
>> "/home/hadoop/hadoop-tmp203/mapred/system" on my system. The exceptions
>> above keep repeating in the log file, even when I restart my hadoop cluster.
>>    Thanks for your concern.
>> 
>> 
>> ----------------------------
>> Junqing Zhou
>> 2houjq@163.com
>> 
>> 
>> 
>> 
> 
> 
> 
>-- 
>Harsh J



