hadoop-user mailing list archives

From Jason Yang <lin.yang.ja...@gmail.com>
Subject Re: Not able to place enough replicas in Reduce
Date Mon, 24 Sep 2012 16:37:18 GMT
Hey, Bertrand

here is the situation of my pseudo-distributed cluster:
-----
NameNode 'localhost:8020'
Started: Mon Sep 24 03:11:05 CST 2012
Version: 0.20.2, r911707
Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
Upgrades: There are no upgrades in progress.

Browse the filesystem: http://localhost:50070/nn_browsedfscontent.jsp
Namenode Logs: http://localhost:50070/logs/
------------------------------
Cluster Summary: 169 files and directories, 292 blocks = 461 total.
Heap Size is 54.5 MB / 888.94 MB (6%)

Configured Capacity : 18.98 GB
DFS Used            : 56.7 MB
Non DFS Used        : 14.61 GB
DFS Remaining       : 4.32 GB
DFS Used%           : 0.29 %
DFS Remaining%      : 22.75 %
Live Nodes          : 1  (http://localhost:50070/dfsnodelist.jsp?whatNodes=LIVE)
Dead Nodes          : 0  (http://localhost:50070/dfsnodelist.jsp?whatNodes=DEAD)
------------------------------
NameNode Storage:
Storage Directory: /home/yanglin/Programs/hadoop_tmp_dir/hadoop-yanglin/dfs/name
Type: IMAGE_AND_EDITS
State: Active
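
In case it helps, the same capacity and live/dead-node counts can be
cross-checked from the shell with dfsadmin; a minimal sketch, assuming the
install directory implied by the paths above:

yanglin@ubuntu:~$ ~/Programs/hadoop-0.20.2/bin/hadoop dfsadmin -report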


and here is the configuration:
----
<!-- core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/yanglin/Programs/hadoop_tmp_dir/hadoop-${user.name}</value>
  </property>
</configuration>

<!-- hdfs-site.xml -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
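
With dfs.replication = 1, every block write has to succeed on the single
DataNode, which is why the job sees "could only be replicated to 0 nodes,
instead of 1" as soon as that DataNode cannot accept a block. If it is
useful, block health can be checked with the stock fsck tool; a minimal
sketch, assuming the same install directory as above:

yanglin@ubuntu:~$ ~/Programs/hadoop-0.20.2/bin/hadoop fsck / -files -blocks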

<!-- mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
</configuration>
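
Note that fs.default.name above carries no port, so the NameNode RPC port
falls back to the default 8020, matching the 'localhost:8020' shown in the
web UI. Spelling the port out would be equivalent:

<!-- equivalent form, with the default RPC port written explicitly -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>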

yanglin@ubuntu:~$ cat /home/yanglin/Programs/hadoop-0.20.2/conf/masters
localhost
yanglin@ubuntu:~$ cat /home/yanglin/Programs/hadoop-0.20.2/conf/slaves
localhost

-----

2012/9/24 Bertrand Dechoux <dechouxb@gmail.com>

> And do you have any remaining space in your HDFS? (Or do you have a quota?
> But then the message should be different, I guess.)
> What are the metrics you get from the namenode? Are all datanodes (you
> have only one?) live?
>
> http://localhost:50070/dfshealth.jsp
>
> As long as you consume (map), you don't need much space in HDFS, but when
> you produce (reduce), you definitely need some.
> As Ted pointed out, your error is a standard one that occurs when Hadoop is
> unable to replicate a block. It should not be related to the reduce itself,
> and even less to your specific logic.
>
> Regards
>
> Bertrand
>
>
> On Mon, Sep 24, 2012 at 5:41 PM, Jason Yang <lin.yang.jason@gmail.com> wrote:
>
>> Hi, Ted
>>
>> here is the result of jps:
>> yanglin@ubuntu:~$ jps
>> 3286 TaskTracker
>> 14053 Jps
>> 2623 DataNode
>> 2996 JobTracker
>> 2329 NameNode
>> 2925 SecondaryNameNode
>> ---
>> It seems that the DN is working.
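>>
>> jps only shows that the process is alive, though; the DataNode log should
>> say whether it is actually healthy. A quick look (the file name follows the
>> hadoop-<user>-datanode-<host>.log pattern, so this exact path is a guess):
>>
>> yanglin@ubuntu:~$ tail -n 100 ~/Programs/hadoop-0.20.2/logs/hadoop-yanglin-datanode-ubuntu.log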
>>
>> And it does not fail immediately when entering the reduce phase; actually,
>> it always fails after processing some data.
>>
>>
>> 2012/9/24 Steve Loughran <stevel@hortonworks.com>
>>
>>>
>>>
>>> On 24 September 2012 15:47, Ted Reynolds <tedr@hortonworks.com> wrote:
>>>
>>>> Jason,
>>>>
>>>> The line in the JobTracker log - "Could only be replicated to 0 nodes,
>>>> instead of 1" - points to a problem with your DataNode. It generally means
>>>> that your DataNode is either down or not functioning correctly. What is
>>>> the output of the "jps" command? ("jps" is found in <JAVA_HOME>/bin.)
>>>>
>>>>
>>>
>>> see also: http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
>>>
>>> -steve
>>>
>>
>>
>>
>> --
>> YANG, Lin
>>
>>
>
>
> --
> Bertrand Dechoux
>



-- 
YANG, Lin
