hadoop-hdfs-user mailing list archives

From "Phan, Truong Q" <Troung.P...@team.telstra.com>
Subject RE: how to free up space of the old Data Node
Date Thu, 20 Mar 2014 04:58:01 GMT
Thanks for the reply.
This Hadoop cluster is our POC, and the node has less space compared to the other two nodes.
How do I change the Replication Factor (RF) from 3 down to 2?
Is this controlled by the dfs.datanode.handler.count parameter?
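(For reference: the replication factor is governed by dfs.replication, not dfs.datanode.handler.count, which sets the number of server threads a DataNode runs. A minimal sketch of lowering the factor, assuming superuser access on the cluster; note that dfs.replication only affects files written after the change, so existing files must be re-replicated with setrep:

```shell
# Sketch only: lower replication on existing files to 2.
# -R recurses, -w waits for the re-replication to finish.
sudo -u hdfs hadoop fs -setrep -w 2 -R /

# For NEW files, also change the default in hdfs-site.xml on the clients:
#   <property>
#     <name>dfs.replication</name>
#     <value>2</value>
#   </property>
```

After the setrep completes, the NameNode schedules deletion of the excess replicas, which is what actually frees disk space on the DataNodes.)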

Thanks and Regards,
Truong Phan

P    + 61 2 8576 5771
M   + 61 4 1463 7424
E    troung.phan@team.telstra.com
W  www.telstra.com

From: Brahma Reddy Battula [mailto:brahmareddy.battula@huawei.com]
Sent: Thursday, 20 March 2014 3:27 PM
To: user@hadoop.apache.org
Subject: RE: how to free up space of the old Data Node

Please check my inline comments below (marked with ">>>")...

From: Phan, Truong Q [Troung.Phan@team.telstra.com]
Sent: Thursday, March 20, 2014 8:04 AM
To: user@hadoop.apache.org<mailto:user@hadoop.apache.org>
Subject: how to free up space of the old Data Node

I have a 3-node Hadoop cluster in which I created 3 Data Nodes.
However, I don't have enough space on one of the nodes to accommodate other projects' logs. So I decommissioned
this node from the Data Node list, but I could not reclaim the space from it.
>>> Is your replication factor 3? If it is 3, then since you have 3 datanodes, the disk
space occupied by all nodes should ideally be the same (the 47G should be present on all the DNs).
>>> And if your RF=3, decommissioning will not succeed because you have only 3 DNs. You would need
to add another DN to the cluster; only then will the decommission succeed.
Hence, please mention the replication factor of the file.

Is there a way to get this node to release space?
>>> There are ways, but you need to explain why only this node's disk is full and not
the others'. Is it because this node has less space compared to the other nodes?
>>> If RF=3, then decrease the replication factor to RF=2, and then decommission
this node.
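(For context, a decommission is typically driven through the NameNode's exclude file rather than by simply stopping the DataNode. A minimal sketch, assuming a CDH-style config path and using the hostname from the du prompt below as an example; your file locations may differ:

```shell
# Sketch only: decommission a DataNode via the exclude file.
# Assumes hdfs-site.xml already points dfs.hosts.exclude at this file:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/etc/hadoop/conf/dfs.exclude</value>
#   </property>

# 1. Add the node's hostname to the exclude file:
echo "nsda3dmsrpt02" | sudo tee -a /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read its include/exclude lists:
sudo -u hdfs hadoop dfsadmin -refreshNodes

# 3. Watch progress; the node shows "Decommission in progress"
#    until its blocks are replicated elsewhere:
sudo -u hdfs hadoop dfsadmin -report
```

Only once the report shows "Decommissioned" is it safe to stop the DataNode process and reclaim its disk.)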
[root@nsda3dmsrpt02] /data/dfs/dn# du -sh /data/dfs/dn/*
47G     /data/dfs/dn/current
>>> Try to give the following output also:
      sudo -u hdfs hadoop fsck /
      sudo -u hdfs hadoop dfsadmin -report

$ sudo -u hdfs hadoop fsck /data
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Total size:    7186453688 B
Total dirs:    11
Total files:   62
Total symlinks:                0
Total blocks (validated):      105 (avg. block size 68442416 B)
Minimally replicated blocks:   105 (100.0 %)
Over-replicated blocks:        0 (0.0 %)
Under-replicated blocks:       105 (100.0 %)
Mis-replicated blocks:         0 (0.0 %)
Default replication factor:    3
Average block replication:     2.0
Corrupt blocks:                0
Missing replicas:              105 (33.333332 %)
Number of data-nodes:          2
Number of racks:               1
FSCK ended at Thu Mar 20 13:30:03 EST 2014 in 22 milliseconds

The filesystem under path '/data' is HEALTHY

Thanks and Regards,
Truong Phan
Senior Technology Specialist
Database Engineering
Transport & Routing Engineering | Networks | Telstra Operations


P    + 61 2 8576 5771
M   + 61 4 1463 7424
E    troung.phan@team.telstra.com<mailto:troung.phan@team.telstra.com>
W  www.telstra.com


