hadoop-user mailing list archives

From Chef Win2er <win2erc...@gmail.com>
Subject Re: FW: Trash data after upgrade from 2.7.1 to 2.7.2
Date Wed, 17 Feb 2016 11:08:14 GMT
Hi Vinay,


If you can share your namenode and datanode logs that would be helpful.
>Sadly, for some reason I cannot upload the logs.

I found this ticket: https://issues.apache.org/jira/browse/HDFS-7645 (yes,
you commented on it :) ).
Could it be related?


I tried the steps you mentioned, but they didn't work.
So I took a backup of the trash folder and removed it.
For now the Hadoop cluster works fine.
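For reference, the backup-and-remove step can be sketched like this. This is a
hedged sketch: DATA_DIR is a placeholder for your dfs.datanode.data.dir value,
and the loop keeps a tarball of each block pool's trash before deleting it.

```shell
#!/bin/sh
# Back up and then remove the datanode trash directories.
# Assumption: DATA_DIR points at dfs.datanode.data.dir on this datanode.
DATA_DIR="${DATA_DIR:-/data/datanode}"

for bp in "$DATA_DIR"/current/BP-*; do
  [ -d "$bp/trash" ] || continue                      # skip pools with no trash
  tar czf "$bp-trash-backup.tar.gz" -C "$bp" trash    # keep a backup first
  rm -rf "$bp/trash"                                  # then reclaim the space
done
```

Only do this after confirming the rolling upgrade is finalized, since the trash
holds the pre-upgrade blocks that would be needed for a rollback.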

Thanks for your advice again.
-MA

2016-02-15 16:13 GMT+09:00 Vinayakumar B <vinayakumar.ba@huawei.com>:

> Hi Chef,
>
>
>
> If you are trying to understand why the trash is still not cleared in your
> case, sharing your namenode and datanode logs would be helpful.
>
>
>
> If you just want to clear the trash, without worrying about why this
> happened, can you try the steps below?
>
>
>
> On the current 2.7.2 cluster, without any restarts, repeat the rolling
> upgrade process and finalize it again.
>
> 1.       Run “hdfs dfsadmin -rollingUpgrade prepare”
>
> 2.       Wait for some time (maybe 2-3 minutes).
>
> 3.       Then run “hdfs dfsadmin -rollingUpgrade finalize”
>
> 4.       Check whether the trash is getting cleared on the datanodes
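The repeat-and-finalize sequence above can be sketched as follows. A hedged
note: in the stable HDFS docs the dfsadmin subcommand that starts a rolling
upgrade is "prepare" (there is no "start" option), and the 180-second sleep is
just the 2-3 minute wait suggested above. These commands need a live cluster.

```shell
# Repeat the rolling upgrade cycle on the running 2.7.2 cluster (no restarts).
hdfs dfsadmin -rollingUpgrade prepare    # start a new rolling upgrade
sleep 180                                # wait 2-3 minutes
hdfs dfsadmin -rollingUpgrade query      # optional: confirm it is in progress
hdfs dfsadmin -rollingUpgrade finalize   # finalize again
```

Afterwards, check each datanode's trash directory (for example with
`du -sh <data.dir>/current/BP-*/trash`) to confirm it is shrinking.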
>
>
>
> -vinay
>
>
>
> *From:* Vinayakumar B
> *Sent:* 15 February 2016 12:29
> *To:* 'Chef Win2er'
> *Subject:* RE: Trash data after upgrade from 2.7.1 to 2.7.2
>
>
>
> By any chance, did you perform the sequence below?
>
>
>
> 1.       Stop all datanodes,
>
> 2.       Issue “hdfs dfsadmin -rollingUpgrade finalize”
>
> 3.       Restart Namenode
>
> 4.       Start all Datanodes?
>
>
>
> -vinay
>
>
>
> *From:* Chef Win2er [mailto:win2erchef@gmail.com <win2erchef@gmail.com>]
> *Sent:* 15 February 2016 12:16
> *To:* Vinayakumar B
> *Subject:* Re: Trash data after upgrade from 2.7.1 to 2.7.2
>
>
>
> Hi Vinay
>
> Thanks for your reply.
>
> 1)      Did you upgrade all datanodes to 2.7.2?
>
> > Yes, I ran the “hdfs dfsadmin -getDatanodeInfo <DATANODE_HOST:IPC_PORT>”
> > command and got the results below.
>
> Uptime: 271200, Software version: 2.7.2, Config version:
> core-0.23.0,hdfs-1
> Uptime: 271211, Software version: 2.7.2, Config version:
> core-0.23.0,hdfs-1
> Uptime: 271216, Software version: 2.7.2, Config version:
> core-0.23.0,hdfs-1
> Uptime: 271222, Software version: 2.7.2, Config version: core-0.23.0,hdfs-1
>
>
>
> 2)      Did you finalize the upgrade using the following command?
>
> >Yes, I finished the upgrade. Running the command again gives this:
>
> hdfs dfsadmin -rollingUpgrade finalize
> FINALIZE rolling upgrade ...
> There is no rolling upgrade in progress or rolling upgrade has already
> been finalized.
>
> And I got the same result from running "hdfs dfsadmin -rollingUpgrade query".
>
> I am fairly sure the data in the trash folder is kept for backup or rollback.
> Is there an official command to delete it?
>
>
>
> -MA
>
>
>
>
>
> 2016-02-15 14:46 GMT+09:00 Vinayakumar B <vinayakumar.ba@huawei.com>:
>
> Hi Chef,
>
>
>
>    Can you confirm the below points?
>
>
>
> 1)      Did you upgrade all datanodes to 2.7.2?
>
> 2)      Did you finalize the upgrade using the following command?
>
> Run "hdfs dfsadmin -rollingUpgrade finalize
> <https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade>"
> to finalize the rolling upgrade.
>
> If the finalize step is not executed, all blocks that were present before
> the upgrade will be moved to trash on deletion.
>
> So deleting old files on an upgraded (but not yet finalized) cluster to
> save space will not actually free anything on disk.
>
> -vinay
>
>
>
> *From:* Chef Win2er [mailto:win2erchef@gmail.com]
> *Sent:* 12 February 2016 11:31
> *To:* user@hadoop.apache.org
> *Subject:* Trash data after upgrade from 2.7.1 to 2.7.2
>
>
>
> Hi Hadoop users,
>
> I have hadoop-2.7.1 installed on my cluster with HA, 4 datanodes and 3
> journalnodes.
> I upgraded it to hadoop-2.7.2 a few days ago following the steps below.
>
>
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Upgrade_without_Downtime
>
> But today I realized that a trash folder had been created in each datanode's
> data directory and was taking a lot of space.
>
> $ hdfs dfs -du -s -h
> /
>
> 11.5 G  /
>
> I set the replication factor to 2, so the disk usage should be around 30 GB
> or 40 GB. But it is actually 144 GB.
>
> $ hdfs dfsadmin -report
> Configured Capacity: 422185762816 (393.19 GB)
> Present Capacity: 415469745432 (386.94 GB)
> DFS Remaining: 260712565164 (242.81 GB)
> DFS Used: 154757180268 (144.13 GB)
> DFS Used%: 37.25%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> Missing blocks (with replication factor 1): 0
>
>
>
> Running the 'du -h' command gives the result below.
>
> ......
> 11G     ./datanode/current/BP-606697376-<datanode
> ip>-1452599640542/current/finalized/subdir0
> 11G     ./datanode/current/BP-606697376-<datanode
> ip>-1452599640542/current/finalized
> 11G     ./datanode/current/BP-606697376-<datanode ip>-1452599640542/current
> ...
> 38G     ./datanode/current/BP-606697376-<datanode
> ip>-1452599640542/trash/finalized/subdir0
> 38G     ./datanode/current/BP-606697376-<datanode
> ip>-1452599640542/trash/finalized
> 38G     ./datanode/current/BP-606697376-<datanode ip>-1452599640542/trash
>
> ...
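A quick way to compare live block data against retained trash per block pool,
run from the datanode data directory, can be sketched as follows (the relative
paths mirror the du output above and are assumptions about your layout):

```shell
# Summarize live vs. trash usage per block pool; errors are suppressed so
# this is safe to run even where no trash directory exists.
du -sh ./datanode/current/BP-*/current/finalized 2>/dev/null || true
du -sh ./datanode/current/BP-*/trash 2>/dev/null || true
```

If the trash totals stay large long after finalize, that points at the same
symptom described in this thread.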
>
> Could anyone help me with this?
>
>
>
> Thanks
>
> MA
>
>
>
