hadoop-hdfs-issues mailing list archives

From "Wellington Chevreuil (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12618) fsck -includeSnapshots reports wrong amount of total blocks
Date Thu, 07 Dec 2017 13:01:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281811#comment-16281811 ]

Wellington Chevreuil commented on HDFS-12618:
---------------------------------------------

bq. How is FSDirectory.getINodesInPath(filePath, FSDirectory.DirOp.READ) broken?
I wouldn't say it's broken, as the "filePath" being passed to it in this case does not actually
exist. When dealing with appended or truncated files, the original file path may still exist
(if the file has never been deleted), but the file version in the snapshot folders may have
different blocks. That makes it necessary to check whether the original file path still exists
outside of snapshots. That is the reason for the *dir.getINodesInPath(inodeFile.getName(), FSDirectory.DirOp.READ).validate();*
call: *inodeFile.getName()* returns the original file path, so if the file has been deleted,
*inodeFile.getName()* refers to a path that no longer exists, which is what the *validate()*
call detects. Since usage of *validate()* is discouraged, an alternative I considered was to
go through the INodes array from this IIP and compare its resolved result with *inodeFile.getName()*.
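For illustration, here is a minimal sketch of that alternative check. This is only a sketch against the internal namenode classes (FSDirectory, INodesInPath, INodeFile); the helper name and the exact comparison are my assumptions, not the actual patch:

{code:java}
import java.io.IOException;

import org.apache.hadoop.hdfs.server.namenode.FSDirectory;
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeFile;
import org.apache.hadoop.hdfs.server.namenode.INodesInPath;

// Sketch only: decide whether a file reached through a snapshot still
// exists outside of snapshots, without calling validate().
// existsOutsideSnapshots is a hypothetical helper name.
class SnapshotPathCheck {
  static boolean existsOutsideSnapshots(FSDirectory dir, INodeFile inodeFile)
      throws IOException {
    // inodeFile.getName() returns the original (non-snapshot) file path.
    INodesInPath iip =
        dir.getINodesInPath(inodeFile.getName(), FSDirectory.DirOp.READ);
    // Assumption: if the original path was deleted, resolution ends in a
    // null last inode (or in a different inode that reused the name), so
    // compare inode ids instead of relying on validate().
    INode last = iip.getLastINode();
    return last != null && last.getId() == inodeFile.getId();
  }
}
{code}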

> fsck -includeSnapshots reports wrong amount of total blocks
> -----------------------------------------------------------
>
>                 Key: HDFS-12618
>                 URL: https://issues.apache.org/jira/browse/HDFS-12618
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 3.0.0-alpha3
>            Reporter: Wellington Chevreuil
>            Assignee: Wellington Chevreuil
>            Priority: Minor
>         Attachments: HDFS-121618.initial, HDFS-12618.001.patch, HDFS-12618.002.patch, HDFS-12618.003.patch, HDFS-12618.004.patch
>
>
> When snapshots are enabled, if a file is deleted but still referenced by a snapshot, *fsck*
> will not report blocks for such a file, showing a different number of *total blocks* than what
> is exposed in the Web UI.
> This by itself should be fine, as *fsck* provides the *-includeSnapshots* option. The problem is that
> the *-includeSnapshots* option causes *fsck* to count blocks for every occurrence of a file in
> snapshots, which is wrong because these blocks should be counted only once (for instance,
> if a 100MB file is present in 3 snapshots, it would still map to only one block in HDFS).
> This causes fsck to report many more blocks than actually exist in HDFS and than what is
> reported in the Web UI. A sketch of the intended once-only counting follows the example below.
> Here's an example:
> 1) HDFS has two files of 2 blocks each:
> {noformat}
> $ hdfs dfs -ls -R /
> drwxr-xr-x   - root supergroup          0 2017-10-07 21:21 /snap-test
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/file2
> drwxr-xr-x   - root supergroup          0 2017-05-13 13:03 /test
> {noformat} 
> 2) There are two snapshots, with the two files present on each of the snapshots:
> {noformat}
> $ hdfs dfs -ls -R /snap-test/.snapshot
> drwxr-xr-x   - root supergroup          0 2017-10-07 21:21 /snap-test/.snapshot/snap1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/.snapshot/snap1/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/.snapshot/snap1/file2
> drwxr-xr-x   - root supergroup          0 2017-10-07 21:21 /snap-test/.snapshot/snap2
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:16 /snap-test/.snapshot/snap2/file1
> -rw-r--r--   1 root supergroup  209715200 2017-10-07 20:17 /snap-test/.snapshot/snap2/file2
> {noformat}
> 3) *fsck -includeSnapshots* reports 12 blocks in total (4 blocks for the normal file
path, plus 4 blocks for each snapshot path):
> {noformat}
> $ hdfs fsck / -includeSnapshots
> FSCK started by root (auth:SIMPLE) from /127.0.0.1 for path / at Mon Oct 09 15:15:36 BST 2017
> Status: HEALTHY
>  Number of data-nodes:	1
>  Number of racks:		1
>  Total dirs:			6
>  Total symlinks:		0
> Replicated Blocks:
>  Total size:	1258291200 B
>  Total files:	6
>  Total blocks (validated):	12 (avg. block size 104857600 B)
>  Minimally replicated blocks:	12 (100.0 %)
>  Over-replicated blocks:	0 (0.0 %)
>  Under-replicated blocks:	0 (0.0 %)
>  Mis-replicated blocks:		0 (0.0 %)
>  Default replication factor:	1
>  Average block replication:	1.0
>  Missing blocks:		0
>  Corrupt blocks:		0
>  Missing replicas:		0 (0.0 %)
> {noformat}
> 4) Web UI shows the correct number (4 blocks only):
> {noformat}
> Security is off.
> Safemode is off.
> 5 files and directories, 4 blocks = 9 total filesystem object(s).
> {noformat}
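> As a rough illustration of the fix direction, here is a minimal, self-contained sketch of once-only counting (the class and method names are hypothetical, not fsck's actual code): blocks are tracked by id in a set, so a block reached through several snapshot paths is counted exactly once.
> {code:java}
> import java.util.HashSet;
> import java.util.Set;
>
> // Sketch only: count each block at most once, no matter how many
> // snapshot paths reference the file that owns it.
> class SnapshotAwareBlockCounter {
>   private final Set<Long> seenBlockIds = new HashSet<>();
>   private long totalBlocks = 0;
>
>   // Called once per file occurrence (original path or snapshot copy)
>   // with that occurrence's block ids.
>   void collect(long[] blockIds) {
>     for (long id : blockIds) {
>       // add() returns false when this id was already counted through
>       // another path, so shared blocks are not double counted.
>       if (seenBlockIds.add(id)) {
>         totalBlocks++;
>       }
>     }
>   }
>
>   long getTotalBlocks() {
>     return totalBlocks;
>   }
> }
> {code}
> In the example above, file1 and file2 each contribute 2 block ids; the snapshot copies contribute the same ids again, so the total stays at 4 instead of 12.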
> I would like to work on this and will propose an initial solution shortly.


