hadoop-hdfs-issues mailing list archives

From "feng xu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-13476) HDFS (Hadoop/HDP 2.7.3.2.6.4.0-91) reports CORRUPT files
Date Wed, 18 Apr 2018 18:45:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-13476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

feng xu updated HDFS-13476:
---------------------------
    Description: 
We have security software that runs on the local file system (ext4) and denies
particular users access to particular HDFS folders based on a security policy. For
example, the policy always gives the user hdfs full permissions and denies the user
yarn access to /dir1. If the user yarn tries to access a file under the HDFS folder
/dir1, the security software denies the access and the file system call returns EACCES
through errno. This used to work because data corruption was determined by the block
scanner ([https://blog.cloudera.com/blog/2016/12/hdfs-datanode-scanners-and-disk-checker-explained/]).
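
For illustration: the JVM surfaces an EACCES from open(2) as a java.io.FileNotFoundException
whose message ends in "(Permission denied)". That is the same exception type a genuinely
missing block file produces, so a caller cannot tell a policy denial apart from a deleted
replica by the exception alone. A minimal sketch (the block file path is hypothetical):

    import java.io.FileInputStream;
    import java.io.FileNotFoundException;
    import java.io.IOException;

    public class EaccesDemo {
        public static void main(String[] args) {
            // Hypothetical block file whose access is denied by the security
            // policy; open(2) fails with EACCES at the ext4 layer.
            String path = "/hadoop/hdfs/data/current/finalized/blk_1073741825";
            try (FileInputStream in = new FileInputStream(path)) {
                in.read();
            } catch (FileNotFoundException e) {
                // EACCES arrives here as "<path> (Permission denied)", the same
                // exception type as "<path> (No such file or directory)".
                System.out.println("open failed: " + e.getMessage());
            } catch (IOException e) {
                System.out.println("read failed: " + e.getMessage());
            }
        }
    }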

On HDP 2.7.3.2.6.4.0-91, HDFS reports many data corruptions because the security policy
denies file access to HDFS on the local file system. We debugged HDFS and found that
BlockSender() directly calls the following statements, which may cause the problem:

datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
datanode.data.invalidate(block.getBlockPoolId(), new Block[]{block.getLocalBlock()});
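
For context, here is a simplified sketch of the failure path as we read it. This is a
paraphrase of the BlockSender constructor, not a verbatim copy of the HDFS source;
metaIn and replica stand for state set up earlier in the constructor:

    try {
        // Opening the replica's meta file ends in open(2) on ext4. The security
        // policy makes that call fail with EACCES, which the JVM reports as
        // FileNotFoundException("... (Permission denied)").
        metaIn = datanode.data.getMetaDataInputStream(block);
    } catch (FileNotFoundException e) {
        // The replica is still in the volume map but could not be opened, so the
        // code concludes it was deleted from disk: it tells the NameNode the block
        // is gone and invalidates the local copy, although the file is intact and
        // only access was denied.
        datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
        datanode.data.invalidate(block.getBlockPoolId(),
            new Block[]{block.getLocalBlock()});
        throw new IOException("Meta-data not found for " + block, e);
    }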

Meanwhile, the block scanner is not triggered because of the undocumented property
dfs.datanode.disk.check.min.gap. However, the problem is still there even if we disable
dfs.datanode.disk.check.min.gap by setting it to 0 (see the example configuration below).
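
For reference, this is the setting we used. It goes in hdfs-site.xml on the DataNodes;
a value of 0 removes the minimum gap between successive checks of the same volume
(later Hadoop releases document the default as 15m):

    <!-- hdfs-site.xml (DataNode): remove the minimum gap between disk checks. -->
    <property>
      <name>dfs.datanode.disk.check.min.gap</name>
      <value>0</value>
    </property>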

  was:
We have security software that runs on the local file system (ext4) and denies
particular users access to particular HDFS folders based on a security policy. For
example, the policy always gives the user hdfs full permissions and denies the user
yarn access to /dir1. If the user yarn tries to access a file under the HDFS folder
/dir1, the security software denies the access and the file system call returns EACCES
through errno. This used to work because data corruption was determined by the block
scanner ([https://blog.cloudera.com/blog/2016/12/hdfs-datanode-scanners-and-disk-checker-explained/]).

On HDP 2.7.3.2.6.4.0-91, HDFS reports many data corruptions because the security policy
denies file access to HDFS on the local file system. We debugged HDFS and found that
BlockSender() directly calls the following statements and causes the problem:

datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
datanode.data.invalidate(block.getBlockPoolId(), new Block[]{block.getLocalBlock()});

Meanwhile, the block scanner is not triggered because of the undocumented property
dfs.datanode.disk.check.min.gap. However, the problem is still there even if we disable
dfs.datanode.disk.check.min.gap by setting it to 0.


> HDFS (Hadoop/HDP 2.7.3.2.6.4.0-91) reports CORRUPT files
> --------------------------------------------------------
>
>                 Key: HDFS-13476
>                 URL: https://issues.apache.org/jira/browse/HDFS-13476
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.4
>            Reporter: feng xu
>            Priority: Critical
>
> We have security software that runs on the local file system (ext4) and denies
> particular users access to particular HDFS folders based on a security policy. For
> example, the policy always gives the user hdfs full permissions and denies the user
> yarn access to /dir1. If the user yarn tries to access a file under the HDFS folder
> /dir1, the security software denies the access and the file system call returns
> EACCES through errno. This used to work because data corruption was determined by the
> block scanner ([https://blog.cloudera.com/blog/2016/12/hdfs-datanode-scanners-and-disk-checker-explained/]).
> On HDP 2.7.3.2.6.4.0-91, HDFS reports many data corruptions because the security
> policy denies file access to HDFS on the local file system. We debugged HDFS and
> found that BlockSender() directly calls the following statements, which may cause the
> problem:
> datanode.notifyNamenodeDeletedBlock(block, replica.getStorageUuid());
> datanode.data.invalidate(block.getBlockPoolId(), new Block[]{block.getLocalBlock()});
> Meanwhile, the block scanner is not triggered because of the undocumented property
> dfs.datanode.disk.check.min.gap. However, the problem is still there even if we
> disable dfs.datanode.disk.check.min.gap by setting it to 0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


