hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11357) Secure Delete
Date Mon, 23 Jan 2017 03:09:27 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15833847#comment-15833847 ]

Hadoop QA commented on HDFS-11357:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} | {color:red} HDFS-11357 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11357 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12848743/0001-HDFS-secure-delete.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/18240/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Secure Delete
> -------------
>
>                 Key: HDFS-11357
>                 URL: https://issues.apache.org/jira/browse/HDFS-11357
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>            Priority: Minor
>         Attachments: 0001-HDFS-secure-delete.patch
>
>
> Occasionally, for compliance or other legal/process reasons, it is necessary to attest
> that data has been deleted in such a way that it cannot be retrieved even through low-level
> forensics (for some reasonable definition of this that typically excludes the resources a
> state actor can bring to data recovery). HDFS at-rest encryption offers one way to achieve
> this, provided the data keying strategy is sufficiently granular: one simply "forgets" the key
> corresponding to a given set of files and the data becomes irretrievable. However, if HDFS
> at-rest encryption is not enabled or a fine-grained keying strategy is not possible, another
> simple strategy can be employed.
> The objective is to ensure that once a block is deleted, no trace of its data remains on
> disk in unallocated regions, for all blocks, providing assurance that deleted data cannot
> be recovered at any time through reasonable effort, even with low-level access.
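The overwrite-before-unlink strategy the description implies can be sketched as below. This is a hypothetical illustration, not the attached patch: the class and method names (`SecureDeleteSketch`, `secureDelete`) and the pass count are invented for the example.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.security.SecureRandom;

/**
 * Hypothetical sketch of secure block deletion: overwrite the block file
 * with random bytes before unlinking it, so the regions the filesystem
 * frees no longer hold the original data.
 */
public class SecureDeleteSketch {
  private static final int BUF_SIZE = 64 * 1024;

  public static void secureDelete(File blockFile, int passes) throws IOException {
    SecureRandom rng = new SecureRandom();
    byte[] buf = new byte[BUF_SIZE];
    // "rws" syncs both content and metadata on every write, so the
    // overwrite is pushed to the device rather than lingering in cache.
    try (RandomAccessFile raf = new RandomAccessFile(blockFile, "rws")) {
      long len = raf.length();
      for (int pass = 0; pass < passes; pass++) {
        raf.seek(0);
        long written = 0;
        while (written < len) {
          int n = (int) Math.min(buf.length, len - written);
          rng.nextBytes(buf);
          raf.write(buf, 0, n);
          written += n;
        }
      }
    }
    if (!blockFile.delete()) {
      throw new IOException("failed to delete " + blockFile);
    }
  }
}
```

Note that overwrite-in-place only gives the intended guarantee on media and filesystems that actually rewrite the same physical locations; SSD wear-leveling and copy-on-write filesystems can leave stale copies behind, which is one reason the key-forgetting approach above is attractive when fine-grained keying is available.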



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

