hadoop-common-issues mailing list archives

From "Larry McCay (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10150) Hadoop cryptographic file system
Date Tue, 10 Dec 2013 16:01:08 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13844371#comment-13844371 ]

Larry McCay commented on HADOOP-10150:
--------------------------------------

Hi Yi -
I am a bit confused by this latest comment.
Can you please clarify "hadoop-crypto component was removed from latest patch as a result
of Diceros emerging"?

Are you saying that you initially had a cipher provider implementation but have decided not
to provide one, since one is available in yet another non-Apache project? I don't believe
these sorts of external references are really appropriate. Neither Rhino nor Diceros is
a TLP or an incubator project at Apache. Since it appears to be an Intel-specific implementation,
though, it does seem appropriate to remove it from the patch.

Do you plan to provide an all-Java implementation for this work?


> Hadoop cryptographic file system
> --------------------------------
>
>                 Key: HADOOP-10150
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10150
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: security
>    Affects Versions: 3.0.0
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>              Labels: rhino
>             Fix For: 3.0.0
>
>         Attachments: CryptographicFileSystem.patch, HADOOP cryptographic file system.pdf
>
>
> There is an increasing need to secure data when Hadoop customers use various upper-layer
applications, such as Map-Reduce, Hive, Pig, HBase and so on.
> HADOOP CFS (HADOOP Cryptographic File System) secures data by using HADOOP
“FilterFileSystem” to decorate DFS or other file systems, and it is transparent to upper-layer
applications. It’s configurable, scalable and fast.
> High level requirements:
> 1. Transparent to upper-layer applications; no modification required.
> 2. “Seek” and “PositionedReadable” are supported for the CFS input stream if the
wrapped file system supports them.
> 3. Very high encryption and decryption performance; they must not become a bottleneck.
> 4. Can decorate HDFS and all other file systems in Hadoop without modifying the existing
structure of the wrapped file system, such as the namenode and datanode structure when the
wrapped file system is HDFS.
> 5. Admins can configure encryption policies, such as which directories will be encrypted.
> 6. A robust key management framework.
> 7. Support for Pread and append operations if the wrapped file system supports them.
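The decoration approach described above can be illustrated with a toy, self-contained Java sketch. This is not code from the HADOOP-10150 patch: it uses java.io.FilterInputStream in place of Hadoop's FilterFileSystem, and a XOR "cipher" (not real cryptography) in place of the AES codec the patch would provide, so all class names here are hypothetical. The point it shows is the transparency requirement: the caller reads through the decorator and never sees ciphertext.

```java
import java.io.*;

// Toy illustration of the decorator idea behind HADOOP CFS: a cipher stream
// wraps an underlying stream, and callers read plaintext transparently.
// XOR with a fixed key is NOT real cryptography; it only stands in for an
// AES codec. Class names here are hypothetical, not from the patch.
public class CfsDecoratorSketch {

    // For XOR, encryption and decryption are the same operation.
    static byte[] xor(byte[] data, byte key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key);
        }
        return out;
    }

    // Decorating input stream: decrypts bytes as they are read, analogous
    // to CFS wrapping the input stream of the underlying file system.
    static class XorInputStream extends FilterInputStream {
        private final byte key;

        XorInputStream(InputStream in, byte key) {
            super(in);
            this.key = key;
        }

        @Override
        public int read() throws IOException {
            int b = in.read();
            return b < 0 ? b : (b ^ key) & 0xFF;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            int n = in.read(buf, off, len);
            for (int i = 0; i < n; i++) {
                buf[off + i] ^= key;
            }
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte key = 0x5A;
        byte[] cipherText = xor("hello Hadoop".getBytes("UTF-8"), key);

        // The caller reads through the decorator and sees only plaintext.
        try (InputStream in = new XorInputStream(
                 new ByteArrayInputStream(cipherText), key);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[16];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
            System.out.println("decrypted: " + out.toString("UTF-8"));
        }
    }
}
```

A real implementation would additionally need to honor requirement 2 (Seek/PositionedReadable), which is why stream ciphers or counter-mode block ciphers are the natural fit: decryption at an arbitrary offset must not require reading from the start of the file.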



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
