From: Seonyoung Park
Date: Mon, 2 Sep 2013 10:24:34 +0900
Subject: HDFS en/decryption module
To: user@hadoop.apache.org

Dear all,

I have implemented an HDFS en/decryption module using the JCA (Java Cryptography Architecture), which makes cryptographic modules easy to develop. You can easily understand my code because it is implemented based on Hadoop's CompressionCodec interface.
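
In case it helps to picture the approach before opening the repository, here is a minimal, self-contained sketch of the stream-wrapping idea using only the JDK's javax.crypto classes. This is my own simplified illustration, not code from the repository: the class name, the AES/CTR choice, and the hard-coded key/IV handling are assumptions, and the real module presumably exposes similar wrapping through the CompressionCodec-style createOutputStream/createInputStream hooks.

import java.io.InputStream;
import java.io.OutputStream;
import java.security.GeneralSecurityException;

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/**
 * Sketch only: wrap the streams going to and from HDFS with the JCA cipher
 * streams, the same way a CompressionCodec wraps them with compression
 * streams. Key and IV handling are placeholders.
 */
public class AesStreamCodec {

    private final SecretKeySpec key;   // e.g. a 16-byte AES key
    private final IvParameterSpec iv;  // a fixed IV, for brevity only

    public AesStreamCodec(byte[] keyBytes, byte[] ivBytes) {
        this.key = new SecretKeySpec(keyBytes, "AES");
        this.iv = new IvParameterSpec(ivBytes);
    }

    /** Encrypts on the client side, before the bytes reach the datanodes. */
    public OutputStream createOutputStream(OutputStream out)
            throws GeneralSecurityException {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, iv);
        return new CipherOutputStream(out, cipher);
    }

    /** Decrypts while reading back from HDFS. */
    public InputStream createInputStream(InputStream in)
            throws GeneralSecurityException {
        Cipher cipher = Cipher.getInstance("AES/CTR/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, iv);
        return new CipherInputStream(in, cipher);
    }
}

A streaming mode such as CTR also makes the per-block parallel decryption described below straightforward, since each block can be decrypted independently once its counter offset is known; whichever mode the repository actually uses, the wrapping pattern stays the same.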

Both encryption and decryption are performed on the client side, since HDFS does not support multiple concurrent writers. In addition, the decryption module can be run by multiple HDFS tasktrackers in parallel; that is, every block is processed by a map task.
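
As a concrete picture of the client-side write path (again my own sketch, not the repository's API: the AesStreamCodec class from the sketch above, the path, and the hard-coded key/IV are hypothetical), encryption simply wraps the stream returned by FileSystem.create before any bytes are written:

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class EncryptedWriteExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical 128-bit key and zero IV; a real deployment would load
        // these from a key store rather than hard-coding them.
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
        byte[] iv  = new byte[16];
        AesStreamCodec codec = new AesStreamCodec(key, iv);

        FileSystem fs = FileSystem.get(new Configuration());
        try (OutputStream out = codec.createOutputStream(
                fs.create(new Path("/user/demo/secret.txt")))) {
            // Encryption happens here, on the client, before the data
            // travels to the datanodes.
            out.write("hello encrypted hdfs\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}

Reading is symmetric: wrap fs.open(path) with createInputStream, which is the same hook a map task would use to decrypt its share of the data.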

The code is located at: https://github.com/delipark/encrypted-hdfs

More detailed information can be found here: https://sites.google.com/a/networks.cnu.ac.kr/dnlab/members/seonyoung-park/encrypted-hdfs

There are no unit tests currently implemented, but I can provide them in the near future if necessary.

I know that there are similar approaches to HDFS encryption, such as HADOOP-9331. If possible, I would like to contribute to it once I understand the code structure.

Best regards,

SY Park.
