hadoop-common-issues mailing list archives

From "Jerry Chen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11343) Overflow is not properly handled in caclulating final iv for AES CTR
Date Thu, 04 Dec 2014 01:38:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233786#comment-14233786 ]

Jerry Chen commented on HADOOP-11343:
-------------------------------------

{quote}Let me see if I can state this problem differently.{quote}
Exactly. 

{quote}This actually isn't a problem if the files are read back exactly as they are written.
But if you try to read file A in two steps, or read file B in one step, the second block will
look corrupted. {quote}
Yes. Logically, if reads follow exactly the same pattern of calling calculateIV, at exactly
the same positions as the writes did, the data can still be decrypted. But that is a very
strong assumption and rarely the case in a real application.

{quote}Have you tested with both the openssl and java crypto implementations?{quote}
Currently the test case uses a JceAesCtrCryptoCodec instance to test calculateIV. The implementation
of calculateIV lives in the abstract base class AesCtrCryptoCodec and is shared by the two
CryptoCodec implementations. If a CryptoCodec ever has a real need to override calculateIV,
whether the overriding version should be covered by the same test assumptions is a separate
question. I think we can keep it simple here and leave it as is.

{quote}I do believe that you still need to provide an upgrade path. This means defining a
new crypto SUITE and make it the default. Existing files will use the old SUITE; the upgrade
path is to simply copy all the files in an EZ; when writing the new files the new SUITE will
be used and everything will work out.{quote}
I would argue this is not a normal upgrade situation. The existing encrypted data already
cannot be reliably decrypted with the old SUITE if the overflow condition described above
happened during encryption.

It is an illusion to expect the old SUITE to handle the problem any better than the fixed
one. The calling pattern of calculateIV when writing has no logical relation to the pattern
when reading. As just described, unless reads follow the same pattern as writes, the decryption
problem will happen when reading even with the old SUITE. So I tend to consider this a fix
of an "existing problem", rather than "a revised or improved version".
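The carry loss discussed above can be demonstrated in isolation. Below is a self-contained sketch (class and method names are mine, for illustration only): `buggyIv` mirrors the quoted calculateIV, which adds the counter in a 64-bit space, while `standardIv` performs the 128-bit addition modulo 2^128 that standard AES-CTR expects. Once the low 64 bits of the initial IV are large enough that adding the counter overflows, the two disagree:

```java
import java.math.BigInteger;
import java.util.Arrays;

public class IvOverflowDemo {
    static final int BLOCK = 16, CTR_OFFSET = 8;

    // Mirrors the quoted calculateIV: the counter is added into the low
    // 8 bytes only, so a carry into the upper 8 bytes is silently lost.
    static byte[] buggyIv(byte[] initIV, long counter) {
        byte[] iv = new byte[BLOCK];
        System.arraycopy(initIV, 0, iv, 0, CTR_OFFSET);
        long l = 0;
        for (int i = 0; i < 8; i++) {
            l = (l << 8) | (initIV[CTR_OFFSET + i] & 0xff);
        }
        l += counter;
        for (int i = 0; i < 8; i++) {
            iv[CTR_OFFSET + i] = (byte) (l >>> (56 - 8 * i));
        }
        return iv;
    }

    // Reference: standard AES-CTR semantics, a 128-bit add modulo 2^128.
    static byte[] standardIv(byte[] initIV, long counter) {
        BigInteger sum = new BigInteger(1, initIV)
                .add(BigInteger.valueOf(counter))
                .mod(BigInteger.ONE.shiftLeft(128));
        byte[] raw = sum.toByteArray();
        byte[] iv = new byte[BLOCK];
        int n = Math.min(raw.length, BLOCK);
        System.arraycopy(raw, raw.length - n, iv, BLOCK - n, n);
        return iv;
    }

    public static void main(String[] args) {
        byte[] initIV = new byte[BLOCK];
        Arrays.fill(initIV, (byte) 0xff); // low 64 bits all ones: adding 1 overflows
        System.out.println(Arrays.equals(buggyIv(initIV, 1), standardIv(initIV, 1)));
        // prints: false
    }
}
```

With this initial IV, the buggy version wraps the low half to zero and leaves the upper half at 0xff, while the standard 128-bit add wraps the whole IV to zero, so every ciphertext block past the overflow point uses the wrong counter.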

One benefit of creating a new SUITE is that we could distinguish the two types of data: data
that potentially has this problem and data that does not. But as Andrew pointed out, it is
a little sticky because we would be creating two SUITEs for the same AES-CTR configuration.

I would prefer another way to distinguish the two types of data: add a version attribute
to the encryption file info. That could be addressed in a separate JIRA.

Happy that we are converging on the solution. Thank you all for the review and comments.

> Overflow is not properly handled in caclulating final iv for AES CTR
> --------------------------------------------------------------------
>
>                 Key: HADOOP-11343
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11343
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.6.0
>            Reporter: Jerry Chen
>            Assignee: Jerry Chen
>            Priority: Blocker
>         Attachments: HADOOP-11343.patch
>
>
> In the AesCtrCryptoCodec calculateIV, the init IV is a randomly generated 16 bytes:
>
> final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
> cc.generateSecureRandom(iv);
>
> The subsequent calculation of iv plus counter in an 8-byte (64-bit) space can easily
> overflow, and this overflow gets lost. The result is that the 128-bit data block is
> encrypted with a wrong counter and cannot be decrypted by standard AES-CTR.
> /**
>    * The IV is produced by adding the initial IV to the counter. IV length 
>    * should be the same as {@link #AES_BLOCK_SIZE}
>    */
>   @Override
>   public void calculateIV(byte[] initIV, long counter, byte[] IV) {
>     Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
>     Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
>     
>     System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
>     long l = 0;
>     for (int i = 0; i < 8; i++) {
>       l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
>     }
>     l += counter;
>     IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
>     IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
>     IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
>     IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
>     IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
>     IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
>     IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
>     IV[CTR_OFFSET + 7] = (byte) (l);
>   }
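For reference, one possible shape of a carry-propagating fix (a sketch under my own class name, not necessarily the committed patch) is to add the 64-bit counter into the full 128-bit IV byte by byte from the least significant end, so the carry is preserved:

```java
public class CarryIv {
    static final int AES_BLOCK_SIZE = 16;

    // Add the 64-bit counter into the 128-bit big-endian initIV,
    // propagating the carry through all 16 bytes (mod 2^128).
    public static void calculateIV(byte[] initIV, long counter, byte[] IV) {
        int i = IV.length; // 16, same as the counter space of AES-CTR
        int j = 0;         // number of counter bytes folded in so far
        int sum = 0;
        while (i-- > 0) {
            // (sum >>> 8) is the carry from the previous byte addition
            sum = (initIV[i] & 0xff) + (sum >>> 8);
            if (j++ < 8) { // a long is 8 bytes; consume its low byte first
                sum += (byte) counter & 0xff;
                counter >>>= 8;
            }
            IV[i] = (byte) sum;
        }
    }
}
```

This is equivalent to `new BigInteger(1, initIV).add(BigInteger.valueOf(counter))` reduced modulo 2^128, so an all-ones IV plus a counter of 1 wraps the entire 128 bits to zero instead of only the low 64.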



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
