hadoop-common-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11343) Overflow is not properly handled in calculating final iv for AES CTR
Date Thu, 04 Dec 2014 18:29:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14234443#comment-14234443
] 

Andrew Wang commented on HADOOP-11343:
--------------------------------------

Hi all,

I'd like to try to quantify the likelihood of hitting this overflow situation (hat tip to
[~yoderme], with whom I discussed this first).

We're only in the danger zone when the random starting value plus the block counter overflows
64 bits. The maximum HDFS file size (by default) is 64TB, or 2^46 bytes. Divided by the 16-byte
(2^4) AES block size, that's 2^42 blocks. Dividing that by the 2^64 possible starting values of
the random 64-bit counter gives a 1 in 2^22 chance of hitting this, or about 1 in 4 million.

64TB is quite a pessimistic file size, though; something more typical might be 1GB, or 2^30 bytes.
Doing the same calculation (30 - 4 - 64), we get 1 in 2^38, or about 1 in 274 billion, which feels
pretty remote.
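
To spell that arithmetic out, here's a back-of-the-envelope sketch (variable names are just
illustrative, not anything in the codec):

{code}
// The counter can only wrap if the random 64-bit start value lands within
// numBlocks of 2^64, so P(overflow) ~= numBlocks / 2^64.
long worstCaseBytes = 1L << 46;              // 64TB, the default max file size
long worstCaseBlocks = worstCaseBytes >>> 4; // 16-byte AES blocks -> 2^42
double pWorst = Math.pow(2, 42 - 64);        // 2^-22, about 1 in 4 million

long typicalBytes = 1L << 30;                // 1GB
long typicalBlocks = typicalBytes >>> 4;     // 2^26 blocks
double pTypical = Math.pow(2, 26 - 64);      // 2^-38, about 1 in 274 billion
{code}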

Given that the odds of hitting this are quite small even with the old calculateIV, I think
it might be okay to just fix it in place, without even a new CipherSuite.
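
For illustration, an in-place fix could propagate the carry across the full 16-byte IV instead
of truncating at 64 bits. A rough sketch (not the attached patch, just the idea):

{code}
// Treat the whole 16-byte IV as one big-endian 128-bit number and add the
// counter to it byte by byte, carrying as we go.
public void calculateIV(byte[] initIV, long counter, byte[] IV) {
  int i = IV.length;  // 16
  int sum = 0;
  while (i-- > 0) {
    // (sum >>> Byte.SIZE) is the carry out of the previous byte's addition
    sum = (initIV[i] & 0xff) + ((int) counter & 0xff) + (sum >>> Byte.SIZE);
    counter >>>= 8;   // the counter only contributes to the low 8 bytes
    IV[i] = (byte) sum;
  }
}
{code}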

Yi's right that it'd be good to reject old DFSClients. We could even reject only when there's
a potential overflow, since we know the IV, the file length, and a likely max file size. We
could also detect old clients via an optional {{version}} PB field on read and create, rather
than hacking on a new CipherSuite. However, even this seems kind of optional if we're okay with
the above odds.
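
If we went that route, the overflow check itself is cheap. A hypothetical helper (the name and
signature are made up here, not existing code) might look like:

{code}
// Hypothetical: returns true if reading a file of maxFileLength bytes could
// push the low 64 bits of this IV past 2^64, i.e. the case where an old
// client's calculateIV would silently drop the carry.
static boolean couldOverflow(byte[] initIV, long maxFileLength) {
  long ctr = 0;
  for (int i = 8; i < 16; i++) {               // counter lives in bytes 8..15
    ctr = (ctr << 8) | (initIV[i] & 0xff);
  }
  long maxBlocks = (maxFileLength + 15) / 16;  // 16-byte AES blocks, rounded up
  // unsigned overflow check: the sum wrapped iff it compares below ctr
  return Long.compareUnsigned(ctr + maxBlocks, ctr) < 0;
}
{code}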

Thoughts?

> Overflow is not properly handled in calculating final iv for AES CTR
> --------------------------------------------------------------------
>
>                 Key: HADOOP-11343
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11343
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.6.0
>            Reporter: Jerry Chen
>            Assignee: Jerry Chen
>            Priority: Blocker
>         Attachments: HADOOP-11343.patch
>
>
> In the AesCtrCryptoCodec calculateIV, the initial IV is 16 randomly generated bytes:
>
> final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
> cc.generateSecureRandom(iv);
>
> The subsequent calculation of the IV and counter in an 8-byte (64-bit) space can easily
> overflow, and that overflow is silently lost. The result is that the 128-bit data block is
> encrypted with a wrong counter and cannot be decrypted by standard AES-CTR.
> {code}
> /**
>    * The IV is produced by adding the initial IV to the counter. IV length 
>    * should be the same as {@link #AES_BLOCK_SIZE}
>    */
>   @Override
>   public void calculateIV(byte[] initIV, long counter, byte[] IV) {
>     Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
>     Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
>     
>     System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
>     long l = 0;
>     for (int i = 0; i < 8; i++) {
>       l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
>     }
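>     // BUG: the carry out of this 64-bit addition is silently dropped; it
>     // never propagates into the high bytes initIV[0..CTR_OFFSET-1].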
>     l += counter;
>     IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
>     IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
>     IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
>     IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
>     IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
>     IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
>     IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
>     IV[CTR_OFFSET + 7] = (byte) (l);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
