hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf
Date Thu, 09 Feb 2017 17:49:42 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859891#comment-15859891

Daryn Sharp commented on HDFS-11026:

I don't feel super strongly.  What I meant – and maybe this is #2 from Chris – is I'd suggest
reading the varlong expiry as we do today.  If the long (I know, I said byte) is a magic number
like min or max long, decode the rest as PB, which will include the real expiry; else continue
decoding as writable.  Ex. something like this:

  public void readFields(DataInput in) throws IOException {
    this.cache = null;
    expiryDate = WritableUtils.readVLong(in);
    if (expiryDate == MAGIC_PB_VALUE) {
      readFieldsProtobuf(in);  // PB payload carries the real expiry
    } else {
      keyId = WritableUtils.readVInt(in);
      // ... continue decoding remaining writable fields
    }
  }

Thanks for the stack traces.  Something looks concerning... Oracle JDK blows up in WritableUtils.readVInt,
which decodes a long and throws if it is outside the range of an int.  OpenJDK somehow made it past
that check and blew up in WritableUtils.readString while reading the next field.  Huh? How
is that possible?  Do we have a byte ordering issue?
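For reference, the range check readVInt applies after decoding the varlong boils down to something like this (a standalone sketch, not the actual WritableUtils source; `toIntChecked` is a made-up name for illustration):

```java
import java.io.IOException;

public class VIntRangeCheck {
    // Sketch of the check WritableUtils.readVInt performs: it decodes a
    // varlong, then throws IOException if the value cannot fit in an int.
    static int toIntChecked(long decoded) throws IOException {
        if (decoded > Integer.MAX_VALUE || decoded < Integer.MIN_VALUE) {
            throw new IOException("value too long to fit in integer: " + decoded);
        }
        return (int) decoded;
    }

    public static void main(String[] args) throws IOException {
        // An ordinary keyId passes through unchanged.
        int keyId = toIntChecked(42);

        // A magic value like Long.MAX_VALUE, if misread as a vint by an
        // old decoder, should trip the check rather than decode garbage.
        try {
            toIntChecked(Long.MAX_VALUE);
        } catch (IOException expected) {
            System.out.println("rejected out-of-range value");
        }
    }
}
```

If OpenJDK's decode path got past an equivalent check and only failed later in readString, that would suggest the bytes it consumed for the vint differed from what Oracle's path consumed, which is why a byte ordering or framing issue is worth ruling out.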

> Convert BlockTokenIdentifier to use Protobuf
> --------------------------------------------
>                 Key: HDFS-11026
>                 URL: https://issues.apache.org/jira/browse/HDFS-11026
>             Project: Hadoop HDFS
>          Issue Type: Task
>          Components: hdfs, hdfs-client
>    Affects Versions: 2.9.0, 3.0.0-alpha1
>            Reporter: Ewan Higgs
>            Assignee: Ewan Higgs
>             Fix For: 3.0.0-alpha3
>         Attachments: blocktokenidentifier-protobuf.patch, HDFS-11026.002.patch, HDFS-11026.003.patch,
HDFS-11026.004.patch, HDFS-11026.005.patch
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} (basically a {{byte[]}})
and manual serialization to get data into and out of the encrypted buffer (in {{BlockKeyProto}}).
Other TokenIdentifiers (e.g. {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf.
The {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded more easily
and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able to decipher
{{BlockKeyProto.keyBytes}} from 2.8.y.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
