Date: Thu, 9 Feb 2017 20:20:41 +0000 (UTC)
From: "Chris Douglas (JIRA)"
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Commented] (HDFS-11026) Convert BlockTokenIdentifier to use Protobuf

[ https://issues.apache.org/jira/browse/HDFS-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15860127#comment-15860127 ]

Chris Douglas commented on HDFS-11026:
--------------------------------------

bq. If the long (I know, I said byte) is a magic number like min or max long, decode the rest as PB, which will include the real expiry; else continue decoding as writable.

In #2, the magic number matches the {{field number << 3 | type}} tag from PB (0x08 in the current patch, IIRC). Using the {{WritableUtils}} parsing machinery is a nice touch, though we still need the marker byte at the head of the stream when we call protobuf. We're still prevented from removing the expiryTime field while we support 3.x. I suppose it can evolve to #3, where we include the expiryTime field to support 3.x clients. Implementors not targeting 2.x get a normal PB record.

What you describe is closer to #3, where the first byte is an optional PB field that we strip away before handing the record to protobuf. The protobuf parser doesn't know or care if we strip the first, optional field.
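For future readers of this thread, the marker-byte scheme in #2/#3 can be sketched in isolation. This is a hypothetical, standalone illustration, not code from the patch: the class and method names are invented, the byte layouts are assumed, and plain Java stands in for what the real code would do with {{WritableUtils}} and the generated protobuf parser.

```java
import java.util.Arrays;

public class BlockTokenFormatSniff {
    // A protobuf tag byte is (fieldNumber << 3) | wireType. For field 1
    // encoded as a varint (wire type 0) the tag is 0x08, the marker value
    // referenced above.
    static final int PB_MARKER = (1 << 3) | 0; // 0x08

    // #2: peek at the first byte; if it is the protobuf tag for field 1,
    // decode the rest of the record as protobuf, else fall back to the
    // legacy Writable decoding.
    static boolean isProtobufEncoded(byte[] serialized) {
        return serialized.length > 0 && (serialized[0] & 0xFF) == PB_MARKER;
    }

    // #3 (illustrative assumption): treat the marker as a one-byte optional
    // version field -- tag byte plus a single varint payload byte -- and
    // strip both before handing the remainder to the protobuf parser,
    // which never sees the stripped field.
    static byte[] stripLeadingVersionField(byte[] serialized) {
        if (!isProtobufEncoded(serialized)) {
            throw new IllegalArgumentException("not a protobuf-marked record");
        }
        return Arrays.copyOfRange(serialized, 2, serialized.length);
    }

    public static void main(String[] args) {
        // Hypothetical protobuf record: marker tag, version=1, then a
        // length-delimited field (tag 0x12, length 3, "foo").
        byte[] pb = {0x08, 0x01, 0x12, 0x03, 'f', 'o', 'o'};
        // Hypothetical legacy record: its first byte is a WritableUtils
        // vlong length marker for a large positive expiryTime.
        byte[] legacy = {(byte) 0x8A, 0x01, 0x5A, 0x30, 0x7E, 0x12, 0x00};
        System.out.println(isProtobufEncoded(pb));                // true
        System.out.println(isProtobufEncoded(legacy));            // false
        System.out.println(stripLeadingVersionField(pb).length);  // 5
    }
}
```

The single-byte sniff is safe only under the assumption that the legacy format begins with a {{WritableUtils}} vlong for expiryTime: any realistic epoch-millis value is encoded with a multi-byte length marker as its first byte, which cannot equal 0x08.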
That first, optional field will be vestigial in future implementations that don't need backwards compatibility with 2.x. Parsing is simplified because we don't need mark/reset (or similar), but we add a (trivial) overhead to each record and a weird compatibility case. There's also the indignity of adding a version field to a DDL whose purpose was to manage versioning.

Again, we're splitting hairs here. It's very unlikely we'll remove/replace expiryTime, or notice the overhead of an explicit version field. I'm +1 on the current patch (with a note in the .proto to help future maintainers), but would be fine with either variant.

> Convert BlockTokenIdentifier to use Protobuf
> --------------------------------------------
>
>                 Key: HDFS-11026
>                 URL: https://issues.apache.org/jira/browse/HDFS-11026
>             Project: Hadoop HDFS
>          Issue Type: Task
>          Components: hdfs, hdfs-client
>    Affects Versions: 2.9.0, 3.0.0-alpha1
>            Reporter: Ewan Higgs
>            Assignee: Ewan Higgs
>             Fix For: 3.0.0-alpha3
>
>         Attachments: blocktokenidentifier-protobuf.patch, HDFS-11026.002.patch, HDFS-11026.003.patch, HDFS-11026.004.patch, HDFS-11026.005.patch
>
>
> {{BlockTokenIdentifier}} currently uses a {{DataInput}}/{{DataOutput}} (basically a {{byte[]}}) and manual serialization to get data into and out of the encrypted buffer (in {{BlockKeyProto}}). Other TokenIdentifiers (e.g. {{ContainerTokenIdentifier}}, {{AMRMTokenIdentifier}}) use Protobuf. The {{BlockTokenIdentifier}} should use Protobuf as well so it can be expanded more easily and will be consistent with the rest of the system.
> NB: Release of this will require a version update since 2.8.x won't be able to decipher {{BlockKeyProto.keyBytes}} from 2.8.y.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)