hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-15638) Shade protobuf
Date Wed, 13 Apr 2016 18:50:25 GMT

     [ https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-15638:
    Attachment: 15638v2.patch

Progress. I got past the FanOutOneBlockAsyncDFSOutput... issue, where an HDFS method expected
a com.google.protobuf.ByteString, by explicitly referencing c.g.p.ByteString; HDFS's pb2.5
will be transitively included, so this explicit reference will be satisfied (we'll have two
pbs on our CLASSPATH: Hadoop's transitively included pb2.5 and our own relocated pb3).
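For context, the relocation this implies looks roughly like the below maven-shade-plugin configuration. This is a sketch only, not the attached patch's actual config; the shadedPattern target package is an assumption for illustration.

```xml
<!-- Illustrative only: relocate the bundled pb3 under a shaded package so it
     cannot collide with the pb2.5 that Hadoop pulls in transitively. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <!-- hypothetical target package -->
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a relocation like this, code compiled against the shaded artifact sees pb3 under the shaded package name, while unrelocated com.google.protobuf references continue to resolve to Hadoop's pb2.5.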

This patch excludes hbase-rest and hbase-spark for now. Both fail because they have
.proto files local to their modules. I intend to move the protos out and back into the
hbase-protocol module. This breaks the all-spark-related-code-lives-in-hbase-spark
arrangement, but has the benefit of keeping our pb'ing mess all in one place.

If folks are fine w/ the above approach, I'll proceed. Will let this percolate a while in case
there are better suggestions or objections.

[~sergey.soldatov] and [~busbey] FYI


> Shade protobuf
> --------------
>                 Key: HBASE-15638
>                 URL: https://issues.apache.org/jira/browse/HBASE-15638
>             Project: HBase
>          Issue Type: Bug
>          Components: Protobufs
>            Reporter: stack
>         Attachments: 15638v2.patch, as.far.as.server.patch
> Shade protobufs so we can move to a different version without breaking the world. We
want to get up on pb3 because it has unsafe methods that allow us to save on copies; it also
has some means of dealing with ByteBuffers, so we can pass it offheap DirectByteBuffers. We'll
probably want to change pb3 to open it up some more too, so we can stay offheap as we traverse
pb. This issue comes out of [~anoop.hbase] and [~ram_krish]'s offheaping-of-the-read-path work.
> This change is mostly straightforward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>       if (payload != null) {
>         builder.setPayload(ByteString.copyFrom(payload));
>       }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as pb. Test
them at least.
> Let me raise this one on the dev list too.
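The explicit-reference workaround described in the comment above boils down to standard fully-qualified-name disambiguation. Below is a stdlib-only sketch of the same trick, with java.util.List and java.awt.List standing in for the relocated and unrelocated ByteString classes; the names are illustrative, not HBase code.

```java
// Sketch (not HBase code): when two classes share a simple name, skip the
// import and reference one by its fully qualified name. This is the same move
// as writing com.google.protobuf.ByteString explicitly so the HDFS-facing call
// binds to the unrelocated pb2.5 class rather than the shaded pb3 one.
import java.util.ArrayList;

public class FqnDisambiguation {
    // Insists on java.util.List, analogous to an HDFS method that insists on
    // the unrelocated com.google.protobuf.ByteString.
    static int size(java.util.List<String> payload) {
        return payload.size();
    }

    public static void main(String[] args) {
        // Explicit FQN picks java.util.List even though java.awt.List also
        // exists on the classpath.
        java.util.List<String> payload = new ArrayList<>();
        payload.add("payload-bytes");
        System.out.println(size(payload)); // prints 1
    }
}
```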

This message was sent by Atlassian JIRA
