hbase-issues mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-15638) Shade protobuf
Date Wed, 13 Apr 2016 17:33:25 GMT

    [ https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15239659#comment-15239659 ]

stack commented on HBASE-15638:

bq. we could make a module that creates a shaded/relocated version of the protobuf version
we want to use. Then we could use those relocated packages for our internal use.

Is that what this patch does [~busbey]? hbase-protocol is the only place we have a dependency
on protobuf, and it does the relocation that downstream modules then make use of.
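
For reference, relocation of this sort is typically done with the maven-shade-plugin. A minimal sketch (the shaded package prefix here is illustrative, not necessarily what the patch uses):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrite protobuf classes (and references to them) into a
                 private package so we can upgrade pb without breaking
                 downstreams that still expect com.google.protobuf. -->
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}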

bq. That way, FanOutOneBlockAsyncDFSOutputSaslHelper can use the protobuf version from hadoop
like normal. 

Hmm... Thanks for this. The 'fix' then would be to use the pb that hdfs has pulled in... which
would be com.google.protobuf. That'd work.
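
To make the boundary concrete: a relocated ByteString and the unshaded com.google.protobuf.ByteString that HDFS expects are unrelated classes as far as the JVM is concerned, so the only safe handoff is raw bytes. A minimal sketch, using hypothetical stand-in classes rather than the real protobuf API:

```java
// Stand-in for the unshaded com.google.protobuf.ByteString HDFS expects.
class HdfsByteString {
  private final byte[] bytes;
  private HdfsByteString(byte[] b) { this.bytes = b.clone(); }
  static HdfsByteString copyFrom(byte[] b) { return new HdfsByteString(b); }
  byte[] toByteArray() { return bytes.clone(); }
}

// Stand-in for a relocated org.apache.hadoop.hbase.shaded...ByteString.
class ShadedByteString {
  private final byte[] bytes;
  private ShadedByteString(byte[] b) { this.bytes = b.clone(); }
  static ShadedByteString copyFrom(byte[] b) { return new ShadedByteString(b); }
  byte[] toByteArray() { return bytes.clone(); }
}

public class ShadeBoundary {
  // The HDFS-facing API only accepts its own (unshaded) type, so the
  // HBase side must hand over raw bytes; passing a ShadedByteString
  // here would not even compile -- the two classes are unrelated types.
  static HdfsByteString buildPayload(byte[] payload) {
    return HdfsByteString.copyFrom(payload);
  }

  public static void main(String[] args) {
    byte[] payload = {1, 2, 3};
    ShadedByteString internal = ShadedByteString.copyFrom(payload);
    // Cross the shade boundary via byte[]:
    HdfsByteString forHdfs = buildPayload(internal.toByteArray());
    System.out.println(forHdfs.toByteArray().length); // prints 3
  }
}
```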

bq. It's probably still worth putting it in a module, just so the code is more isolated (presuming
we eventually want it in HDFS).

I could do this. I always thought that we could have a WAL module... but we'd be talking of
an explicit asyncwal module in this case?

> Shade protobuf
> --------------
>                 Key: HBASE-15638
>                 URL: https://issues.apache.org/jira/browse/HBASE-15638
>             Project: HBase
>          Issue Type: Bug
>          Components: Protobufs
>            Reporter: stack
>         Attachments: as.far.as.server.patch
> Shade protobufs so we can move to a different version without breaking the world. We
> want to get up on pb3 because it has unsafe methods that allow us to save on copies; it also
> has some means of dealing with BBs so we can pass it offheap DBBs. We'll probably want to
> change PB3 to open it up some more too so we can stay offheap as we traverse PB. This issue
> comes out of [~anoop.hbase] and [~ram_krish]'s offheaping of the read path work.
> This change is mostly straightforward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>       if (payload != null) {
>         builder.setPayload(ByteString.copyFrom(payload));
>       }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as pb. Test
> at least.
> Let me raise this one on the dev list too.

This message was sent by Atlassian JIRA
