hbase-issues mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-13825) Get operations on large objects fail with protocol errors
Date Tue, 04 Aug 2015 03:02:05 GMT

     [ https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell updated HBASE-13825:
-----------------------------------
    Attachment: HBASE-13825-branch-1.patch
                HBASE-13825-0.98.patch
                HBASE-13825.patch

I followed the reference to HBASE-14076 over to HBASE-13230. The solution there is to use the static helper ProtobufUtil#mergeDelimitedFrom wherever we've written a delimited message and would use mergeDelimitedFrom to read it back in, since the delimited message format begins with the total message size encoded as a varint32. We use the encoded size to adjust the CodedInputStream limit as needed.
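To illustrate why the size prefix makes this safe: a delimited message is framed as a base-128 varint length followed by exactly that many payload bytes, so a reader learns the full message size before parsing and can raise its limit accordingly. This is only a stdlib sketch of the framing, not HBase's actual helper; the class and method names here are illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of protobuf's delimited framing: a varint32 size prefix, then
// the payload. The prefix is what lets ProtobufUtil#mergeDelimitedFrom
// (the real helper) adjust the CodedInputStream limit before parsing.
public class DelimitedFraming {

    // Decode a varint32, the same base-128 encoding protobuf uses
    // for the size prefix of a delimited message.
    static int readRawVarint32(InputStream in) throws IOException {
        int result = 0;
        for (int shift = 0; shift < 32; shift += 7) {
            int b = in.read();
            if (b < 0) throw new IOException("truncated varint");
            result |= (b & 0x7f) << shift;
            if ((b & 0x80) == 0) return result;
        }
        throw new IOException("malformed varint32");
    }

    public static void main(String[] args) throws IOException {
        // A payload of 300 bytes: the size 300 encodes as 0xAC 0x02.
        byte[] frame = new byte[2 + 300];
        frame[0] = (byte) 0xAC;
        frame[1] = (byte) 0x02;
        InputStream in = new ByteArrayInputStream(frame);
        int declaredSize = readRawVarint32(in);
        System.out.println(declaredSize); // prints 300
        // A reader can now bound (or raise) its parse limit to exactly
        // declaredSize before consuming the message body.
        byte[] payload = new byte[declaredSize];
        int read = in.read(payload);
        System.out.println(read == declaredSize); // prints true
    }
}
```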

Patches here also address relevant uses of <builder>#mergeFrom. We use Integer.MAX_VALUE as the size limit for CodedInputStream where the message size is not known. In some places it's unlikely a message processed there will exceed 64 MB, but I made the change anyway; using ProtobufUtil#mergeFrom is harmless and consistent.
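For context, the guard the patch works around behaves roughly as follows: the parser tracks bytes consumed against a configurable size limit (64 MB by default) and aborts once it is exceeded, which is the InvalidProtocolBufferException in the report below. Raising the limit to Integer.MAX_VALUE effectively disables the guard where no better bound is known. This is a simplified stand-in for that check, not the real CodedInputStream internals; names and messages are illustrative.

```java
// Simplified model of the size-limit check inside a protobuf parser.
// The real guard lives in com.google.protobuf.CodedInputStream and is
// configured via setSizeLimit(); this sketch only mirrors its effect.
public class SizeLimitSketch {
    // Protobuf's historical default parse limit: 64 MB.
    static final int DEFAULT_LIMIT = 64 * 1024 * 1024;

    // Throws once a message has consumed more bytes than the limit allows,
    // mimicking the "Protocol message was too large" failure mode.
    static void checkLimit(long bytesConsumed, int sizeLimit) {
        if (bytesConsumed > sizeLimit) {
            throw new IllegalStateException(
                "Protocol message was too large.  May be malicious.");
        }
    }

    public static void main(String[] args) {
        long largeMessage = 100L * 1024 * 1024; // a 100 MB value, as in the bug

        // With the limit raised to Integer.MAX_VALUE the parse proceeds.
        checkLimit(largeMessage, Integer.MAX_VALUE);
        System.out.println("accepted with raised limit");

        // With the 64 MB default the same message is rejected.
        try {
            checkLimit(largeMessage, DEFAULT_LIMIT);
        } catch (IllegalStateException expected) {
            System.out.println("rejected at default limit");
        }
    }
}
```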

branch-1 and 0.98 patches also incorporate HBASE-14076.

Reviewboard: https://reviews.apache.org/r/37062/

/cc [~stack] Touched a lot of your code here.

> Get operations on large objects fail with protocol errors
> ---------------------------------------------------------
>
>                 Key: HBASE-13825
>                 URL: https://issues.apache.org/jira/browse/HBASE-13825
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.0.0, 1.0.1
>            Reporter: Dev Lakhani
>            Assignee: Andrew Purtell
>             Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
>         Attachments: HBASE-13825-0.98.patch, HBASE-13825-branch-1.patch, HBASE-13825.patch
>
>
> When performing a get operation on a column family with more than 64MB of data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
>         at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
>         at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
>         at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
>         at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
>         at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
>         at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
>         at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
>         at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
>         at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but that issue concerns cluster status.
> Scan and put operations on the same data work fine.
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
