avro-user mailing list archives

From Jeremy Lewi <jer...@lewi.us>
Subject Re: Possible bug: byteBuffer limit not respected when copying
Date Wed, 14 Mar 2012 17:30:12 GMT
I filed a bug and attached a patch
https://issues.apache.org/jira/browse/AVRO-1045

J

On Sun, Mar 11, 2012 at 3:38 PM, Jeremy Lewi <jeremy@lewi.us> wrote:

> Hi,
>
> In org.apache.avro.generic.GenericData.deepCopy - the code for copying a
> ByteBuffer is
>         ByteBuffer byteBufferValue = (ByteBuffer) value;
>         byte[] bytesCopy = new byte[byteBufferValue.capacity()];
>         byteBufferValue.rewind();
>         byteBufferValue.get(bytesCopy);
>         byteBufferValue.rewind();
>         return ByteBuffer.wrap(bytesCopy);
>
> I think this is problematic because it will cause a BufferUnderflowException
> to be thrown if the ByteBuffer's limit is less than its capacity: the bulk
> get asks for capacity() bytes, but only limit() bytes remain after the rewind.
>
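> As a standalone illustration (plain java.nio, nothing Avro-specific, and a
> hypothetical UnderflowDemo class), the same pattern reproduces the failure:
>
>         import java.nio.BufferUnderflowException;
>         import java.nio.ByteBuffer;
>
>         public class UnderflowDemo {
>           public static void main(String[] args) {
>             // Capacity 16, but only 4 bytes written, so limit (4) < capacity (16).
>             ByteBuffer buf = ByteBuffer.allocate(16);
>             buf.put(new byte[] {1, 2, 3, 4});
>             buf.flip();
>
>             // Same steps as deepCopy: allocate capacity() bytes, rewind, bulk get.
>             byte[] copy = new byte[buf.capacity()];
>             buf.rewind();
>             try {
>               buf.get(copy); // asks for 16 bytes, but only 4 remain
>             } catch (BufferUnderflowException e) {
>               System.out.println("underflow, as described above");
>             }
>           }
>         }
>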
> My use case is as follows. I have ByteBuffers backed by large arrays so I
> can avoid resizing the array every time I write data, which means limit <
> capacity. When the data is written or copied, I think Avro should respect
> this. When data is serialized, Avro should automatically use the minimum
> number of bytes. When an object is copied, I think it makes sense to
> preserve the capacity of the underlying buffer as opposed to compacting it.
>
> So I think the code could be fixed by replacing the bulk get with
>         byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
>
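> In full, the copy could look something like this (just a sketch of the idea,
> not necessarily the final fix), with the copy's limit restored as well:
>
>         ByteBuffer byteBufferValue = (ByteBuffer) value;
>         byte[] bytesCopy = new byte[byteBufferValue.capacity()];
>         byteBufferValue.rewind();
>         // Read only up to limit() so we never request more bytes than remain.
>         byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
>         byteBufferValue.rewind();
>         // Keep the original capacity, but give the copy the same limit too.
>         ByteBuffer copy = ByteBuffer.wrap(bytesCopy);
>         copy.limit(byteBufferValue.limit());
>         return copy;
>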
> Before I file a bug is there anything I'm missing?
>
> J
>
>
>
