avro-user mailing list archives

From Jeremy Lewi <jer...@lewi.us>
Subject Possible bug: byteBuffer limit not respected when copying
Date Sun, 11 Mar 2012 22:38:45 GMT

In org.apache.avro.generic.GenericData.deepCopy, the code for copying a
ByteBuffer is
        ByteBuffer byteBufferValue = (ByteBuffer) value;
        byte[] bytesCopy = new byte[byteBufferValue.capacity()];
        byteBufferValue.get(bytesCopy);
        return ByteBuffer.wrap(bytesCopy);

I think this is problematic because the unconditional get of capacity() bytes
will cause a BufferUnderflowException to be thrown whenever the ByteBuffer's
limit is less than its capacity.

My use case is as follows. I have ByteBuffers backed by large arrays so I can
avoid resizing the array every time I write data, which means limit < capacity.
I think avro should respect this when the data is written or copied. When data
is serialized, avro should automatically use the minimum number of bytes.
When an object is copied, I think it makes sense to preserve the capacity
of the underlying buffer as opposed to compacting it.
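To make the failure concrete, here is a small standalone sketch (class and
method names are mine, not Avro's) showing that reading capacity() bytes out
of a buffer whose limit is smaller underflows:

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

public class UnderflowDemo {
    // Returns true if copying capacity() bytes out of the buffer underflows,
    // mimicking what deepCopy's bulk get would attempt.
    static boolean copyUnderflows(ByteBuffer buf) {
        byte[] bytesCopy = new byte[buf.capacity()];
        try {
            buf.get(bytesCopy);  // tries to read capacity() bytes
            return false;
        } catch (BufferUnderflowException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // A buffer backed by a large array: capacity 64, only 10 bytes written.
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put(new byte[10]);
        buf.flip();  // position = 0, limit = 10, capacity = 64
        System.out.println(copyUnderflows(buf));  // prints "true"
    }
}
```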

So I think the code could be fixed by replacing the get call with
byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
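Put together, the fixed copy might look like the sketch below. This is my own
standalone version, not the actual GenericData method, and the position-save
and limit-preserving lines are my additions beyond the one-line fix:

```java
import java.nio.ByteBuffer;

public class ByteBufferCopy {
    // Copy only up to limit() bytes, while keeping the original capacity.
    static ByteBuffer deepCopy(ByteBuffer value) {
        byte[] bytesCopy = new byte[value.capacity()];
        int pos = value.position();
        value.position(0);
        value.get(bytesCopy, 0, value.limit());  // read limit() bytes, not capacity()
        value.position(pos);                     // leave the source buffer as we found it
        ByteBuffer copy = ByteBuffer.wrap(bytesCopy);
        copy.limit(value.limit());               // preserve the original limit too
        return copy;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put(new byte[]{1, 2, 3});
        buf.flip();                              // limit = 3, capacity = 64
        ByteBuffer copy = deepCopy(buf);
        System.out.println(copy.limit() + " " + copy.capacity());  // prints "3 64"
    }
}
```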

Before I file a bug is there anything I'm missing?

