avro-dev mailing list archives

From "Scott Carey (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AVRO-1045) deepCopy of BYTES underflow exception
Date Thu, 15 Mar 2012 17:07:39 GMT

    [ https://issues.apache.org/jira/browse/AVRO-1045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230303#comment-13230303 ]

Scott Carey commented on AVRO-1045:
-----------------------------------


There are two choices that make sense to me:
* Copy the whole buffer and set all positions in the destination to match the
source (limit, position, mark, capacity), being cognizant of arrayOffset in case the source
buffer is a slice of a larger array.
* Assume the data of interest lies between position and limit, and copy that into a new byte
buffer starting at index 0, with the new limit set to (limit - pos).
In both cases, the original buffer needs to be returned to its original state.

Avro isn't currently doing either.
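The two choices above can be sketched as follows; this is an illustrative standalone example, not Avro code, and the method names are my own. Note that a Buffer's mark cannot be read back through the API, so the first option replicates position and limit but not the mark. Using duplicate() also means the source buffer's state never needs restoring.

```java
import java.nio.ByteBuffer;

public class ByteBufferCopy {

    // Option 1: copy the full backing range and replicate position/limit.
    // duplicate() shares the data but has independent position/limit, so the
    // source is left untouched. Relative get() accounts for any arrayOffset
    // of a sliced buffer, so the backing array is never indexed directly.
    static ByteBuffer copyPreservingCapacity(ByteBuffer src) {
        ByteBuffer dup = src.duplicate();
        int pos = dup.position(), lim = dup.limit();
        byte[] copy = new byte[dup.capacity()];
        dup.clear();            // position = 0, limit = capacity
        dup.get(copy);          // copy the entire capacity
        ByteBuffer out = ByteBuffer.wrap(copy);
        out.limit(lim);
        out.position(pos);
        return out;
    }

    // Option 2: copy only the bytes between position and limit; the result
    // starts at index 0 with its limit set to (limit - pos).
    static ByteBuffer copyRemaining(ByteBuffer src) {
        ByteBuffer dup = src.duplicate();
        byte[] copy = new byte[dup.remaining()];
        dup.get(copy);
        return ByteBuffer.wrap(copy);
    }

    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.allocate(16);
        src.put(new byte[]{1, 2, 3, 4});
        src.flip();                                      // position = 0, limit = 4
        ByteBuffer a = copyPreservingCapacity(src);
        ByteBuffer b = copyRemaining(src);
        System.out.println(a.capacity() + " " + a.limit());     // 16 4
        System.out.println(b.capacity() + " " + b.limit());     // 4 4
        System.out.println(src.position() + " " + src.limit()); // 0 4 (unchanged)
    }
}
```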
                
> deepCopy of BYTES underflow exception
> -------------------------------------
>
>                 Key: AVRO-1045
>                 URL: https://issues.apache.org/jira/browse/AVRO-1045
>             Project: Avro
>          Issue Type: Bug
>          Components: java
>    Affects Versions: 1.6.2
>            Reporter: Jeremy Lewi
>            Priority: Minor
>             Fix For: 1.6.3
>
>         Attachments: AVRO-1045.patch
>
>
> In org.apache.avro.generic.GenericData.deepCopy - the code for copying a ByteBuffer is
>         ByteBuffer byteBufferValue = (ByteBuffer) value;
>         byte[] bytesCopy = new byte[byteBufferValue.capacity()];
>         byteBufferValue.rewind();
>         byteBufferValue.get(bytesCopy);
>         byteBufferValue.rewind();
>         return ByteBuffer.wrap(bytesCopy);
> I think this is problematic because it will cause a BufferUnderflowException to be thrown
> if the ByteBuffer's limit is less than its capacity.
> My use case is as follows: I have ByteBuffers backed by large arrays so I can avoid
> resizing the array every time I write data, so limit < capacity.
> When the data is written or copied, I think Avro should respect this. When data is
> serialized, Avro should automatically use the minimum number of bytes.
> When an object is copied, I think it makes sense to preserve the capacity of the underlying
> buffer as opposed to compacting it.
> So I think the code could be fixed by replacing the get call with
> byteBufferValue.get(bytesCopy, 0, byteBufferValue.limit());
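A minimal standalone sketch of the failure mode quoted above and the reporter's proposed one-line fix (the buffer sizes here are illustrative, not from the original report):

```java
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

// limit (3) < capacity (64), but the original deepCopy code asks get()
// for capacity bytes, so it underflows.
public class Avro1045Repro {
    public static void main(String[] args) {
        ByteBuffer value = ByteBuffer.allocate(64);    // oversized backing array
        value.put(new byte[]{1, 2, 3});
        value.flip();                                  // position = 0, limit = 3

        byte[] bytesCopy = new byte[value.capacity()]; // 64 bytes requested
        value.rewind();
        try {
            value.get(bytesCopy);                      // only 3 bytes remain
        } catch (BufferUnderflowException e) {
            System.out.println("BufferUnderflowException, as reported");
        }

        // Proposed fix from the report: bound the read by limit(), not capacity().
        value.rewind();
        value.get(bytesCopy, 0, value.limit());        // copies exactly 3 bytes
        value.rewind();
        System.out.println(bytesCopy[2]);              // 3
    }
}
```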

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
