avro-dev mailing list archives

From "Scott Carey (JIRA)" <j...@apache.org>
Subject [jira] Commented: (AVRO-27) Consistent Overhead Byte Stuffing (COBS) encoded block format for Object Container Files
Date Mon, 11 May 2009 21:04:45 GMT

    [ https://issues.apache.org/jira/browse/AVRO-27?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708219#action_12708219 ]

Scott Carey commented on AVRO-27:
---------------------------------

Todd: I think that many of the JDK 7 enhancements have been backported to JDK 1.6.0_u14. 
I'll run some experiments later.


Matt:
Great stuff!  Your results make sense to me based on previous experience.  I went and made some modifications myself to try out doing this 4 bytes at a time.

Unfortunately, this just made things more confusing for now.  

First, on your results:
* 75MB/sec is somewhat slow.  If anything else is roughly as expensive (say, the Avro serialization itself), then the max rate one client can encode and stream to another will be roughly half that.  The decode rate is good.
* As a microbenchmark of sorts, we'll want to make sure the JVM warms up: run an iteration or two of the test, garbage collect, then measure (a sketch follows this list).
* Apple's JVM is going to be a bit off.  I'll run some tests on a Linux server with Sun's JVM later, and try it with the 1.6.0_14 improvements as well.
* There is a bug -- the max interval between 0 byte occurrences is 256 -- which is probably why the results behaved like they did.
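
A minimal warm-up harness sketch of what I mean (runEncodeDecode() is a hypothetical stand-in for the codec under test):

{code:java}
class CodecBench {
    public static void main(String[] args) {
        byte[] data = new byte[64 * 1024 * 1024];     // test payload; fill per test case
        for (int i = 0; i < 3; i++) {
            runEncodeDecode(data);                    // warm-up: let the JIT compile, discard results
        }
        System.gc();                                  // clean slate before timing
        long start = System.nanoTime();
        runEncodeDecode(data);                        // the measured iteration
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.println((data.length / 1e6) / seconds + " MB/sec");
    }
    static void runEncodeDecode(byte[] data) { /* encode + decode with the codec under test */ }
}
{code}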
 
I ran the same tests on my machine using Apple's 1.5 JVM with similar results.  With Apple's (64 bit) 1.6 JVM, the throughput is much higher:

One 0 byte per 1000 (actually fewer due to the bug):
Encoding at 224.48262 MB/sec
Decoding at 1233.1406 MB/sec

All 0 bytes:
Encoding at 122.69939 MB/sec
Decoding at 62.184223 MB/sec

One 0 byte in 10:
Encoding at 143.20877 MB/sec
Decoding at 405.06326 MB/sec

So there is quite the potential for the latest Sun JVM to be fast ... or slow.

I wrote a "COWSCodec" to try this out with 4 byte chunks.  The initial encoding results were
good ... up to 300MB/sec with all 0 bytes.
However, that implementation uses ByteBuffer.asIntBuffer().  And those IntBuffer views do
not support the .array() method, so I had to use the IntBuffer.put(IntBuffer) signature for
bulk copies.
To do that cleanly, it made most sense to refactor the whole thing to use Java nio.Buffer
style method signatures (set position, limit before a copy, use mark(), flip(), etc).  After
doing so, it turns out that the IntBuffer views created by ByteBuffer.asIntBuffer do not really
support bulk get/put operations.  The max decode speed is about 420MB/sec.
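
For reference, the bulk-copy shape is roughly this (a minimal sketch with illustrative sizes, not the attached codec):

{code:java}
import java.nio.ByteBuffer;
import java.nio.IntBuffer;

class IntViewCopy {
    public static void main(String[] args) {
        ByteBuffer src = ByteBuffer.allocate(1 << 16);
        ByteBuffer dst = ByteBuffer.allocate(1 << 16);
        IntBuffer srcInts = src.asIntBuffer();   // view over the bytes; .array() unsupported
        IntBuffer dstInts = dst.asIntBuffer();
        srcInts.position(0);                     // stake out the chunk to copy
        srcInts.limit(256);                      // 256 ints = 1KB
        dstInts.put(srcInts);                    // bulk put(IntBuffer): for these views it
                                                 // copies element by element under the hood
    }
}
{code}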

So, there is one other way to do larger-chunk encodings out of a ByteBuffer source and destination -- use ByteBuffer.getInt() and raw byte copies rather than an intermediate IntBuffer wrapper.
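
A sketch of that variant (the word-at-a-time zero test is my substitution here, not necessarily what the codec will use):

{code:java}
import java.nio.ByteBuffer;

class WordScan {
    /** Index of the first zero byte in [0, buf.limit()), or -1; scans 4 bytes at a time. */
    static int findZero(ByteBuffer buf) {
        int wordEnd = buf.limit() & ~3;               // whole 4-byte words first
        for (int i = 0; i < wordEnd; i += 4) {
            int w = buf.getInt(i);                    // absolute read; position untouched
            // (w - 0x01010101) & ~w & 0x80808080 is non-zero iff some byte of w is zero
            if (((w - 0x01010101) & ~w & 0x80808080) != 0) {
                for (int j = i; j < i + 4; j++) {
                    if (buf.get(j) == 0) return j;
                }
            }
        }
        for (int i = wordEnd; i < buf.limit(); i++) { // byte-at-a-time tail
            if (buf.get(i) == 0) return i;
        }
        return -1;
    }
}
{code}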
 
I can also test out a 'real' IntBuffer backed by an int[] rather than a byte[], which should be the fastest -- but that is not applicable to reading/writing from a network or file.

Both of those should be fairly simple -- I'll clean up what I have, add that stuff, and put it up here in a day or two.
Linux tests and variations with the latest/greatest JVM will be informative as well.


> Consistent Overhead Byte Stuffing (COBS) encoded block format for Object Container Files
> ----------------------------------------------------------------------------------------
>
>                 Key: AVRO-27
>                 URL: https://issues.apache.org/jira/browse/AVRO-27
>             Project: Avro
>          Issue Type: New Feature
>          Components: spec
>            Reporter: Matt Massie
>         Attachments: COBSCodec.java
>
>
> Object Container Files could use a 1 byte sync marker (set to zero) using zig-zag and COBS encoding within blocks to efficiently escape zeros from the record data.
> h4. Zig-Zag encoding
> With zig-zag encoding, only the value 0 (zero) gets encoded into a value with a single zero byte.  This property means that we can write any non-zero zig-zag long inside a block without concern for creating an unintentional sync byte.
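>
> As a sketch, the standard zig-zag + varint encoding of a long shows why (names are illustrative):
> {code:java}
> // Zig-zag then varint-encode a long.  Continuation bytes always have the
> // high bit set, and the final byte is 0x00 only when n == 0, so only the
> // value zero ever produces a zero byte.
> static int writeZigZagLong(long n, byte[] buf, int pos) {
>     long v = (n << 1) ^ (n >> 63);               // zig-zag: move the sign to bit 0
>     while ((v & ~0x7FL) != 0) {
>         buf[pos++] = (byte) ((v & 0x7F) | 0x80); // non-final byte: high bit set, never zero
>         v >>>= 7;
>     }
>     buf[pos++] = (byte) v;                       // final byte: zero only when n == 0
>     return pos;
> }
> {code}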
> h4. COBS encoding
> We'll use COBS encoding to ensure that all zeros are escaped inside the block payload.  You can read http://www.sigcomm.org/sigcomm97/papers/p062.pdf for the details about COBS encoding.
> h1. Block Format
> All blocks start and end with a sync byte (set to zero), with a type-length-value format internally as follows:
> || name || format || length in bytes || value || meaning ||
> | sync | byte | 1 | always 0 (zero) | The sync byte serves as a clear marker for the start of a block |
> | type | zig-zag long | variable | must be non-zero | The type field expresses whether the block is for _metadata_ or _normal_ data. |
> | length | zig-zag long | variable | must be non-zero | The length field expresses the number of bytes until the next record (including the cobs code and sync byte).  Useful for skipping ahead to the next block. |
> | cobs_code | byte | 1 | see COBS code table below | Used in escaping zeros from the block payload |
> | payload | cobs-encoded | greater than or equal to zero | all non-zero bytes | The payload of the block |
> | sync | byte | 1 | always 0 (zero) | The sync byte serves as a clear marker for the end of the block |
> h2. COBS code table
> || Code || Followed by || Meaning ||
> | 0x00 | (not applicable) | (not allowed) |
> | 0x01 | nothing | Empty payload followed by the closing sync byte |
> | 0x02 | one data byte | The single data byte, followed by the closing sync byte |
> | 0x03 | two data bytes | The pair of data bytes, followed by the closing sync byte |
> | 0x04 | three data bytes | The three data bytes, followed by the closing sync byte |
> | n | (n-1) data bytes | The (n-1) data bytes, followed by the closing sync byte |
> | 0xFD | 252 data bytes | The 252 data bytes, followed by the closing sync byte |
> | 0xFE | 253 data bytes | The 253 data bytes, followed by the closing sync byte |
> | 0xFF | 254 data bytes | The 254 data bytes, *not* followed by a zero |
> (taken from http://www.sigcomm.org/sigcomm97/papers/p062.pdf)
> h1. Encoding
> Only the block writer needs to perform byte-by-byte processing to encode the block.  The overhead for COBS encoding is very small in terms of the in-memory state required.
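>
> A minimal encoder sketch matching the code table above (illustrative; the attached COBSCodec.java is the reference implementation):
> {code:java}
> // Classic COBS encode: dst needs room for len + len/254 + 1 bytes.
> static int cobsEncode(byte[] src, int len, byte[] dst) {
>     int out = 0;
>     int codePos = out++;                  // reserve space for the first code byte
>     int code = 1;                         // 1 = "no data bytes before the implied zero"
>     for (int i = 0; i < len; i++) {
>         if (src[i] == 0) {
>             dst[codePos] = (byte) code;   // close the run; the code stands in for this zero
>             codePos = out++;
>             code = 1;
>         } else {
>             dst[out++] = src[i];
>             if (++code == 0xFF) {         // 254 data bytes: emit 0xFF, no implied zero
>                 dst[codePos] = (byte) code;
>                 codePos = out++;
>                 code = 1;
>             }
>         }
>     }
>     dst[codePos] = (byte) code;           // final code; the implied zero is the closing sync byte
>     return out;
> }
> {code}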
> h1. Decoding
> Block readers are not required to do as much byte-by-byte processing as a writer.  The reader could (for example) find a _metadata_ block by doing the following (sketched below):
> # Search for a zero byte in the file, which marks the start of a block
> # Read and zig-zag decode the _type_ of the block
> #* If the block is _normal_ data, read the _length_, seek ahead to the next block, and go to step 2 again
> #* If the block is a _metadata_ block, COBS-decode the data
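>
> A reader sketch of those steps (readZigZagLong(), cobsDecode(), and NORMAL_DATA are hypothetical names):
> {code:java}
> static void seekToMetadata(java.io.DataInput in) throws java.io.IOException {
>     while (true) {
>         while (in.readByte() != 0) { }            // step 1: scan to a sync byte
>         long type = readZigZagLong(in);           // step 2: decode the block type
>         long length = readZigZagLong(in);
>         if (type == NORMAL_DATA) {
>             in.skipBytes((int) length);           // normal data: seek ahead, repeat
>         } else {
>             cobsDecode(in, length);               // metadata block: COBS-decode it
>             return;
>         }
>     }
> }
> {code}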

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

