couchdb-dev mailing list archives

From Robert Dionne <>
Subject Re: [jira] Commented: (COUCHDB-1092) Storing documents bodies as raw JSON binaries instead of serialized JSON terms
Date Fri, 18 Mar 2011 18:22:10 GMT

On Mar 18, 2011, at 2:08 PM, Randall Leeds (JIRA) wrote:

> Randall Leeds commented on COUCHDB-1092:
> ----------------------------------------
> I love a good bike shed more than most, but I've stayed pretty quiet since my first
> comment because I wanted to think hard about what Paul was saying.
> In the end, I agree with the last comment. I would be happy to trust the md5 and not
> validate on the way out _only_ so long as we close the API for manipulating docs and
> validate on the way in. Paul, if I understand correctly, this sort of change should
> make you rest easy.

I've also been watching this thread without comment, but would +1 your proposal if I
understand it correctly. I think the main concern is summarized in Paul's last post
(Paul, tell me to shut up if I'm wrong):

"The concern I want to see addressed is avoiding the requirement that we rely on JSON data
being specifically formatted while exposing that value as editable to client code."  -- davisp

Essentially the code isn't architected properly to support this change without adding the
risk of data corruption, and any amount of that is bad. Your proposal, Randall, is to go
forward with it subject to the constraint that more refactoring is done to clean up the
APIs before it's published. If so, then I'd say go for it. More frequent releases and more
progress would be valuable. I've seen a lot of forks and good ideas on github and would
love to see more of it on trunk, e.g. Paul's btree cleanup.

> The internal API change would mean more code refactoring, but we shouldn't be afraid
> of that.
> The agile way forward, if people agree that this solution is prudent, would be to commit
> to trunk and open a blocking ticket to close down the document body API before release.
> Trunk is trunk, let's iterate on it. We haven't even shipped 1.1 yet! We could even branch
> a feature-frozen trunk for 1.2 and drop this on trunk targeted for 1.3.
> I'd love to see the 1.2 cycle stay short and in general to have more frequent releases.
> It's something I feel we talk about a lot, but then we sit around and comment on tickets
> like this without taking the dive and committing. I don't mean that to sound like a rant. <3.
>> Storing documents bodies as raw JSON binaries instead of serialized JSON terms
>> ------------------------------------------------------------------------------
>>                Key: COUCHDB-1092
>>                URL:
>>            Project: CouchDB
>>         Issue Type: Improvement
>>         Components: Database Core
>>           Reporter: Filipe Manana
>>           Assignee: Filipe Manana
>> Currently we store documents as Erlang serialized terms (via the term_to_binary/1 BIF).
>> The proposed patch changes the database file format so that instead of storing serialized
>> EJSON document bodies, it stores raw JSON binaries.
>> The github branch is at:
>> Advantages:
>> * what we write to disk is much smaller - a raw JSON binary can easily get up to
>>   50% smaller (at least according to the tests I did)
>> * when serving documents to a client we no longer need to JSON encode the document
>>   read from the disk - this applies to individual document requests, view queries with
>>   ?include_docs=true, pull and push replications, and possibly other use cases.
>>   We just grab its body and prepend the _id, _rev and all the necessary metadata fields
>>   (this is done via simple Erlang binary operations)
>> * we avoid the EJSON term copying between request handlers and the db updater processes,
>>   between the work queues and the view updater process, and between replicator processes
>> * before sending a document to the JavaScript view server, we no longer need to convert
>>   from EJSON to JSON
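That metadata-prepend trick can be sketched roughly like this (in Python, with a
hypothetical helper - the patch itself does the equivalent with Erlang binary operations):

```python
def prepend_meta(doc_id: str, rev: str, raw_body: bytes) -> bytes:
    # raw_body is assumed to be a JSON object, i.e. it starts with '{'.
    # We splice the metadata fields in front of it without ever parsing
    # the body - just byte-level concatenation.
    meta = '{"_id":"%s","_rev":"%s"' % (doc_id, rev)
    rest = raw_body[1:]            # drop the opening brace
    if rest.strip() == b"}":       # empty object: no comma needed
        return meta.encode() + b"}"
    return meta.encode() + b"," + rest
```

No decode/re-encode round trip on the read path - the stored bytes are served almost
verbatim.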
>> The changes to the document write workflow are minimal - after JSON decoding the
>> document's JSON into EJSON and removing the top-level metadata fields (_id, _rev,
>> etc), it JSON encodes the resulting EJSON body into a binary. This consumes CPU of
>> course, but it brings 2 advantages:
>> 1) we avoid the EJSON copy between the request process and the database updater process -
>>    for any realistic document size (4kb or more) this can be very expensive, especially
>>    when there are many nested structures (lists inside objects inside lists, etc)
>> 2) before writing anything to the file, we do a term_to_binary([Len, Md5, TheThingToWrite])
>>    and then write the result to the file. A term_to_binary call with a binary as the input
>>    is very fast compared to a term_to_binary call with EJSON (or some other nested
>>    structure) as input
>> I think both advantages compensate for the JSON encoding done after separating the
>> metadata fields from the non-metadata fields.
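A rough analogue of that term_to_binary([Len, Md5, TheThingToWrite]) envelope, sketched in
Python (purely illustrative - the real code serializes an Erlang term):

```python
import hashlib
import struct

def wrap_for_disk(raw_json: bytes) -> bytes:
    # Fixed-size header (4-byte length + 16-byte md5) followed by the
    # payload verbatim. Wrapping an opaque binary is cheap because
    # nothing inside the payload has to be walked.
    md5 = hashlib.md5(raw_json).digest()
    return struct.pack(">I", len(raw_json)) + md5 + raw_json

def unwrap_from_disk(blob: bytes) -> bytes:
    (length,) = struct.unpack(">I", blob[:4])
    md5, payload = blob[4:20], blob[20:20 + length]
    if hashlib.md5(payload).digest() != md5:
        raise ValueError("checksum mismatch")
    return payload
```

That's the intuition behind advantage (2): the cost of the envelope no longer depends on
the shape of the document, only on its length.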
>> The following relaximation graph, for documents with sizes of 4Kb, shows a significant
>> performance increase both for writes and reads - especially reads.
>> I've also made a few tests to see how much the improvement is when querying a view,
>> for the first time, without ?stale=ok. The size difference of the databases (after
>> compaction) is also very significant - this change can reduce the size by at least 50%
>> in common cases.
>> The test databases were created in an instance built from that experimental branch.
>> Then they were replicated into a CouchDB instance built from the current trunk.
>> At the end both databases were compacted (to fairly compare their final sizes).
>> The databases contain the following view:
>> {
>>    "_id": "_design/test",
>>    "language": "javascript",
>>    "views": {
>>        "simple": {
>>            "map": "function(doc) { emit(doc.float1, doc.strings[1]); }"
>>        }
>>    }
>> }
>> ## Database with 500 000 docs of 2.5Kb each
>> Document template is at:
>> Sizes (branch vs trunk):
>> $ du -m couchdb/tmp/lib/disk_json_test.couch 
>> 1996	couchdb/tmp/lib/disk_json_test.couch
>> $ du -m couchdb-trunk/tmp/lib/disk_ejson_test.couch 
>> 2693	couchdb-trunk/tmp/lib/disk_ejson_test.couch
>> Time, from a user's perspective, to build the view index from scratch:
>> $ time curl http://localhost:5984/disk_json_test/_design/test/_view/simple?limit=1
>> {"total_rows":500000,"offset":0,"rows":[
>> {"id":"0000076a-c1ae-4999-b508-c03f4d0620c5","key":null,"value":"wfxuF3N8XEK6"}
>> ]}
>> real	6m6.740s
>> user	0m0.016s
>> sys	0m0.008s
>> $ time curl http://localhost:5985/disk_ejson_test/_design/test/_view/simple?limit=1
>> {"total_rows":500000,"offset":0,"rows":[
>> {"id":"0000076a-c1ae-4999-b508-c03f4d0620c5","key":null,"value":"wfxuF3N8XEK6"}
>> ]}
>> real	15m41.439s
>> user	0m0.012s
>> sys	0m0.012s
>> ## Database with 100 000 docs of 11Kb each
>> Document template is at:
>> Sizes (branch vs trunk):
>> $ du -m couchdb/tmp/lib/disk_json_test_11kb.couch
>> 1185	couchdb/tmp/lib/disk_json_test_11kb.couch
>> $ du -m couchdb-trunk/tmp/lib/disk_ejson_test_11kb.couch
>> 2202	couchdb-trunk/tmp/lib/disk_ejson_test_11kb.couch
>> Time, from a user's perspective, to build the view index from scratch:
>> $ time curl http://localhost:5984/disk_json_test_11kb/_design/test/_view/simple?limit=1
>> {"total_rows":100000,"offset":0,"rows":[
>> {"id":"00001511-831c-41ff-9753-02861bff73b3","key":null,"value":"2fQUbzRUax4A"}
>> ]}
>> real	4m19.306s
>> user	0m0.008s
>> sys	0m0.004s
>> $ time curl http://localhost:5985/disk_ejson_test_11kb/_design/test/_view/simple?limit=1
>> {"total_rows":100000,"offset":0,"rows":[
>> {"id":"00001511-831c-41ff-9753-02861bff73b3","key":null,"value":"2fQUbzRUax4A"}
>> ]}
>> real	18m46.051s
>> user	0m0.008s
>> sys	0m0.016s
>> All in all, I haven't yet seen any disadvantage with this approach. Also, the code
>> doesn't bring additional complexity. I'd say the performance and disk space gains it
>> gives are very positive.
>> This branch still needs to be polished in a few places, but I think it isn't far
>> from getting mature.
>> Other experiments that can be done are to store view values as raw JSON binaries as
>> well (instead of EJSON), and optional compression of the stored JSON binaries (since
>> it's pure text, the compression ratio is very high).
>> However, I would prefer to do these other 2 suggestions in separate branches/patches -
>> I haven't actually tested either of them yet, so maybe they won't bring significant gains.
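For what it's worth on the compression idea, plain zlib does shrink repetitive JSON text
substantially. A quick illustrative check (Python, made-up document, not a real benchmark):

```python
import json
import zlib

# Made-up document with the kind of repetition real JSON tends to have
# (repeated keys, quoting, similar values).
doc = {"float1": 3.14, "strings": ["wfxuF3N8XEK6"] * 50}
raw = json.dumps(doc).encode()
packed = zlib.compress(raw)
print(len(packed), "of", len(raw), "bytes after compression")
```

How much this buys on real databases would of course need measuring, as Filipe says.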
>> Thoughts? :)
> --
> This message is automatically generated by JIRA.
> For more information on JIRA, see:
