couchdb-dev mailing list archives

From Dirkjan Ochtman <>
Subject Re: Optimizing chunked transfer-encoding and the impact on clients
Date Wed, 22 Jul 2015 11:23:10 GMT
On Tue, Jul 21, 2015 at 5:42 PM, Adam Kocoloski <> wrote:
> So — do any of you knowingly rely on this behavior? How difficult would it
> be to accommodate this change?

I spent a little time looking at the CouchDB-Python code to see how it
would be impacted. From what I'm seeing, it has a completely different
code path for _changes than for views (which I think would include
_all_docs). It may be worth taking that into account for other clients
as well; at least in Python, the commonly used JSON APIs don't really
do streaming consumption, so the unit of processing is very different
for a view (one whole object containing all rows) than for a changes
feed (one object per update).
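To illustrate the difference in consumption units, here is a small sketch with hypothetical payloads (the field names mirror CouchDB's responses, but the bodies are made up for the example):

```python
import json

# Hypothetical view response: one JSON object containing all rows.
view_body = '{"total_rows": 2, "offset": 0, "rows": [{"id": "a"}, {"id": "b"}]}'

# Hypothetical continuous changes feed: one JSON object per line.
changes_body = '{"seq": 1, "id": "a"}\n{"seq": 2, "id": "b"}\n'

# A view is decoded as a single object...
rows = json.loads(view_body)["rows"]

# ...while a changes feed is decoded one object per line.
updates = [json.loads(line) for line in changes_body.splitlines() if line]
```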

In CouchDB-Python, the _changes code seems to rely on there being one
update (i.e. one JSON object to decode) per line, but it does not seem
to rely on there being only one line per chunk. The view code seems to
just read the whole response into memory at once, so that should also
not be an issue (there's a separate API for paging through view
results).
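A client that buffers across chunk boundaries and splits on newlines, rather than assuming one line per chunk, would be unaffected by a change in chunking. A minimal sketch of that pattern (not the actual CouchDB-Python code; `iter_updates` and the sample chunks are hypothetical):

```python
import json

def iter_updates(chunks):
    """Yield one decoded JSON update per complete line, regardless of
    how the lines fall across transfer-encoding chunks."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        # A chunk may carry zero, one, or several complete lines.
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            if line.strip():
                yield json.loads(line)
    # Flush a trailing update that arrived without a final newline.
    if buf.strip():
        yield json.loads(buf)

# One update can span two chunks; one chunk can carry two updates.
chunks = ['{"seq": 1}\n{"se', 'q": 2}\n{"seq": 3}\n']
```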

So it feels like this change could be a no-op from that perspective.
In any case, these kinds of API changes should definitely be fair game
for a 2.0 release, so IMO Jan's solution 3 is a good way forward.
