couchdb-user mailing list archives

From Chris Anderson <jch...@apache.org>
Subject Re: [<0.111.0>] Uncaught error in HTTP request: {exit,{body_too_large,content_length}}
Date Sat, 21 Feb 2009 18:45:09 GMT
On Sat, Feb 21, 2009 at 10:10 AM, Jeff Hinrichs - DM&T
<jeffh@dundeemt.com> wrote:
>
> Google can't seem to help me locate an example of doing a PUT w/ chunked
> transfer for httplib2 -- does anyone have any pointers?
>

Any luck PUTting attachments with chunked encoding? You'll need to
create the document first with a plain PUT, and then create the
attachment with a chunked PUT.
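
Here's a rough sketch of that two-step flow in Python. httplib2 wants
the whole body as a string, so the chunked part drops down to the
standard http.client module (httplib in Python 2) and hand-rolls the
chunk framing. The db, doc, and file names are made up:

import http.client
import json
import httplib2

# Step 1: create the document with a plain PUT.
h = httplib2.Http()
resp, body = h.request('http://localhost:5984/mydb/mydoc', 'PUT',
                       body=json.dumps({}),
                       headers={'Content-Type': 'application/json'})
rev = json.loads(body)['rev']

# Step 2: chunked PUT of the attachment against the doc's current rev.
conn = http.client.HTTPConnection('localhost', 5984)
conn.putrequest('PUT', '/mydb/mydoc/data.bin?rev=' + rev)
conn.putheader('Transfer-Encoding', 'chunked')
conn.putheader('Content-Type', 'application/octet-stream')
conn.endheaders()
with open('data.bin', 'rb') as f:
    for chunk in iter(lambda: f.read(8192), b''):
        conn.send(b'%x\r\n%s\r\n' % (len(chunk), chunk))  # size line + data
conn.send(b'0\r\n\r\n')  # zero-length chunk terminates the body
print(conn.getresponse().status)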

It should be possible to avoid buffering non-chunked standalone
attachment PUTs now as well, based on my recent patch to Mochiweb.
Implementing that would be pretty simple: just a matter of adapting
the attachment-writing code to handle it.
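
From the client side, the non-chunked case is the same minus the
framing: send an exact Content-Length up front and stream the file in
blocks, so neither end has to hold the whole attachment in memory. A
sketch, with a hypothetical helper name:

import http.client
import os

def put_attachment_streamed(path, filename):
    conn = http.client.HTTPConnection('localhost', 5984)
    conn.putrequest('PUT', path)
    conn.putheader('Content-Length', str(os.path.getsize(filename)))
    conn.putheader('Content-Type', 'application/octet-stream')
    conn.endheaders()
    with open(filename, 'rb') as f:
        for block in iter(lambda: f.read(8192), b''):
            conn.send(block)
    return conn.getresponse()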

There's not much that can be done about memory usage for inline
base64 attachments, and it sounds like that's what you're using to
load your data.
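
For reference, this is what the inline path looks like, and why it's
memory-hungry: the whole file gets base64-encoded into the JSON body,
so client and server each hold the full attachment plus ~33% encoding
overhead. A sketch with made-up names:

import base64
import json
import httplib2

def put_doc_with_inline_attachment(url, filename):
    with open(filename, 'rb') as f:
        data = base64.b64encode(f.read()).decode('ascii')  # whole file in RAM
    doc = {'_attachments': {filename: {
        'content_type': 'application/octet-stream',
        'data': data}}}
    h = httplib2.Http()
    return h.request(url, 'PUT', body=json.dumps(doc),
                     headers={'Content-Type': 'application/json'})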

In the long run, I'm hoping that a form of the _all_docs view can be
useful for dump and load. E.g., something like

curl 'http://localhost:5984/olddb/_all_docs?include_everything=true' > backup.json

followed by

curl -X POST -T backup.json http://localhost:5984/newdb/_bulk_docs

would be all that's needed for dump and load. I'm not sure how close
we are to this yet, but it sure seems like the simplest way.
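
Until then, the glue isn't much code. A sketch, assuming an
include_docs-style option on _all_docs that embeds the full document
in each row; the real shapes are {"rows": [...]} coming out of
_all_docs and {"docs": [...]} going into _bulk_docs:

import json
import httplib2

h = httplib2.Http()

# Dump: fetch every document (assumes an include_docs-style option).
resp, body = h.request(
    'http://localhost:5984/olddb/_all_docs?include_docs=true', 'GET')
rows = json.loads(body)['rows']

# Reshape rows into a _bulk_docs payload.
docs = []
for row in rows:
    doc = row['doc']
    doc.pop('_rev', None)  # drop old revs so newdb treats these as new docs
    docs.append(doc)

# Load.
resp, body = h.request(
    'http://localhost:5984/newdb/_bulk_docs', 'POST',
    body=json.dumps({'docs': docs}),
    headers={'Content-Type': 'application/json'})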

Chris

-- 
Chris Anderson
http://jchris.mfdz.com
