couchdb-user mailing list archives

From Dan Santner <>
Subject Re: Limiting doc size to prevent malicious use
Date Thu, 06 Sep 2012 20:12:13 GMT
Another vote for doing this at the web server, where it belongs: closest to the edge of your
network.  If a request gets all the way into an app server of any sort, it has usually already
burned through your routers etc...

Now that doesn't help someone who is on Iris Couch or Cloudant, but I suppose it would be in those
companies' best interest to provide some sort of mechanism to throttle incoming POST/PUT requests. 
I suppose they can charge you higher usage fees for the weakness, but in the end it's never a good
thing to let someone go firehosing your sockets (intentionally or not). 

Sorry for the soapbox on my first post to this group, but real life production questions like
these are the main reason I can't run just a couch app and be done with it.  I treat couch
like a database in a three tier model.  It's brilliant for just that purpose alone.

On Sep 6, 2012, at 2:31 PM, Mark Hahn wrote:

> I am.  I couldn't live without nginx.  (And node and couchdb).
> On Thu, Sep 6, 2012 at 12:27 PM, Dave Cottlehuber <> wrote:
>> On 6 September 2012 20:50, Robert Newson <> wrote:
>>> function(doc) {
>>>   if (JSON.stringify(doc).length > limit) {
>>>     throw({forbidden : "doc too big"});
>>>   }
>>> }
>>> With the caveat that this is inefficient and horrible.
>>> B.
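
The quoted function is a CouchDB `validate_doc_update` hook, which in full receives the new doc, the old doc, and the user context. A minimal self-contained sketch of the same check (the 1 MB `limit` and the design-doc name are illustrative, not from the thread):

```javascript
// Sketch of the size check as a full validate_doc_update function
// (normally stored as a string in a design doc, e.g. a hypothetical
// _design/limits). newDoc/oldDoc/userCtx is the real signature;
// the 1 MB limit is an illustrative value.
var limit = 1048576; // 1 MB cap, hypothetical

function validate(newDoc, oldDoc, userCtx) {
  // Serializing the doc just to measure it is the inefficiency
  // Robert mentions, but the validate function has no cheaper
  // measure of document size available.
  if (JSON.stringify(newDoc).length > limit) {
    throw({forbidden: "doc too big"});
  }
}
```

As Dave notes below, this only stops the write: the oversized body has already been received and parsed by the time the function runs.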
>> And for a network-based (D)DoS, the damage is already done, because the request
>> was sent & parsed, muahahaha. But at least you won't be storing it
>> in the DB.
>> Is anybody using nginx or Apache to enforce a hard limit? e.g.
>> A+
>> Dave
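
On Dave's question: both servers can enforce a hard body-size limit in front of CouchDB. A minimal sketch for nginx as a reverse proxy (the `1m` limit and the upstream address are illustrative values; nginx rejects oversized bodies with a 413 before they reach the app):

```nginx
# Cap request bodies at the edge, before they reach CouchDB.
# 1m and the upstream address are illustrative values.
server {
    listen 80;
    client_max_body_size 1m;               # oversized bodies get 413

    location / {
        proxy_pass http://127.0.0.1:5984;  # assumed local CouchDB
    }
}
```

Apache's rough equivalent is the `LimitRequestBody` directive, which takes a byte count (e.g. `LimitRequestBody 1048576`).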
