couchdb-user mailing list archives

From Isaac Force <>
Subject Re: How many nodes can couchdb scale to?
Date Tue, 01 Mar 2011 00:33:59 GMT
>> Hi Niall, I think the key part is that with this topology your central
>> servers are going to need to support a sustained throughput of 20,000
>> reads/second in order to distribute the updates to all 2,000 servers.
>>  Granted, each read is repeated 2,000 times, so you'll mostly be reading
>> from page cache, but a cached read from CouchDB is not nearly as cheap as
>> reading from e.g. Varnish.
> Thanks Adam,
> That's a good point. I suppose we could scale this by adding more nodes in
> the data centre.

An alternative would be to create peer-to-peer replication rings from
your edge nodes, with a limited set of replication 'uplinks' to the
data center:

- arrange the nodes in an even-sized ring,
- connect each node to its adjacent peer to make a circle,
- connect each node to the node opposite it in the ring,
- connect a number of nodes, proportional to the size of the ring,
  to the data center.
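To make the wiring concrete, here is a minimal sketch of the edge list
those steps produce. This is a hypothetical helper, not a CouchDB API;
`'dc'` stands in for the data center, and the edge nodes are numbered
0..n-1:

```python
def ring_topology(n, uplinks):
    """Return the set of replication pairs for an n-node ring.

    Edges: each node to its adjacent peer (the circle), each node to
    the node opposite it, and the first `uplinks` nodes to the data
    center (labelled 'dc'). Assumes n is even.
    """
    edges = set()
    for i in range(n):
        edges.add(tuple(sorted((i, (i + 1) % n))))       # adjacent peer
        edges.add(tuple(sorted((i, (i + n // 2) % n))))  # opposite node
    for i in range(uplinks):
        edges.add(('dc', i))                             # uplink to the DC
    return edges
```

For an 8-node ring with 2 uplinks this yields 14 replication pairs
instead of 8 direct connections to the data center, and the ratio
improves as the ring grows.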

If a set of nodes dies, each remaining node should still have an
intra-ring path to every other survivor, and as long as at least one
uplink remains, replication from the data center will continue. You
could even connect the rings to one another and have the data center
be merely another node on one of the rings.
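That fault-tolerance claim is easy to check mechanically. The sketch
below (again hypothetical, nothing CouchDB-specific) rebuilds the
ring-plus-chords topology and runs a breadth-first search over the
surviving nodes to confirm they remain one connected component:

```python
from collections import deque

def make_ring(n):
    """Ring edges as above: adjacent peers plus opposite-node chords."""
    edges = set()
    for i in range(n):
        edges.add(frozenset({i, (i + 1) % n}))
        edges.add(frozenset({i, (i + n // 2) % n}))
    return edges

def still_connected(n, edges, dead):
    """BFS from one survivor; True if every survivor is reachable."""
    alive = set(range(n)) - dead
    start = next(iter(alive))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for e in edges:
            if u in e:
                (v,) = e - {u}
                if v in alive and v not in seen:
                    seen.add(v)
                    queue.append(v)
    return seen == alive

# Kill a node and its opposite partner; the survivors still have
# intra-ring paths to one another.
print(still_connected(8, make_ring(8), {1, 5}))  # True
```

The opposite-node chords are what buy the resilience: losing any two
nodes still leaves the remaining six mutually reachable without going
through the data center.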

This way, configuration complexity and burden at the edge are traded
for egress replication burden from the data center.

