couchdb-user mailing list archives

From niall el-assaad <nial...@gmail.com>
Subject Re: How many nodes can couchdb scale to?
Date Wed, 02 Mar 2011 09:09:12 GMT
Thanks Isaac, that's a great idea and it makes sense. It would depend on the
network topology, but for MPLS networks there would be no additional traffic
load.

On Tue, Mar 1, 2011 at 12:33 AM, Isaac Force <isaac@autognosis.org> wrote:

> >> Hi Niall, I think the key part is that with this topology your central
> >> servers are going to need to support a sustained throughput of 20,000
> >> reads/second in order to distribute the updates to all 2,000 servers.
> >>  Granted, each read is repeated 2,000 times, so you'll mostly be reading
> >> from page cache, but a cached read from CouchDB is not nearly as cheap
> as
> >> reading from e.g. Varnish.
> >
> > Thanks Adam,
> >
> > That's a good point. I suppose we could scale this by adding more nodes in
> > the data centre.
>
> An alternative would be to create peer-to-peer replication rings from
> your edge nodes with a limited set of replication 'uplinks' to the
> data center: draw nodes in an even ring, connect each node to its
> adjacent peer to make a circle, connect each node to the node opposite
> it in the ring, and connect a set of nodes, sized relative to the
> ring, to the data center.
>
> If a set of nodes dies, each node should still have an intra-ring path
> to each remaining node, and as long as there's still one uplink
> remaining, replication from the data center will continue. You could
> even connect the rings to one another and have the data center merely
> be another node on one of the rings.
>
> This way, configuration complexity and burden at the edge are traded
> for egress replication burden from the data center.
>
> -Isaac
>
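For illustration, here is a minimal sketch of the ring wiring described
above, in Python. The node names, the 8-node ring size, and the uplink
count are hypothetical, and the _replicator note at the end is just one
way the resulting pairs could be fed to CouchDB.

def ring_topology(nodes, uplinks, datacenter="dc"):
    """Return (source, target) replication pairs for a ring of edge nodes."""
    n = len(nodes)
    pairs = set()

    for i, node in enumerate(nodes):
        # adjacent peer: closes the circle
        pairs.add((node, nodes[(i + 1) % n]))
        # opposite peer: preserves an intra-ring path if a run of nodes dies
        pairs.add((node, nodes[(i + n // 2) % n]))

    # evenly spaced uplinks from the data centre into the ring
    step = max(1, n // uplinks)
    for i in range(0, n, step):
        pairs.add((datacenter, nodes[i]))

    return sorted(pairs)

if __name__ == "__main__":
    edge_nodes = ["edge%03d" % i for i in range(8)]  # 8 nodes for illustration
    for src, tgt in ring_topology(edge_nodes, uplinks=2):
        print(src, "->", tgt)
        # Each pair could then become a continuous replication, e.g. a
        # {"source": ..., "target": ..., "continuous": true} document in the
        # _replicator database, or a POST to _replicate on older releases.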
