From: Gary Smith <G...@primeexalia.com>
Subject: RE: A couple of couchdb questions.
Date: Tue, 24 Mar 2009 22:03:11 GMT
Jan, 

I understand your responses below.  I think the write/pull-slave implementation makes
sense.  The reason I was thinking about using an Apache httpd proxy is that we already have a
set of these set up via HA doing something similar.  All of the servers will be public in the
sense that they are on our private network and available to everything on the network.

The replication isn't the biggest problem.  We could code it to write to both at the same
time and only consider the write accepted if both succeed.  The problem is that there is
sometimes network latency, but that affects any type of guaranteed commit process.
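
Just to sketch what I mean by writing to both and only accepting if both succeed (the
node URLs are placeholders, and I'm using Python's requests library purely for brevity):

    import requests  # any HTTP client would do

    # Placeholder node URLs; the real ones would come from our HA setup.
    NODES = ["http://couch-loc1:5984", "http://couch-loc2:5984"]

    def dual_write(db, doc_id, doc):
        """PUT the document to both nodes; accept only if both succeed."""
        results = []
        for node in NODES:
            r = requests.put(f"{node}/{db}/{doc_id}", json=doc)
            results.append(r.status_code == 201)  # CouchDB answers 201 Created
        return all(results)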

Anyway, I will probably set up a test environment in the next week or so and see how well it
plays with what I want to do.

Gary

________________________________________
From: Jan Lehnardt [jan@apache.org]
Sent: Tuesday, March 24, 2009 2:46 PM
To: user@couchdb.apache.org
Subject: Re: A couple of couchdb questions.

Hi Gary,


On 24 Mar 2009, at 22:14, Gary Smith wrote:

> Hello,
>
> I'm working to implement a document warehouse for about 15m
> documents.  These documents range from 10KB to 500KB (legacy
> archived PDFs).  Currently we do this by maintaining a MySQL
> database with the documents stored across a variety of servers
> (consisting of about 6TB).  Most of the problems that we encounter
> are a) backups and b) physical access to the documents, as they are
> on a private network.  Since not much changes, backups aren't really
> that much of a problem (but restoring is very slow).  We are now
> looking to add documents to this regularly (about 10k per week), so
> we are looking to implement something new, or at least, more useful.
>
> So we thought about using Amazon S3 for storage, but these documents
> fall under HIPAA constraints, so we have decided to do this in house.
>
> Looking at CouchDB, it pretty much does what we are looking to do.
> We really only want to store a document and maybe some very basic
> metadata (which we currently do by having both a PDF and a metadata
> file).  Implementation doesn't look like a problem with the
> documented API.

This sounds like a good application.


> So, the questions.
>
> I would like to break this down into multiple servers and
> incorporate replication at the same time.  The documentation says
> that pull is recommended over push but doesn't mention why.

Pull replication is faster, more reliable, and better at resuming
failed replications.


>  Does push replication require the slave (or other node) to accept
> the put/post request as completed?

Not sure what you mean here.


> If we choose pull replication instead of push, I assume this is
> something we will need to schedule via crontab, or is there a
> background process that constantly syncs?  The API looks like
> just a single GET request.

It's a POST request, but yeah, there's no built-in scheduling; you
trigger it yourself. See below.
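
For reference, a pull is triggered with a POST to _replicate on the
target node, roughly like this (the hostname and database name are
made up, and I'm using Python's requests just to keep the sketch
short):

    import requests

    # Ask the local node to pull from a remote source. The call blocks
    # until the replication run completes, so a cron job can simply
    # run this on whatever schedule suits you.
    resp = requests.post(
        "http://localhost:5984/_replicate",
        json={"source": "http://remote-node:5984/docs", "target": "docs"},
    )
    resp.raise_for_status()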


> Either way, here is what we are looking to do at this time.  At two
> separate locations we will have multiple servers, set up in a master/
> master configuration.  We should not run into any conflicts as
> updates are not allowed.  IDs are unique (an MD5 checksum plus some
> other unique information).

Good.


> We wanted to use 4 servers at each location, partly because each
> server has 4TB of space (actually 3TB of RAID 5).  Each server will
> hold files based upon the first digit of the MD5 checksum (0-3 on
> server A, 4-7 on server B, 8-A on server C, and B-F on server D).
> We were thinking of using Apache's URL rewriting to proxy the
> request to the proper server.  This should work for GET/PUT/POST.

This sounds sensible. Other proxies can handle that as well, but if
you are comfortable with Apache httpd, there's no reason not to use it.
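
Your rewrite rules would just encode the partition mapping; as a tiny
routing sketch (node URLs are placeholders, and note the uneven
8-A / B-F split from your description):

    # First hex digit of the MD5-based id picks the node, mirroring
    # the 0-3 / 4-7 / 8-A / B-F split described above.
    SHARDS = {
        "0123":  "http://couch-a.loc1:5984",
        "4567":  "http://couch-b.loc1:5984",
        "89a":   "http://couch-c.loc1:5984",
        "bcdef": "http://couch-d.loc1:5984",
    }

    def node_for(doc_id):
        first = doc_id[0].lower()
        for digits, node in SHARDS.items():
            if first in digits:
                return node
        raise ValueError("id does not start with a hex digit: " + doc_id)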


> We will also have the backup servers at the second location (which
> will be the active servers for their location) using the same scheme.
>
> What would be most useful is to be able to ensure that before a
> commit is accepted on a server, we can guarantee that it has been
> replicated to a second box.
>
> Any ideas or suggestions on that?

I think it is easiest if your application knows that there is no
single server when doing a write. Reads can still be served through a
single interface, but making a proxy understand the replication
protocol is probably not worth the effort.

Here's how I'd set it up: Your application sends a PUT request to your
Apache proxy (or to the correct backend node directly), and the
document ends up on one of the four backend servers. Your application
also knows about the backend nodes and the partition rules and can
access them directly, in both locations. After the PUT request that
creates the document completes in location 1, you send a POST request
to the backend server in location 2, telling it to replicate from the
node that the doc was written to in location 1 (issuing a pull
replication). When this call returns successfully (if not, you just
retry), you return to the original caller that wanted to save the
document.
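
A rough sketch of that flow (node_for() is the routing rule from
above; the loc1/loc2 hostname scheme, the database name and the retry
count are all assumptions):

    import time
    import requests

    def save_and_mirror(doc_id, doc, retries=3):
        """PUT to the right node in location 1, then tell the matching
        node in location 2 to pull before acknowledging the caller."""
        node1 = node_for(doc_id)               # location 1 backend
        node2 = node1.replace("loc1", "loc2")  # assumed naming scheme
        r = requests.put(f"{node1}/docs/{doc_id}", json=doc)
        r.raise_for_status()                   # document saved locally
        for _ in range(retries):               # retry the pull trigger
            rep = requests.post(
                f"{node2}/_replicate",
                json={"source": f"{node1}/docs", "target": "docs"},
            )
            if rep.ok:
                return True                    # replicated; safe to ack
            time.sleep(1)
        return False  # could not confirm the copy; caller decides policy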

Downsides:

  - You need to deal with the fact that the communication between
backends might not work, and decide what to do: save anyway (accepting
reduced reliability and weak consistency) or deny the write (because
the consistency and availability requirements can no longer be
guaranteed).

  - Your backend nodes must be public.


In the future, CouchDB will be able to do the partitioning for you
(solving the second downside). The first one can never be circumvented
in a distributed system.


Cheers
Jan
--