couchdb-dev mailing list archives

From: Jan Lehnardt <...@apache.org>
Subject: Re: [jira] [Commented] (COUCHDB-1259) Replication ID is not stable if local server has a dynamic port number
Date: Tue, 06 Nov 2012 15:40:49 GMT

On Nov 5, 2012, at 08:54 , Benoit Chesneau <bchesneau@gmail.com> wrote:

> btw sounds like jira isn't handling mails now so we should continue
> this discussion on the ticket.

It never has, to my knowledge.

IIRC it works with GitHub Issues/Pull Requests, but not here. Would be
a great feature though.

Cheers
Jan
-- 

> 
> On Mon, Nov 5, 2012 at 8:53 AM, Benoit Chesneau <bchesneau@gmail.com> wrote:
>> On Mon, Nov 5, 2012 at 7:48 AM, Dustin Sallings <dustin@spy.net> wrote:
>>> Benoit Chesneau <bchesneau@gmail.com>
>>> writes:
>>> 
>>>>> 1. My home couchdb server (by hostname, only available from inside my
>>>>>    house)
>>>>> 2. My work couchdb server (by hostname, available inside and outside,
>>>>>    but the IP addresses are different in each location).
>>>>> 3. Iriscouch (by hostname, available everywhere on the same address)
>>>>> 
>>>>> In all three cases, it can stop replication, but will resume again if
>>>>> I restart.
>>>>> 
>>>> Most of these cases already work if you are using the new _replicator api.
>>> 
>>> If you're referring to the replicator DB, then yes, that's the way
>>> I set up all my replications, and why it starts back up when I restart.
>>> 
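For reference, a minimal sketch of such a persistent replication, created by writing a document into the _replicator database. The hostnames and database names are invented for illustration, and Python is used here only as a convenient HTTP client:

    # Create a persistent, continuous replication by PUTting a doc
    # into the _replicator database (hostnames and db names invented).
    import json
    import urllib.request

    doc = {
        "source": "http://menudo:5984/sensors",
        "target": "http://localhost:5984/sensors",
        "continuous": True,
    }
    req = urllib.request.Request(
        "http://localhost:5984/_replicator/sensors-to-laptop",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    print(urllib.request.urlopen(req).read().decode())

Because the replication is stored as an ordinary document, it survives server restarts, which is why these jobs come back when CouchDB is restarted.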
>>>>> Under what circumstances do you consider "stop replicating after
>>>>> sleep, but start again if the user restarts CouchDB" good behavior?
>>>> 
>>>> - local replications should always restart.
>>>> - replication with remote should restart only if the remote didn't
>>>> change and my network didn't change.
>>>> 
>>>> In other cases I need to rely on a mechanism to validate that I can
>>>> continue the replication. In that case I agree it can be automated and
>>>> we have different solutions to do it. But that should never be a
>>>> default mechanism imo.
>>> 
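One way to picture the validation Benoit describes is to record an identifier for the remote when the replication is created and compare it before resuming. The "uuid" field in the welcome response is an assumption here (newer CouchDB releases expose one; older servers may not), so treat this as a sketch of the idea rather than a recipe:

    # Sketch: only resume a replication if the host answering at the
    # remote URL still reports the identity we recorded earlier.
    # The "uuid" field in GET / is an assumption about the server.
    import json
    import urllib.request

    def remote_identity(url):
        """Fetch the remote's welcome message (GET /) and return its uuid, if any."""
        with urllib.request.urlopen(url) as resp:
            return json.load(resp).get("uuid")

    def safe_to_resume(url, recorded_uuid):
        """True only when the remote still claims the recorded identity."""
        try:
            return remote_identity(url) == recorded_uuid
        except OSError:
            return False  # unreachable: don't resume, just retry later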
>>> Let's assume what you're saying is OK and that the real bug here is
>>> that it *does* restart when I kill and restart CouchDB...
>> 
>> Any logs in couch? Does it simply stop the replication?
>>> 
>>> The one that I notice the most is an application that collects data in
>>> my house that replicates to my laptop.  The *only* time this can
>>> possibly work is when my laptop is on my home LAN.  That means, for it
>>> to start properly, it has to be connected to my home LAN before I ever
>>> see anything.
>>> 
>>> Then I go somewhere else.  Let's assume the somewhere else has a
>>> host named "menudo" (which is the unfortunate hostname of the machine in
>>> my house running CouchDB).  Because I'm on a different network, the
>>> replicator decides it's probably not the menudo I'm looking for, so it
>>> ceases replication.
>>> 
>>> When I go back home, shouldn't it start back up?
>> 
>> At this point the replication should be stopped because it retried
>> too many times on your other network. I am not sure anyway that it is
>> desirable to restart it from the old replication doc, at least without
>> checking that the remote is the same remote.
>> 
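The "retried too many times" behavior is essentially a bounded retry policy. The sketch below shows the general shape of such a policy with invented constants; it is not CouchDB's actual scheduler:

    # Illustrative retry loop: back off exponentially, then stop the
    # job for good once the limit is reached (constants are invented).
    import time

    def run_with_retries(connect, max_retries=10, base_delay=0.25):
        for attempt in range(max_retries):
            try:
                return connect()
            except OSError:
                time.sleep(base_delay * 2 ** attempt)
        raise RuntimeError("replication stopped: retry limit reached")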
>> 
>>> 
>>> Doesn't this whole thing get a lot simpler and in line with what
>>> any reasonable user might expect if you just say, "configured
>>> replications run as configured"?
>> 
>> I agree that it is interesting to have such features, and I know at
>> least 2 projects proposing layers to handle that. Imo that requires a
>> small change in the replicator client to check the remote, eventually
>> associating a remote id with a replication id, and keeping a mapping
>> [remote id, {Host, Port}] that is updated after a conversation with
>> the remote node. It is this part that should be switchable, so people
>> who want it could eventually check against a node id (a sketch of
>> that mapping follows below). The logic depends on the application
>> policy imo.
>> 
>> - benoît
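A rough sketch of the [remote id, {Host, Port}] mapping Benoit outlines, kept by the replicator client and refreshed after each conversation with the remote node. All names here are invented, and the `check_identity` flag stands in for the switchable, policy-dependent part:

    # Sketch of the proposed [remote id, {Host, Port}] mapping.
    # `check_identity` is the switchable, policy-dependent knob.
    class RemoteRegistry:
        def __init__(self, check_identity=True):
            self.check_identity = check_identity
            self.remotes = {}  # remote_id -> (host, port)

        def update(self, remote_id, host, port):
            """Refresh the address for this identity after a handshake."""
            self.remotes[remote_id] = (host, port)

        def may_continue(self, remote_id, host, port):
            """Allow the replication to go on if identity checking is
            off, or if the node answering at (host, port) is unknown or
            still the one associated with this remote id; an address
            change forces a fresh handshake (update) first."""
            if not self.check_identity:
                return True
            return self.remotes.get(remote_id, (host, port)) == (host, port)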

