couchdb-user mailing list archives

From: Robert Newson <robert.new...@gmail.com>
Subject: Re: Re: Re: Re: Newbie question: compaction and mvcc consistency?
Date: Wed, 26 May 2010 22:03:56 GMT
I'm sorry if I wasn't clear. I was listing all the reasons why the
patch has not been applied.

B.

On Wed, May 26, 2010 at 10:46 PM, Markus Jelsma
<markus.jelsma@buyways.nl> wrote:
> It's a good question. I wrote the patch because I saw the problem and
> it concerned me. Since it's difficult to induce the problem, and the
> patch is not subtle in its actions, it has not been committed to the
> project (this was, I think, my first couchdb patch).
>
> It remains theoretically possible, but given the difficulty of inducing
> it, it's not being addressed yet.
>
> But is it addressed in 0.10? If so, how?
> Storing writes in RAM would violate the durability semantics of
> couchdb and would mean you would have to be more careful during
> compaction.
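CouchDB does expose an opt-in version of this trade-off: a write sent with the
batch=ok query parameter is held in a memory buffer and acknowledged with 202
Accepted rather than 201 Created, signalling that durability is deferred. A
minimal sketch, assuming a local CouchDB at http://localhost:5984, an existing
database named "demo", and the Python requests library:

    # Contrast a durable write with CouchDB's opt-in RAM-buffered write.
    # Assumes http://localhost:5984/demo already exists.
    import requests

    BASE = "http://localhost:5984/demo"

    # Normal write: the server replies 201 Created once the write is durable.
    normal = requests.post(BASE, json={"type": "event", "value": 1})
    print(normal.status_code)  # expected: 201

    # Batched write: the server replies 202 Accepted; the document sits in
    # a RAM buffer and is flushed later, so a power loss can drop it.
    batched = requests.post(BASE, params={"batch": "ok"},
                            json={"type": "event", "value": 2})
    print(batched.status_code)  # expected: 202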
>
> Of course, a loss of power would not flush a RAM buffer to disk.
> Clients shouldn't need to know or care about compaction, which
> is just a system maintenance task.
>
>
> Obviously it must be transparent to clients; it would spoil the fun otherwise =)
> B.
>
> On Wed, May 26, 2010 at 10:19 PM, Markus Jelsma
> <markus.jelsma@buyways.nl> wrote:
>> How is it that you couldn't reproduce the scenario with 0.10 and onwards? The
>> patch you supplied for that JIRA ticket you mention in the other post doesn't
>> seem to be incorporated in 0.10 at all. Are there other useful countermeasures
>> in 0.10?
>>
>>
>>
>> Also, on the subject of your ticket, and especially Adam's comment on it:
>> would storing incoming writes in a RAM buffer during the wait help to allow
>> writes during a compaction that can't cope with the volume of writes?
>>
>> -----Original message-----
>> From: Robert Newson <robert.newson@gmail.com>
>> Sent: Wed 26-05-2010 22:56
>> To: user@couchdb.apache.org;
>> Subject: Re: Re: Newbie question: compaction and mvcc consistency?
>>
>> I succeeded in preventing compaction from completing back in the 0.9 days,
>> but I've been unable to reproduce it from 0.10 onwards. Compaction
>> retries until it succeeds (or you run out of disk). I've not
>> managed to make it retry more than five times before it succeeds.
>>
>> B.
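For anyone who wants to watch this behaviour, compaction is driven entirely
through the HTTP API: POST /db/_compact starts it, and the database info
document reports compact_running until the compactor switches over to the new
file. A minimal sketch, assuming a local CouchDB at http://localhost:5984, a
database named "demo", and the Python requests library:

    # Trigger compaction and poll until it finishes.
    # Assumes http://localhost:5984/demo already exists.
    import time
    import requests

    BASE = "http://localhost:5984/demo"

    # _compact wants a JSON content type and answers 202 Accepted.
    resp = requests.post(BASE + "/_compact",
                         headers={"Content-Type": "application/json"})
    resp.raise_for_status()

    # The db info document exposes a compact_running flag.
    while requests.get(BASE).json().get("compact_running"):
        time.sleep(1)
    print("compaction finished")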
>>
>> On Wed, May 26, 2010 at 9:52 PM, Markus Jelsma <markus.jelsma@buyways.nl> wrote:
>>> On the subject of a compaction that cannot keep up with the volume of writes:
>>> can that theory be put to the test (or has it been already)? Does anyone know
>>> of a setup that relates machine specifications to a sustained rate of
>>> writes per second?
>>>
>>>
>>> This is a theoretical obstacle that could use some factual numbers to help
>>> everyone avoid it in their specific setup. I'd rather not run into such a
>>> situation in practice, especially if compaction is triggered by some process
>>> that monitors available disk space or some other condition.
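Putting rough numbers on it mostly means hammering a database with concurrent
writers while compaction runs and watching whether it ever switches over. A
minimal load-generation sketch, assuming a local CouchDB at
http://localhost:5984, a database named "demo", and the Python requests
library; the thread count, document size, and duration are arbitrary knobs:

    # Sustain concurrent writes for a fixed window, e.g. while
    # POST /demo/_compact is running. Assumes the "demo" db exists.
    import threading
    import requests

    BASE = "http://localhost:5984/demo"
    STOP = threading.Event()

    def writer():
        # Each thread issues sequential inserts as fast as the server allows.
        session = requests.Session()
        while not STOP.is_set():
            session.post(BASE, json={"payload": "x" * 1024})

    threads = [threading.Thread(target=writer) for _ in range(8)]
    for t in threads:
        t.start()

    STOP.wait(60)   # run the writers for roughly a minute
    STOP.set()
    for t in threads:
        t.join()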
>>> -----Original message-----
>>> From: Randall Leeds <randall.leeds@gmail.com>
>>> Sent: Wed 26-05-2010 22:36
>>> To: user@couchdb.apache.org;
>>> Subject: Re: Newbie question: compaction and mvcc consistency?
>>>
>>> On Wed, May 26, 2010 at 13:29, Robert Buck <buck.robert.j@gmail.com> wrote:
>>>> On Wed, May 26, 2010 at 3:00 PM, Randall Leeds <randall.leeds@gmail.com> wrote:
>>>>> The switch to the new, compacted database won't happen so long as
>>>>> there are references to the old one. (r) will not disappear until (i)
>>>>> is done with it.
>>>>
>>>> Curious, you said "switch to the new [database]". Does this imply that
>>>> compaction works by creating a new database file adjacent to the old
>>>> one?
>>>
>>> Yes.
>>>
>>>>
>>>> If this is what you are suggesting, I have another question... I also
>>>> read that the compaction process may never catch up with the writes if
>>>> they never let up. So along this specific train of thought, does Couch
>>>> perform compaction by walking through the database in a forward-only
>>>> manner?
>>>
>>> If I understand correctly, the answer is 'yes'. Meanwhile, new writes
>>> still hit the old database file as the compactor walks the old tree.
>>> If there are new changes when the compactor finishes, it walks the
>>> new changes starting from the root. Typically this process gets
>>> faster and faster on busy databases until it catches up completely
>>> and the switch can be made.
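A rough sketch of that catch-up loop may help. CouchDB itself is written in
Erlang, so this is illustrative Python pseudocode, not the real implementation;
changes_since, update_seq, and switch_files are hypothetical names standing in
for the actual internals:

    # Illustrative only: copy by sequence number, then re-check the source
    # for writes that landed during the pass; each pass covers a smaller
    # window until the compactor catches up and the files are switched.
    def compact(source, target):
        copied_seq = 0
        while True:
            for change in source.changes_since(copied_seq):
                target.write(change)
                copied_seq = change.seq
            if source.update_seq == copied_seq:
                break  # caught up; safe to switch
        switch_files(source, target)  # swap in the compacted file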
>>>
>>> That said, you can construct an environment where compaction will
>>> never finish, but I haven't seen reports of it happening in the wild.
>>>
>>
>
>
>
