ignite-user mailing list archives

From Sasha Belyak <rtsfo...@gmail.com>
Subject Re: Write Behind with delete performance
Date Wed, 10 May 2017 03:13:29 GMT
Hello Jessie,
this happens because write-behind works as follows:
1) Cache updates are stored in a sorted map: the oldest updates go to the
store first. If you update a key that is already in the write-behind (WB)
map (delete/insert, any operation on the same key), the two operations are
coalesced (to reduce load on the store), but the new operation keeps the
position of the old one. That is, if you put the key-value pairs k1=1,
k2=2, k3=0, k1=3 into the WB store, it is coalesced to k1=3, k2=2, k3=0
(in that order, not k2=2, k3=0, k1=3).
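The coalescing in step 1 can be modeled in a few lines of Java. This is only a sketch, not Ignite's actual implementation (internally it uses a sorted concurrent map); a LinkedHashMap is used here purely because it reproduces the "new value, old position" behavior:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CoalesceSketch {
    public static void main(String[] args) {
        // LinkedHashMap keeps the position of a key's first insertion when
        // the key is put again, which matches the coalescing order above.
        Map<String, Integer> wb = new LinkedHashMap<>();
        wb.put("k1", 1);
        wb.put("k2", 2);
        wb.put("k3", 0);
        wb.put("k1", 3); // coalesced: value replaced, original position kept
        System.out.println(wb); // prints {k1=3, k2=2, k3=0}
    }
}
```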
2) Once the whole WB cache grows to writeBehindFlushSize (or when the
writeBehindFlushFrequency timeout fires, but that is not our case), the
flusher threads start working:
3) Each flusher processes the sorted map (WB cache) and:
3.0) Collects entries until it has writeBehindBatchSize of them, or until a
new entry has a different operation type than the previous one
3.1) Locks the entry; if the entry is not being evicted, switches it to the
PENDING state and unlocks it
3.2) Calls writeAll or deleteAll on the store (step 3.0 guarantees that all
entries in the batch have the same operation type) and moves all the keys
to the FLUSHED state
Flushers keep working until the cache is empty.
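The batch-cutting rule in step 3.0 is what produces the single-record deletes in your log. A minimal sketch (hypothetical names, not Ignite's internal classes) that cuts a batch whenever the operation type changes:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplitSketch {
    enum Op { UPSERT, DELETE }
    record Entry(String key, Op op) {}

    // Cut a batch when it reaches batchSize or the operation type changes,
    // because writeAll/deleteAll each handle a single operation type.
    static List<List<Entry>> collectBatches(List<Entry> queue, int batchSize) {
        List<List<Entry>> batches = new ArrayList<>();
        List<Entry> cur = new ArrayList<>();
        for (Entry e : queue) {
            if (!cur.isEmpty()
                    && (cur.size() == batchSize
                        || cur.get(cur.size() - 1).op() != e.op())) {
                batches.add(cur);
                cur = new ArrayList<>();
            }
            cur.add(e);
        }
        if (!cur.isEmpty())
            batches.add(cur);
        return batches;
    }

    public static void main(String[] args) {
        // 19 puts followed by 1 remove, repeated: each delete flushes alone
        List<Entry> q = new ArrayList<>();
        for (int i = 0; i < 40; i++)
            q.add(new Entry("k" + i, i % 20 == 19 ? Op.DELETE : Op.UPSERT));
        for (List<Entry> b : collectBatches(q, 100))
            System.out.println(b.get(0).op() + " x" + b.size());
        // prints: UPSERT x19, DELETE x1, UPSERT x19, DELETE x1
    }
}
```

With a 1-remove-per-19-puts workload, no writeBehindBatchSize setting can produce delete batches larger than 1, which is exactly what your log shows.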

One additional point: if a writer thread tries to update a key that is
already present in the WB map while that key is in the PENDING state, the
writer waits until the key reaches the FLUSHED state.
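That wait can be sketched with a plain monitor. Again, these are hypothetical names, not Ignite internals; the point is only that the writer blocks on a PENDING key until a flusher marks it FLUSHED:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PendingWaitSketch {
    enum State { NEW, PENDING, FLUSHED }

    static final ConcurrentHashMap<String, State> states = new ConcurrentHashMap<>();
    static final Object lock = new Object();

    // Writer side: an update to a PENDING key must wait for the flush.
    static void writerPut(String key) throws InterruptedException {
        synchronized (lock) {
            while (states.get(key) == State.PENDING)
                lock.wait();
            states.put(key, State.NEW); // re-enqueue the update
        }
    }

    // Flusher side: mark the key flushed and wake any waiting writers.
    static void flusherFlush(String key) {
        synchronized (lock) {
            states.put(key, State.FLUSHED);
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws Exception {
        states.put("k1", State.PENDING);
        Thread writer = new Thread(() -> {
            try { writerPut("k1"); } catch (InterruptedException ignored) { }
        });
        writer.start();
        Thread.sleep(100); // the writer is now blocked on the PENDING key
        System.out.println("writer blocked: " + writer.isAlive());
        flusherFlush("k1"); // flush completes, writer proceeds
        writer.join(1000);
        System.out.println("writer done: " + !writer.isAlive());
    }
}
```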

From this long description we can draw two conclusions:
1) If insert/update and delete operations frequently alternate, a flusher
cannot collect a full batch and will flush only runs of consecutive
inserts/updates or consecutive deletes
2) If some key is updated very often, writers may frequently have to wait
for the flusher to flush that key.

It's not perfect, but I have filed two issues to improve it:
1) https://issues.apache.org/jira/browse/IGNITE-5184
2) https://issues.apache.org/jira/browse/IGNITE-5003

And thanks for the excellent description of the problem.

2017-05-10 7:46 GMT+07:00 waterg <jessie.jianwei.lin@gmail.com>:

>
> Hello, I've come up with code where:
>
> 1. the writes in parallel work great with the parameters below
>
> <property name="writeBehindFlushSize" value="1000"></property>
> <property name="writeBehindFlushThreadCount" value="10"></property>
> <property name="writeBehindBatchSize" value="100"></property>
>
> BUT, when I start to add deletes to the process, for example 1 remove every
> 19 puts, the write-behind performance deteriorates, and the log looks like
> the excerpt below:
> You can see the deletes are almost always executed with only 1 record, even
> though a deleteAll method was called.
>
> Tue May 09 10:49:38 PDT 2017Write w Delete start
> ----------------------------------------------------------
> [1494352178763]-----------Datebase BATCH upsert:87 entries successful
> ----------------
> [1494352178763]-----------Datebase BATCH upsert:35 entries successful
> ----------------
> [1494352178780]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178782]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178784]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178884]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178902]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352178902]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178903]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352178903]-----------Datebase BATCH upsert:29 entries successful
> ----------------
> [1494352178906]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178906]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352178910]-----------Datebase BATCH upsert:100 entries successful
> ----------------
> [1494352178910]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352178923]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352178960]-----------Datebase BATCH upsert:1 entries successful
> ----------------
> [1494352179009]-----------Datebase BATCH upsert:38 entries successful
> ----------------
> [1494352179023]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179023]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179024]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179038]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352179039]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179039]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179039]-----------Datebase BATCH upsert:36 entries successful
> ----------------
> [1494352179043]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179095]-----------Datebase BATCH upsert:36 entries successful
> ----------------
> [1494352179135]-----------Datebase BATCH upsert:1 entries successful
> ----------------
> [1494352179139]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179143]-----------Datebase BATCH upsert:1 entries successful
> ----------------
> [1494352179143]-----------Datebase BATCH upsert:39 entries successful
> ----------------
> [1494352179149]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179150]-----------Datebase BATCH upsert:30 entries successful
> ----------------
> [1494352179150]-----------Datebase BATCH upsert:11 entries successful
> ----------------
> [1494352179156]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179162]-----------Datebase BATCH upsert:40 entries successful
> ----------------
> [1494352179166]-----------Datebase BATCH upsert:40 entries successful
> ----------------
> [1494352179237]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
> [1494352179277]-----------Datebase BATCH upsert:38 entries successful
> ----------------
> [1494352179287]-----------Datebase BATCH DELETE:1 entries successful
> ----------------
>
> I've also tried the parameters below. Same result: the writeAll method is
> called with only 1 record.
>
> <property name="writeBehindFlushSize" value="10240"></property>
> <property name="writeBehindFlushThreadCount" value="11"></property>
> <property name="writeBehindBatchSize" value="1024"></property>
>
> Does someone have similar observation? Or explanation of why this is
> happening?
>
> Thanks a lot!
>
> Jessie
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Write-Behind-with-delete-performance-tp12580.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
