couchdb-dev mailing list archives

From "Filipe Manana (JIRA)" <j...@apache.org>
Subject [jira] Commented: (COUCHDB-913) Caching of BTree nodes and documents
Date Mon, 29 Nov 2010 11:33:37 GMT

    [ https://issues.apache.org/jira/browse/COUCHDB-913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12964703#action_12964703 ]

Filipe Manana commented on COUCHDB-913:
---------------------------------------

All the caching efforts are now in a new branch:  https://github.com/fdmanana/couchdb/commits/couch_append_cache

Basically, I dropped term_cache and created a new cache module, couch_cache.erl, based on term_cache and the existing auth_cache (for faster lookups).

Most importantly, I did a fairly detailed analysis, not relying exclusively on relaximation graphs, which are, IMO, sometimes hard to interpret.

On top of that branch, I applied this patch: http://friendpaste.com/ZZrIsE8FQ7U2HN1XDiBnj
This patch adds io:format calls around the cache lookups and file read operations, and reports the time spent (in milliseconds) on each of these operations.

With that patch applied, I ran the readers-and-writers relaximation test several times, with 50 writers and 200 readers, for a duration of 5 minutes each:
- 1 run with small documents (around 1.2KB each) and caching enabled
- 1 run with small documents (around 1.2KB each) and caching disabled
- 1 run with large documents (around 100KB each) and caching enabled
- 1 run with large documents (around 100KB each) and caching disabled

I then extracted the relevant messages from the logs (I ran CouchDB with ./utils/run | tee log) like this:
$ egrep 'btree_cache_' log > 'btree_cache_lookups.txt'
$ egrep 'btree_file_read' log > 'btree_file_reads.txt'
$ egrep 'doc_cache_' log > 'small_doc_cache_lookups.txt'
etc...
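For anyone repeating this, the per-file summary steps used below (total count, top-5 frequencies, 10 worst times) can be folded into a small shell helper. This is only a sketch; summarize_times is a hypothetical name, and it assumes a file containing one time-in-ms value per line (i.e. the output of the cut -d ' ' -f 2 step used throughout):

```shell
# Hypothetical helper: summarize a file with one time-in-ms value per line
# (the output of the `cut -d ' ' -f 2` step used in this analysis).
summarize_times() {
    # total number of measurements
    echo "total: $(wc -l < "$1" | tr -d ' ')"
    # the 5 most common timings, with their occurrence counts
    echo "top 5 frequencies:"
    sort -n "$1" | uniq -c | sort -n -r | head -5
    # the 10 slowest distinct timings
    echo "10 worst:"
    sort -n "$1" | uniq -c | tail
}
```

For example, summarize_times on a file produced by `cut -d ' ' -f 2 btree_cache_lookups.txt` would reproduce the figures in the sections below.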

Now, on to the analysis results:


==== BTree nodes

== 859513 cache lookups

Top 5 frequencies:

$ grep 'btree_cache_hit' btree_cache_lookups.txt | cut -d ' ' -f 2 | sort -n | uniq -c | sort -n -r | head -5
 441509 0.005
 284771 0.004
  72980 0.006
  13761 0.007
   7356 0.008

~93% of cache lookups took up to 0.006ms
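As a sanity check on that figure, summing the three most frequent buckets above (0.004, 0.005 and 0.006ms) against the 859513 total lookups:

```shell
# 441509 + 284771 + 72980 lookups took at most 0.006ms, out of 859513 total
awk 'BEGIN { printf "%.1f%%\n", 100 * (441509 + 284771 + 72980) / 859513 }'
# → 93.0%
```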

10 worst cache lookup times

$ grep 'btree_cache_hit' btree_cache_lookups.txt | cut -d ' ' -f 2 | sort -n | uniq -c | tail
      1 20.69
      1 20.828
      1 22.12
      1 22.408
      1 22.668
      1 23.361
      1 25.732
      1 26.346
      1 33.791
      1 37.375

== 726795 BTree node file reads

Top 5 frequencies

$ cat btree_file_reads.txt | cut -d ' ' -f 2 | sort -n | uniq -c | sort -n -r | head -5
   3742 0.038
   3666 0.037
   3564 0.036
   3331 0.039
   3293 0.044

$ cat btree_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 0.06;' | wc -l
726795

All BTree node file reads took more than 0.06ms to complete (an order of magnitude higher than 93% of the cache lookups)

$ cat btree_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 0.1;' | wc -l
652910

~90% of the BTree node file reads took more than 0.1ms (more than an order of magnitude higher than 93% of the cache lookups)

$ cat btree_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 1.0;' | wc -l
453728

62% of the BTree node file reads took more than 1.0ms (over two orders of magnitude higher than 93% of the cache lookups)
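The repeated perl one-liners can also be expressed as a single awk helper that prints the matching count, the total, and the percentage in one go. Again just a sketch; frac_above is a hypothetical name, and the input is assumed to be one time-in-ms value per line:

```shell
# Hypothetical helper: count entries strictly above a threshold (in ms)
# and print "count total percent"; input file has one time-in-ms per line.
frac_above() {
    awk -v t="$2" '($1 + 0) > (t + 0) { n++ }
                   END { printf "%d %d %.0f%%\n", n, NR, 100 * n / NR }' "$1"
}
```

For example, frac_above on the extracted BTree read times with a threshold of 1.0 would reproduce the 1.0ms filter above, with the percentage computed for free.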

10 worst BTree node file read times:

$ cat btree_file_reads.txt | cut -d ' ' -f 2 | sort -n | uniq -c | tail
      1 406.357
      1 406.386
      1 406.475
      1 406.482
      1 406.508
      1 406.535
      1 409.724
      1 409.826
      1 425.302
      1 425.458


==== Small docs (about 1.2Kb each)

== 209810 cache lookups

Top 5 frequencies

$ grep 'doc_cache_hit' small_doc_cache_lookups.txt | cut -d ' ' -f 2 | sort -n | uniq -c | sort -n -r | head -5
 126144 0.005
  40580 0.006
  24782 0.004
   8403 0.007
   3027 0.008

~91% of cache lookups took at most 0.006ms to complete

10 worst cache lookup times

$ grep 'doc_cache_hit' small_doc_cache_lookups.txt | cut -d ' ' -f 2 | sort -n | uniq -c | tail
      1 21.146
      1 21.645
      1 22.102
      1 22.175
      1 22.453
      1 23.23
      1 26.782
      1 27.912
      1 30.98
      1 43.971

== 220519 small doc file reads

Top 5 frequencies

$ grep 'doc_file_read' small_doc_file_reads.txt | cut -d ' ' -f 2 | sort -n | uniq -c | sort -n -r | head -5
   2379 0.04
   2207 0.041
   1889 0.042
   1850 0.039
   1363 0.043

$ cat small_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 0.006;' | wc -l
220519

All small doc file reads took more than 0.006ms to complete

$ cat small_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 0.1;' | wc -l
197054

89% of the small doc file reads took more than 0.1ms to complete (more than an order of magnitude higher than 91% of the cache lookups)

$ cat small_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 1.0;' | wc -l
136710

62% of the small doc file reads took more than 1.0ms to complete (over two orders of magnitude higher than 91% of the cache lookups)

$ cat small_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 0.5;' | wc -l
159923

~73% of the small doc file reads took more than 0.5ms to complete (about two orders of magnitude higher than 91% of the cache lookups)

10 worst read times

$ cat small_doc_file_reads.txt | cut -d ' ' -f 2 | sort -n | uniq -c | tail
      1 122.645
      1 122.657
      1 122.67
      1 122.672
      1 125.368
      1 125.401
      1 149.895
      1 150.192
      1 150.637
      1 150.948


==== Large docs (about 100Kb each)

== 27670 cache lookups

Top 10 frequencies

$ grep 'doc_cache_hit' large_doc_cache_lookups.txt | cut -d ' ' -f 2 | sort -n | uniq -c | sort -n -r | head -10
    306 0.12
    304 0.114
    304 0.107
    293 0.115
    292 0.121
    290 0.111
    290 0.108
    289 0.13
    285 0.124
    284 0.136

$ grep 'doc_cache_hit' large_doc_cache_lookups.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ < 0.20;' | wc -l
24016

87% of the cache lookups took less than 0.2ms

10 worst cache lookup times

$ grep 'doc_cache_hit' large_doc_cache_lookups.txt | cut -d ' ' -f 2 | sort -n | uniq -c | tail
      1 19.161
      1 20.373
      1 20.557
      1 21.272
      1 22.963
      1 23.549
      1 24.099
      1 25.099
      1 25.611
      1 27.38

== 26741 large doc file reads

Top 10 frequencies

$ grep 'doc_file_read' large_doc_file_reads.txt | cut -d ' ' -f 2 | sort -n | uniq -c | sort -n -r | head -10
      8 5.119
      6 5.545
      6 5.422
      6 4.979
      6 4.812
      6 4.791
      6 4.715
      6 4.701
      6 4.413
      6 4.065

$ grep 'doc_file_read' large_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 4.000;' | wc -l
23029

86% of the large doc file reads took more than 4.0ms to complete (an order of magnitude higher than 87% of the cache lookups)

$ grep 'doc_file_read' large_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ < 0.2;' | wc -l
4

4 out of 26741 large doc file reads took less than 0.2ms to complete

10 worst read times

$ grep 'doc_file_read' large_doc_file_reads.txt | cut -d ' ' -f 2 | sort -n | uniq -c | tail
      1 585.005
      1 585.987
      1 588.969
      1 592.247
      1 592.519
      1 593.615
      1 593.981
      1 594.967
      1 608.922
      1 609.879

$ grep 'doc_file_read' large_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 100.0;' | wc -l
3678

~14% of the large doc file reads took more than 100.0ms (nearly three orders of magnitude higher than 87% of the cache lookups)

$ grep 'doc_file_read' large_doc_file_reads.txt | cut -d ' ' -f 2 | perl -n -e 'print $_ if $_ > 20.0;' | wc -l
12714

~48% of the large doc file reads took more than 20.0ms (two orders of magnitude higher than 87% of the cache lookups)

=========

All these tests were done on a Linux machine (a ThinkPad T500), with Erlang/OTP R14B and a 5400 RPM hard drive.

I would love to see the results on a machine with an SSD. Unfortunately, I don't have one to test with.

I still have some things to do on that branch (including cache-specific test cases).
But given community approval, I'll quickly move this forward.

I'm attaching a tarball with the files I used for this analysis and the logging patch mentioned before, in case anyone wants to double-check the results.

Comments?

> Caching of BTree nodes and documents
> ------------------------------------
>
>                 Key: COUCHDB-913
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-913
>             Project: CouchDB
>          Issue Type: Improvement
>            Reporter: Filipe Manana
>            Assignee: Filipe Manana
>
> The following patch adds optional caching of BTree nodes (for both DBs and view indexes) and documents:
> http://github.com/fdmanana/couchdb/commit/c6361d1e0d86942fc86350bea9ca1e8d7539c1b3
> Look at the new parameters introduced in default.ini.tpl.in for more details.
> This is based on the work initially done by Randall Leeds, found at:
> http://github.com/tilgovi/couchdb/commits/forevercache
> I'll not commit it before 1.1 nor without any feedback.
> cheers

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

