db-derby-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Db-derby Wiki] Update of "DerbyLruCacheManager" by GokulSoundararajan
Date Sat, 10 Jun 2006 17:27:07 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Db-derby Wiki" for change notification.

The following page has been changed by GokulSoundararajan:
http://wiki.apache.org/db-derby/DerbyLruCacheManager

------------------------------------------------------------------------------
  
  Commenting on the original thread in the Derby Dev Mailing List [http://thread.gmane.org/gmane.comp.apache.db.derby.devel/21263/focus=21263 Link]
  
- Thanks to all who commented on my early results. I have added the results of a mixed workload containing both Zipf references and scans. I followed the example provided in the 2Q paper, in which they tried scans of different lengths. I used a 10,000-item cache with a 100,000-item dataset and ran a mixed workload of 0.8 Zipf with 33% scans of lengths (0, 10, 100, 1000, 10000). The resulting graph is available as [http://www.eecg.toronto.edu/~gokul/soc/mixed-80zipf-10000items.png PNG] and [http://www.eecg.toronto.edu/~gokul/soc/mixed-80zipf-10000items.png PDF]. The results show a significant benefit from using the CART algorithm. Earlier, I was leaning towards the 2Q algorithm, but I found that it carries a significant synchronization penalty. The Postgres community implemented the 2Q algorithm (in 8.0) after finding out that ARC was patented by IBM. Since then, they have moved to Clock (in 8.1), mostly because of the contention penalty in 2Q. Since CART is a cousin of Clock, it may have less overhead.
+ Thanks to all who commented on my early results. I have added the results of a mixed workload containing both Zipf references and scans. I followed the example provided in the 2Q paper, in which they tried scans of different lengths. I used a 10,000-item cache with a 100,000-item dataset and ran a mixed workload of 0.8 Zipf with 33% scans of lengths (0, 10, 100, 1000, 10000). The resulting graph is available as [http://www.eecg.toronto.edu/~gokul/soc/mixed-80zipf-10000items.png PNG] and [http://www.eecg.toronto.edu/~gokul/soc/mixed-80zipf-10000items.pdf PDF]. The results show a significant benefit from using the CART algorithm. Earlier, I was leaning towards the 2Q algorithm, but I found that it carries a significant synchronization penalty. The Postgres community implemented the 2Q algorithm (in 8.0) after finding out that ARC was patented by IBM. Since then, they have moved to Clock (in 8.1), mostly because of the contention penalty in 2Q. Since CART is a cousin of Clock, it may have less overhead.
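
For reference, below is a rough, self-contained sketch of the kind of mixed workload described above: Zipf-distributed references over a 100,000-item dataset interleaved (with probability 0.33) with sequential scans of the listed lengths, driven against a simple 10,000-frame Clock cache. This is not Derby's cache manager and not CART itself (CART keeps additional history lists that are omitted here); the class and method names are invented for illustration, and "0.8 Zipf" is read as the Zipf skew parameter, which is an assumption.

{{{
import java.util.Arrays;
import java.util.HashMap;
import java.util.Random;

// Illustrative sketch only: a minimal Clock cache driven by a Zipf-plus-scans workload.
public class MixedWorkloadSketch {

    // Minimal Clock cache: a hit only sets a per-frame reference bit, so no shared
    // list has to be reordered under a lock the way an LRU or 2Q hit would require.
    static final class ClockCache {
        private final int[] keys;
        private final boolean[] referenced;
        private final HashMap<Integer, Integer> slotOf = new HashMap<Integer, Integer>();
        private int hand = 0;
        private long hits = 0, requests = 0;

        ClockCache(int capacity) {
            keys = new int[capacity];
            referenced = new boolean[capacity];
            Arrays.fill(keys, -1);
        }

        void access(int key) {
            requests++;
            Integer slot = slotOf.get(key);
            if (slot != null) {                 // hit: just mark the frame referenced
                hits++;
                referenced[slot] = true;
                return;
            }
            while (referenced[hand]) {          // miss: sweep until an unreferenced frame is found
                referenced[hand] = false;
                hand = (hand + 1) % keys.length;
            }
            if (keys[hand] != -1) {
                slotOf.remove(keys[hand]);      // evict the victim in that frame
            }
            keys[hand] = key;
            slotOf.put(key, hand);
            referenced[hand] = false;
            hand = (hand + 1) % keys.length;
        }

        double hitRate() { return requests == 0 ? 0.0 : (double) hits / requests; }
    }

    public static void main(String[] args) {
        final int items = 100000, cacheSize = 10000;
        final double zipfSkew = 0.8;            // assumption: "0.8 Zipf" means the skew parameter
        final int[] scanLengths = {0, 10, 100, 1000, 10000};

        // Precompute a Zipf CDF over the dataset (simple, unoptimized).
        double[] cdf = new double[items];
        double norm = 0;
        for (int i = 1; i <= items; i++) norm += 1.0 / Math.pow(i, zipfSkew);
        double acc = 0;
        for (int i = 1; i <= items; i++) {
            acc += (1.0 / Math.pow(i, zipfSkew)) / norm;
            cdf[i - 1] = acc;
        }

        ClockCache cache = new ClockCache(cacheSize);
        Random rnd = new Random(42);
        for (int op = 0; op < 1000000; op++) {
            if (rnd.nextDouble() < 0.33) {
                // Scan: a sequential run of one of the configured lengths.
                int len = scanLengths[rnd.nextInt(scanLengths.length)];
                int start = rnd.nextInt(items);
                for (int j = 0; j < len; j++) cache.access((start + j) % items);
            } else {
                // Zipf reference: invert the CDF with a binary search.
                int idx = Arrays.binarySearch(cdf, rnd.nextDouble());
                int key = idx < 0 ? -idx - 1 : idx;
                cache.access(Math.min(key, items - 1));
            }
        }
        System.out.printf("Clock hit rate under the mixed workload: %.3f%n", cache.hitRate());
    }
}
}}}

The hit path is the part relevant to the contention argument: Clock (and a Clock relative such as CART) only flips a bit on a hit, whereas an LRU or 2Q hit must unlink and relink the entry in a shared list under synchronization, which is where the penalty mentioned above comes from.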
  
  === June 04, 2006 ===
  
