Subject: Re: Question about RamCacheLRU
From: Yunkai Zhang <yunkai.me@gmail.com>
To: dev@trafficserver.apache.org
Date: Sun, 30 Dec 2012 12:50:00 +0800

On Sun, Dec 30, 2012 at 12:17 PM, John Plevyak wrote:

> Lol, if they have optimized it by removing the LRU nature, it was perhaps
> overzealous, or perhaps your workload is such that it fits within the RAM
> cache, so replacement is not an issue. Without the LRU there are
> approximately the same number of buckets as objects, so replacement based
> on bucket would be largely random. Instead we can have a partitioned LRU,
> and we are probably going to have to go that route, as it looks like lock
> contention in the cache is pretty bad. That's the next area I was thinking
> of looking at...

It seems that my coworker split the original single LRU queue into multiple
queues according to data size (e->data->_size_index). The goal was not to
optimize the LRU itself, but to reduce the memory fragmentation caused by
frequent allocation and freeing. I'll send that patch here, and I hope you
can give some advice.

> cheers,
> john
>
> On Sat, Dec 29, 2012 at 7:59 PM, Yunkai Zhang wrote:
>
> > On Sun, Dec 30, 2012 at 1:57 AM, John Plevyak wrote:
> >
> > > This code in ::put() implements the LRU, and as you can see, it uses
> > > the LRU data structure (i.e. a simple list from most recently used to
> > > least):
> > >
> > >   while (bytes > max_bytes) {
> > >     RamCacheLRUEntry *ee = lru.dequeue();
> > >     if (ee)
> > >       remove(ee);
> > >     else
> > >       break;
> > >   }
> > >
> >
> > It seems that the code I read had been changed in the section you showed
> > above, as my coworker has optimized it a bit. But thanks for your
> > explanation all the same.
> >
> > > On Sat, Dec 29, 2012 at 9:39 AM, Yunkai Zhang wrote:
> > >
> > > > Hi folks:
> > > >
> > > > I'm reading the code for RamCacheLRU, but I was confused by the
> > > > RamCacheLRU->lru queue, defined as follows:
> > > >
> > > >   struct RamCacheLRU: public RamCache {
> > > >     ...
> > > >     Que(RamCacheLRUEntry, lru_link) lru;
> > > >     DList(RamCacheLRUEntry, hash_link) *bucket;
> > > >     ...
> > > >   };
> > > >
> > > > By reading the put/get/remove functions of the RamCacheLRU class, it
> > > > seems that the LRU algorithm is implemented by accessing the *bucket*
> > > > list instead of the *lru* queue.
> > > >
> > > > Do I understand it correctly? If so, we could remove the lru queue
> > > > and the related code to speed up the put/get functions of the LRU a
> > > > bit.
> > > >
> > > > --
> > > > Yunkai Zhang
> > > > Work at Taobao

--
Yunkai Zhang
Work at Taobao
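[Archive note] The structure discussed in the thread can be sketched with standard containers. The following is a minimal, self-contained illustration only, assuming std::unordered_map and std::list in place of Traffic Server's actual Que/DList macros and std::string in place of the real cache buffers: the hash index plays the role of *bucket* (fast lookup), while the list plays the role of the *lru* queue (recency order) and drives the same eviction loop John quotes from ::put(). Both structures are needed; removing the lru queue would leave no recency order to evict by.

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <list>
#include <string>
#include <unordered_map>

// Minimal LRU cache: index_ plays the role of *bucket* (lookup),
// lru_ plays the role of the *lru* queue (eviction order).
class MiniLRU {
public:
  explicit MiniLRU(size_t max_bytes) : max_bytes_(max_bytes), bytes_(0) {}

  void put(const std::string &key, const std::string &data) {
    remove(key);                        // drop any stale entry first
    lru_.push_back({key, data});        // newest entries go to the tail
    index_[key] = std::prev(lru_.end());
    bytes_ += data.size();
    while (bytes_ > max_bytes_ && !lru_.empty())
      evict_oldest();                   // same role as the while loop in ::put()
  }

  const std::string *get(const std::string &key) {
    auto it = index_.find(key);
    if (it == index_.end())
      return nullptr;
    // Move the entry to the tail: it is now the most recently used.
    lru_.splice(lru_.end(), lru_, it->second);
    return &it->second->data;
  }

  void remove(const std::string &key) {
    auto it = index_.find(key);
    if (it == index_.end())
      return;
    bytes_ -= it->second->data.size();
    lru_.erase(it->second);
    index_.erase(it);
  }

  size_t bytes() const { return bytes_; }

private:
  struct Entry { std::string key, data; };

  void evict_oldest() {
    // Head of the queue is the least recently used entry.
    std::string victim = lru_.front().key;
    remove(victim);
  }

  size_t max_bytes_;
  size_t bytes_;
  std::list<Entry> lru_;
  std::unordered_map<std::string, std::list<Entry>::iterator> index_;
};
```

Note how get() only *finds* via the index but must *reorder* via the queue; that reordering is exactly what makes the queue's head the correct eviction victim.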
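[Archive note] The coworker's patch itself is not shown in the thread, so the following is only a hypothetical sketch of the idea Yunkai describes: splitting one global LRU queue into per-size-class queues (the size classes here stand in for e->data->_size_index, with made-up boundaries). Eviction then replaces an object with a similarly sized one, which is what reduces fragmentation from mixed-size allocate/free churn; entry sizes are tracked as plain byte counts for brevity.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical size-partitioned eviction: one queue per size class, each
// with its own byte budget, instead of a single global LRU queue.
struct SizeClassLRU {
  static const size_t kClasses = 4;

  // Map a byte size to a size-class index (illustrative boundaries).
  static size_t size_class(size_t bytes) {
    if (bytes <= 1024)  return 0;
    if (bytes <= 4096)  return 1;
    if (bytes <= 16384) return 2;
    return 3;
  }

  explicit SizeClassLRU(size_t max_bytes_per_class)
      : max_bytes_(max_bytes_per_class), bytes_(kClasses, 0),
        queues_(kClasses) {}

  // Insert an entry of the given size; returns how many entries were
  // evicted from this entry's own size class.
  size_t put(size_t entry_bytes) {
    size_t c = size_class(entry_bytes);
    queues_[c].push_back(entry_bytes);
    bytes_[c] += entry_bytes;
    size_t evicted = 0;
    // Eviction only touches this size class, so a burst of large objects
    // cannot flush out all the small ones (and vice versa), and freed
    // buffers are reused for similarly sized data.
    while (bytes_[c] > max_bytes_) {
      bytes_[c] -= queues_[c].front();
      queues_[c].pop_front();
      ++evicted;
    }
    return evicted;
  }

  size_t max_bytes_;
  std::vector<size_t> bytes_;
  std::vector<std::deque<size_t>> queues_;
};
```

The same partitioning shape also serves John's lock-contention point: giving each queue its own mutex would let put/get in different partitions proceed without contending on one global cache lock.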