perl-modperl mailing list archives

From Issac Goldstand <>
Subject Re: Database transaction across multiple web requests
Date Mon, 03 Apr 2006 11:03:43 GMT
If anything, it really doesn't make sense to cache something in the
query cache with limits intact; LIMIT is just a modifier that trims
what the server sends back.  Since LIMIT doesn't actually change the
underlying result set, per se, it doesn't make sense for the query
cache to pay attention to it any more than it would pay attention to
HIGH_PRIORITY, SQL_BIG_RESULT, or any other modifier.
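
For what it's worth, you can watch whether a repeated LIMIT query is
actually served from the query cache by checking the counters before and
after; a sketch, assuming the query cache is enabled (query_cache_size > 0)
and using an illustrative `articles` table:

```sql
-- Snapshot the hit counter, run the same LIMIT query twice, and compare.
-- The query cache matches on the byte-exact query text, so the second,
-- identical run should register as a hit (table name is illustrative).
SHOW STATUS LIKE 'Qcache_hits';
SELECT * FROM articles ORDER BY posted_at DESC LIMIT 10;
SELECT * FROM articles ORDER BY posted_at DESC LIMIT 10;
SHOW STATUS LIKE 'Qcache_hits';  -- should have incremented if cached
```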

I can think of a few scenarios where I also saw a performance boost with
the query cache on limits, BUT IIRC, they're all queries in which I'd used
SQL_CALC_FOUND_ROWS (which forces the query to run to completion instead
of aborting once the LIMIT is satisfied).  My thought was that the queries
in question were being cached since the results ran to the end, and the
LIMIT modifier on subsequent queries just took the result set from the
query cache and returned the limited rows from there.
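
A sketch of the SQL_CALC_FOUND_ROWS pattern I mean (table and column
names are illustrative):

```sql
-- SQL_CALC_FOUND_ROWS makes the server compute the full result set even
-- though only 10 rows are sent to the client; FOUND_ROWS() then returns
-- the untrimmed row count for the previous SELECT.
SELECT SQL_CALC_FOUND_ROWS id, title
FROM articles
WHERE published = 1
ORDER BY posted_at DESC
LIMIT 10;

SELECT FOUND_ROWS();  -- total matching rows, ignoring the LIMIT
```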

Recall, also, that when running the query the first time, it'll start
slurping the tables' indexes into the key buffer, which might make
subsequent queries against the same underlying tables a bit faster in
some (mostly low-volume or development) environments.
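
If you want to see that key-buffer effect, the Key_read% counters show
it (this applies to MyISAM tables, since the key buffer is the MyISAM
index cache):

```sql
-- Key_read_requests counts index-block reads satisfied from the key
-- buffer; Key_reads counts blocks that had to come from disk.  A low
-- Key_reads/Key_read_requests ratio means the indexes are warm.
SHOW STATUS LIKE 'Key_read%';
```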

By the way, apparently MySQL also supports the PostgreSQL syntax of
LIMIT row_count OFFSET offset.

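Assuming the syntax meant here is the LIMIT ... OFFSET form, these two
pagination queries are equivalent (illustrative table and columns):

```sql
-- Skip 20 rows, return the next 10: MySQL's native comma form and the
-- PostgreSQL-compatible OFFSET form do the same thing.
SELECT id, title FROM articles ORDER BY id LIMIT 20, 10;
SELECT id, title FROM articles ORDER BY id LIMIT 10 OFFSET 20;
```
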
Jonathan Field wrote:
> From here you can watch in real time for Qcache_hits.  "Questions" is
> the total number of queries.  Indeed, LIMIT queries are cached, but
> just the part that was returned.
> If the query is the slightest bit different when run again, or if any of
> the underlying tables have been written to, the query cache will be
> ignored.  But the data itself will most likely be in memory the next
> time around so it will be faster than the first run anyways.
> Just some food for thought.
> Cheers,
> Jonathan Field
> On Sun, 2 Apr 2006, Perrin Harkins wrote:
>> On Mon, 2006-04-03 at 10:28 +0800, Foo Ji-Haw wrote:
>>> Perrin Harkins wrote:
>>>> Jeff wrote:
>>>>> Your application simply uses approach (b) and MySQL does the rest
>>>>> automatically. So if you
>>>> Have you tried this?  I was under the impression that MySQL would just
>> stop when it finds enough rows to satisfy the LIMIT, so it wouldn't cache
>>>> the whole result set.
>>> I have, and I can verify that MySQL does some kind of internal
>>> 'caching', so that the next limit call is very fast.
>> How do you know it isn't just the operating system caching the disk
>> fetches?  The docs certainly make it sound like a query cached with
>> LIMIT will not be useful when different LIMIT values are applied.
>> - Perrin
