lucene-solr-dev mailing list archives

From Noble Paul നോബിള്‍ नोब्ळ् <noble.p...@corp.aol.com>
Subject Re: [jira] Commented: (SOLR-1513) Use Google Collections in ConcurrentLRUCache
Date Thu, 22 Oct 2009 05:36:16 GMT
On Wed, Oct 21, 2009 at 6:34 PM, Mark Miller <markrmiller@gmail.com> wrote:
> bq.  and Mark is representing "just keep working, ok?".
>
> But I'm not :) Like I said, I don't view the purpose of a soft value
> cache as avoiding OOM's. Size your caches correctly for that.
>
> For those that don't understand the how and why of soft value caches,
> they probably should not choose to use it.

Users may not have a clue how much memory the caches will eventually take
up. If the admin page can let them know that cache thrashing has happened,
they can consider adding more RAM.
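
(For context, here is a minimal sketch of the kind of soft-value cache being
discussed, using Google Collections' MapMaker. It is only an illustration of
the technique, not the SOLR-1513 patch; the class name and the key/value
types are made up.)

    import com.google.common.collect.MapMaker;
    import java.util.concurrent.ConcurrentMap;

    // Illustrative sketch only: a concurrent map whose values are held through
    // SoftReferences, so the GC may reclaim entries under memory pressure
    // instead of the JVM throwing an OutOfMemoryError.
    public class SoftValueCacheSketch {
        public static void main(String[] args) {
            ConcurrentMap<String, int[]> cache = new MapMaker()
                    .softValues()   // wrap values in SoftReferences
                    .makeMap();

            cache.put("q:foo", new int[] {1, 2, 3});

            // A miss can mean either "never cached" or "evicted by the GC";
            // either way the caller recomputes and re-inserts.
            int[] hit = cache.get("q:foo");
            if (hit == null) {
                hit = new int[] {1, 2, 3};   // recompute
                cache.put("q:foo", hit);
            }
        }
    }
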
>
> Lance Norskog wrote:
>> On-topic: Will the Google implementations + soft references behave
>> well with 8+ processors?
>>
>> Semi-on-topic: If you want to really know multiprocessor algorithms,
>> this is the bible: "The Art Of Multiprocessor Programming". Hundreds
>> of parallel algorithms for many different jobs, all coded in Java, and
>> cross-referenced with the java.util.concurrent package. Just amazing.
>>
>> http://www.elsevier.com/wps/find/bookdescription.cws_home/714091/description#description
>>
>> Off-topic: I was representing a system troubleshooting philosophy:
>> "Fail Early, Fail Loud". Meaning, if there is a problem like OOMs,
>> tell me and I'll fix it permanently. But different situations call for
>> different answers, and Mark is representing "just keep working, ok?".
>> Brittle v.s. Supple is one way to think of it.
>>
>> On Tue, Oct 20, 2009 at 11:27 AM, Shalin Shekhar Mangar
>> <shalinmangar@gmail.com> wrote:
>>
>>> On Tue, Oct 20, 2009 at 3:56 PM, Mark Miller <markrmiller@gmail.com> wrote:
>>>
>>>
>>>> On Oct 20, 2009, at 12:12 AM, Shalin Shekhar Mangar <
>>>> shalinmangar@gmail.com> wrote:
>>>>
>>>>  I don't think the debate is about weak reference vs. soft references.
>>>>
>>>> There appears to be confusion between the two here no matter what the
>>>> debate - soft references are for caching, weak references are not so much.
>>>> Getting it right is important.
>>>>
>>>>
>>>>> I guess the point that Lance is making is that using such a technique will
>>>>> make application performance less predictable. There's also a good chance
>>>>> that a soft reference based cache will cause cache thrashing and will hide
>>>>> OOMs caused by inadequate cache sizes. So basically we trade an OOM for more
>>>>> CPU usage (due to re-computation of results).
>>>>>
>>>>>
>>>> That's the whole point. You're not hiding anything. I don't follow you.
>>>>
>>>>
>>> Using a soft reference based cache can hide the fact that one has inadequate
>>> memory for the cache size one has configured. Don't get me wrong. I'm not
>>> against the feature. I was merely trying to explain Lance's concerns as I
>>> understood them.
>>>
>>>
>>>
>>>>
>>>>
>>>>> Personally, I think giving an option is fine. What if the user does not have
>>>>> enough RAM and he is willing to pay the price? Right now, there is no way he
>>>>> can do that at all. However, the most frequent reason behind OOMs is not
>>>>> having enough RAM to create the field caches and not Solr caches, so I'm not
>>>>> sure how important this is.
>>>>>
>>>>>
>>>> How important is any feature? You don't have a use for it, so it's not
>>>> important to you - someone else does so it is important to them. Soft value
>>>> caches can be useful.
>>>>
>>> Don't jump to conclusions :)
>>>
>>> The reason behind this feature request is to have Solr caches which resize
>>> themselves when enough memory is not available. I agree that soft value
>>> caches are useful for this. All I'm saying is that most OOMs that get
>>> reported on the list are due to inadequate free memory for allocating field
>>> caches. Finding a way around that will be the key to making a Lucene/Solr
>>> application practical in a limited memory environment.
>>>
>>> Just for the record, I'm +1 for adding this feature but keeping the current
>>> behavior as the default.
>>>
>>> --
>>> Regards,
>>> Shalin Shekhar Mangar.
>>>
>>>
>>
>>
>>
>>
>
>
> --
> - Mark
>
> http://www.lucidimagination.com
>
>
>
>
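
(And since the soft- vs. weak-reference distinction came up above: here is a
minimal, self-contained sketch of the difference using plain java.lang.ref.
This is not Solr or Google Collections code, and the class name is made up.
A WeakReference is cleared as soon as no strong references remain, typically
at the next GC; a SoftReference is normally only cleared when the heap is
under memory pressure, which is what makes it usable for caching.)

    import java.lang.ref.SoftReference;
    import java.lang.ref.WeakReference;

    public class ReferenceSketch {
        public static void main(String[] args) {
            byte[] value = new byte[1024];

            SoftReference<byte[]> soft = new SoftReference<byte[]>(value);
            WeakReference<byte[]> weak = new WeakReference<byte[]>(value);

            value = null;   // drop the only strong reference
            System.gc();    // hint: the weak referent is now eligible for clearing

            // The weakly held value is very likely gone; the softly held value
            // normally survives until the heap is low on memory.
            System.out.println("weak: " + (weak.get() == null ? "cleared" : "alive"));
            System.out.println("soft: " + (soft.get() == null ? "cleared" : "alive"));
        }
    }
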



-- 
-----------------------------------------------------
Noble Paul | Principal Engineer| AOL | http://aol.com
