lucene-solr-user mailing list archives

From Nico Heid <>
Subject Re: Benchmarking tools?
Date Mon, 30 Jun 2008 11:26:07 GMT
I basically followed this:

I put all my queries in a flat text file. You could either use two
parameters or put them in one file.
The advantage of this is that every test run uses the same queries, so you
can compare different settings better afterwards.

If you use varying facets, you might just go with two text files. If the
facet stays the same within one test, you can hardcode it into the test case.
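To make the flat-file approach concrete, here is a minimal sketch of how each line from the query file (and optionally a second facet file) could be turned into a Solr request URL. The host, core path, and field names are placeholders, not from the thread:

```python
from urllib.parse import urlencode

# Placeholder Solr endpoint -- adjust host/core to your setup.
SOLR_URL = "http://localhost:8983/solr/select"

def build_query_url(query, facet_field=None):
    """Build a Solr select URL for one line of the query file,
    optionally faceting on a field taken from a second file."""
    params = [("q", query), ("wt", "xml")]
    if facet_field:
        params += [("facet", "true"), ("facet.field", facet_field)]
    return SOLR_URL + "?" + urlencode(params)

# In a real test each line would come from the two flat text files:
queries = ["ipod", "video camera"]
facets = ["cat", "manu"]
for q, f in zip(queries, facets):
    print(build_query_url(q, f))
```

Since the files never change between runs, every configuration under test sees the identical request stream.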

I polished the result a little, if you want to take a look: . JMeter itself
does not plot such nice graphs.
(Green is the maximum number of results delivered; above 66 "active users"
per second the response time increases. Orange and yellow are the average
and median of the response times.)
(I know the scales and descriptions are missing :-) but you should get
the picture.)
I manually reduced the machine's capacity, otherwise Solr would serve
more than 12,000 requests per second (the whole index fit into RAM).
I can send you my saved test case if that would help you.
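For reference, a saved JMeter test plan can be replayed headless from the command line; the file names here are placeholders:

```shell
# Run the saved test plan in non-GUI mode and log samples to a JTL file.
jmeter -n -t solr-load.jmx -l results.jtl
```

The resulting JTL log is what you would post-process into graphs, since JMeter's built-in listeners are fairly basic.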


Jacob Singh wrote:
> Hi Nico,
> Thanks for the info. Do you have your scripts available for this?
> Also, is it configurable to give variable numbers of facets and facet
> based searches?  I have a feeling this will be the limiting factor, and
> much slower than keyword searches but I could be (and usually am) wrong.
> Best,
> Jacob
> Nico Heid wrote:
>> Hi,
>> I did some trivial tests with JMeter.
>> I set up JMeter to increase the number of threads steadily.
>> For requests I either use a random word or combination of words from a
>> wordlist, or some sample data from the test system. (This is described in
>> the JMeter manual.)
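The random-word approach described above can be sketched outside JMeter like this; the wordlist is a tiny stand-in for a real one, and in JMeter the same idea is typically driven from a pre-generated file:

```python
import random

# Tiny stand-in for a real wordlist file.
wordlist = ["solr", "lucene", "facet", "index", "query", "cache"]

def random_query(rng, min_words=1, max_words=3):
    """Pick a random word or combination of distinct words,
    as in the ramp-up test described above."""
    n = rng.randint(min_words, max_words)
    return " ".join(rng.sample(wordlist, n))

# Seeded generator so a test file, once written out, is reproducible.
rng = random.Random(42)
print(random_query(rng))
```

Writing the generated queries to a file once, instead of randomizing per run, keeps runs comparable.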
>> In my case the system works fine as long as I don't exceed the max number
>> of requests per second it can handle. But that's not a big surprise. More
>> interesting is the fact that, to a certain degree, after exceeding the
>> max number of requests, response time seems to rise linearly for a little
>> while and then exponentially. But that might also be an artifact of my
>> test scenario.
>> Nico
>>> -----Original Message-----
>>> From: Jacob Singh []
>>> Sent: Sunday, June 29, 2008 6:04 PM
>>> To:
>>> Subject: Benchmarking tools?
>>> Hi folks,
>>> Does anyone have any bright ideas on how to benchmark solr?
>>> Unless someone has something better, here is what I am thinking:
>>> 1. Have a config file where one can specify info like how
>>> many docs, how large, how many facets, and how many updates /
>>> searches per minute
>>> 2. Use one of the various client APIs to generate XML files
>>> for updates using some kind of lorem ipsum text as a base and
>>> store them in a dir.
>>> 3. Use siege to set the update run at whatever interval is
>>> specified in the config, sending an update every x seconds
>>> and removing it from the directory
>>> 4. Generate a list of search queries based upon the facets
>>> created, and build a urls.txt with all of these search urls
>>> 5. Run the searches through siege
>>> 6. Monitor the output using nagios to see where load kicks in.
>>> This is not that sophisticated, and feels like it won't
>>> really pinpoint bottlenecks, but would approximately tell us
>>> where a server will start to bail.
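Steps 4 and 5 of the plan above (a urls.txt of facet-based searches fed to siege) could be sketched roughly like this; the host, field name "category", and facet values are illustrative assumptions:

```python
from urllib.parse import quote

# Placeholder endpoint and facet field -- not from the thread.
host = "http://localhost:8983/solr/select"
facet_values = ["books", "electronics", "music"]

# One filtered facet search per known facet value.
lines = ["%s?q=*:*&facet=true&facet.field=category&fq=category:%s"
         % (host, quote(v)) for v in facet_values]

# siege reads one URL per line via its -f option.
with open("urls.txt", "w") as f:
    f.write("\n".join(lines) + "\n")
```

A run such as `siege -f urls.txt` would then replay these searches at the configured concurrency.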
>>> Does anyone have any better ideas?
>>> Best,
>>> Jacob Singh
