Subject: Re: Huge performance drop in distributed search w/ shards on the same server/container
From: Grant Ingersoll
Date: Fri, 13 May 2011 06:57:50 -0400
To: solr-user@lucene.apache.org

Is that 10 different Tomcat instances or are you using multicore? How are you testing?

On May 13, 2011, at 6:08 AM, Frederik Kraus wrote:

> Hi,
>
> I'm having some serious problems scaling the following setup:
>
> 48 CPU / Tomcat / ...
>
> localhost/shard1
> ...
> localhost/shard10
>
> When using all 10 shards in the query, the req/s drop to about 300 without fully utilizing CPU (60% idle) or RAM (disk I/O is zero - everything fits into RAM).
>
> When querying only one shard I get about 5k-6k req/s.
>
> Are there any known limits and/or workarounds?
>
> Thanks,
>
> Fred.

--------------------------------------------
Grant Ingersoll
Join the LUCENE REVOLUTION
Lucene & Solr User Conference, May 25-26, San Francisco
www.lucenerevolution.org
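
For context, a distributed request that fans out over co-located cores typically looks like the SolrJ sketch below (Solr 3.x era client). The port and core URLs (localhost:8080/solr/shard1 ... shard10) are assumptions, since the post only names the shards as localhost/shard1 ... localhost/shard10.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class DistributedQuerySketch {
        public static void main(String[] args) throws Exception {
            // Send the request to any one core; the "shards" parameter tells
            // it which cores to fan the query out to (URLs are hypothetical).
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8080/solr/shard1");

            SolrQuery query = new SolrQuery("*:*");
            query.set("shards",
                "localhost:8080/solr/shard1,localhost:8080/solr/shard2,"
              + "localhost:8080/solr/shard3,localhost:8080/solr/shard4,"
              + "localhost:8080/solr/shard5,localhost:8080/solr/shard6,"
              + "localhost:8080/solr/shard7,localhost:8080/solr/shard8,"
              + "localhost:8080/solr/shard9,localhost:8080/solr/shard10");

            QueryResponse rsp = server.query(query);
            System.out.println("hits: " + rsp.getResults().getNumFound());
        }
    }

Each entry in the shards list becomes an internal HTTP sub-request served by the same container, so a single incoming query generates roughly ten additional requests against the one Tomcat instance; that fan-out overhead is one reason throughput can drop well before CPU or RAM is saturated.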