Subject: Re: Jetty maxThreads
To: solr-user@lucene.apache.org
From: Shawn Heisey
Message-ID: <302caa7c-e6e8-d962-7e4e-70c58b471923@elyograg.org>
Date: Fri, 20 Oct 2017 15:46:18 -0600
In-Reply-To: <250C760C-6E6C-41C2-A51C-BBFC098C9DDD@wunderwood.org>
On 10/18/2017 2:41 PM, Walter Underwood wrote:
> Jetty maxThreads is set to 10,000 which seams way too big.
>
> The comment suggests 5X the number of CPUs. We have 36 CPUs, which would mean 180 threads, which seems more reasonable.

I have not seen any evidence that maxThreads at 10000 causes memory issues.  The out-of-the-box heap size for all recent releases is 512MB, and Solr starts up just fine with 10000 maxThreads.

Most containers (including Jetty and Tomcat) default to a maxThreads value of 200.  The Jetty included with Solr has had a setting of 10000 since I first started testing with version 1.4.  Users who provide their own containers frequently run into problems where the container will not allow Solr to start the threads it needs to run properly, so they must increase the value.

This is a graph of threads on a couple of my Solr servers:

https://www.dropbox.com/s/4ux2y3xwvsjjrmt/solr-thread-graph.png?dl=0

The server named bigindy5 (rear graph in the screenshot) is my dev server, running 6.6.2-SNAPSHOT.  The server named idxb6 is running 5.3.2-SNAPSHOT and is a backup production server.

The dev server has 8 CPU cores without hyperthreading and 24 Solr cores (indexes).  Most of those cores have index data in them -- the dev server has copies of *all* my indexes onboard.  It has very little activity, though -- aside from once-a-minute maintenance updates and a monitoring server, there's virtually no query activity.

The production server has 20 CPU cores with hyperthreading (so it looks like 40 to the OS) and the same 24 Solr cores, but only a handful of those cores have data; the rest are idle.  There's one critical difference in activity for this server compared to the dev server -- four of the cores on the machine are actively indexing from MySQL with the dataimport handler, because I'm doing a full rebuild on that index.  Because this server is currently in a backup role, its query load is similar to the dev server's -- almost nothing.
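For reference, the thread pool ceiling lives in the jetty.xml that ships with Solr (server/etc/jetty.xml in recent releases).  The fragment below is only a sketch of roughly what 5.x/6.x releases contain -- the exact property names and file layout vary by Solr version, so check your own install rather than copying this verbatim:

```xml
<!-- Illustrative sketch of the QueuedThreadPool setup in Solr's bundled
     jetty.xml; property names and defaults differ between Solr versions. -->
<Arg name="threadpool">
  <New id="threadpool" class="org.eclipse.jetty.util.thread.QueuedThreadPool">
    <Set name="minThreads" type="int"><Property name="solr.jetty.threads.min" default="10"/></Set>
    <Set name="maxThreads" type="int"><Property name="solr.jetty.threads.max" default="10000"/></Set>
    <Set name="idleTimeout" type="int"><Property name="solr.jetty.threads.idle.timeout" default="5000"/></Set>
    <Set name="detailedDump">false</Set>
  </New>
</Arg>
```

Because the value is wrapped in a Property element, it can usually be overridden at startup (e.g. -Dsolr.jetty.threads.max=2000) without editing the file itself.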
These servers handle distributed indexes, but they are NOT running in cloud mode.  If there were active queries, more threads would be needed than are currently running.  If there were more Solr cores (indexes), more threads would be needed.

My installation is probably bigger than typical, but it is definitely not what I would call large.  As you can see from the screenshot, these servers have reached thread counts in the 300 range and are currently sitting at about 250.  If I followed that recommendation of 5 threads per CPU, I would configure a value of 40 on the dev server, which wouldn't be anywhere near enough.

I've got another server running version 4.7.2 with 8 CPU cores (no hyperthreading) and slightly fewer Solr cores.  This is a server that actively receives queries at a fairly low QPS rate.  It shows a steady thread count of around 200, with a peak thread count of 1032.  That instance of Solr has an uptime of 208 days.

Based on what I have seen on my servers, I would not run with maxThreads less than 2000, and I don't see any reason to change it from the provided default of 10000.

Thanks,
Shawn