From: "Peter Novodvorsky (JIRA)"
To: commits@harmony.apache.org
Reply-To: dev@harmony.apache.org
Date: Wed, 30 May 2007 07:26:15 -0700 (PDT)
Subject: [jira] Commented: (HARMONY-4001) [drlvm][threading] DRLVM can't start more than ~1600 threads due to memory consumption (win32).
Message-ID: <13325283.1180535175893.JavaMail.jira@brutus>
In-Reply-To: <30968984.1180533916178.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

    [ https://issues.apache.org/jira/browse/HARMONY-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500142 ]

Peter Novodvorsky commented on HARMONY-4001:
--------------------------------------------

Please make this bug non-critical. The problem that the stack size is not changeable is very well known, and this issue is a duplicate.

> [drlvm][threading] DRLVM can't start more than ~1600 threads due to memory consumption (win32).
> ------------------------------------------------------------------------------------------------
>
>                 Key: HARMONY-4001
>                 URL: https://issues.apache.org/jira/browse/HARMONY-4001
>             Project: Harmony
>          Issue Type: Bug
>          Components: DRLVM
>         Environment: Windows Server 2003 (32-bit)
>            Reporter: Sergey Kuksenko
>            Assignee: weldon washburn
>            Priority: Critical
>         Attachments: Test.java
>
>
> The attached test shows that DRLVM cannot keep more than ~1600 threads running simultaneously,
> even threads as empty as the ones in the test.
> This bug is critical for running SPECjAppServer2004, because even a simple txRate=100 needs ~1300 running threads.
> The test tries to run 3000 threads simultaneously (each doing nothing).
> The RI passes the test successfully.
> Perfmon data shows the following:
> - starting each thread adds (on average) 1.24M of memory to the process address space;
> - so at the 1636th thread the test reached the 2G Windows address-space limit and hung.
> Passing Sun 1.6 the -Xss1M option, which significantly increases the stack size of each thread, leads to a failure too (after the 1827th thread).
> However, the RI throws the following exception:
> Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:597)
>         at Test.main(Test.java:17)
> 1. DRLVM should have a similar option for managing the thread stack size.
> 2. DRLVM should correctly throw an exception in this case.
> 3. The default stack size should be less than 1M.
> I found that "vm\thread\src\thread_java_basic.c" hardcodes the default thread stack size as 1M. However, setting it to 16K doesn't change the test's behavior, and even then perfmon shows that each thread still uses 1.24M.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
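For reference, below is a minimal sketch of a stress test in the spirit of the attached Test.java (the attachment itself is not reproduced in this mail). The class name, the thread count, the stack-size hint passed through the four-argument Thread constructor, and the wait-on-a-monitor idle body are illustrative assumptions, not the attachment's actual contents.

// Hypothetical sketch, not the attached Test.java: start many idle threads and
// report how many the VM could create before running out of memory.
public class ThreadFloodSketch {

    private static final int THREAD_COUNT = 3000;       // target taken from the report
    private static final long STACK_HINT = 256 * 1024;  // requested stack size; a VM may ignore it

    public static void main(String[] args) {
        final Object lock = new Object();
        int started = 0;
        try {
            for (int i = 0; i < THREAD_COUNT; i++) {
                // Thread(ThreadGroup, Runnable, String, long stackSize) lets the caller
                // suggest a per-thread stack size; the Javadoc allows a VM to ignore it.
                Thread t = new Thread(null, new Runnable() {
                    public void run() {
                        synchronized (lock) {
                            try {
                                lock.wait();   // keep the thread alive while doing nothing
                            } catch (InterruptedException ignored) {
                            }
                        }
                    }
                }, "idle-" + i, STACK_HINT);
                t.setDaemon(true);             // let the JVM exit once main returns
                t.start();
                started++;
            }
        } catch (OutOfMemoryError e) {
            // On the RI this is "unable to create new native thread"; the report says
            // DRLVM instead hangs once the 2G address space is exhausted.
            System.out.println("OutOfMemoryError after " + started + " threads: " + e.getMessage());
        }
        System.out.println("Threads started: " + started);
    }
}

Running such a sketch on the RI with an explicit stack-size option (e.g. "java -Xss1m ThreadFloodSketch") reproduces the -Xss comparison described above. The stackSize constructor argument is the only standard way to request a per-thread stack size from Java code, and the spec allows a VM to ignore it, so a command-line option like the one requested in item 1 remains the practical control.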