incubator-olio-user mailing list archives

From Joshua Schnee <jpsch...@gmail.com>
Subject Re: OLIO-Java harness runs forever - round 2
Date Mon, 01 Feb 2010 23:59:16 GMT
Turns out I didn't get as much info as is probably needed.  Here's an
updated zip file with more dumps...

Thanks,


On Mon, Feb 1, 2010 at 5:45 PM, Joshua Schnee <jpschnee@gmail.com> wrote:

> Just realized the list didn't get copied.
>
>
> ---------- Forwarded message ----------
> From: Joshua Schnee <jpschnee@gmail.com>
> Date: Mon, Feb 1, 2010 at 1:46 PM
> Subject: Re: OLIO-Java harness runs forever - round 2
> To: Akara Sucharitakul <Akara.Sucharitakul@sun.com>
>
>
> Thanks,
>
> Here are two PIDs I dumped; let me know if these aren't the ones you want,
> as there are several.
>
> Thanks,
> Joshua
>
>
> On Mon, Feb 1, 2010 at 1:12 PM, Akara Sucharitakul <
> Akara.Sucharitakul@sun.com> wrote:
>
>> kill -QUIT <pid>
>>
>> or
>>
>> $JAVA_HOME/bin/jstack <pid>
>>
>> -Akara
>>
>> Joshua Schnee wrote:
>>
>>> Can you tell me how to do this?  I'm not sure how to do this when the
>>> harness doesn't hit an exception...
>>>
>>> Thanks,
>>> Joshua
>>>
>>> On Mon, Feb 1, 2010 at 12:17 PM, Akara Sucharitakul <
>>> Akara.Sucharitakul@sun.com <mailto:Akara.Sucharitakul@sun.com>> wrote:
>>>
>>>    Can you please get me a stack dump of the Faban harness? If it
>>>    hangs somewhere, we'll be able to diagnose better. Thanks.
>>>
>>>    -Akara
>>>
>>>
>>>    Joshua Schnee wrote:
>>>
>>>        OK, so I'm seeing the harness just run indefinitely again, and
>>>        this time it isn't related to the maximum open files.
>>>
>>>        Details and Background information:
>>>        Faban 1.0 build 111109
>>>        Olio Java 0.2
>>>        The workload is being driven from a physical Windows client
>>>        against 2 VMs on a system that is doing many other tasks.
>>>
>>>        Usage Info:
>>>        Physical client usage: avg ~22.93%, max ~42.15%
>>>        Total system-under-test utilization: ~95%
>>>        Web avg util = 5.3%, max = 40.83% (during manual reload)
>>>        DB  avg util = 4.2%, max = 55.6% (during auto reload)
>>>
>>>        Granted, the system under test is near saturation, but the
>>>        client isn't.  I'm not sure why the harness is never exiting.
>>>        Even if the VMs or the whole system under test get so saturated
>>>        that they can't respond to requests, shouldn't the test, which
>>>        is running on the under-utilized client, finish regardless,
>>>        reporting whatever results it can?  Shanti, you had previously
>>>        asked me to file a JIRA on this, which I forgot to do.  I can do
>>>        so now, if you'd like.  Finally, GlassFish appears to be stuck:
>>>        it's running, but not responding to requests, probably due to
>>>        the SEVERE entry in the server.log file (see below).
>>>
>>>        Faban\Logs:
>>>         agent.log = empty
>>>         cmdagent.log = No issues
>>>         faban.log.xml = No issues
>>>
>>>        Master\Logs
>>>         catalina.out = No issues
>>>         localhost*log*txt = No issues
>>>         OlioDriver.3C\
>>>          log.xml : Two types of issues, doTagSearch and the catastrophic
>>>          "Forcefully terminating benchmark run" : attached
>>>         GlassFish logs
>>>          jvm.log : Numerous dependency_failed entries : attached
>>>          server.log : Several SEVERE entries : most notably one where "a
>>>          signal was attempted before wait()" : attached
>>>        Any help resolving this would be much appreciated,
>>>        -Josh
>>>
>>>
>>>
>>>
>>>
>>> --
>>> -Josh
>>>
>>>
>>
>
>
> --
> -Josh
>
>
>
>
> --
> -Josh
>
>


-- 
-Josh
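
[Editor's note: the two thread-dump commands suggested earlier in the thread
(jstack and kill -QUIT) can be wrapped in a small helper. This is only a
sketch; the dump_stack function name and the JAVA_HOME fallback logic are
assumptions, not part of the original thread.]

```shell
# Dump the Java stack of a running process, e.g. a hung Faban harness.
# Prefers jstack when JAVA_HOME is set; otherwise falls back to SIGQUIT.
dump_stack() {
  pid="$1"
  if [ -z "$pid" ]; then
    echo "usage: dump_stack <pid>" >&2
    return 1
  fi
  if [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/jstack" ]; then
    # jstack prints the thread dump to our stdout
    "$JAVA_HOME/bin/jstack" "$pid"
  else
    # SIGQUIT makes the JVM write a thread dump to its *own* stdout/log,
    # not ours -- check the process's log file afterward
    kill -QUIT "$pid"
  fi
}
```

Note that with the kill -QUIT path, the dump lands in the target JVM's own
stdout (e.g. catalina.out for Tomcat-hosted processes), not in the shell
that sent the signal.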
