incubator-olio-user mailing list archives

From Bruno Guimarães Sousa <brgso...@gmail.com>
Subject Re: problem with sockets (too many)
Date Tue, 08 Jun 2010 02:51:32 GMT
Which one determines whether the connection will use keep-alive: the client (faban)
or the web server?
I suppose the client (faban) asks for a keep-alive connection and the web
server decides whether it will be HTTP 1.0 or not. Is that correct?

I am using the WebROaR web server at the moment.
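
In HTTP 1.1 the client can only offer a persistent connection; the server
decides whether to honor it or to close after each response. A rough way to
see what the server actually answers, as a sketch only (curl is assumed to be
available on the driver machine, and the host/port are just examples taken
from the netstat output quoted below):

  $ curl -sv -o /dev/null http://k2:3002/ 2>&1 | grep -iE 'HTTP/1\.|Connection'
  # An "HTTP/1.1" status line with no "Connection: close" header means the
  # server keeps the connection open; "HTTP/1.0" or "Connection: close"
  # means every request pays for a fresh socket (and a TIME_WAIT entry).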

--
Bruno Guimarães Sousa
www.ifba.edu.br
PONTONET - DGTI - IFBA
Ciência da Computação UFBA
Registered Linux user #465914


On Mon, Jun 7, 2010 at 11:28 PM, Shanti Subramanyam <
shanti.subramanyam@gmail.com> wrote:

> This seems like KeepAlive is not being used and the connection is perhaps
> being closed after every request from the client. Are you using 'nginx' to
> front-end your rails servers? If so, that may explain it. nginx talks http
> 1.0 to its back-ends and does not support keepalive connections there (at
> least that's how it was with the version I have run). You may want to check
> on this in the nginx forums.
>
> Shanti
>
> 2010/6/7 Bruno Guimarães Sousa <brgsousa@gmail.com>
>
> The limit is 1024 here. I'm using faban 1.0 .
>> I imagine the number of concurrent users in the workload is directly related
>> to that. The value for concurrent users is 25. The thing is that the database
>> is loaded for 100 concurrent users. I want to start low, and later I'll stress
>> the SUT with 100 concurrent users. Is that a wrong choice?
>>
>> *A. A successful faban run goes like this:*
>> 1. During ramp up, the number of sockets in use climbs sharply, with netstat
>> showing lines like this:
>> tcp6       0      0 kamet:40197             k2:3002                 TIME_WAIT
>>
>> It is weird that netstat reports a tcp6 socket, although the server is not
>> using IPv6 addresses.
>>
>> 2. Then, during steady state, it gets even higher:
>> # netstat -a | wc -l
>> 14854
>>
>> And most of the sockets are in the TIME_WAIT state (which seems weird to me).
>>
>> 3. The ramp down is configured to last 60 sec, and the results are OK.
>>
>> 4. After ramp down there are still some waiting sockets, and then they
>> "fade away".
>>
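>> A quick way to see where those sockets sit, as a rough sketch (assuming a
>> Linux driver machine with netstat and awk available):
>>
>>   $ netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn
>>   # Counts TCP sockets per state. TIME_WAIT entries linger for about 60 s
>>   # by default, so thousands of them right after steady state is expected
>>   # behaviour rather than a descriptor leak.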
>>
>> *B. But if I start another run right after a successful one, the
>> "java.net.SocketException: Too many open files" problem happens.*
>>
>> *C. So, I have to wait a couple of minutes before starting another run, and
>> then it goes fine.*
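>>
>> If the wait between runs is mostly about TIME_WAIT sockets using up local
>> ports, the usual knobs on a Linux driver box are the ephemeral port range
>> and TIME_WAIT reuse. A sketch only (the values are examples, not
>> recommendations):
>>
>>   $ sudo sysctl -w net.ipv4.ip_local_port_range="10000 65535"
>>   $ sudo sysctl -w net.ipv4.tcp_tw_reuse=1   # reuse TIME_WAIT for outgoing connections
>>
>> The tcp6 entries are just the JVM opening dual-stack sockets; if they are a
>> concern, adding -Djava.net.preferIPv4Stack=true to <jvmOptions> should make
>> it use plain IPv4.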
>>
>> *D. Here is "run configuration":*
>> <olio>
>>   <jvmConfig>
>>     <javaHome>/usr/lib/jvm/java-6-openjdk</javaHome>
>>     <jvmOptions>-Xmx1g -Xms256m -XX:+DisableExplicitGC</jvmOptions>
>>   </jvmConfig>
>>   <fa:runConfig definition="org.apache.olio.workload.driver.UIDriver">
>>     <fh:description>MAKALU standalone webroar (400 seg)</fh:description>
>>     <fa:hostConfig>
>>       <fa:host>192.168.1.18</fa:host>
>>       <fh:enabled>true</fh:enabled>
>>       <fh:cpus>0</fh:cpus>
>>       <fh:tools>vmstat 1; nicstat -i eth0 1</fh:tools>
>>       <fh:userCommands/>
>>     </fa:hostConfig>
>>     <fa:scale>25</fa:scale>
>>     <fa:runControl>
>>       <fa:rampUp>60</fa:rampUp>
>>       <fa:steadyState>400</fa:steadyState>
>>       <fa:rampDown>60</fa:rampDown>
>>       <fa:variableLoad>false</fa:variableLoad>
>>       <fa:variableLoadFile>/faban/load.txt</fa:variableLoadFile>
>>     </fa:runControl>
>>     <outputDir>/usr/local/faban/output/OlioDriver.4W/</outputDir>
>>     <audit>false</audit>
>>     <threadStart>
>>       <delay>10</delay>
>>       <simultaneous>false</simultaneous>
>>       <parallel>false</parallel>
>>     </threadStart>
>>     <stats>
>>       <maxRunTime>6</maxRunTime>
>>       <interval>30</interval>
>>     </stats>
>>     <runtimeStats enabled="false">
>>       <interval>5</interval>
>>     </runtimeStats>
>>     <driverConfig name="UIDriver">
>>       <agents>1</agents>
>>       <stats>
>>         <interval>30</interval>
>>       </stats>
>>       <runtimeStats target="9988"/>
>>       <properties>
>>         <property name="serverType">rails</property>
>>       </properties>
>>     </driverConfig>
>>   </fa:runConfig>
>>   <proxyServer>
>>     <fa:hostConfig>
>>       <fa:host>192.168.1.13</fa:host>
>>       <fa:hostPorts>192.168.1.13:3003</fa:hostPorts>
>>       <enabled>true</enabled>
>>       <cpus>0</cpus>
>>       <tools>vmstat 1; nicstat -i eth0 1</tools>
>>       <userCommands/>
>>     </fa:hostConfig>
>>     <type/>
>>     <fh:service>
>>       <fh:name>NginxService</fh:name>
>>       <fh:tools>NONE</fh:tools>
>>       <fh:restart>true</fh:restart>
>>       <fh:config>
>>         <cmdPath/>
>>         <logsDir/>
>>         <pidDir/>
>>         <confPath/>
>>         <getAccLog>false</getAccLog>
>>       </fh:config>
>>     </fh:service>
>>   </proxyServer>
>>   <webServer>
>>     <fa:hostConfig>
>>       <fa:host/>
>>       <fa:hostPorts/>
>>       <enabled>true</enabled>
>>       <cpus>0</cpus>
>>       <tools/>
>>       <userCommands/>
>>     </fa:hostConfig>
>>     <fh:service>
>>       <fh:name>RailsService</fh:name>
>>       <fh:tools>NONE</fh:tools>
>>       <fh:restart>true</fh:restart>
>>       <fh:config>
>>         <type/>
>>         <appDir/>
>>         <cmdPath/>
>>         <logsDir/>
>>         <pidsDir/>
>>         <numInstances/>
>>         <rakePath/>
>>       </fh:config>
>>     </fh:service>
>>   </webServer>
>>   <dbServer>
>>     <fa:hostConfig>
>>       <fa:host>192.168.1.16</fa:host>
>>       <enabled>true</enabled>
>>       <cpus>0</cpus>
>>       <tools>vmstat 10; nicstat -i eth0 1</tools>
>>       <userCommands/>
>>     </fa:hostConfig>
>>     <dbDriver>com.mysql.jdbc.Driver</dbDriver>
>>     <connectURL>jdbc:mysql://192.168.1.16/olio?user=olio&password=olio&relaxAutoCommit=true&sessionVariables=FOREIGN_KEY_CHECKS=0</connectURL>
>>     <reloadDB>true</reloadDB>
>>     <scale>100</scale>
>>     <fh:service>
>>       <fh:name>MySQLService</fh:name>
>>       <fh:tools/>
>>       <fh:restart>false</fh:restart>
>>       <fh:config>
>>         <serverHome/>
>>         <user>olio</user>
>>         <password>olio</password>
>>         <confPath/>
>>       </fh:config>
>>     </fh:service>
>>   </dbServer>
>>   <dataStorage>
>>     <fa:hostConfig>
>>       <fa:host/>
>>       <enabled>true</enabled>
>>       <cpus>0</cpus>
>>       <tools>NONE</tools>
>>       <userCommands/>
>>     </fa:hostConfig>
>>     <reloadMedia>false</reloadMedia>
>>     <mediaDir>/filestore</mediaDir>
>>   </dataStorage>
>>   <cacheServers>
>>     <fa:hostConfig>
>>       <fa:host/>
>>       <fa:hostPorts/>
>>       <enabled>false</enabled>
>>       <cpus>0</cpus>
>>       <tools>NONE</tools>
>>       <userCommands/>
>>     </fa:hostConfig>
>>     <fh:service>
>>       <fh:name>MemcachedService</fh:name>
>>       <fh:tools>MemcacheStats -i 10</fh:tools>
>>       <fh:restart>true</fh:restart>
>>       <fh:config>
>>         <cmdPath>/usr/lib/memcached</cmdPath>
>>         <serverMemSize>256</serverMemSize>
>>       </fh:config>
>>     </fh:service>
>>   </cacheServers>
>> </olio>
>>
>>
>>
>>
>>
>> regards,
>> --
>> Bruno Guimarães Sousa
>> www.ifba.edu.br
>> PONTONET - DGTI - IFBA
>> Ciência da Computação UFBA
>> Registered Linux user #465914
>>
>>
>> On Mon, Jun 7, 2010 at 1:00 AM, Shanti Subramanyam <
>> shanti.subramanyam@gmail.com> wrote:
>>
>>> You may have to increase the open file descriptor limit on your system.
>>> Check 'ulimit' if you are running on Unix.
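>>>
>>> A rough sketch of doing that on Linux (65536 is only an example value):
>>>
>>>   $ ulimit -n           # show the current soft limit (1024 is a common default)
>>>   $ ulimit -n 65536     # raise it in the shell that launches faban
>>>   # To make it permanent, add lines like these to /etc/security/limits.conf:
>>>   #   *   soft   nofile   65536
>>>   #   *   hard   nofile   65536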
>>>
>>> Shanti
>>>
>>>
>>> 2010/6/5 Bruno Guimarães Sousa <brgsousa@gmail.com>
>>>
>>> The first faban run goes  OK.
>>>> But, from the second run on, there is an error:
>>>> 10:34:51 SEVERE exception Endpoint ServerSocket[addr=0.0.0.0/0.0.0.0,port=0,localport=9980]
>>>> ignored exception: java.net.SocketException: Too many open files
>>>> I think it is a Tomcat or faban error. Did anyone have this same problem?
>>>> (geocoder and faban are on the same computer)
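>>>>
>>>> One way to confirm which process is running out of descriptors between
>>>> runs, as a sketch (assuming a Linux /proc filesystem; <pid> is a
>>>> placeholder for the JVM's process id):
>>>>
>>>>   $ pgrep -f faban              # find the faban/tomcat JVM pid
>>>>   $ ls /proc/<pid>/fd | wc -l   # open descriptors; compare with ulimit -n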
>>>>
>>>> regards,
>>>>  --
>>>> Bruno Guimarães Sousa
>>>> www.ifba.edu.br
>>>> PONTONET - DGTI - IFBA
>>>> Ciência da Computação UFBA
>>>> Registered Linux user #465914
>>>>
>>>
>>>
>>
>
