guacamole-user mailing list archives

From Nick Couchman <>
Subject Re: Server Out Of Memory
Date Fri, 04 Aug 2017 00:51:44 GMT
Okay, let me try to take these one at a time...

On Thursday, August 3, 2017, 8:16:09 PM EDT, James Fraser <> wrote:

> I recently upgraded to 0.9.13 and am experiencing an issue with my Production server.

> This is potentially a Tomcat issue or JDBC driver issue.

What extensions do you have loaded?  Looks like MySQL JDBC - anything else?

> WARNING: The web application [guacamole] appears to have started a thread named [Abandoned
connection cleanup thread] but has failed to stop it. This is very likely to create a memory
leak. Stack trace of thread:
> java.lang.Object.wait(Native Method)
> java.lang.ref.ReferenceQueue.remove(
I use PostgreSQL and see these messages periodically, too, but they've never led to any adverse
effects.

> Which leads to

> Aug 03, 2017 10:04:16 PM$Poller run
> java.lang.OutOfMemoryError: Java heap space
Yeah, that's not good, but it doesn't mean your server is running out of memory - it means
the Java VM is running out of heap space.  Those are different things.  What parameters do
you have set for memory in Java in your Tomcat startup?  Look for the -Xmx flag either in
the ps output for the Java PID associated with Tomcat or in the Tomcat startup files.
 If you don't see it, then the default is 1/4 of your total RAM, so 1GB in your case.  You can add the
-Xmx flag to the Java runtime parameters for Tomcat and bump it up to 2GB or something like
that and see if that helps.  If you still run out of heap after bumping it up to 2 or 3GB, then you
may have run into a memory leak, but I'd give that a shot first.  When you set it, you can
use abbreviations for various byte multiples - for example, -Xmx1024m is 1024MB, or 1GB.  So
you might want to start with -Xmx2048m to bump up to 2GB and see if that helps.
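For concreteness, here's one way to check and raise the heap cap.  The file paths are
assumptions based on a packaged tomcat8 install (your /var/log/tomcat8 prompt suggests one)
and may differ on your system:

```shell
# See whether the running Tomcat JVM already carries an -Xmx flag:
ps -o args= -C java | tr ' ' '\n' | grep -e '-Xmx' \
  || echo "no -Xmx set (default cap: 1/4 of RAM)"

# On Debian/Ubuntu tomcat8 packages, JAVA_OPTS usually lives in
# /etc/default/tomcat8; on a stock Tomcat tarball, use
# $CATALINA_BASE/bin/setenv.sh instead.  (Both locations are
# assumptions -- adjust to your install.)
echo 'JAVA_OPTS="$JAVA_OPTS -Xmx2048m"' | sudo tee -a /etc/default/tomcat8
sudo systemctl restart tomcat8
```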

> The server has 4GB of ram

I ran Guacamole 0.9.12 and the development versions of 0.9.13 on a system with 4GB of RAM
for quite some time and never had any issues.  How many connections do you have?  How many
users connecting concurrently?

> root@MGMT-GUAC-01:/var/log/tomcat8# free -h
>              total        used        free      shared  buff/cache   available
> Mem:           3.4G        939M        128M         22M        2.3G        2.1G
> Swap:            0B          0B          0B

> A restart of tomcat resolves the issue for a period of time; I have just written a cron
job that restarts tomcat on appearance of this issue.

I've done Linux system admin/engineering for many years, and, from my point of view, those
numbers from the output of free look just fine.  While it's easy to look at the "free" column,
see 128M, and think your system is short on RAM, the "available" column is what really
counts.  Linux uses otherwise-idle RAM to cache and buffer things like disk and network I/O, and
your system is consuming 2-ish GB for that.  Memory allocated for buffer/cache can be easily
freed when applications need it, which is why the available column shows 2.1GB.  So, at the moment
you ran the "free" command, the system itself was fine on RAM
- it's most likely a Java heap size issue (the -Xmx flag needs to be set).
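If you want that figure without mentally subtracting buff/cache, the kernel computes it
for you - free(1) just reads it out of /proc/meminfo:

```shell
# MemAvailable is the kernel's estimate of memory that new applications
# could use without swapping -- reclaimable buffer/cache is already
# counted.  It's the same number free(1) shows in its "available" column.
grep MemAvailable /proc/meminfo

# Print it in GB (the /proc/meminfo value is in kB):
awk '/MemAvailable/ {printf "available: %.1f GB\n", $2/1048576}' /proc/meminfo
```

A cron job that alerts (or restarts Tomcat) could key off this value instead of "free",
which would avoid false alarms when the cache is simply warm.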