cloudstack-issues mailing list archives

From "Kevin McCormick (JIRA)" <>
Subject [jira] [Commented] (CLOUDSTACK-8678) OOM Kills Guests
Date Thu, 06 Aug 2015 21:08:05 GMT


Kevin McCormick commented on CLOUDSTACK-8678:

Daan, I work with Josh, and we've been doing some work on this. We found an agent property
that looked like it was exactly what we wanted: host.reserved.mem.mb. As far as I can tell,
this property is completely undocumented, but it's been in the code since KVM support was
first added. That property ends up being sent by the agent to the management server as dom0MinMem
during StartupRoutingCommand, and has a default value of 768MB. Unfortunately, the management
server seems to completely ignore it. I tested creating an instance that used up all but about
80MB on an empty host, and it deployed to that host just fine.
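To make the observed behavior concrete, here is a minimal, purely illustrative sketch (not actual CloudStack code; function and parameter names are hypothetical) of what a host capacity check looks like with and without the reservation honored:

```python
def host_fits(total_mb, allocated_mb, requested_mb, reserved_mb=768):
    """Return True if a VM of requested_mb fits on the host.

    reserved_mb models a dom0-style reservation like host.reserved.mem.mb
    (default 768 MB). The deployment observed above behaves as if the
    management server effectively uses reserved_mb=0.
    """
    free_mb = total_mb - allocated_mb - reserved_mb
    return requested_mb <= free_mb

# A VM leaving only ~80 MB of headroom on an empty host:
print(host_fits(98_000, 0, 97_920, reserved_mb=0))  # True: accepted when reservation is ignored
print(host_fits(98_000, 0, 97_920))                 # False: rejected if the 768 MB default were honored
```

The second call is what we would expect if dom0MinMem were actually consulted during allocation.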

vm.memballoon.disable doesn't sound like it will do anything unless mem.overprovisioning.factor
is >1, which we don't want to do.
mem.overprovisioning.factor can't be <1 (which is fine, setting that <1 would be more
of a hack than a real solution IMO).
cluster.memory.allocated.capacity.notificationthreshold & cluster.memory.allocated.capacity.disablethreshold
don't really do the job either, as they are cluster-wide. It's very possible for there to
be plenty of RAM available in the cluster, but the allocator puts a VM on a host that doesn't
quite have room for it. Those thresholds might be more viable if there were an allocator that
spread VMs across hosts based on largest available RAM.
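For illustration, a spread-style placement policy like the one described above could be sketched as follows (a toy example, not a real CloudStack allocator; names are hypothetical):

```python
def pick_host_spread(hosts, requested_mb):
    """Pick the host with the most free RAM that still fits the VM.

    hosts: list of (name, free_mb) tuples.
    Returns the chosen host name, or None if no host has room.
    """
    candidates = [(free, name) for name, free in hosts if free >= requested_mb]
    if not candidates:
        return None
    _, name = max(candidates)  # largest free RAM wins
    return name

hosts = [("kvm1", 2_000), ("kvm2", 40_000), ("kvm3", 9_000)]
print(pick_host_spread(hosts, 8_000))  # kvm2: leaves the most headroom
```

With this policy a host is only squeezed tight when no roomier host exists, which reduces the chance of landing a VM on a host with barely enough free RAM.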

So, as an additional functional requirement, I think there needs to be a way for KVM hosts to
reserve some RAM for themselves: probably either honoring host.reserved.mem.mb & dom0MinMem,
or adding something along the lines of _host_.memory.allocated.capacity.notificationthreshold & _host_.memory.allocated.capacity.disablethreshold.
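A per-host variant of the existing cluster thresholds could look roughly like this sketch (the threshold names and values are illustrative only, not real CloudStack configuration keys):

```python
def host_allocation_state(total_mb, allocated_mb,
                          notify_threshold=0.85, disable_threshold=0.95):
    """Classify a single host by its allocated-memory ratio.

    Mirrors the cluster-wide notificationthreshold/disablethreshold
    settings, but evaluated per host instead of per cluster.
    """
    ratio = allocated_mb / total_mb
    if ratio >= disable_threshold:
        return "disabled"  # allocator should skip this host entirely
    if ratio >= notify_threshold:
        return "alert"     # raise a capacity warning
    return "ok"

print(host_allocation_state(98_000, 96_000))  # disabled: ~98% allocated
print(host_allocation_state(98_000, 49_000)) # ok: ~50% allocated
```

The point is that the disable decision fires on the individual host's ratio, so a nearly full host is skipped even when the cluster as a whole has plenty of free RAM.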

> OOM Kills Guests
> ----------------
>                 Key: CLOUDSTACK-8678
>                 URL:
>             Project: CloudStack
>          Issue Type: Bug
>      Security Level: Public (Anyone can view this level - this is the default.)
>          Components: Hypervisor Controller, KVM
>    Affects Versions: 4.4.2
>         Environment: Intel Xeon Quad Core CPU L5520 @ 2.27GHz
> 98 GB RAM
> Ubuntu 14.04
> Running CloudStack 4.4.2
>            Reporter: Josh Harshman
>            Assignee: Daan Hoogland
>            Priority: Critical
> We have several KVM nodes running Cloudstack 4.4.2. Sometimes an instance with X amount
of RAM provisioned will be started on a host that has X+a small amount of RAM free. The kernel
OOM killer will eventually kill off the instance. Has anyone else seen this behavior? Is there
a way to reserve RAM for use by the host instead of by CloudStack? Looking at the numbers
in the database and the logs, Cloudstack is trying to use 100% of the RAM on the host.
> Any thoughts would be appreciated.
> Thank you,

This message was sent by Atlassian JIRA
