cloudstack-dev mailing list archives

From Koushik Das <koushik....@citrix.com>
Subject Re: Strange bug? "spam" in management log files...
Date Thu, 04 Jun 2015 16:53:36 GMT
This is expected in a clustered MS setup. What is the distribution of HV hosts across these
MS (check the host table in the db for the MS id)? The MS owning an HV host processes all
commands for that host.
Grep for the sequence numbers (e.g. 73-7374644389819187201) in both MS logs to correlate.
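A minimal sketch of the two checks above. The log path and the `cloud` database/column names reflect a default ACS install but are assumptions here, as are the MySQL credentials; adjust for your deployment. The demo at the end runs the grep against a sample snippet so the command is self-contained.

```shell
#!/bin/sh
# 1) See which MS owns which HV host (credentials are hypothetical):
#    mysql -u cloud -p -e \
#      "SELECT id, name, mgmt_server_id FROM cloud.host WHERE removed IS NULL;"
#
# 2) Count occurrences of one sequence number; run the same grep on both
#    MS nodes to correlate (assumed default log path):
#    grep -c 'Seq 73-7374644389819187201' \
#        /var/log/cloudstack/management/management-server.log

# Self-contained demo of step 2 on a sample log snippet:
cat > /tmp/sample-ms.log <<'EOF'
2015-06-04 16:55:04,089 DEBUG Seq 73-7374644389819187201: Resp: Routing to peer
2015-06-04 16:55:04,129 DEBUG Seq 1-3297479352165335041: Resp: Routing to peer
2015-06-04 16:55:04,129 DEBUG Seq 73-7374644389819187201: Resp: Routing to peer
EOF
grep -c 'Seq 73-7374644389819187201' /tmp/sample-ms.log   # prints 2
```

If the count for the same sequence number keeps growing on both nodes at the same time, the two MS are bouncing the response back and forth rather than one of them delivering it.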



On 04-Jun-2015, at 8:30 PM, Andrija Panic <andrija.panic@gmail.com> wrote:

> Hi,
> 
> I have 2 ACS MGMT servers, load-balanced properly (AFAIK), and sometimes
> the first node logs an extreme number of the following line entries,
> which produces many GB of logs in just a few hours or less. (As you can
> see here they are not even that frequent, but sometimes it gets really
> crazy with the speed/number logged per second:)
> 
> 2015-06-04 16:55:04,089 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-29:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,129 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-28:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,129 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-8:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,169 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-26:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,169 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-30:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,209 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-27:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,209 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-2:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,249 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-4:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,249 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-7:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,289 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-3:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,289 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-5:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,329 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-1:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,330 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-15:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,369 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-11:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,369 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-17:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,409 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-14:null) Seq 1-3297479352165335041: MgmtId
> 90520745449919: Resp: Routing to peer
> 2015-06-04 16:55:04,409 DEBUG [c.c.a.m.ClusteredAgentManagerImpl]
> (AgentManager-Handler-12:null) Seq 73-7374644389819187201: MgmtId
> 90520745449919: Resp: Routing to peer
> 
> 
> We have haproxy VIP, to which SSVM connects, and all cloudstack agents
> (agent.properties file).
> 
> Any suggestions on how to avoid this? I noticed that when I turn off the
> second ACS MGMT server and then restart the first one (restart
> cloudstack-management), it stops and behaves nicely :)
> 
> This is ACS 4.5.1, Ubuntu 14.04 for mgmt nodes.
> 
> Thanks,
> -- 
> 
> Andrija Panić
