activemq-commits mailing list archives

From "Lionel Cons (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (APLO-170) Incorrect metrics in latest snapshot
Date Thu, 22 Mar 2012 06:48:23 GMT

    [ https://issues.apache.org/jira/browse/APLO-170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13235403#comment-13235403 ]

Lionel Cons commented on APLO-170:
----------------------------------

Yes, we may have had some durable subscriptions, but this is not enough to explain the discrepancy.

Most queues were empty and, even with double counting, queue_items=280614806 was off by
several orders of magnitude. This gives the impression that some counters are not decremented
in some situations.
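
For what it's worth, here is a quick back-of-the-envelope check against the counters
quoted below (a sketch in Python; that queue_items should roughly equal enqueues minus
dequeues for drained queues is my assumption about the bookkeeping, not documented
behaviour):

    # Consistency check on the /dest-metrics figures quoted below.
    # Assumption: with mostly-empty queues, enqueue_item_counter minus
    # dequeue_item_counter should be close to 0 and to queue_items
    # (expired_item_counter is 0 here, so expiries cannot account for it).
    enqueued    = 2903640770   # enqueue_item_counter
    dequeued    = 1308687850   # dequeue_item_counter
    queue_items = 280614806    # reported queue_items

    delta = enqueued - dequeued
    print(delta)         # 1594952920 -- nowhere near 0
    print(queue_items)   # 280614806  -- matches neither the delta nor 0

Neither number is anywhere near zero, which is what mostly-empty queues should show.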

However, your comment raises another question: how can I get from Apollo the amount of disk
space it is using (for monitoring purposes)? We are currently using queue_size for this but,
from your explanation, this number does not match what we want.
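
For reference, this is roughly what our monitoring probe does today (a minimal sketch;
the admin port, virtual host name and the exact JSON path for the /dest-metrics
aggregate are assumptions from our setup, not something confirmed in this ticket):

    # Minimal probe sketch: poll the aggregate destination metrics over
    # the REST API and read queue_size. Assumptions: the admin console
    # listens on 61680, the virtual host is "localhost", and
    # authentication is omitted.
    import json
    import urllib.request

    URL = "http://localhost:61680/broker/virtual-hosts/localhost/dest-metrics.json"

    with urllib.request.urlopen(URL) as resp:
        metrics = json.load(resp)

    # queue_size is the total size in bytes of the queued messages, which,
    # per your explanation, is not the same thing as the on-disk footprint.
    print(metrics["queue_items"], metrics["queue_size"])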
                
> Incorrect metrics in latest snapshot
> ------------------------------------
>
>                 Key: APLO-170
>                 URL: https://issues.apache.org/jira/browse/APLO-170
>             Project: ActiveMQ Apollo
>          Issue Type: Bug
>         Environment: Apollo 99-trunk-20120306.040517-2
>            Reporter: Lionel Cons
>
> After running a stress test (with stomp-benchmark) on the latest Apollo snapshot, we
> get incorrect metrics via the REST API.
> Most queues are empty but we get for /dest-metrics:
> {
>   'consumer_count' => 2,
>   'consumer_counter' => 3747,
>   'current_time' => '1331811113155',
>   'dequeue_item_counter' => 1308687850,
>   'dequeue_size_counter' => '1676664197980',
>   'dequeue_ts' => '1331811112533',
>   'enqueue_item_counter' => 2903640770,
>   'enqueue_size_counter' => '2557292455212',
>   'enqueue_ts' => '1331811113155',
>   'expired_item_counter' => 0,
>   'expired_size_counter' => 0,
>   'expired_ts' => '1331811094462',
>   'nack_item_counter' => 219021,
>   'nack_size_counter' => 86391291,
>   'nack_ts' => '1331811094462',
>   'objects' => 12,
>   'producer_count' => 77,
>   'producer_counter' => 4838,
>   'queue_items' => 280614806,
>   'queue_size' => '236922566764',
>   'swap_in_item_counter' => 326917022,
>   'swap_in_size_counter' => '345397442494',
>   'swap_out_item_counter' => 610143466,
>   'swap_out_size_counter' => '583772449904',
>   'swapped_in_items' => 163303,
>   'swapped_in_size' => 88326399,
>   'swapped_in_size_max' => 819200000,
>   'swapping_in_size' => 1752840,
>   'swapping_out_size' => 1310961,
> }
> Note that queue_size=236922566764, so roughly 220GB. OTOH, we see:
> # du -hs data/
> 9.4G	data/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
