jackrabbit-dev mailing list archives

From "Jukka Zitting (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (JCR-3588) Response time higher on Node1 with load when Node2 has no load
Date Mon, 06 May 2013 10:38:17 GMT

    [ https://issues.apache.org/jira/browse/JCR-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13649644#comment-13649644 ]

Jukka Zitting commented on JCR-3588:
------------------------------------

The CPU use on the node with no user load is due to the indexer catching up with changes happening
on the other node. For performance reasons the Jackrabbit 2.x clustering architecture keeps
a search index locally on each cluster node (and there's no easy way to share index updates
across nodes), which is why the indexing work needs to be duplicated on each cluster node.
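
For illustration, the relevant pieces live in each node's repository.xml: the Cluster element points every node at the shared journal it replays, while the SearchIndex element keeps the Lucene index on local disk. A minimal sketch (standard Jackrabbit 2.x class names, but the connection values below are placeholders, not the settings from the attached Node1repository.xml/Node2repository.xml):

    <!-- Each node has its own cluster id and replays the shared journal from the database -->
    <Cluster id="node1" syncDelay="2000">
      <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
        <param name="driver" value="com.ibm.db2.jcc.DB2Driver"/>
        <param name="url" value="jdbc:db2://dbhost:50000/JCRDB"/>
        <param name="user" value="jcr"/>
        <param name="password" value="secret"/>
        <param name="databaseType" value="db2"/>
        <param name="schemaObjectPrefix" value="JOURNAL_"/>
        <param name="revision" value="${rep.home}/revision.log"/>
      </Journal>
    </Cluster>

    <!-- Per-workspace search index kept on local disk; each node applies the same index updates itself -->
    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
      <param name="path" value="${wsp.home}/index"/>
    </SearchIndex>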

From the information included here it's hard to tell why you'd experience the behavior you're seeing. Can you reduce the issue to a simple test case that can be reproduced elsewhere?
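
As a starting point, such a test case could be as small as timing plain JCR reads and writes against one node, along these lines (a rough sketch only; the repository URL, credentials and node names are made-up placeholders, not taken from your setup):

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.commons.JcrUtils;

    public class ClusterLoadTest {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: point one run at Node1, another at Node2
            Repository repository = JcrUtils.getRepository("http://node1:8080/jackrabbit/server");
            Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                Node root = session.getRootNode();
                long start = System.currentTimeMillis();
                for (int i = 0; i < 100; i++) {
                    // Write a small node and persist it
                    Node n = root.addNode("perf-" + System.nanoTime(), "nt:unstructured");
                    n.setProperty("payload", "test data " + i);
                    session.save();
                    // Read it back
                    session.getNode(n.getPath()).getProperty("payload").getString();
                }
                System.out.println("100 write/read cycles took "
                        + (System.currentTimeMillis() - start) + " ms");
            } finally {
                session.logout();
            }
        }
    }

Running the same loop with and without load on the other node would show whether the slowdown is reproducible outside your application.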
                
> Response time higher on Node1 with load when Node2 has no load
> --------------------------------------------------------------
>
>                 Key: JCR-3588
>                 URL: https://issues.apache.org/jira/browse/JCR-3588
>             Project: Jackrabbit Content Repository
>          Issue Type: Bug
>          Components: clustering
>    Affects Versions: 2.4.3
>            Environment: CentOS 6.4 running WebSphere Application Server 7.0.0.19. Jackrabbit cluster configuration with 2 WAS servers. Repository on DB2 9.7.
>            Reporter: Cody Burleson
>            Attachments: JackrabbitCluster-ResponseTime.png, Node1repository.xml, Node2repository.xml, Screen Shot 2013-05-03 at 3.49.52 PM.png, Screen Shot 2013-05-03 at 4.04.32 PM.png, Screen Shot 2013-05-03 at 4.14.52 PM.png
>
>
> In our performance analysis we are seeing a strange effect which does not make sense to us. It may or may not be a defect, but we need to understand why it occurs. In a 2-node cluster, we can run a certain load (reading and writing) directly on Node1 and an equivalent load (reading and writing) on Node2. We measure the response time on both nodes, and it's less than 2 seconds. If we stop the load to one of the servers, the response time on the other server triples (with no additional load). See the attached image "JackrabbitCluster-ResponseTime.png".
> The left side of the report shows the period when only Node1 has load and Node2 has none; in this case the response times on Node1 are about 6 seconds. Then, on the right side of the report, we add an equivalent load to Node2 and the response times on Node1 drop to 2 seconds. So the load on Node1 was always consistent, yet ADDING load to Node2 actually improves response time on Node1. Logically, it doesn't make much sense, eh? Someone, please, at least help us understand why this may be happening.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
