From: "Owens, Steve" <Steve.Owens@disney.com>
To: users@trafficserver.apache.org
Date: Fri, 13 Apr 2012 13:10:10 -0700
Subject: Request for suggestions: failure to scale in cluster mode

Hello All,

We are benchmarking our ATS configuration, which uses a custom ATS plugin, and we are observing some interesting behavior.

We are configured to hit about 5 URLs in Traffic Server, each returning a payload of about 500 bytes. These URIs are being cached using cache-control headers such that the requests are almost 100% served from cache.

Even so, we can't seem to push the cluster past 2800 transactions per second. The interesting thing is that when we reduce the back-end node count from 2 to 1, the single node sustains the load at about 60% CPU, whereas with 2 nodes we see about 30% CPU on each node.

Can anyone offer suggestions on how to get this cluster to scale linearly with respect to the number of nodes? We know that our load-test software is capable of generating in excess of 20K tps, so this seems to be some sort of contention in the cluster itself.

Thoughts?

Best Regards,

Steve Owens
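[A back-of-envelope check of the CPU figures above is worth spelling out: the aggregate CPU spent per transaction is identical in the one-node and two-node configurations, which is consistent with the throughput being capped by something other than CPU (e.g. contention on the cluster interconnect). A minimal sketch of that arithmetic, using only the numbers quoted in this message:]

```python
import math

# Observed figures from the benchmark described above.
tps = 2800                 # cluster-wide transactions per second (same in both setups)
one_node_cpu = 0.60        # CPU utilization with a single node
two_node_cpu_each = 0.30   # CPU utilization per node with two nodes

# Aggregate CPU consumed per transaction in each configuration.
cpu_per_txn_one = one_node_cpu / tps
cpu_per_txn_two = (2 * two_node_cpu_each) / tps

# The two figures match: adding a node added capacity that goes unused,
# so throughput is bounded by something other than CPU.
print(math.isclose(cpu_per_txn_one, cpu_per_txn_two))
```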
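[Editor's note, for readers hitting the same wall: in ATS of this era, full cache clustering forwards requests for objects owned by another node over the cluster interconnect, so even a near-100% cache-hit workload can generate inter-node RPC on every transaction. One diagnostic step is to compare throughput with clustering reduced to management-only mode. A hedged records.config fragment, to be verified against the documentation for your ATS version:]

```
# records.config (sketch; confirm the variable and values for your release)
# 1 = full cache clustering, 2 = management-only clustering, 3 = no clustering
CONFIG proxy.local.cluster.type INT 2
```

[If throughput then scales with node count, the bottleneck was the cluster cache protocol rather than the nodes themselves.]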