From: Bacon Tuna <baconandtuna@gmail.com>
To: users@trafficserver.apache.org
Date: Thu, 17 Apr 2014 12:58:42 -0700
Subject: Re: ATS x datacenter(s)

I'm actually not as concerned about backend availability as I am about the gateways. If the gateways die, nothing works. Here's a little more info in a scenario format:

Client requests an object from ATS.
The request is handled by the geographic load balancer and directed to a cluster in data center 'A'.
The cluster fetches the object and caches it.
The client requests the object again.
The request is handled by the geo load balancer and directed to the cluster in data center 'B'.
The client requests that the object be purged from the cache.
The purge request is handled by the geo load balancer and directed to the cluster in data center 'A'.
That cluster invalidates the object.

At this point the cache across 'A' and 'B' is inconsistent. Further requests would either retrieve a "stale" object or fetch a new object and cache it. Both objects would persist.
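One approach I've been sketching (just a sketch on my end, not something ATS does for you) is to have the purge step fan out to every data center's cluster directly instead of going through the geo load balancer, so both 'A' and 'B' invalidate the object. The endpoint names below are made up, and it assumes each cluster is configured to accept the HTTP PURGE method from the purging host:

    import urllib.request
    import urllib.error

    # Hypothetical per-data-center entry points; real hostnames will differ.
    DATACENTER_ENDPOINTS = [
        "http://ats-a.example.com",  # cluster in data center 'A'
        "http://ats-b.example.com",  # cluster in data center 'B'
    ]

    def purge_everywhere(path):
        """Send an HTTP PURGE for the object to every cluster, not just one."""
        for base in DATACENTER_ENDPOINTS:
            req = urllib.request.Request(base + path, method="PURGE")
            try:
                with urllib.request.urlopen(req, timeout=5) as resp:
                    print(base, resp.status)
            except urllib.error.HTTPError as err:
                # A 404 here just means that cluster never cached the object.
                print(base, err.code)

    purge_everywhere("/some/object.js")

Whether that extra machinery is worth it, versus short TTLs and living with the window of inconsistency, is really what I'm asking.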
Do folks just preach "eventual consistency", or handle this via some other means?

On Thu, Apr 17, 2014 at 11:48 AM, Reindl Harald <h.reindl@thelounge.net> wrote:
>
> On 17.04.2014 20:41, Bacon Tuna wrote:
> > I haven't had luck finding white papers or documentation discussing
> > strategies for running ATS clusters across multiple data centers or
> > availability zones to support high availability.
> >
> > Any chance someone on here can point me in the right direction?
> >
> > Do people load balance multiple clusters with separate caches, or do
> > they employ some mechanism to maintain consistency across regions?
>
> I would leave the caches alone; they fill themselves and hold nothing important.
>
> If you only have one backend and some ATS instances for load balancing,
> you are completely fine: you spread your load and take a lot of it away
> from the backend server, possibly meaning you have no need for more
> backend servers. High availability of the backend itself can be handled
> by something like VMware HA or whatever does the failover.
>
> If you need more than one backend server, you should take much more care
> to keep sessions synchronized between the backends, to avoid the trouble
> of requests from the same client landing on two different backend servers
> and logging the client out.
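(On the session point in the quote above: a bare-bones illustration of pinning a client to one backend follows, with made-up backend names. A real setup would more likely rely on the load balancer's own session-affinity features than on hand-rolled code like this.)

    import hashlib

    BACKENDS = ["backend-1.example.com", "backend-2.example.com"]  # assumed hosts

    def pick_backend(session_id: str) -> str:
        """Hash a stable client key (session cookie or client IP) to one backend,
        so the same client always lands on the same backend server."""
        digest = hashlib.sha1(session_id.encode("utf-8")).digest()
        return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

    # The same session cookie always maps to the same backend:
    print(pick_backend("JSESSIONID=abc123"))

The caveat with plain hash-modulo pinning is that adding or removing a backend remaps most clients; consistent hashing avoids that at the cost of a bit more code.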