httpd-users mailing list archives

From Mohit Anchlia <>
Subject Re: [users@httpd] Active Active Data center and stickyness
Date Tue, 29 Mar 2011 02:42:46 GMT
Thanks! We are using F5 GTM as the global load balancer, with LTM, so
global load balancing is not a problem. The problem is user stickiness
that needs to persist beyond an individual session.

I forgot to mention that these connections come from servers speaking
HTTP rather than from browsers, which is why cookies probably will not
work here.

On Mon, Mar 28, 2011 at 5:50 PM, Ben Timby <> wrote:
> On Mon, Mar 28, 2011 at 8:30 PM, Mohit Anchlia <> wrote:
>> Apache 2:
>> We use Apache 2 and we have 2 data centers. The problem is that both
>> data centers are active, so if a user uploads a file to site A, for
>> example, that user can then be directed to site B. Files are kept in
>> sync asynchronously, and it can take as long as 1 hour to bring them
>> in sync on the other data center. This presents a unique problem: how
>> to provide user stickiness across data centers, with 2 different
>> clusters of nodes, each cluster running nodes only in its own data center.
>> I am sending this out in case people have solved it already or have
>> some suggestions on how this can be done.
> You don't provide enough information on how you have both data centers
> "active". My guess is that you are using DNS multiple A records, but
> who knows.
> If you are using DNS, then my suggestion would be to allow users to visit:
>, which is distributed to both data centers, then after
> login, redirect them to:
> - or -
> The downside is that there is no automatic failover; the user would have
> to go back to and log back in to be redirected to an
> accessible data center. For most cases this would be acceptable.
> Another way to do it is to put an intelligent load balancer at each
> data center. This load balancer would then send connections to its
> backend servers. You can then configure all the backend servers at
> both data centers into the load balancer. Use sticky connections in
> the load balancer to send the user to the correct backend (perhaps at
> the other data center).
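With httpd itself acting as that load balancer, the pool could look like the following mod_proxy_balancer sketch. The backend hostnames, route tags, and the ROUTEID cookie name are assumptions for illustration.

```apache
# Sketch: one balancer pool containing backends from BOTH data centers;
# the route tag on each member is what stickiness keys on.
<Proxy balancer://appcluster>
    BalancerMember http://app1.dc-a.example.com:8080 route=dca1
    BalancerMember http://app1.dc-b.example.com:8080 route=dcb1
</Proxy>
# A request carrying a ROUTEID cookie ending in ".dca1" or ".dcb1" goes
# back to the matching member, even one in the other data center.
ProxyPass /app balancer://appcluster/ stickysession=ROUTEID
ProxyPassReverse /app balancer://appcluster/
```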
> If you have good enough load balancing software, in the case of no
> stickiness, you could prefer the local backend servers over the remote
> ones, avoiding a long hop to the other data center. Also, you can use
> the presence of a cookie to determine stickiness. Then you can stick a
> user to a group of backend servers AFTER they upload something.
> Something like this.
> 1. User hits your site.
> 2. They are routed to a random data center.
> 3. They have no association, and thus are routed to a local backend.
> 4. User keeps browsing.
> 5. User performs a POST request to your upload script, it writes a
> cookie to their browser which expires in 1h.
> 6. Subsequent requests hit the load balancer and are routed to the
> preferred backend (where the upload occurred).
> 7. One hour later, the user is released and free to "float" again.
> This is probably the best solution as a user will always use the
> shortest path unless there is a reason not to (cookie). It also allows
> you to control the stickiness at the application level, so you can be
> selective. Failover occurs automatically, as the browser will connect
> to a live data center; however, their data may not live there. Such is
> life.
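Steps 5 and 6 above can be sketched in httpd config as well, assuming httpd fronts the backends via mod_proxy_balancer as in the pool idea earlier. The /upload path, the ROUTEID cookie name, and the one-hour lifetime are assumptions; BALANCER_WORKER_ROUTE is the environment variable mod_proxy_balancer sets to the route of the member that served the request.

```apache
# Sketch: issue a one-hour routing cookie only on upload POSTs, so the
# user "floats" freely until they actually upload something.
<Location /upload>
    SetEnvIf Request_Method POST UPLOAD_POST
    # Pin the browser to the member that handled this upload for 1 hour.
    Header set Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; Max-Age=3600; path=/" env=UPLOAD_POST
</Location>
```

Once the cookie expires, requests fall back to the local backend preference, which matches step 7.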
> This is not even close to all the options available to you, but these
> are the two simplest ones I could come up with without any specifics.
> ---------------------------------------------------------------------
> The official User-To-User support forum of the Apache HTTP Server Project.
> See <URL:> for more info.
> To unsubscribe, e-mail:
>   "   from the digest:
> For additional commands, e-mail:

