httpd-dev mailing list archives

From Jan Kaluža <jkal...@redhat.com>
Subject PR 58267: Regression in 2.2.31 caused by r1680920
Date Mon, 24 Aug 2015 14:47:54 GMT
Hi,

unfortunately, r1680920 brought the undesired behavior described in PR 
58267 to 2.2.x. The bug is well described in the PR, so I won't repeat 
it in this email.

I have tried to debug it and I think the problem is that we also use 
server->server_hostname to compute the hash in 
ap_proxy_set_scoreboard_lb. This hash is used to find the proper 
ap_scoreboard field.

It all happens in mod_proxy.c:child_init's scope.
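
To illustrate the mechanism, here is a minimal standalone sketch (not 
the actual proxy_util.c code; apr_hashfunc_default, the key layout and 
the hostnames are just assumptions for the example) of why mixing 
server_hostname into the hash makes the same BalancerMember map to 
different scoreboard slots:

    #include <stdio.h>
    #include <string.h>
    #include "apr_hash.h"
    #include "apr_strings.h"

    /* Illustrative only: a slot hash that mixes server_hostname into the
     * key gives different values for the same worker depending on which
     * server_rec it is initialized for. */
    static unsigned int slot_hash(const char *worker_name,
                                  const char *server_hostname)
    {
        char key[256];
        apr_ssize_t len;

        apr_snprintf(key, sizeof(key), "%s%s", server_hostname, worker_name);
        len = (apr_ssize_t)strlen(key);
        return apr_hashfunc_default(key, &len);
    }

    int main(void)
    {
        const char *worker = "http://backend1.example.com"; /* hypothetical member */

        /* same worker, two different server_hostname values -> two
         * different hashes, i.e. two different scoreboard slots */
        printf("global server: %u\n", slot_hash(worker, "global.example.com"));
        printf("virtualhost:   %u\n", slot_hash(worker, "vhost1.example.com"));
        return 0;
    }

So the slot initialized for the global server does not match the one 
looked up with the VirtualHost's hostname, and a second slot is needed 
for the same member.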

If the "<Proxy Balancer://foobar>" has been defined, all the 
BalancerMembers are initialized with the hash computed with usage of 
global server->server_hostname.

Later, if "ProxyPass /foobar/ Balancer://foobar/" is used in a 
VirtualHost, ap_proxy_initialize_worker_share is called again with 
server->server_hostname set to the VirtualHost's hostname.

Now, the root of the error is that the scoreboard size is static (set to 
proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT), but it is not 
incremented when a ProxyPass with a balancer is used in a virtualhost. 
This leads to a lack of space in the scoreboard when balancers are used 
in multiple virtualhosts.
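
For reference, a configuration along these lines should trigger it (the 
hostnames and backend URLs are made up for illustration):

    <Proxy Balancer://foobar>
        BalancerMember http://backend1.example.com
        BalancerMember http://backend2.example.com
    </Proxy>

    <VirtualHost *:80>
        ServerName vhost1.example.com
        ProxyPass /foobar/ Balancer://foobar/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName vhost2.example.com
        ProxyPass /foobar/ Balancer://foobar/
    </VirtualHost>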

I think there are two possible fixes:

1) Do not use server->server_hostname when computing the hash that is 
used to determine the right scoreboard field. I think this would fix the 
bug, but I'm not sure what would happen when you define 2 balancers with 
the same name in different virtualhosts...

On the other hand, when there is a global Proxy balancer, it makes sense 
to use the same worker->s for all the ProxyPass directives in 
virtualhosts.

2) Increment proxy_lb_workers according to the number of workers in the 
balancer when "ProxyPass /foobar/ Balancer://foobar/" is used in a 
VirtualHost (rough sketch below). The scoreboard would then have the 
right size and ap_proxy_set_scoreboard_lb would not fail.
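
A rough sketch of the idea behind option 2, assuming the counting 
happens in mod_proxy.c (where proxy_lb_workers is defined) while walking 
each server's proxy configuration; the helper name is hypothetical and I 
have not checked this against the actual 2.2.x sources yet:

    #include "mod_proxy.h"

    /* Hypothetical helper for option 2: account for every balancer member
     * referenced from a (virtual) server's configuration, so that the
     * scoreboard allocation of
     * proxy_lb_workers + PROXY_DYNAMIC_BALANCER_LIMIT slots is big enough. */
    static void account_balancer_workers(proxy_server_conf *conf)
    {
        int i;
        proxy_balancer *balancer = (proxy_balancer *)conf->balancers->elts;

        for (i = 0; i < conf->balancers->nelts; i++, balancer++) {
            /* each member would get its own lb_score slot */
            proxy_lb_workers += balancer->workers->nelts;
        }
    }

A real patch would of course have to avoid double-counting balancers 
that are merged or shared between the global server and virtualhosts.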


Since this is 2.2.x, which should probably stay stable without big 
changes, I'm asking the list for more opinions... I will try to 
implement a patch for option 2) tomorrow and see if it really fixes the 
issue.

Regards,
Jan Kaluza
