phoenix-dev mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-3654) Load Balancer for thin client
Date Wed, 01 Mar 2017 17:45:46 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15890655#comment-15890655
] 

Andrew Purtell commented on PHOENIX-3654:
-----------------------------------------

bq. "If multiple cluster is using same zookeeper ensemble, then the security would be based
on the parent cluster name as present in hbase-site.xml."

We should expand on this: which security concerns, and how are they addressed? Protecting the
ZK znodes with ACLs? Providing configuration on which credentials to use for cluster access?

bq. "The PQS will create an ephemeral node under the a parent node and register itself. [...]
PQS will also keep updating it’s znode with the number of connection it is handling. The
update could be done via creating a child node within its structure"

I believe ephemerals aren't allowed to have children, at least not until container znodes are
offered in a GA version of ZooKeeper. You can write and update a structure as the data of the
ephemeral znode itself, though; you'll probably opt for JSON encoding.
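A minimal sketch of what encoding load info as ephemeral-znode data could look like. The field names ("host", "connections") are illustrative assumptions, not from the design doc, and the hand-rolled JSON stands in for a real library such as Jackson:

```java
// Sketch: encode/decode PQS load info as the data of an ephemeral znode,
// instead of child znodes (which ephemerals cannot have).
public class PqsZnodeData {
    // Hand-rolled JSON keeps the sketch dependency-free; a real
    // implementation would likely use Jackson or Gson.
    public static String encode(String host, int connections) {
        return String.format("{\"host\":\"%s\",\"connections\":%d}", host, connections);
    }

    public static int decodeConnections(String json) {
        String key = "\"connections\":";
        int i = json.indexOf(key) + key.length();
        int j = json.indexOf('}', i);
        return Integer.parseInt(json.substring(i, j).trim());
    }

    public static void main(String[] args) {
        String data = encode("pqs1.example.com:8765", 42);
        System.out.println(data);
    }
}
```

PQS would write this string with `setData` on its own ephemeral znode each time its connection count changes.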

bq. " It add watcher so that any change to the ephemeral node will also modify the cached
data"

You will need a watch on the parent of the ephemerals to catch any changes in membership.
You will need a watch on each ephemeral to catch a change in data. 
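One way to structure the client side of those two watches is an in-memory cache: the parent watch re-reads the child list and reconciles membership, and each per-znode data watch refreshes one server's load. A dependency-free sketch of the reconciliation step (class and method names are hypothetical; the actual ZooKeeper calls, `getChildren(parent, watcher)` and `getData(child, watcher, stat)`, are left out):

```java
import java.util.*;

// Sketch: the bookkeeping a thin client could do when the parent watch
// fires with a fresh child list, and when a data watch fires on one child.
public class PqsMembershipCache {
    private final Map<String, Integer> loadByServer = new HashMap<>();

    // Parent watch fired: reconcile the cache with the new set of ephemerals.
    // Returns newly seen servers so the caller can set a data watch on each.
    public List<String> onChildrenChanged(Collection<String> children) {
        loadByServer.keySet().retainAll(new HashSet<>(children)); // drop dead servers
        List<String> added = new ArrayList<>();
        for (String child : children) {
            if (!loadByServer.containsKey(child)) {
                loadByServer.put(child, 0); // load unknown until its data watch fires
                added.add(child);
            }
        }
        return added;
    }

    // Data watch fired on one server's znode: record its reported load.
    public void onDataChanged(String server, int connections) {
        loadByServer.put(server, connections);
    }

    public Set<String> servers() {
        return loadByServer.keySet();
    }
}
```

Note that ZooKeeper watches are one-shot, so each watch callback must re-register the watch when it re-reads the data.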



> Load Balancer for thin client
> -----------------------------
>
>                 Key: PHOENIX-3654
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3654
>             Project: Phoenix
>          Issue Type: New Feature
>    Affects Versions: 4.8.0
>         Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>            Reporter: Rahul Shrivastava
>             Fix For: 4.9.0
>
>         Attachments: LoadBalancerDesign.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> We have been having internal discussions about a load balancer for the PQS thin client. The
general consensus is to embed the load balancer in the thin client instead of using an external
load balancer such as haproxy. The idea is not to add another layer between the client and PQS.
This reduces operational cost for the system; that cost currently leads to delays in executing
projects.
> But this also comes with the challenge of building an embedded load balancer that can maintain
sticky sessions and do fair load balancing while knowing the load downstream on each PQS server.
In addition, the load balancer needs to know the locations of the PQS servers, so the thin client
needs to keep track of them via ZooKeeper (or other means).
> In the new design, it is proposed that the client (the PQS client) have an embedded load
balancer.
> Where will the load balancer sit?
> The load balancer will be embedded within the app server client.
> How will the load balancer work?
> The load balancer will contact ZooKeeper to get the locations of PQS instances. For this to
work, PQS needs to register itself with ZK once it comes online. The ZooKeeper location is in
hbase-site.xml. The load balancer will maintain a small cache of connections to PQS. When a
request comes in, it will check the cache for an open connection.
> How will the load balancer know the load on PQS?
> To start with, it will pick a random open connection to PQS. This means the load balancer
does not know the PQS load. Later, we can augment the code so that the thin client receives load
info from PQS and makes intelligent decisions.
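The initial "pick a random open connection" policy described above is small enough to sketch directly; the `Connection`-as-string cache shape here is an assumption:

```java
import java.util.*;

// Sketch: the initial random-selection policy, before any load-aware
// balancing is added. The cache of open connections is modeled as a List.
public class RandomPicker {
    public static <T> T pick(List<T> openConnections, Random rnd) {
        if (openConnections.isEmpty()) {
            throw new IllegalStateException("no open PQS connections cached");
        }
        return openConnections.get(rnd.nextInt(openConnections.size()));
    }

    public static void main(String[] args) {
        List<String> cache = List.of("pqs1:8765", "pqs2:8765", "pqs3:8765");
        System.out.println(pick(cache, new Random()));
    }
}
```

Swapping this for a load-aware policy later only means replacing the `pick` implementation, e.g. choosing the server with the fewest reported connections.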
> How will the load balancer maintain sticky sessions?
> We still need to investigate how to implement sticky sessions; we can look for an open source
implementation of the same.
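The design leaves sticky sessions open. One common technique (an assumption here, not the design's choice) is hashing a session key onto the sorted server list, so the same session lands on the same PQS as long as membership is stable:

```java
import java.util.*;

// Sketch of hash-based stickiness: one common approach, not necessarily
// what the design will adopt. The session key could be a user or client id.
public class StickyChooser {
    public static String choose(String sessionKey, List<String> servers) {
        // Sort so the mapping is independent of discovery order.
        List<String> sorted = new ArrayList<>(servers);
        Collections.sort(sorted);
        int idx = Math.floorMod(sessionKey.hashCode(), sorted.size());
        return sorted.get(idx);
    }
}
```

A plain modulo remaps many sessions whenever a server joins or leaves; a consistent-hashing ring would limit that churn, at the cost of more code.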
> How will PQS register itself with the service locator?
> PQS will have the location of ZooKeeper in hbase-site.xml, and it will register itself with
ZooKeeper. The thin client will find the PQS locations using ZooKeeper.
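The registration step above amounts to creating an ephemeral znode under a well-known parent. A sketch of the path construction (the `/phoenix/pqs` parent is an assumption; per the design, the parent name would come from hbase-site.xml):

```java
// Sketch: build the znode path a PQS instance would register under.
// The parent path and host:port naming are assumptions. The actual
// registration would use ZooKeeper's
//   create(path, data, acl, CreateMode.EPHEMERAL)
// so the znode disappears automatically when the PQS session dies.
public class PqsRegistration {
    public static String znodePath(String parent, String host, int port) {
        return parent + "/" + host + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(znodePath("/phoenix/pqs", "pqs1.example.com", 8765));
    }
}
```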



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
