Date: Sat, 29 Dec 2012 22:52:12 +0000 (UTC)
From: "stack (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-7460) Cleanup client connection layers

    [ https://issues.apache.org/jira/browse/HBASE-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541010#comment-13541010 ]

stack commented on HBASE-7460:
------------------------------

I am in this area at the moment, at a level just above HBaseClient, trying to make use of it. I'm playing with using protobuf Service and hooking it up on either end to use our RPC. There are pros, but also a bunch of cons, the main one being the amount of refactoring we would have to do in this area if we were to go this route.

My first impression on submerging below the level of HBaseClientRPC is that there is a bunch of cruft in here, stuff that has accumulated over time and that we've probably been afraid to apply the compressed-air can to.

I want to make use of clients. I was going to copy what goes on in Invoker, not knowing any better. I want to use something other than "protocol" as the key for getting the client.

In my investigation, the first thing to jettison would be the proxy stuff. In my case it is in the way (I'd use the protobuf Service.Stub instead). Getting a proxy has a bunch of overrides, and a bunch of them look unused, as you say. Also, protocol 'versioning' and protocol 'fingerprinting' -- VersionedProtocol and ProtocolSignature -- are, in the former case, not hooked up, and in the latter, a facility that is incomplete and unused, so all this code either needs finishing or we need to just throw it out.

bq. It seems to me like each HConnection(Implementation) instance should have its own HBaseClient instance, doing away with the ClientCache mapping

Sounds eminently sensible. I'd be up for sketching something out if you had a few minutes to hang, G.
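For concreteness, here is a rough sketch of the Service.Stub hookup described above: a minimal adapter from the existing client-side IPC to protobuf's BlockingRpcChannel so that generated blocking stubs could drive our RPC in place of the reflection-based proxy. The sendRpc() hook and the class name are hypothetical placeholders, not existing HBaseClient API.

{code:java}
import java.io.IOException;

import com.google.protobuf.BlockingRpcChannel;
import com.google.protobuf.Descriptors.MethodDescriptor;
import com.google.protobuf.Message;
import com.google.protobuf.RpcController;
import com.google.protobuf.ServiceException;

/**
 * Sketch only: adapt the existing client-side IPC to protobuf's
 * BlockingRpcChannel so generated Service stubs can be used instead of the
 * reflection-based Invoker/proxy.  The sendRpc() hook is hypothetical -- in a
 * real patch it would delegate to HBaseClient -- and is not the actual API.
 */
public abstract class SketchRpcChannel implements BlockingRpcChannel {

  /** Hypothetical transport hook: ship the request bytes, return the response bytes. */
  protected abstract byte[] sendRpc(String methodName, byte[] requestBytes)
      throws IOException;

  @Override
  public Message callBlockingMethod(MethodDescriptor method, RpcController controller,
      Message request, Message responsePrototype) throws ServiceException {
    try {
      byte[] responseBytes = sendRpc(method.getName(), request.toByteArray());
      // Parse the raw response bytes into the response type the stub expects.
      return responsePrototype.newBuilderForType().mergeFrom(responseBytes).build();
    } catch (IOException e) {
      throw new ServiceException(e);
    }
  }
}

// With 'option java_generic_services = true', a generated service (placeholder
// name here) could then be driven through its blocking stub:
//   SomeService.BlockingInterface stub = SomeService.newBlockingStub(channel);
{code}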
Still to do, though not directly related here -- it is in this realm, only at a lower level -- is the back and forth over RPC, what we put on the wire. As it is, the place where we create a pb from an already-made request pb -- with the former making a copy of the latter -- needs fixing, and we should take the opportunity to address some of the criticisms Benoît/Tsuna raised in Unofficial Hadoop / HBase RPC protocol documentation (http://www.google.com/url?q=https%3A%2F%2Fgithub.com%2FOpenTSDB%2Fasynchbase%2Fblob%2Fmaster%2Fsrc%2FHBaseRpc.java%23L164&sa=D&sntz=1&usg=AFQjCNEy00ZQVclIR7BaBJYBdRV-i7QGTg)

> Cleanup client connection layers
> --------------------------------
>
>                 Key: HBASE-7460
>                 URL: https://issues.apache.org/jira/browse/HBASE-7460
>             Project: HBase
>          Issue Type: Improvement
>          Components: Client, IPC/RPC
>            Reporter: Gary Helmling
>
> This issue originated from a discussion over in HBASE-7442. We currently have a broken abstraction with {{HBaseClient}}, where it is bound to a single {{Configuration}} instance at time of construction but then reused for all connections to all clusters. This is combined with multiple, overlapping layers of connection caching.
> Going through this code, it seems like we have a lot of mismatch between the higher layers and the lower layers, with too much abstraction in between. At the lower layers, most of the {{ClientCache}} stuff seems completely unused. We effectively have an {{HBaseClient}} singleton (for {{SecureClient}} as well in 0.92/0.94) in the client code, as I don't see anything that calls the constructor or the {{RpcEngine.getProxy()}} versions with a non-default socket factory. So a lot of the code around this looks like built-up waste.
> The fact that a single {{Configuration}} is fixed in the {{HBaseClient}} seems like a broken abstraction as it currently stands. In addition to the cluster ID, other configuration parameters (max retries, retry sleep) are fixed at time of construction. The more I look at the code, the more it looks like the {{ClientCache}} and the sharing of the {{HBaseClient}} instance are an unnecessary complication. Why cache the {{HBaseClient}} instances at all? In {{HConnectionManager}} we already have a mapping from {{Configuration}} to {{HConnection}}. It seems to me like each {{HConnection(Implementation)}} instance should have its own {{HBaseClient}} instance, doing away with the {{ClientCache}} mapping. This would keep each {{HBaseClient}} associated with a single cluster/configuration and fix the current breakage from reusing the same {{HBaseClient}} against different clusters.
> We need a refactoring of some of the interactions of {{HConnection(Implementation)}}, {{HBaseRPC/RpcEngine}}, and {{HBaseClient}}. Offhand, we might want to expose a separate {{RpcEngine.getClient()}} method that returns a new {{RpcClient}} interface (implemented by {{HBaseClient}}) and move the {{RpcEngine.getProxy()}}/{{stopProxy()}} implementations into the client, so all proxy invocations can go through the same client without requiring the static client cache. I haven't fully thought this through, so I could be missing other important aspects, but that approach at least seems like a step in the right direction for fixing the client abstractions.
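One possible shape for the RpcEngine.getClient()/RpcClient refactoring floated in the last paragraph of the description above. Every name and signature below is speculative, sketched only to make the proposed ownership change concrete.

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;

/**
 * Speculative sketch of the proposal: RpcEngine hands each HConnection its
 * own client instead of callers sharing a singleton through the static
 * ClientCache.  Nothing here is existing HBase API.
 */
interface RpcClient {
  /** Hypothetical call: send a serialized request, return the serialized response. */
  byte[] call(String method, byte[] request, InetSocketAddress address) throws IOException;

  /** Tear down sockets and threads owned by this client. */
  void stop();
}

interface RpcEngine {
  /**
   * Would replace the ClientCache lookup: a fresh client bound to this
   * Configuration (cluster id, max retries, retry sleep, socket factory).
   */
  RpcClient getClient(Configuration conf);
}

// Inside HConnectionImplementation (sketch): one client per connection/cluster.
//   this.rpcClient = rpcEngine.getClient(conf);
//   ...
//   public void close() { this.rpcClient.stop(); }
{code}

With each HConnection(Implementation) owning its own client, the cluster ID and retry settings stay bound to exactly one cluster/configuration, which is the breakage the description calls out.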