Date: Fri, 27 Jan 2017 01:25:25 +0000 (UTC)
From: "Enis Soztutar (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Commented] (HBASE-14850) C++ client implementation

[ https://issues.apache.org/jira/browse/HBASE-14850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15840799#comment-15840799 ]

Enis Soztutar commented on HBASE-14850:
---------------------------------------

I was reviewing HBASE-17465 and left some notes there, but let me put them here instead. For those of you not following closely: the patch at HBASE-17465 extends the earlier work on the RpcClient / RpcChannel, using the underlying wangle libraries and the existing RPC client code. The current RpcClient code can locate a region using meta and perform a single RPC to the located regionserver; however, it lacks retry, exception-handling, and timeout logic. The approach we are taking in building the hbase-native client is to follow the async Java client closely in terms of implementation, both to reduce development time and to make the client more maintainable.
In terms of the C++ client architecture, we are taking a layered approach where each layer corresponds roughly to the following Java layer:

|| Layer || Java Async || C++ ||
| low-level async socket | netty | wangle |
| thread pools, futures, buffers, etc. | netty thread pools, futures, and bufs; Java 8 futures | folly Futures, IOBuf, wangle thread pools |
| tcp connection management/pooling | AsyncRpcClient | connection-pool.cc, rpc-client.cc |
| Rpc request / response | (netty-based) AsyncRpcChannel, AsyncServerResponseHandler | (wangle-based) pipeline.cc, client-handler.cc |
| Rpc interface | PB-generated service stubs, HBaseRpcController | PB-generated stubs, rpc-controller.cc (and wangle-based request.cc, service.cc) |
| Request/response conversion (Get -> GetRequest) | RequestConverter | request-converter.cc, response-converter.cc |
| Rpc retry, timeout, exception handling | RawAsyncTableImpl, AsyncRpcRetryingCaller, XXRequestCaller | async-rpc-retrying-caller.cc, async-rpc-retrying-caller-factory |
| meta lookup | ZKAsyncRegistry, curator | location-cache.cc, zk C client |
| meta cache | MetaCache | location-cache.cc |
| Async client interface (exposed) | AsyncConnection, AsyncTable | |
| Sync client implementation over async interfaces | | table.cc |
| Sync client interface (exposed) | ConnectionFactory, Connection, Table, Configuration, etc. | client.h, table.h, configuration.h |
| Operations API | Get, Put, Scan, Result, Cell | Get, Put, Scan, Cell |

So, in a sense, we are not reinventing the wheel: we are using wangle / folly instead of netty on the C++ side, and building the client to mirror the {{TableImpl -> AsyncTable -> RawAsyncTable -> AsyncConnection -> AsyncRpcClient -> AsyncRpcChannel -> Netty}} workflow. Anyway, please feel free to check and review if you are interested.
> C++ client implementation
> -------------------------
>
>         Key: HBASE-14850
>         URL: https://issues.apache.org/jira/browse/HBASE-14850
>     Project: HBase
>  Issue Type: Task
>    Reporter: Elliott Clark
>
> It's happening.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)