From: Xiaoxiao Wang
Date: Wed, 20 Feb 2019 16:45:59 -0800
Subject: Re: How Phoenix JDBC connection get hbase configuration
To: dev@phoenix.apache.org

Since I've confirmed that the configuration is loaded correctly through the classpath, I have tested on the real application; however, it still timed out with the same default value from the mappers:

Error: java.io.IOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions: Thu Feb 21 00:38:28 UTC 2019, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60309

On Wed, Feb 20, 2019 at 4:25 PM Xiaoxiao Wang wrote:

> i made this work on my toy
> application, getConf() is not an issue, and hbase conf can get the correct settings
>
> I'm trying out again on the real application
>
> On Wed, Feb 20, 2019 at 4:13 PM William Shen wrote:
>
>> Whatever is in super.getConf() should get overridden by hbase-site.xml, because addHbaseResources will layer hbase-site.xml on last. The question is which one got picked up... (maybe there is another one on the classpath, is that possible?)
>>
>> On Wed, Feb 20, 2019 at 4:10 PM Xiaoxiao Wang wrote:
>>
>>> I'm trying out on the mapreduce application, I made it work on my toy application
>>>
>>> On Wed, Feb 20, 2019 at 4:09 PM William Shen <willshen@marinsoftware.com> wrote:
>>>
>>>> A bit of a long shot, but do you happen to have another hbase-site.xml bundled in your jar accidentally that might be overriding what is on the classpath?
>>>>
>>>> On Wed, Feb 20, 2019 at 3:58 PM Xiaoxiao Wang wrote:
>>>>
>>>>> A bit more information, I feel the classpath didn't get passed in correctly by doing
>>>>>
>>>>> conf = HBaseConfiguration.addHbaseResources(super.getConf());
>>>>>
>>>>> and this conf also didn't pick up the expected properties
>>>>>
>>>>> On Wed, Feb 20, 2019 at 3:56 PM Xiaoxiao Wang wrote:
>>>>>
>>>>>> Pedro
>>>>>>
>>>>>> thanks for your info, yes, I have tried both HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml and HADOOP_CLASSPATH=/etc/hbase/conf/ (without the file), and yes, checked hadoop-env.sh as well to make sure it did HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/others
>>>>>>
>>>>>> And also for your second question, it is indeed a map reduce job, and it is trying to query phoenix from the map function!
>>>>>> (and we make sure all the nodes have hbase-site.xml installed properly)
>>>>>>
>>>>>> thanks
>>>>>>
>>>>>> On Wed, Feb 20, 2019 at 3:53 PM Pedro Boado <pedro.boado@gmail.com> wrote:
>>>>>>
>>>>>>> Your classpath variable should be pointing to the folder containing your hbase-site.xml, not directly to the file.
>>>>>>>
>>>>>>> But certain distributions tend to override that envvar inside hadoop-env.sh or hadoop.sh.
>>>>>>>
>>>>>>> Out of curiosity, have you written a map-reduce application, and are you querying phoenix from map functions?
>>>>>>>
>>>>>>> On Wed, 20 Feb 2019, 23:34 Xiaoxiao Wang, wrote:
>>>>>>>
>>>>>>>> HI Pedro
>>>>>>>>
>>>>>>>> thanks for your help, I think we know that we need to set the classpath for the hadoop program, and what we tried was HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml hadoop jar $test_jar, but it didn't work
>>>>>>>> So we are wondering if we did anything wrong?
>>>>>>>>
>>>>>>>> On Wed, Feb 20, 2019 at 3:24 PM Pedro Boado wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> How many concurrent client connections are we talking about? You might be opening more connections than the RS can handle (under these circumstances most of the client threads would end up exhausting their retry count). I would bet that you've got a bottleneck in the RS keeping the SYSTEM.CATALOG table (this was an issue in 4.7), as every new connection would be querying this table first.
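[The retry-exhaustion failure mode described above — a burst of mapper threads each opening its own connection and hitting SYSTEM.CATALOG — can also be mitigated client-side by bounding concurrent connection setup. A minimal self-contained sketch of that idea in plain Java; `openConnection` here is a hypothetical stand-in for `DriverManager.getConnection`, not a Phoenix API.]

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: bound how many threads may be inside connection setup at
// once, so a burst of workers cannot exhaust server-side RPC retries.
// openConnection() is a stand-in for DriverManager.getConnection.
public class BoundedConnector {
    private final Semaphore permits;
    private final AtomicInteger inFlight = new AtomicInteger();
    private final AtomicInteger maxObserved = new AtomicInteger();

    public BoundedConnector(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    public String openConnection(String url) throws InterruptedException {
        permits.acquire();                 // block if too many setups in flight
        try {
            int now = inFlight.incrementAndGet();
            maxObserved.accumulateAndGet(now, Math::max);
            Thread.sleep(5);               // stand-in for the real handshake
            return "connection:" + url;
        } finally {
            inFlight.decrementAndGet();
            permits.release();
        }
    }

    public int maxObserved() { return maxObserved.get(); }
}
```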
>> > > > >> > > >> > > > >> > > Try to update to our cloudera-compatible parcels instead of >> > using >> > > > >> clabs - >> > > > >> > > which are discontinued by Cloudera and not supported by the >> > Apache >> > > > >> > Phoenix >> > > > >> > > project - . >> > > > >> > > >> > > > >> > > Once updated to phoenix 4.14 you should be able to use >> > > > >> > > UPDATE_CACHE_FREQUENCY >> > > > >> > > property in order to reduce pressure on system tables. >> > > > >> > > >> > > > >> > > Adding an hbase-site.xml with the required properties to th= e >> > > client >> > > > >> > > application classpath should just work. >> > > > >> > > >> > > > >> > > I hope it helps. >> > > > >> > > >> > > > >> > > On Wed, 20 Feb 2019, 22:50 Xiaoxiao Wang, >> > > > > > > > >> > >> > > > >> > > wrote: >> > > > >> > > >> > > > >> > > > Hi, who may help >> > > > >> > > > >> > > > >> > > > We are running a Hadoop application that needs to use >> phoenix >> > > JDBC >> > > > >> > > > connection from the workers. >> > > > >> > > > The connection works, but when too many connection >> established >> > > at >> > > > >> the >> > > > >> > > same >> > > > >> > > > time, it throws RPC timeouts >> > > > >> > > > >> > > > >> > > > Error: java.io.IOException: >> > > > >> > > > org.apache.phoenix.exception.PhoenixIOException: Failed >> after >> > > > >> > > attempts=3D36, >> > > > >> > > > exceptions: Wed Feb 20 20:02:43 UTC 2019, null, java.net >> > > > >> > > .SocketTimeoutException: >> > > > >> > > > callTimeout=3D60000, callDuration=3D60506. ... 
>>>>>>>>>> So we have figured we should probably set a higher hbase.rpc.timeout value, but then it comes to this issue:
>>>>>>>>>>
>>>>>>>>>> A little bit of background on how we run the application:
>>>>>>>>>>
>>>>>>>>>> Here is how we get a PhoenixConnection from the java program:
>>>>>>>>>> DriverManager.getConnection("jdbc:phoenix:host", props)
>>>>>>>>>> And we trigger the program by using
>>>>>>>>>> hadoop jar $test_jar
>>>>>>>>>>
>>>>>>>>>> We have tried multiple approaches to load the hbase/phoenix configuration, but none of them get respected by PhoenixConnection. Here are the methods we tried:
>>>>>>>>>> * Pass hbase_conf_dir through HADOOP_CLASSPATH, so run the hadoop application like HADOOP_CLASSPATH=/etc/hbase/conf/ hadoop jar $test_jar.
>>>>>>>>>> However, PhoenixConnection doesn't respect the parameters
>>>>>>>>>> * Tried passing -Dhbase.rpc.timeout=1800, which is picked up by the hbase conf object, but not PhoenixConnection
>>>>>>>>>> * Explicitly set those parameters and pass them to the PhoenixConnection:
>>>>>>>>>> props.setProperty("hbase.rpc.timeout", "1800");
>>>>>>>>>> props.setProperty("phoenix.query.timeoutMs", "1800");
>>>>>>>>>> Also didn't get respected by PhoenixConnection
>>>>>>>>>> * Also tried what is suggested by phoenix here https://phoenix.apache.org/#connStr , using :longRunning together with those properties; still didn't seem to work
>>>>>>>>>>
>>>>>>>>>> Besides all those approaches, I have explicitly output the parameters we care about from the connection, via connection.getQueryServices().getProps()
>>>>>>>>>> The default values I got are 60000 for hbase.rpc.timeout and 600k for phoenix.query.timeoutMs, so I have tried to run a query that would run longer than 10 mins. Ideally it should time out; however, it ran over 20 mins and didn't time out. So I'm wondering how PhoenixConnection respects those properties?
>>>>>>>>>>
>>>>>>>>>> So with some of your help, we'd like to know if there's anything wrong with our approaches. And we'd like to get rid of those SocketTimeExceptions.
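[Worth noting on the attempts above: both hbase.rpc.timeout and phoenix.query.timeoutMs are millisecond values — the defaults quoted in the thread, 60000 and 600000, are 1 and 10 minutes — so "1800" would actually request a 1.8-second timeout; a 30-minute budget would be "1800000". The sketch below shows the intended override order using plain java.util.Properties defaults-chaining; the property names match the thread, but treat the block as an illustration, not verified Phoenix behavior.]

```java
import java.util.Properties;

// Illustration: per-connection properties shadow the defaults, and
// both timeout properties are in milliseconds ("1800" means 1.8 s).
public class TimeoutProps {
    public static Properties connectionProps() {
        Properties clusterDefaults = new Properties();
        clusterDefaults.setProperty("hbase.rpc.timeout", "60000");        // 1 min
        clusterDefaults.setProperty("phoenix.query.timeoutMs", "600000"); // 10 min

        Properties props = new Properties(clusterDefaults);               // chained lookup
        props.setProperty("hbase.rpc.timeout", "1800000");                // 30 min, not "1800"
        props.setProperty("phoenix.query.timeoutMs", "1800000");
        // With a Phoenix driver on the classpath this would then be:
        // DriverManager.getConnection("jdbc:phoenix:host", props);
        return props;
    }
}
```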
>>>>>>>>>> We are using phoenix-core version 4.7.0-clabs-phoenix1.3.0, and our phoenix-client version is phoenix-4.7.0-clabs-phoenix1.3.0.23 (we have tried phoenix-4.14.0-HBase-1.3 as well, which didn't work either).
>>>>>>>>>>
>>>>>>>>>> Thanks for your time
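[On the classpath question that runs through the thread: HBaseConfiguration.addHbaseResources applies configuration resources in order, with later resources overwriting earlier ones, so whichever hbase-site.xml the job's classpath serves up last supplies the effective values. A toy last-writer-wins model of that layering, using plain Java maps as a stand-in for the real Hadoop Configuration class.]

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of resource layering: resources apply in order and later
// ones overwrite earlier ones, mimicking how an hbase-site.xml found
// later on the classpath supplies the effective value.
public class LayeredConf {
    @SafeVarargs
    public static Map<String, String> layer(Map<String, String>... resources) {
        Map<String, String> merged = new LinkedHashMap<>();
        for (Map<String, String> r : resources) {
            merged.putAll(r);
        }
        return merged;
    }
}
```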