tephra-dev mailing list archives

From "Poorna Chandra (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (TEPHRA-200) Unable to discover tx service
Date Wed, 16 Nov 2016 01:51:58 GMT

    [ https://issues.apache.org/jira/browse/TEPHRA-200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15669035#comment-15669035
] 

Poorna Chandra commented on TEPHRA-200:
---------------------------------------

The transaction manager announces its hostname and port in ZooKeeper, and the transaction client
discovers the transaction manager through ZooKeeper. It looks like there is an issue with the
transaction manager announcing itself. Can you attach the transaction manager logs?

Also, in the future it would be simpler to just send an email to dev@tephra.incubator.apache.org
for such issues. We can file a JIRA once we figure out the cause.
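
As a quick check (a sketch, assuming a ZooKeeper ensemble at localhost:2181 and the `/discoverable/transaction` discovery path that appears in the client log below; adjust both for your setup), you can inspect the discovery node directly with the ZooKeeper CLI. This requires a running ZooKeeper, so it is shown for illustration only:

```shell
# List the Tephra transaction service discovery node.
# If the transaction manager announced itself successfully, this should
# show one ephemeral child znode per running transaction manager instance.
# An empty list, or "Node does not exist", means the announcement never
# happened (or went to a different ZooKeeper quorum/namespace).
zkCli.sh -server localhost:2181 ls /discoverable/transaction
```

In the log below the client reads `/discoverable/transaction` and gets back an empty child list (`response:: v{}`), which matches the "Unable to discover tx service" error.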

> Unable to discover tx service
> -----------------------------
>
>                 Key: TEPHRA-200
>                 URL: https://issues.apache.org/jira/browse/TEPHRA-200
>             Project: Tephra
>          Issue Type: Bug
>          Components: client
>    Affects Versions: 0.10.0-incubating
>         Environment: hadoop-2.6.0
> HBase-1.1.1
> zookeeper-3.4.6
> jdk_1.8.0_112
>            Reporter: Arcflash
>            Assignee: Poorna Chandra
>
> When running the example _BalanceBooks_, I get an error:
> _Unable to discover tx service_.
> The log is:
> {noformat}
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/home/hadoop/.m2/repository/org/slf4j/slf4j-log4j12/1.7.5/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/hadoop/.m2/repository/ch/qos/logback/logback-classic/1.0.9/logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 0 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:zookeeper.version=3.4.6-1569965,
built on 02/20/2014 09:09 GMT
> 1 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:host.name=master-0
> 1 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.version=1.8.0_112
> 1 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.vendor=Oracle
Corporation
> 1 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.home=/java/jdk1.8.0_112/jre
> 1 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.class.path=/home/hadoop/workspace/test4tephra/target/classes:/home/hadoop/.m2/repository/org/apache/hbase/hbase-client/1.1.1/hbase-client-1.1.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-annotations/1.1.1/hbase-annotations-1.1.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-protocol/1.1.1/hbase-protocol-1.1.1.jar:/home/hadoop/.m2/repository/commons-codec/commons-codec/1.9/commons-codec-1.9.jar:/home/hadoop/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/hadoop/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/hadoop/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/hadoop/.m2/repository/com/google/guava/guava/12.0.1/guava-12.0.1.jar:/home/hadoop/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/hadoop/.m2/repository/io/netty/netty-all/4.0.23.Final/netty-all-4.0.23.Final.jar:/home/hadoop/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/home/hadoop/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/home/hadoop/.m2/repository/org/jruby/jcodings/jcodings/1.0.8/jcodings-1.0.8.jar:/home/hadoop/.m2/repository/org/jruby/joni/joni/2.1.2/joni-2.1.2.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-auth/2.5.1/hadoop-auth-2.5.1.jar:/home/hadoop/.m2/repository/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoop/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/home/hadoop/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/home/hadoop/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/home/hadoop/.m2/repository/org/apache/hadoop/
hadoop-mapreduce-client-core/2.5.1/hadoop-mapreduce-client-core-2.5.1.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.5.1/hadoop-yarn-common-2.5.1.jar:/home/hadoop/.m2/repository/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar:/home/hadoop/.m2/repository/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar:/home/hadoop/.m2/repository/javax/activation/activation/1.1/activation-1.1.jar:/home/hadoop/.m2/repository/com/github/stephenc/findbugs/findbugs-annotations/1.3.9-1/findbugs-annotations-1.3.9-1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-server/1.1.1/hbase-server-1.1.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-procedure/1.1.1/hbase-procedure-1.1.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-common/1.1.1/hbase-common-1.1.1-tests.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-prefix-tree/1.1.1/hbase-prefix-tree-1.1.1.jar:/home/hadoop/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar:/home/hadoop/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-hadoop-compat/1.1.1/hbase-hadoop-compat-1.1.1.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-hadoop2-compat/1.1.1/hbase-hadoop2-compat-1.1.1.jar:/home/hadoop/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-core/1.9/jersey-core-1.9.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-server/1.9/jersey-server-1.9.jar:/home/hadoop/.m2/repository/asm/asm/3.1/asm-3.1.jar:/home/hadoop/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/hadoop/.m2/repository/org/apache/commons/commons-math/2.2/commons-math-2.2.jar:/home/hadoop/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/h
adoop/.m2/repository/org/mortbay/jetty/jetty-sslengine/6.1.26/jetty-sslengine-6.1.26.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jsp-2.1/6.1.14/jsp-2.1-6.1.14.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/jsp-api-2.1/6.1.14/jsp-api-2.1-6.1.14.jar:/home/hadoop/.m2/repository/org/mortbay/jetty/servlet-api-2.5/6.1.14/servlet-api-2.5-6.1.14.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/home/hadoop/.m2/repository/tomcat/jasper-compiler/5.5.23/jasper-compiler-5.5.23.jar:/home/hadoop/.m2/repository/tomcat/jasper-runtime/5.5.23/jasper-runtime-5.5.23.jar:/home/hadoop/.m2/repository/org/jamon/jamon-runtime/2.3.1/jamon-runtime-2.3.1.jar:/home/hadoop/.m2/repository/com/lmax/disruptor/3.3.0/disruptor-3.3.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.5.1/hadoop-hdfs-2.5.1.jar:/home/hadoop/.m2/repository/commons-daemon/commons-daemon/1.0.13/commons-daemon-1.0.13.jar:/home/hadoop/.m2/repository/org/apache/hbase/hbase-common/1.1.1/hbase-common-1.1.1.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-common/2.6.0/hadoop-common-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-annotations/2.6.0/hadoop-annotations-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar:/home/hadoop/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/hadoop/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/hadoop/.m2/repository/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar:/home/hadoop/.m2/repository/com/sun/jersey/jersey-json/1.9/jersey-json-1.9.jar:/home/hadoop/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/hadoop/.m2/repository/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar:/home/hadoop/.m2/repository/org/codehaus/jackson/jackson-xc/1.8.3/jackson-xc-1.8.3.jar:/home/hadoop/.m2/repository/javax/s
ervlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/home/hadoop/.m2/repository/commons-el/commons-el/1.0/commons-el-1.0.jar:/home/hadoop/.m2/repository/net/java/dev/jets3t/jets3t/0.9.0/jets3t-0.9.0.jar:/home/hadoop/.m2/repository/com/jamesmurty/utils/java-xmlbuilder/0.4/java-xmlbuilder-0.4.jar:/home/hadoop/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/hadoop/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/hadoop/.m2/repository/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar:/home/hadoop/.m2/repository/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar:/home/hadoop/.m2/repository/org/slf4j/slf4j-api/1.7.5/slf4j-api-1.7.5.jar:/home/hadoop/.m2/repository/org/slf4j/slf4j-log4j12/1.7.5/slf4j-log4j12-1.7.5.jar:/home/hadoop/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.jar:/home/hadoop/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/hadoop/.m2/repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/home/hadoop/.m2/repository/com/jcraft/jsch/0.1.42/jsch-0.1.42.jar:/home/hadoop/.m2/repository/org/apache/curator/curator-client/2.6.0/curator-client-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/curator/curator-recipes/2.6.0/curator-recipes-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/curator/curator-framework/2.6.0/curator-framework-2.6.0.jar:/home/hadoop/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/hadoop/.m2/repository/org/htrace/htrace-core/3.0.4/htrace-core-3.0.4.jar:/home/hadoop/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/hadoop/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-client/2.6.0/hadoop-client-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-app/2.6.0/hadoop-mapreduce-client-app-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/h
adoop-mapreduce-client-common/2.6.0/hadoop-mapreduce-client-common-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-client/2.6.0/hadoop-yarn-client-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-server-common/2.6.0/hadoop-yarn-server-common-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-shuffle/2.6.0/hadoop-mapreduce-client-shuffle-2.6.0.jar:/home/hadoop/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.6.0/hadoop-yarn-api-2.6.0.jar:/home/hadoop/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.6.0/hadoop-mapreduce-client-jobclient-2.6.0.jar:/home/hadoop/.m2/repository/jdk/tools/jdk.tools/1.7/jdk.tools-1.7.jar:/home/hadoop/.m2/repository/org/apache/tephra/tephra-core/0.10.0-incubating/tephra-core-0.10.0-incubating.jar:/home/hadoop/.m2/repository/org/apache/tephra/tephra-api/0.10.0-incubating/tephra-api-0.10.0-incubating.jar:/home/hadoop/.m2/repository/org/apache/tephra/tephra-hbase-compat-1.1/0.10.0-incubating/tephra-hbase-compat-1.1-0.10.0-incubating.jar:/home/hadoop/.m2/repository/org/apache/twill/twill-zookeeper/0.8.0/twill-zookeeper-0.8.0.jar:/home/hadoop/.m2/repository/org/apache/twill/twill-api/0.8.0/twill-api-0.8.0.jar:/home/hadoop/.m2/repository/org/apache/twill/twill-discovery-api/0.8.0/twill-discovery-api-0.8.0.jar:/home/hadoop/.m2/repository/org/apache/twill/twill-common/0.8.0/twill-common-0.8.0.jar:/home/hadoop/.m2/repository/ch/qos/logback/logback-core/1.0.9/logback-core-1.0.9.jar:/home/hadoop/.m2/repository/ch/qos/logback/logback-classic/1.0.9/logback-classic-1.0.9.jar:/home/hadoop/.m2/repository/com/google/inject/guice/3.0/guice-3.0.jar:/home/hadoop/.m2/repository/javax/inject/javax.inject/1/javax.inject-1.jar:/home/hadoop/.m2/repository/aopalliance/aopalliance/1.0/aopalliance-1.0.jar:/home/hadoop/.m2/repository/org/apache/twill/twill-core/0.8.0/twill-core-0.8.0.jar
:/home/hadoop/.m2/repository/org/apache/twill/twill-discovery-core/0.8.0/twill-discovery-core-0.8.0.jar:/home/hadoop/.m2/repository/io/netty/netty/3.6.6.Final/netty-3.6.6.Final.jar:/home/hadoop/.m2/repository/org/xerial/snappy/snappy-java/1.0.5/snappy-java-1.0.5.jar:/home/hadoop/.m2/repository/org/ow2/asm/asm-all/5.0.2/asm-all-5.0.2.jar:/home/hadoop/.m2/repository/org/apache/kafka/kafka_2.10/0.8.0/kafka_2.10-0.8.0.jar:/home/hadoop/.m2/repository/org/scala-lang/scala-library/2.10.1/scala-library-2.10.1.jar:/home/hadoop/.m2/repository/net/sf/jopt-simple/jopt-simple/3.2/jopt-simple-3.2.jar:/home/hadoop/.m2/repository/org/scala-lang/scala-compiler/2.10.1/scala-compiler-2.10.1.jar:/home/hadoop/.m2/repository/org/scala-lang/scala-reflect/2.10.1/scala-reflect-2.10.1.jar:/home/hadoop/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/hadoop/.m2/repository/com/yammer/metrics/metrics-annotation/2.2.0/metrics-annotation-2.2.0.jar:/home/hadoop/.m2/repository/com/google/inject/extensions/guice-assistedinject/3.0/guice-assistedinject-3.0.jar:/home/hadoop/.m2/repository/it/unimi/dsi/fastutil/6.5.6/fastutil-6.5.6.jar:/home/hadoop/.m2/repository/org/apache/thrift/libthrift/0.9.3/libthrift-0.9.3.jar:/home/hadoop/.m2/repository/org/apache/httpcomponents/httpclient/4.4.1/httpclient-4.4.1.jar:/home/hadoop/.m2/repository/org/apache/httpcomponents/httpcore/4.4.1/httpcore-4.4.1.jar:/home/hadoop/.m2/repository/com/codahale/metrics/metrics-core/3.0.2/metrics-core-3.0.2.jar
> 2 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 2 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.io.tmpdir=/tmp
> 7 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:java.compiler=<NA>
> 7 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:os.name=Linux
> 8 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:os.arch=amd64
> 8 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:os.version=3.10.0-327.el7.x86_64
> 8 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:user.name=hadoop
> 8 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:user.home=/home/hadoop
> 8 [main] INFO org.apache.zookeeper.ZooKeeper  - Client environment:user.dir=/home/hadoop/workspace/test4tephra
> 9 [main] INFO org.apache.zookeeper.ZooKeeper  - Initiating client connection, connectString=localhost
sessionTimeout=90000 watcher=org.apache.tephra.zookeeper.TephraZKClientService$5@cd3fee8
> 11 [main] DEBUG org.apache.zookeeper.ClientCnxn  - zookeeper.disableAutoWatchReset is
false
> 24 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Opening
socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate
using SASL (unknown error)
> 81 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Socket connection
established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
> 82 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Session
establishment request sent on localhost/0:0:0:0:0:0:0:1:2181
> 119 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Session
establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15866a20b02000c,
negotiated timeout = 90000
> 136 [zk-client-EventThread] DEBUG org.apache.tephra.zookeeper.TephraZKClientService 
- Connected to ZooKeeper: localhost
> 170 [main] DEBUG org.apache.tephra.distributed.TransactionServiceClient  - Retry strategy
is sleep 100 ms with back off factor 4 and limit 30000 ms
> 332 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=,
sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos
logins and latency (milliseconds)])
> 342 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=,
sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos
logins and latency (milliseconds)])
> 342 [main] DEBUG org.apache.hadoop.metrics2.lib.MutableMetricsFactory  - field org.apache.hadoop.metrics2.lib.MutableRate
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=,
sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[GetGroups])
> 343 [main] DEBUG org.apache.hadoop.metrics2.impl.MetricsSystemImpl  - UgiMetrics, User
and group related metrics
> 422 [main] DEBUG org.apache.hadoop.security.authentication.util.KerberosName  - Kerberos
krb5 configuration not found, setting default realm to empty
> 426 [main] DEBUG org.apache.hadoop.security.Groups  -  Creating new Groups object
> 429 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader  - Trying to load the custom-built
native-hadoop library...
> 429 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader  - Failed to load native-hadoop
with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
> 430 [main] DEBUG org.apache.hadoop.util.NativeCodeLoader  - java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> 430 [main] WARN org.apache.hadoop.util.NativeCodeLoader  - Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
> 430 [main] DEBUG org.apache.hadoop.util.PerformanceAdvisory  - Falling back to shell
based
> 431 [main] DEBUG org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback  -
Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
> 472 [main] DEBUG org.apache.hadoop.util.Shell  - setsid exited with exit code 0
> 472 [main] DEBUG org.apache.hadoop.security.Groups  - Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback;
cacheTimeout=300000; warningDeltaMs=5000
> 476 [main] DEBUG org.apache.hadoop.security.UserGroupInformation  - hadoop login
> 477 [main] DEBUG org.apache.hadoop.security.UserGroupInformation  - hadoop login commit
> 482 [main] DEBUG org.apache.hadoop.security.UserGroupInformation  - using local user:UnixPrincipal:
hadoop
> 482 [main] DEBUG org.apache.hadoop.security.UserGroupInformation  - Using user: "UnixPrincipal:
hadoop" with name hadoop
> 482 [main] DEBUG org.apache.hadoop.security.UserGroupInformation  - User entry: "hadoop"
> 484 [main] DEBUG org.apache.hadoop.security.UserGroupInformation  - UGI loginUser:hadoop
(auth:SIMPLE)
> 625 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper  - Process identifier=hconnection-0x3e11f9e9
connecting to ZooKeeper ensemble=localhost:2181
> 625 [main] INFO org.apache.zookeeper.ZooKeeper  - Initiating client connection, connectString=localhost:2181
sessionTimeout=90000 watcher=hconnection-0x3e11f9e90x0, quorum=localhost:2181, baseZNode=/hbase
> 629 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Opening
socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using
SASL (unknown error)
> 629 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Socket
connection established to localhost/127.0.0.1:2181, initiating session
> 629 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Session
establishment request sent on localhost/127.0.0.1:2181
> 679 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Session
establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x15866a20b02000d,
negotiated timeout = 90000
> 680 [main-EventThread] DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher  - hconnection-0x3e11f9e90x0,
quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected,
path=null
> 683 [main-EventThread] DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher  - hconnection-0x3e11f9e9-0x15866a20b02000d
connected
> 683 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000d, packet:: clientPath:null serverPath:null finished:false
header:: 1,3  replyHeader:: 1,29255,0  request:: '/hbase/hbaseid,F  response:: s{17,29013,1478070051367,1479190651690,46,0,0,0,67,0,17}

> 686 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000d, packet:: clientPath:null serverPath:null finished:false
header:: 2,4  replyHeader:: 2,29255,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a313630303018ffffffd6121553ffffffc9fffffff5ffffffd450425546a2464353733613132642d646363322d343763642d616538372d613333646162316630306566,s{17,29013,1478070051367,1479190651690,46,0,0,0,67,0,17}

> 942 [main] DEBUG org.apache.hadoop.hbase.ipc.AbstractRpcClient  - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2584b82d,
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000,
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
> 1212 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000d, packet:: clientPath:null serverPath:null finished:false
header:: 3,3  replyHeader:: 3,29255,0  request:: '/hbase,F  response:: s{3,3,1478070047178,1478070047178,0,270,0,0,0,16,29029}

> 1216 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000d, packet:: clientPath:null serverPath:null finished:false
header:: 4,4  replyHeader:: 4,29255,0  request:: '/hbase/master,F  response:: #ffffffff000146d61737465723a3136303030ffffffabffffff8732ffffffef50ffffffe2ffffff8f3d50425546a14a86d61737465722d3010ffffff807d18ffffff9fffffffa8ffffff88ffffffb5ffffff862b10018ffffff8a7d,s{29009,29009,1479190650643,1479190650643,0,0,0,96940238143881216,56,0,29009}

> 1404 [main] DEBUG org.apache.hadoop.hbase.ipc.AbstractRpcClient  - Use SIMPLE authentication
for service MasterService, sasl=false
> 1432 [main] DEBUG org.apache.hadoop.hbase.ipc.AbstractRpcClient  - Connecting to master-0/192.168.3.111:16000
> 2788 [main] INFO org.apache.hadoop.hbase.client.HBaseAdmin  - Created testbalances1
> 2788 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation
 - Closing master protocol: MasterService
> 2789 [main] INFO org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation
 - Closing zookeeper sessionid=0x15866a20b02000d
> 2790 [main] DEBUG org.apache.zookeeper.ZooKeeper  - Closing session: 0x15866a20b02000d
> 2790 [main] DEBUG org.apache.zookeeper.ClientCnxn  - Closing client for session: 0x15866a20b02000d
> 2828 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000d, packet:: clientPath:null serverPath:null finished:false
header:: 5,-11  replyHeader:: 5,29267,0  request:: null response:: null
> 2828 [main] DEBUG org.apache.zookeeper.ClientCnxn  - Disconnecting client for session:
0x15866a20b02000d
> 2828 [main] INFO org.apache.zookeeper.ZooKeeper  - Session: 0x15866a20b02000d closed
> 2828 [main] DEBUG org.apache.hadoop.hbase.ipc.AbstractRpcClient  - Stopping rpc client
> 2829 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn  - EventThread shut down
> 2985 [main] INFO org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper  - Process identifier=hconnection-0x12f9af83
connecting to ZooKeeper ensemble=localhost:2181
> 2985 [main] INFO org.apache.zookeeper.ZooKeeper  - Initiating client connection, connectString=localhost:2181
sessionTimeout=90000 watcher=hconnection-0x12f9af830x0, quorum=localhost:2181, baseZNode=/hbase
> 2986 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Opening
socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate
using SASL (unknown error)
> 2987 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Socket
connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session
> 2987 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Session
establishment request sent on localhost/0:0:0:0:0:0:0:1:2181
> 3048 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn  - Session
establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x15866a20b02000e,
negotiated timeout = 90000
> 3048 [main-EventThread] DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher  - hconnection-0x12f9af830x0,
quorum=localhost:2181, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected,
path=null
> 3051 [main-EventThread] DEBUG org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher  - hconnection-0x12f9af83-0x15866a20b02000e
connected
> 3051 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000e, packet:: clientPath:null serverPath:null finished:false
header:: 1,3  replyHeader:: 1,29268,0  request:: '/hbase/hbaseid,F  response:: s{17,29013,1478070051367,1479190651690,46,0,0,0,67,0,17}

> 3053 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000e, packet:: clientPath:null serverPath:null finished:false
header:: 2,4  replyHeader:: 2,29268,0  request:: '/hbase/hbaseid,F  response:: #ffffffff000146d61737465723a313630303018ffffffd6121553ffffffc9fffffff5ffffffd450425546a2464353733613132642d646363322d343763642d616538372d613333646162316630306566,s{17,29013,1478070051367,1479190651690,46,0,0,0,67,0,17}

> 3054 [main] DEBUG org.apache.hadoop.hbase.ipc.AbstractRpcClient  - Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@19b93fa8,
compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000,
minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null
> 3114 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000c, packet:: clientPath:/discoverable/transaction serverPath:/discoverable/transaction
finished:false header:: 1,12  replyHeader:: 1,29268,-101  request:: '/discoverable/transaction,T
 response:: v{} 
> 3125 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn  - Reading
reply sessionid:0x15866a20b02000c, packet:: clientPath:/discoverable/transaction serverPath:/discoverable/transaction
finished:false header:: 2,3  replyHeader:: 2,29268,-101  request:: '/discoverable/transaction,T
 response::  
> 5119 [Thread-5] ERROR org.apache.tephra.distributed.AbstractClientProvider  - Unable
to discover tx service.
> 5120 [Thread-5] DEBUG org.apache.tephra.distributed.RetryWithBackoff  - Sleeping 100
ms before retry.
> 5122 [Thread-7] ERROR org.apache.tephra.distributed.AbstractClientProvider  - Unable
to discover tx service.
> 5122 [Thread-6] ERROR org.apache.tephra.distributed.AbstractClientProvider  - Unable
to discover tx service.
> 5122 [Thread-6] DEBUG org.apache.tephra.distributed.RetryWithBackoff  - Sleeping 100
ms before retry.
> 5122 [Thread-7] DEBUG org.apache.tephra.distributed.RetryWithBackoff  - Sleeping 100
ms before retry.
> 5221 [Thread-5] INFO org.apache.tephra.distributed.TransactionServiceClient  - Retrying
startShort after Thrift error: Unable to discover tx service.
> 5221 [Thread-5] DEBUG org.apache.tephra.distributed.TransactionServiceClient  - Retrying
startShort after Thrift error: Unable to discover tx service.
> org.apache.thrift.TException: Unable to discover tx service.
> 	at org.apache.tephra.distributed.AbstractClientProvider.newClient(AbstractClientProvider.java:106)
> 	at org.apache.tephra.distributed.AbstractClientProvider.newClient(AbstractClientProvider.java:85)
> 	at org.apache.tephra.distributed.PooledClientProvider$TxClientPool.create(PooledClientProvider.java:48)
> 	at org.apache.tephra.distributed.PooledClientProvider$TxClientPool.create(PooledClientProvider.java:41)
> 	at org.apache.tephra.distributed.ElasticPool.getOrCreate(ElasticPool.java:138)
> 	at org.apache.tephra.distributed.ElasticPool.obtain(ElasticPool.java:125)
> 	at org.apache.tephra.distributed.PooledClientProvider.getCloseableClient(PooledClientProvider.java:101)
> 	at org.apache.tephra.distributed.TransactionServiceClient.execute(TransactionServiceClient.java:217)
> 	at org.apache.tephra.distributed.TransactionServiceClient.execute(TransactionServiceClient.java:188)
> 	at org.apache.tephra.distributed.TransactionServiceClient.startShort(TransactionServiceClient.java:261)
> 	at org.apache.tephra.TransactionContext.start(TransactionContext.java:91)
> 	at test.test4tephra.BalanceBooks$Client.runOnce(BalanceBooks.java:304)
> 	at test.test4tephra.BalanceBooks$Client.run(BalanceBooks.java:289)
> {noformat}
> And the _TransactionServiceMain_ is running.
> How should I configure it to discover the tx service?
> Any help is appreciated, thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
