hbase-user mailing list archives

From Sachin Mittal <sjmit...@gmail.com>
Subject Re: Can not connect local java client to a remote Hbase
Date Fri, 22 Apr 2016 02:49:41 GMT
Check these links out
http://stackoverflow.com/questions/36377393/connecting-to-hbase-1-0-3-via-java-client-stuck-at-zookeeper-clientcnxn-session
http://mail-archives.apache.org/mod_mbox/hbase-user/201604.mbox/browser

First, what is your machine's IP address?

If you specify only the IP address in the regionservers file and in hbase-site.xml, and
also remove the "192.168.1.240   master-sigma" entry from /etc/hosts, then you can be sure
everything is resolved via the IP address only.
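
For example (just a sketch based on your own files, assuming 192.168.1.240 really is
master-sigma's address), the relevant part of hbase-site.xml would become:

  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.1.240</value>
  </property>

and the regionservers file would contain just:

  192.168.1.240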

Also enable trace logging to understand more about which call is failing and why.
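
On the server side that usually means raising the log level in HBase's conf/log4j.properties
and restarting, for example (adjust to your setup):

  log4j.logger.org.apache.hadoop.hbase=TRACE
  log4j.logger.org.apache.zookeeper=DEBUG

Your Pentaho client should also have its own log4j configuration where a similar line can be
added.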

What I have found is that in HBase some servers get resolved differently, as pointed out in
those links. Note that your error reports the meta region at hostname=localhost,16020, which
suggests the region server registered itself under localhost rather than its real address.
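
If you want to rule Pentaho out, you could also try a minimal standalone client against the
same cluster, something along these lines (HBase 1.x client API; the quorum and port values
here are assumptions taken from your config):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class HBaseConnectTest {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Point the client at the remote quorum explicitly, by IP.
      conf.set("hbase.zookeeper.quorum", "192.168.1.240");
      conf.set("hbase.zookeeper.property.clientPort", "2181");
      try (Connection connection = ConnectionFactory.createConnection(conf);
           Admin admin = connection.getAdmin()) {
        // Same check Pentaho performs before reading the mapping table.
        boolean exists = admin.tableExists(TableName.valueOf("pentaho_mappings"));
        System.out.println("pentaho_mappings exists: " + exists);
      }
    }
  }

If this test also hangs and then times out with hostname=localhost in the error, the problem
is in the cluster's name resolution rather than in Pentaho.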

Hope it helps.

Sachin


On Thu, Apr 21, 2016 at 11:11 PM, SOUFIANI Mustapha | السفياني مصطفى <
s.mustapha86@gmail.com> wrote:

> Hi all,
> I'm trying to connect my local Java client (Pentaho) to a remote HBase, but
> every time I get a timeout error telling me that the connection couldn't
> be established.
>
> Here is the full error message:
>
>
> *******************************************************************************************
>
> java.io.IOException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
> Wed Apr 20 10:32:43 WEST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=75181: row 'pentaho_mappings,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16020,1461071963695, seqNum=0
>     at com.pentaho.big.data.bundles.impl.shim.hbase.table.HBaseTableImpl.exists(HBaseTableImpl.java:71)
>     at org.pentaho.big.data.kettle.plugins.hbase.mapping.MappingAdmin.getMappedTables(MappingAdmin.java:502)
>     at org.pentaho.big.data.kettle.plugins.hbase.output.HBaseOutputDialog.setupMappedTableNames(HBaseOutputDialog.java:818)
>     at org.pentaho.big.data.kettle.plugins.hbase.output.HBaseOutputDialog.access$900(HBaseOutputDialog.java:88)
>     at org.pentaho.big.data.kettle.plugins.hbase.output.HBaseOutputDialog$7.widgetSelected(HBaseOutputDialog.java:398)
>     at org.eclipse.swt.widgets.TypedListener.handleEvent(Unknown Source)
>     at org.eclipse.swt.widgets.EventTable.sendEvent(Unknown Source)
>     at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
>     at org.eclipse.swt.widgets.Display.runDeferredEvents(Unknown Source)
>     at org.eclipse.swt.widgets.Display.readAndDispatch(Unknown Source)
>     at org.pentaho.big.data.kettle.plugins.hbase.output.HBaseOutputDialog.open(HBaseOutputDialog.java:603)
>     at org.pentaho.di.ui.spoon.delegates.SpoonStepsDelegate.editStep(SpoonStepsDelegate.java:125)
>     at org.pentaho.di.ui.spoon.Spoon.editStep(Spoon.java:8783)
>     at org.pentaho.di.ui.spoon.trans.TransGraph.editStep(TransGraph.java:3072)
>     at org.pentaho.di.ui.spoon.trans.TransGraph.mouseDoubleClick(TransGraph.java:755)
>     at org.eclipse.swt.widgets.TypedListener.handleEvent(Unknown Source)
>     at org.eclipse.swt.widgets.EventTable.sendEvent(Unknown Source)
>     at org.eclipse.swt.widgets.Widget.sendEvent(Unknown Source)
>     at org.eclipse.swt.widgets.Display.runDeferredEvents(Unknown Source)
>     at org.eclipse.swt.widgets.Display.readAndDispatch(Unknown Source)
>     at org.pentaho.di.ui.spoon.Spoon.readAndDispatch(Spoon.java:1347)
>     at org.pentaho.di.ui.spoon.Spoon.waitForDispose(Spoon.java:7989)
>     at org.pentaho.di.ui.spoon.Spoon.start(Spoon.java:9269)
>     at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:662)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.lang.reflect.Method.invoke(Unknown Source)
>     at org.pentaho.commons.launcher.Launcher.main(Launcher.java:92)
> Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
> Wed Apr 20 10:32:43 WEST 2016, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=75181: row 'pentaho_mappings,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16020,1461071963695, seqNum=0
>     at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:270)
>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:225)
>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:63)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>     at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
>     at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:289)
>     at org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:161)
>     at org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:156)
>     at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:888)
>     at org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:601)
>     at org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:365)
>     at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:310)
>     at org.pentaho.hadoop.hbase.factory.HBase10Admin.tableExists(HBase10Admin.java:41)
>     at org.pentaho.hbase.shim.common.CommonHBaseConnection.tableExists(CommonHBaseConnection.java:206)
>     at org.pentaho.hbase.shim.common.HBaseConnectionImpl.access$801(HBaseConnectionImpl.java:35)
>     at org.pentaho.hbase.shim.common.HBaseConnectionImpl$9.call(HBaseConnectionImpl.java:185)
>     at org.pentaho.hbase.shim.common.HBaseConnectionImpl$9.call(HBaseConnectionImpl.java:181)
>     at org.pentaho.hbase.shim.common.HBaseConnectionImpl.doWithContextClassLoader(HBaseConnectionImpl.java:76)
>     at org.pentaho.hbase.shim.common.HBaseConnectionImpl.tableExists(HBaseConnectionImpl.java:181)
>     at com.pentaho.big.data.bundles.impl.shim.hbase.HBaseConnectionWrapper.tableExists(HBaseConnectionWrapper.java:72)
>     at com.pentaho.big.data.bundles.impl.shim.hbase.table.HBaseTableImpl.exists(HBaseTableImpl.java:69)
>     ... 28 more
> Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=75181: row 'pentaho_mappings,,' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=localhost,16020,1461071963695, seqNum=0
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
>     at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.lang.Thread.run(Unknown Source)
> Caused by: java.net.ConnectException: Connection refused: no further information
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
>     at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
>     at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:404)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:710)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:890)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:859)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1193)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
>     at org.apache.hadoop.hbase.client.ScannerCallable.openScanner(ScannerCallable.java:372)
>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
>     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:371)
>     at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:345)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
>     ... 4 more
>
> *******************************************************************************************
>
> here is the hbase-site.xml file :
>
>
> *******************************************************************************************
> <?xml version="1.0"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <!--
> /**
>  *
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  *     http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> -->
> <configuration>
>
> <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://master-sigma:54310/hbase</value>
>   </property>
>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>master-sigma</value>
>   </property>
>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>
>   <property>
>     <name>hbase.zookeeper.property.clientPort</name>
>     <value>2181</value>
>   </property>
>
>   <property>
>     <name>hbase.zookeeper.property.dataDir</name>
>     <value>/home/hduser/hbase/zookeeper</value>
>   </property>
>
>
> </configuration>
>
>
> *******************************************************************************************
>
> regionserver file :
>
>
> *******************************************************************************************
> master-sigma
>
> *******************************************************************************************
>
> the /etc/hosts file :
>
>
> *******************************************************************************************
> 127.0.0.1    localhost
> 127.0.0.1    big-services
> 192.168.1.240   master-sigma
> # The following lines are desirable for IPv6 capable hosts
> ::1     localhost ip6-localhost ip6-loopback
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
>
>
> *******************************************************************************************
>
> Can you help me with this?
>
> Thanks in advance.
>
