accumulo-user mailing list archives

From: Keith Turner <ke...@deenlo.com>
Subject: Re: Remote connections to Accumulo
Date: Tue, 27 May 2014 15:40:48 GMT
Seems like there is a problem connecting to HDFS.  It's trying to connect to
the namenode at localhost:9000.  What is the namenode address set to in the
hdfs config on the machine where the shell is running?
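
If the namenode is only reachable as localhost:9000 inside the guest, a
client on the host will never get to it.  A rough sketch of what
core-site.xml on the machine running the shell might look like (guest-vm is
just a placeholder for however the host reaches the guest, and the port is
whatever the namenode actually listens on):

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://guest-vm:9000</value>
  </property>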

You can try using the -zi and -zh options w/ the Accumulo shell.  With
these options, only ZooKeeper will be used to find the Accumulo servers
(HDFS will not be used to look up the instance id).
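
Something along these lines (the instance name and ZooKeeper address are
placeholders; use the name you gave at init time and an address the host can
actually reach):

  /usr/local/accumulo/bin/accumulo shell -u root -zi myinstance -zh 192.168.1.50:2181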




On Tue, May 27, 2014 at 9:35 AM, Geoffry Roberts <threadedblue@gmail.com> wrote:

> I have Accumulo set up in a virtual environment.  From within the guest
> environment, I can connect with the shell, and I can connect to Zookeeper.
> But from the host environment, things are different.  I can connect to
> Zookeeper just fine, but I cannot connect to Accumulo with a program or
> with the shell.  The shell throws errors and the program appears to hang.
>
> Hadoop: 2.3.0
> Zookeeper: 3.4.6
> Accumulo: 1.5.1
> Host: OSX 10.9
> Guest: Ubuntu Precise 64-bit
> VirtualBox 4.3.10
>
> My questions:
>
>
>    1. Should the shell be able to connect remotely?  Maybe I'm wrong in
>    thinking it should.
>    2. How should I interpret the error listed below?  I'm guessing the
>    problem has to do with localhost:9000, but I'm not getting it.  Yes, the
>    accumulo instance_id appears to be available in hdfs.
>
>
> Thanks
>
> Error dump:
>
> Starting /usr/local/accumulo/bin/accumulo shell -u root
>
> 2014-05-27 09:25:51.329 java[1015:6503] Unable to load realm info from
> SCDynamicStore
>
> 2014-05-27 09:25:51,411 [util.NativeCodeLoader] WARN : Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
>
> 2014-05-27 09:25:52,216 [client.ZooKeeperInstance] ERROR: Problem reading
> instance id out of hdfs at /accumulo/instance_id
>
> java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "abend.home/192.168.1.7"; destination host is: "localhost":9000;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> at org.apache.hadoop.ipc.Client.call(Client.java:1410)
> at org.apache.hadoop.ipc.Client.call(Client.java:1359)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> at com.sun.proxy.$Proxy9.getListing(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy9.getListing(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:502)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1727)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1710)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:646)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:98)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:704)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:704)
> at org.apache.accumulo.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(ZooKeeperInstance.java:288)
> at org.apache.accumulo.core.util.shell.Shell.getDefaultInstance(Shell.java:402)
> at org.apache.accumulo.core.util.shell.Shell.setInstance(Shell.java:394)
> at org.apache.accumulo.core.util.shell.Shell.config(Shell.java:258)
> at org.apache.accumulo.core.util.shell.Shell.main(Shell.java:411)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.accumulo.start.Main$1.run(Main.java:103)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:510)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1050)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:945)
>
> Thread "shell" died java.lang.reflect.InvocationTargetException
>
>
> --
> There are ways and there are ways,
>
> Geoffry Roberts
>
