accumulo-user mailing list archives

From John Vines <vi...@apache.org>
Subject Re: remote accumulo instance issue
Date Wed, 08 May 2013 16:43:16 GMT
What version of Accumulo are you running?

Sent from my phone, please pardon the typos and brevity.
On May 8, 2013 12:38 PM, "Marc Reichman" <mreichman@pixelforensics.com>
wrote:

> I can't find anything wrong with the networking. Here is the whole error
> with stack trace:
> 2057 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
> Failed to find an available server in the list of servers:
> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
> Exception in thread "main" java.lang.IncompatibleClassChangeError:
> Implementing class
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:615)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
> at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:146)
> at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:123)
> at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:105)
> at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:71)
> at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:75)
> at org.apache.accumulo.core.client.ZooKeeperInstance.getConnector(ZooKeeperInstance.java:218)
> at org.apache.accumulo.core.client.ZooKeeperInstance.getConnector(ZooKeeperInstance.java:206)
>
> Running on JDK 1.6.0_27
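
[Editor's note: an `IncompatibleClassChangeError` thrown while a class is being
defined usually means the classpath mixes jars built against different versions
of a library (for Accumulo clients, typically an Accumulo or Thrift jar that
does not match the one the code was compiled against), which is presumably why
the version question was asked above. A hedged, plain-JDK way to check which
jar a suspect class actually loads from (the class name below is taken from the
stack trace; no Accumulo dependency is assumed at compile time):]

```java
import java.security.CodeSource;

public class WhichJar {
    // Reports where the JVM loaded a class from; classes supplied by the
    // bootstrap loader (the JDK itself) have no code source.
    static String locationOf(String className) throws ClassNotFoundException {
        CodeSource src = Class.forName(className)
                .getProtectionDomain().getCodeSource();
        return src == null ? "(bootstrap)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // Class name from the stack trace above; the jar it resolves to
        // should match the version the server is running.
        String cls = args.length > 0 ? args[0]
                : "org.apache.accumulo.core.client.impl.ServerClient";
        try {
            System.out.println(cls + " -> " + locationOf(cls));
        } catch (ClassNotFoundException e) {
            System.out.println(cls + " is not on the classpath at all");
        }
    }
}
```

Running this on the failing client and comparing the reported jar against the
server's installed version is a quick way to confirm or rule out a mismatch.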
>
>
> On Wed, May 8, 2013 at 10:38 AM, Keith Turner <keith@deenlo.com> wrote:
>
>>
>>
>>
>> On Wed, May 8, 2013 at 11:09 AM, Marc Reichman <
>> mreichman@pixelforensics.com> wrote:
>>
>>> I have seen this as ticket ACCUMULO-687 which has been marked resolved,
>>> but I still see this issue.
>>>
>>> I am connecting to a remote accumulo instance to query and to launch
>>> mapreduce jobs using AccumuloRowInputFormat, and I'm seeing an error like:
>>>
>>> 91 [main-SendThread(padres.home:2181)] INFO
>>> org.apache.zookeeper.ClientCnxn  - Socket connection established to
>>> padres.home/192.168.1.160:2181, initiating session
>>> 166 [main-SendThread(padres.home:2181)] INFO
>>> org.apache.zookeeper.ClientCnxn  - Session establishment complete on server
>>> padres.home/192.168.1.160:2181, sessionid = 0x13e7b48f9d17af7,
>>> negotiated timeout = 30000
>>> 1889 [main] WARN org.apache.accumulo.core.client.impl.ServerClient  -
>>> Failed to find an available server in the list of servers:
>>> [192.168.1.164:9997:9997 (120000), 192.168.1.192:9997:9997 (120000),
>>> 192.168.1.194:9997:9997 (120000), 192.168.1.162:9997:9997 (120000),
>>> 192.168.1.190:9997:9997 (120000), 192.168.1.166:9997:9997 (120000),
>>> 192.168.1.168:9997:9997 (120000), 192.168.1.196:9997:9997 (120000)]
>>>
>>> My zookeeper's "tservers" key looks like:
>>> [zk: localhost:2181(CONNECTED) 1] ls
>>> /accumulo/908a756e-1c81-4bea-a4de-675456499a10/tservers
>>> [192.168.1.164:9997, 192.168.1.192:9997, 192.168.1.194:9997,
>>> 192.168.1.162:9997, 192.168.1.190:9997, 192.168.1.166:9997,
>>> 192.168.1.168:9997, 192.168.1.196:9997]
>>>
>>> My masters and slaves file look like:
>>> [hadoop@padres conf]$ cat masters
>>> 192.168.1.160
>>> [hadoop@padres conf]$ cat slaves
>>> 192.168.1.162
>>> 192.168.1.164
>>> 192.168.1.166
>>> 192.168.1.168
>>> 192.168.1.190
>>> 192.168.1.192
>>> 192.168.1.194
>>> 192.168.1.196
>>>
>>> tracers, gc, and monitor are the same as masters.
>>>
>>> I have no issues executing on the master, but I would like to work from
>>> a remote host. The remote host is on a VPN, and its default resolver is NOT
>>> the resolver from the remote network. If I do reverse lookup over the VPN
>>> *using* the remote resolver it shows proper hostnames.
>>>
>>> My concern is that something is concatenating the "host:port" entry from
>>> zookeeper with the port again, producing this host:port:port form, which is
>>> obviously not going to work.
>>>
>>
>> The second port is nothing to worry about. It's created by concatenating
>> what came from zookeeper with the default tserver port. The location from
>> zookeeper can contain a port; if for some reason it did not, the default
>> would be used.
>>
>> That second port should probably go away; it's being added by vestigial
>> code. We now always expect what comes from zookeeper to have a port.
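
[Editor's note: the defaulting behavior described in the reply can be sketched
as follows. This is a hedged reconstruction, not the actual `ServerClient`
source; the method name and default-port constant are illustrative:]

```java
public class TServerLocation {
    static final int DEFAULT_TSERVER_PORT = 9997; // illustrative default

    // Intended logic: append the default port only when the location read
    // from zookeeper lacks one. The vestigial code the reply describes
    // appended the default unconditionally, which is what produces the
    // harmless "host:port:port" strings in the warning above.
    static String withDefaultPort(String zkLocation) {
        return zkLocation.contains(":")
                ? zkLocation
                : zkLocation + ":" + DEFAULT_TSERVER_PORT;
    }

    public static void main(String[] args) {
        System.out.println(withDefaultPort("192.168.1.164:9997")); // kept as-is
        System.out.println(withDefaultPort("192.168.1.164"));      // default appended
    }
}
```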
>>
>>
>>>
>>> What else can I try? I previously had hostnames in the
>>> masters/slaves/etc. files but now have the IPs. Should I re-init the
>>> instance to see if it changes anything in zookeeper?
>>>
>>
>>
>
