Subject: same token couldn't authenticate twice?
From: "Xu (Simon) Chen"
To: "user@accumulo.apache.org"
Date: Fri, 12 Jun 2015 08:51:37 -0400

Emm.. I have ~/.accumulo/config with "instance.rpc.sasl.enabled=true".
That property is indeed populated into the ClientConfiguration the first
time - that's why I said the token worked initially.

Apparently the property is not set in the Hadoop portion; I confirmed this
by adding some debug messages to the ZooKeeperInstance class. I think
that's likely the issue.
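In the meantime, here is a minimal sketch (untested) of what I mean by
pinning the flag into the ClientConfiguration that setZooKeeperInstance
serializes into the job, so the remote side doesn't depend on finding
~/.accumulo/config; the instance name and quorum below are placeholders:

    import org.apache.accumulo.core.client.ClientConfiguration;
    import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class SaslJobSetup {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf();
        // loadDefault() walks the client-config search path, so
        // ~/.accumulo/config is read here on the submitting host;
        // withSasl(true) then pins the property into the serialized config.
        ClientConfiguration cc = ClientConfiguration.loadDefault()
            .withInstance("myInstance") // placeholder instance name
            .withZkHosts("zk1:2181")    // placeholder zookeeper quorum
            .withSasl(true);            // same as instance.rpc.sasl.enabled=true
        AccumuloInputFormat.setZooKeeperInstance(job, cc);
      }
    }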
So the ZooKeeperInstance is created in the following sequence:

https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java#L341
https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/InputConfigurator.java#L671
https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/ConfiguratorBase.java#L361

The getClientConfiguration function eventually calls
getDefaultSearchPath(), so my ~/.accumulo/config should be searched. I
think we are close to the root cause... Will update when I find out more.

Thanks!
-Simon
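P.S. Josh's logging suggestion (quoted below) translates to roughly the
following for anyone driving this from spark-shell, where editing
log4j.properties is awkward. A sketch using the log4j 1.2 API; note the
krb5 property only takes effect if set before the first Kerberos call:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    // Turn up Accumulo client logging to TRACE.
    Logger.getLogger("org.apache.accumulo.core.client").setLevel(Level.TRACE);
    // Equivalent of passing -Dsun.security.krb5.debug=true to the JVM.
    System.setProperty("sun.security.krb5.debug", "true");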
On Thu, Jun 11, 2015 at 11:28 PM, Josh Elser wrote:
> Are you sure that the spark tasks have the proper ClientConfiguration?
> They need to have instance.rpc.sasl.enabled set. I believe you should be
> able to set this via the AccumuloInputFormat.
>
> You can turn up logging (org.apache.accumulo.core.client=TRACE) and/or
> set the system property -Dsun.security.krb5.debug=true to get some more
> information as to why the authentication is failing.
>
> Xu (Simon) Chen wrote:
>> Josh,
>>
>> I am using this function:
>>
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java#L106
>>
>> If I pass in a KerberosToken, it's stuck at line 111; if I pass in a
>> delegation token, the setConnectorInfo function finishes fine.
>>
>> But when I do something like queryRDD.count, Spark eventually calls
>> HadoopRDD.getPartitions, which calls the following and gets stuck in
>> the last authenticate() call:
>>
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java#L621
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/mapred/AbstractInputFormat.java#L348
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/ZooKeeperInstance.java#L248
>> https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/client/impl/ConnectorImpl.java#L70
>>
>> ...which is essentially the same place where it would be stuck with a
>> KerberosToken.
>>
>> -Simon
>>
>> On Thu, Jun 11, 2015 at 9:41 PM, Josh Elser wrote:
>>> What are the Accumulo methods that you are calling, and what is the
>>> error you are seeing?
>>>
>>> A KerberosToken cannot be used in a MapReduce job, which is why a
>>> DelegationToken is automatically retrieved. You should still be able
>>> to provide your own DelegationToken -- if that doesn't work, that's
>>> a bug.
>>>
>>> Xu (Simon) Chen wrote:
>>>> I actually added a flag such that I can pass either a KerberosToken
>>>> or a DelegationTokenImpl to Accumulo.
>>>>
>>>> When a KerberosToken is passed in, Accumulo converts it to a
>>>> DelegationToken - the conversion is where I am having trouble. I
>>>> tried passing in a delegation token directly to bypass the
>>>> conversion, but a similar problem happens: I get stuck at
>>>> authenticate() on the client side, and the server side prints the
>>>> same output...
>>>>
>>>> On Thursday, June 11, 2015, Josh Elser wrote:
>>>>
>>>> Keep in mind that the authentication paths for DelegationToken
>>>> (mapreduce) and KerberosToken are completely different.
>>>>
>>>> Since most mapreduce jobs have multiple mappers (or reducers), I
>>>> expect we would have run into the case that the same DelegationToken
>>>> was used multiple times. It would still be good to narrow down the
>>>> scope of the problem.
>>>>
>>>> Xu (Simon) Chen wrote:
>>>>
>>>> Thanks Josh...
>>>>
>>>> I tested this in the scala REPL and called
>>>> DataStoreFinder.getDataStore() multiple times; each time it seems to
>>>> be reusing the same KerberosToken object, and it works fine each
>>>> time.
>>>>
>>>> So my problem only happens when the token is used in accumulo's
>>>> mapred package. Weird..
>>>>
>>>> -Simon
>>>>
>>>> On Thu, Jun 11, 2015 at 5:29 PM, Josh Elser wrote:
>>>>
>>>> Simon,
>>>>
>>>> Can you reproduce this in plain-jane Java code? I don't know enough
>>>> about spark/scala, much less what Geomesa is actually doing, to know
>>>> what the issue is.
>>>>
>>>> Also, which token are you referring to: a KerberosToken or a
>>>> DelegationToken? Either of them should be usable as many times as
>>>> you'd like (given the underlying credentials are still available for
>>>> the KT, or the DT hasn't yet expired).
>>>>
>>>> Xu (Simon) Chen wrote:
>>>>
>>>> Folks,
>>>>
>>>> I am working on geomesa+accumulo+spark integration. For some reason,
>>>> I found that the same token cannot be used to authenticate twice.
>>>>
>>>> The workflow is that geomesa would try to create a hadoop rdd,
>>>> during which it tries to create an AccumuloDataStore:
>>>>
>>>> https://github.com/locationtech/geomesa/blob/master/geomesa-compute/src/main/scala/org/locationtech/geomesa/compute/spark/GeoMesaSpark.scala#L81
>>>>
>>>> During this process, a ZooKeeperInstance is created:
>>>>
>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-core/src/main/scala/org/locationtech/geomesa/core/data/AccumuloDataStoreFactory.scala#L177
>>>>
>>>> I modified geomesa such that it would use kerberos to authenticate
>>>> here. This step works fine.
>>>>
>>>> But next, geomesa calls ConfiguratorBase.setConnectorInfo:
>>>>
>>>> https://github.com/locationtech/geomesa/blob/rc7_a1.7_h2.5/geomesa-compute/src/main/scala/org/locationtech/geomesa/compute/spark/GeoMesaSpark.scala#L69
>>>>
>>>> This uses the same token and the same zookeeper URI, but for some
>>>> reason it gets stuck in spark-shell, and the following is output on
>>>> the tserver side:
>>>>
>>>> 2015-06-06 18:58:19,616 [server.TThreadPoolServer] ERROR: Error
>>>> occurred during processing of message.
>>>> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>>>>     at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>>>>     at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:51)
>>>>     at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory$1.run(UGIAssumingTransportFactory.java:48)
>>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>>     at javax.security.auth.Subject.doAs(Subject.java:356)
>>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1622)
>>>>     at org.apache.accumulo.core.rpc.UGIAssumingTransportFactory.getTransport(UGIAssumingTransportFactory.java:48)
>>>>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:208)
>>>>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>>     at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
>>>>     at java.lang.Thread.run(Thread.java:745)
>>>> Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
>>>>     at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>>>>     at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
>>>>     at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
>>>>     at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>>>>     at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
>>>>     at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>>>>     at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>>>>     ... 11 more
>>>> Caused by: java.net.SocketTimeoutException: Read timed out
>>>>     at java.net.SocketInputStream.socketRead0(Native Method)
>>>>     at java.net.SocketInputStream.read(SocketInputStream.java:152)
>>>>     at java.net.SocketInputStream.read(SocketInputStream.java:122)
>>>>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
>>>>     at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
>>>>     at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
>>>>     ... 17 more
>>>>
>>>> Any idea why?
>>>>
>>>> Thanks.
>>>> -Simon
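
P.S. For reference, a minimal sketch of doing the KerberosToken ->
DelegationToken conversion by hand with the 1.7 client API, before handing
the result to setConnectorInfo; the principal below is a placeholder, and
it assumes instance.name, instance.zookeeper.host, and
instance.rpc.sasl.enabled are all present in ~/.accumulo/config:

    import org.apache.accumulo.core.client.ClientConfiguration;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
    import org.apache.accumulo.core.client.mapred.AccumuloInputFormat;
    import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
    import org.apache.accumulo.core.client.security.tokens.KerberosToken;
    import org.apache.hadoop.mapred.JobConf;

    public class DelegationTokenSetup {
      public static void main(String[] args) throws Exception {
        String principal = "user@EXAMPLE.COM"; // placeholder principal
        JobConf job = new JobConf();

        // Authenticate once with kerberos, using the current
        // ticket-cache/UGI login.
        KerberosToken kt = new KerberosToken();
        Instance inst = new ZooKeeperInstance(ClientConfiguration.loadDefault());
        Connector conn = inst.getConnector(principal, kt);

        // Trade the kerberos credentials for a DelegationToken; this is
        // the token the remote (mapper/executor) side actually uses.
        AuthenticationToken dt =
            conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
        AccumuloInputFormat.setConnectorInfo(job, principal, dt);
      }
    }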