flink-user mailing list archives

From Fabian Wollert <fabian.woll...@zalando.de>
Subject Re: Flink Elasticsearch Connector: Lucene Error message
Date Mon, 17 Jul 2017 07:38:13 GMT
1.3.0, but I only need the ES 2.x connector working right now, since that's
the Elasticsearch version we're using. Another option would be to upgrade
to ES 5 (at least on dev) to see if it's working as well, but that doesn't
sound like fixing the problem to me :-D

Cheers
Fabian


--

*Fabian Wollert*
*Zalando SE*

E-Mail: fabian.wollert@zalando.de
Location: ZMAP <http://zmap.zalando.net/?q=fabian.wollert@zalando.de>

2017-07-16 15:47 GMT+02:00 Aljoscha Krettek <aljoscha@apache.org>:

> Hi,
>
> There was also a problem in releasing the ES 5 connector with Flink 1.3.0.
> You only said you’re using Flink 1.3, would that be 1.3.0 or 1.3.1?
>
> Best,
> Aljoscha
>
> On 16. Jul 2017, at 13:42, Fabian Wollert <fabian.wollert@zalando.de>
> wrote:
>
> Hi Aljoscha,
>
> we are running Flink in standalone mode, inside Docker in AWS. I will
> check the dependencies tomorrow, although I'm wondering: I'm running Flink
> 1.3 everywhere and the appropriate ES connector, which was only released with
> 1.3, so it's weird where this dependency mix-up comes from ... let's see ...
>
> Cheers
> Fabian
>
>
> --
>
> *Fabian Wollert*
> *Zalando SE*
>
> E-Mail: fabian.wollert@zalando.de
> Location: ZMAP <http://zmap.zalando.net/?q=fabian.wollert@zalando.de>
>
> 2017-07-14 11:15 GMT+02:00 Aljoscha Krettek <aljoscha@apache.org>:
>
>> This kind of error almost always hints at a dependency clash, i.e. there
>> is some version of this code in the classpath that clashes with the
>> version that the Flink program uses. That's why it works in local mode,
>> where there are probably not many other dependencies, but not in cluster
>> mode.
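A minimal sketch of one way to check for such a clash (the class name and output are my assumptions, not from this thread): ask the JVM where a class was actually loaded from. On the cluster you would probe `org.apache.lucene.util.Version`; the demo below uses classes guaranteed to be present.

```java
// Hypothetical classpath diagnostic: print which jar/location a class was
// loaded from. On a real Flink cluster, probe "org.apache.lucene.util.Version"
// instead of the stand-in classes used here.
public class WhereLoaded {
    static String locationOf(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // Classes from the bootstrap classpath (e.g. java.lang.String)
        // report no code source at all.
        return src == null ? "<bootstrap>" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(locationOf("java.lang.String")); // <bootstrap>
        System.out.println(locationOf("WhereLoaded"));      // the dir/jar it ran from
    }
}
```

Running this inside the job (or in a remote debug session) shows immediately whether Lucene comes from the job jar or from some other jar on the cluster.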
>>
>> How are you running it on the cluster? Standalone, YARN?
>>
>> Best,
>> Aljoscha
>>
>> On 13. Jul 2017, at 13:56, Fabian Wollert <fabian.wollert@zalando.de>
>> wrote:
>>
>> Hi Timo, Hi Gordon,
>>
>> thx for the reply! I checked the connection from both clusters to each
>> other, and I can telnet to port 9300 of the ES cluster, so I think the
>> connection is not an issue here.
>>
>> We are currently using a custom Elasticsearch connector in our live env,
>> which used some extra libs deployed on the cluster. I found one Lucene lib
>> and deleted it (since all dependencies should be in the Flink job jar), but
>> unfortunately that did not help either ...
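If a stray Lucene jar on the cluster cannot simply be deleted, one common workaround (a sketch under assumed build settings, not necessarily what was used here; the `shadedPattern` prefix is made up) is to relocate Lucene inside the job jar with the Maven Shade plugin, so whatever is on the cluster classpath cannot shadow it:

```xml
<!-- Hypothetical shade configuration: relocate Lucene packages inside the
     job jar so a different Lucene version on the cluster classpath cannot
     be picked up instead. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.lucene</pattern>
            <shadedPattern>myjob.shaded.org.apache.lucene</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```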
>>
>> Cheers
>> Fabian
>>
>>
>> --
>>
>> *Fabian Wollert*
>> *Data Engineering*
>> *Technology*
>>
>> E-Mail: fabian.wollert@zalando.de
>> Location: ZMAP <http://zmap.zalando.net/?q=fabian.wollert@zalando.de>
>>
>> 2017-07-13 13:46 GMT+02:00 Timo Walther <twalthr@apache.org>:
>>
>>> Hi Fabian,
>>>
>>> I'm looping in Gordon. Maybe he knows what's happening here.
>>>
>>> Regards,
>>> Timo
>>>
>>>
>>> On 13.07.17 at 13:26, Fabian Wollert wrote:
>>>
>>> Hi everyone,
>>>
>>> I'm trying to make use of the new Elasticsearch Connector
>>> <https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/connectors/elasticsearch.html>.
>>> I got a version running locally in my IDE (with SSH tunnels to my Elasticsearch
>>> cluster in AWS), and I see the data written to Elasticsearch
>>> perfectly, just as I want it. As soon as I try to run this on our dev cluster
>>> (Flink 1.3.0, running in the same VPC as the ES cluster), though, I get the
>>> following error message (in the sink):
>>>
>>> java.lang.NoSuchFieldError: LUCENE_5_5_0
>>>     at org.elasticsearch.Version.<clinit>(Version.java:295)
>>>     at org.elasticsearch.client.transport.TransportClient$Builder.build(TransportClient.java:129)
>>>     at org.apache.flink.streaming.connectors.elasticsearch2.Elasticsearch2ApiCallBridge.createClient(Elasticsearch2ApiCallBridge.java:65)
>>>     at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:272)
>>>     at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>>>     at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:111)
>>>     at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:375)
>>>     at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:252)
>>>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>>>     at java.lang.Thread.run(Thread.java:748)
>>>
>>> I first thought that this has something to do with mismatched versions,
>>> but it happens to me with Elasticsearch 2.2.2 (bundled with Lucene 5.4.1)
>>> and Elasticsearch 2.3 (bundled with Lucene 5.5.0).
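For context on what this error means mechanically (my explanation, not from the thread): the failing line in `org.elasticsearch.Version` is a hard static reference to a Lucene constant such as `LUCENE_5_5_0`, and if the `Version` class actually loaded at runtime lacks that field (e.g. an older Lucene), class initialization fails with `NoSuchFieldError` at link time. A reflective probe shows the same field check without the hard link; the demo uses `Integer` as a stand-in, since Lucene is not assumed on the classpath:

```java
// Sketch: check whether a public static field exists on a class via
// reflection, which reports absence cleanly instead of throwing
// NoSuchFieldError the way a compiled-in static reference would.
public class LuceneFieldProbe {
    static boolean hasConstant(Class<?> cls, String name) {
        try {
            cls.getField(name); // looks up public fields only
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-in demo; on the cluster you would probe
        // org.apache.lucene.util.Version for "LUCENE_5_5_0".
        System.out.println(hasConstant(Integer.class, "MAX_VALUE"));    // true
        System.out.println(hasConstant(Integer.class, "LUCENE_5_5_0")); // false
    }
}
```

Run against the Lucene `Version` class that the cluster actually loads, this tells you which Lucene generation is really on the classpath, independent of what the job jar bundles.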
>>>
>>> Can someone point out what exact version conflict is happening here (or
>>> where to investigate further)? Currently my setup looks like everything is
>>> actually running with Lucene 5.5.0, so I'm wondering where exactly that error
>>> message is coming from, and also why it runs locally but not
>>> in the cluster. I'm still investigating whether this is a general connection
>>> issue from the Flink cluster to the ES cluster, but that would be
>>> surprising, and the error message would then be misleading ....
>>>
>>> Cheers
>>> Fabian
>>>
>>> --
>>> *Fabian Wollert*
>>> *Senior Data Engineer*
>>>
>>> *POSTAL ADDRESS*
>>> *Zalando SE*
>>> *11501 Berlin*
>>>
>>> *OFFICE*
>>> *Zalando SE*
>>> *Charlottenstraße 4*
>>> *10969 Berlin*
>>> *Germany*
>>>
>>> *Email: fabian.wollert@zalando.de <fabian.wollert@zalando.de>*
>>> *Web: corporate.zalando.com <http://corporate.zalando.com/>*
>>> *Jobs: jobs.zalando.de <http://jobs.zalando.de/>*
>>>
>>> *Zalando SE, Tamara-Danz-Straße 1, 10243 Berlin*
>>> *Company registration: Amtsgericht Charlottenburg, HRB 158855 B*
>>> *VAT registration number: DE 260543043*
>>> *Management Board: Robert Gentz, David Schneider, Rubin Ritter*
>>> *Chairperson of the Supervisory Board: Lothar Lanz*
>>> *Registered office: Berlin*
>>>
>>>
>>>
>>
>>
>
>
