flink-user mailing list archives

From Flavio Pompermaier <pomperma...@okkam.it>
Subject Re: Flink Mongodb
Date Tue, 04 Nov 2014 09:03:57 GMT
What do you mean by "might lack support for local split assignment"?
Do you mean that the InputFormat is not serializable, and that this is not
the case for Mongodb?

On Tue, Nov 4, 2014 at 10:00 AM, Fabian Hueske <fhueske@apache.org> wrote:

> There's a page about Hadoop Compatibility that shows how to use the
> wrapper.
>
> The HBase format should work as well, but might lack support for local
> split assignment. In that case performance would suffer a lot.
>
> Am Dienstag, 4. November 2014 schrieb Flavio Pompermaier :
>
>> Should I start from
>> http://flink.incubator.apache.org/docs/0.7-incubating/example_connectors.html
>> ? Is it ok?
>> Thus, in principle, the TableInputFormat of HBase could also be used in a
>> similar way, couldn't it?
>>
>> On Tue, Nov 4, 2014 at 9:42 AM, Fabian Hueske <fhueske@apache.org> wrote:
>>
>>> Hi,
>>>
>>> the blog post uses Flink's wrapper for Hadoop InputFormats.
>>> This has been ported to the new API and is described in the
>>> documentation.
>>>
>>> So you just need to take Mongo's Hadoop IF and plug it into the new IF
>>> wrapper. :-)
>>>
>>> Fabian
>>>
>>> Am Dienstag, 4. November 2014 schrieb Flavio Pompermaier :
>>>
>>> Hi to all,
>>>>
>>>> I saw this post
>>>> https://flink.incubator.apache.org/news/2014/01/28/querying_mongodb.html
>>>> but it uses the old API (HadoopDataSource instead of DataSource).
>>>> How can I use Mongodb with the new Flink APIs?
>>>>
>>>> Best,
>>>> Flavio
>>>>
>>>
>>
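[Editor's note: the approach Fabian describes, wrapping Mongo's Hadoop InputFormat in Flink's Hadoop compatibility wrapper, can be sketched roughly as below. This is a hedged sketch, not a tested program: the class names (HadoopInputFormat from Flink's Hadoop compatibility module, MongoInputFormat from the mongo-hadoop connector) and the "mongo.input.uri" configuration key follow the Flink 0.7 Hadoop Compatibility docs and the mongo-hadoop project, and the database/collection URI is a made-up example. It requires a running MongoDB and the flink-hadoop-compatibility and mongo-hadoop jars on the classpath.]

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
// Flink's wrapper for Hadoop mapreduce-API InputFormats
// (from the flink-hadoop-compatibility module).
import org.apache.flink.hadoopcompatibility.mapreduce.HadoopInputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.bson.BSONObject;
// MongoDB's Hadoop InputFormat (from the mongo-hadoop connector).
import com.mongodb.hadoop.MongoInputFormat;

public class MongoFlinkSketch {
  public static void main(String[] args) throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Hadoop Job object carries the connector configuration.
    Job job = Job.getInstance();
    // Example URI; point this at your own database and collection.
    job.getConfiguration().set("mongo.input.uri",
        "mongodb://localhost:27017/testdb.testcollection");

    // Plug Mongo's Hadoop IF into Flink's wrapper: each record arrives
    // as a (key, BSONObject) pair, per the mongo-hadoop connector.
    DataSet<Tuple2<Object, BSONObject>> docs = env.createInput(
        new HadoopInputFormat<Object, BSONObject>(
            new MongoInputFormat(), Object.class, BSONObject.class, job));

    docs.print();
    env.execute("Read from MongoDB");
  }
}
```

From here the DataSet can be transformed with the usual map/filter/join operators; note that, as discussed above, split assignment may not be local to the data, which can hurt performance.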
