flink-user mailing list archives

From Flavio Pompermaier <pomperma...@okkam.it>
Subject Hadoop compatibility and HBase bulk loading
Date Fri, 10 Apr 2015 09:55:41 GMT
Hi guys,

I have a question about Hadoop compatibility.
In https://flink.apache.org/news/2014/11/18/hadoop-compatibility.html you
say that existing mapreduce programs can be reused.
Would it also be possible to run more complex mapreduce programs, like the
HBase BulkImport, which uses for example a custom partitioner?

In the bulk-import examples, the call to
HFileOutputFormat2.configureIncrementalLoadMap sets a series of job
parameters (partitioner, mapper, reducers, etc.).
The full code of it can be seen at

Do you think there's any chance to make it run in Flink?
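For context, a minimal sketch (untested, and with hypothetical table data and output path) of what I imagine this could look like, wrapping HBase's HFileOutputFormat2 in Flink's Hadoop compatibility OutputFormat. The open point is exactly the job configuration done by configureIncrementalLoad, which Flink would not pick up automatically:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HBaseBulkLoadSketch {

    public static void main(String[] args) throws Exception {
        final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical sample data: a single cell, just to make the sketch complete.
        byte[] row = Bytes.toBytes("row1");
        Tuple2<ImmutableBytesWritable, Cell> cell = new Tuple2<>(
                new ImmutableBytesWritable(row),
                new KeyValue(row, Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
        DataSet<Tuple2<ImmutableBytesWritable, Cell>> cells = env.fromElements(cell);

        // Wrap HFileOutputFormat2 in Flink's Hadoop compatibility OutputFormat.
        Job job = Job.getInstance();
        FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles")); // hypothetical output dir

        // Caveat: configureIncrementalLoad() would normally install the
        // TotalOrderPartitioner and a sorting reducer on the MR job; Flink does not
        // read those job settings, so the total ordering of cells by row key would
        // have to be reproduced manually, e.g. with partitionCustom + sortPartition.
        cells.output(new HadoopOutputFormat<>(new HFileOutputFormat2(), job));

        env.execute("HBase bulk-load sketch");
    }
}
```

After the HFiles are written, they would still need to be handed to HBase's completebulkload tool (LoadIncrementalHFiles) as in the plain mapreduce workflow.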
