flink-issues mailing list archives

From "Evgeny Kincharov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-4315) Remove Hadoop Dependency from flink-java
Date Fri, 30 Sep 2016 08:13:20 GMT

    [ https://issues.apache.org/jira/browse/FLINK-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15535380#comment-15535380 ]

Evgeny Kincharov commented on FLINK-4315:

I have moved the org.apache.hadoop dependency from flink-java to flink-hadoop-compatibility.

My changes are in [https://github.com/apache/flink/compare/master...kenmy:FLINK-4315?expand=1]
I also had to extract the Hadoop dependency from flink-scala, to avoid adding flink-hadoop-compatibility
as a dependency of flink-scala: flink-scala used some classes from flink-java that have now
been moved into flink-hadoop-compatibility.
What has changed:
* A FlinkHadoopEnvironment class has been created in flink-hadoop-compatibility, and the following
methods have been moved into it from ExecutionEnvironment:
** readHadoopFile
** readSequenceFile
** createHadoopInput
* The ExecutionEnvironment object is passed to the constructor of FlinkHadoopEnvironment.
* The classes that depend on hadoop were moved into the flink-hadoop-compatibility.
* japicmp was disabled for flink-scala and flink-java due to the API changes. Perhaps there
is a better solution.
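The wiring described above — an ExecutionEnvironment handed to FlinkHadoopEnvironment's constructor, with the Hadoop-specific factory methods living in the compatibility module — could be sketched roughly as follows. Note that every class body here is a simplified stand-in to show the shape of the change, not the actual Flink code:

```java
// Stand-in for org.apache.flink.api.java.ExecutionEnvironment,
// now free of any Hadoop types.
class ExecutionEnvironment {
    // Hadoop-free methods stay on the environment itself.
    String readTextFile(String path) {
        return "DataSource(" + path + ")";
    }
}

// Lives in flink-hadoop-compatibility: wraps an ExecutionEnvironment
// and hosts the Hadoop-specific factory methods moved out of flink-java.
class FlinkHadoopEnvironment {
    private final ExecutionEnvironment env;

    // The ExecutionEnvironment is passed to the constructor.
    FlinkHadoopEnvironment(ExecutionEnvironment env) {
        this.env = env;
    }

    // Simplified stand-in for readHadoopFile(...): the Hadoop-aware
    // method builds on the wrapped environment.
    String readHadoopFile(String path) {
        return "HadoopInput(" + env.readTextFile(path) + ")";
    }
}

public class Sketch {
    public static void main(String[] args) {
        ExecutionEnvironment env = new ExecutionEnvironment();
        FlinkHadoopEnvironment hadoopEnv = new FlinkHadoopEnvironment(env);
        System.out.println(hadoopEnv.readHadoopFile("hdfs:///data"));
    }
}
```

For callers, the migration would be mechanical: instead of `env.readHadoopFile(...)`, user code wraps its environment once and calls the method on the wrapper.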
Similar changes have been made for flink-scala.
One reference to Hadoop remains in flink-scala: Writable, in the trait org.apache.flink.api.scala.codegen.TypeInformationGen.
Please review, and if everything is OK I'll open a PR.

> Remove Hadoop Dependency from flink-java
> ----------------------------------------
>                 Key: FLINK-4315
>                 URL: https://issues.apache.org/jira/browse/FLINK-4315
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Java API
>            Reporter: Stephan Ewen
>            Assignee: Evgeny Kincharov
>             Fix For: 2.0.0
> The API projects should be independent of Hadoop, because Hadoop is not an integral part
of the Flink stack, and we should have the option to offer Flink without Hadoop dependencies.
> The current batch APIs have a hard dependency on Hadoop, mainly because the API has utility
methods like `readHadoopFile(...)`.
> I suggest to remove those methods and instead add helpers in the `flink-hadoop-compatibility`

This message was sent by Atlassian JIRA
