beam-commits mailing list archives

From "huangjianhuang (JIRA)" <j...@apache.org>
Subject [jira] [Created] (BEAM-2995) can't read/write hdfs in Flink CLUSTER(Standalone)
Date Wed, 27 Sep 2017 13:45:00 GMT
huangjianhuang created BEAM-2995:
------------------------------------

             Summary: can't read/write hdfs in Flink CLUSTER(Standalone)
                 Key: BEAM-2995
                 URL: https://issues.apache.org/jira/browse/BEAM-2995
             Project: Beam
          Issue Type: Bug
          Components: runner-flink
    Affects Versions: 2.2.0
            Reporter: huangjianhuang
            Assignee: Aljoscha Krettek


I wrote a simple demo like this:

{code:java}
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://localhost:9000");
        // ... other code; p is the Pipeline ...
        p.apply("ReadLines", TextIO.read().from("hdfs://localhost:9000/tmp/words"))
                .apply(TextIO.write().to("hdfs://localhost:9000/tmp/hdfsout"));
{code}
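For reference, setting {{fs.default.name}} on a plain Hadoop {{Configuration}} is not by itself visible to Beam: the {{hdfs://}} scheme is resolved through the {{FileSystems}} registry, which is populated from {{HadoopFileSystemOptions}} (in the beam-sdks-java-io-hadoop-file-system module). A minimal sketch of wiring the configuration through pipeline options; the class and method names are from that module, and the surrounding pipeline setup is an assumption, not the reporter's actual code:

{code:java}
// Sketch: pass the HDFS configuration to Beam through pipeline options,
// so the FileSystems registry can resolve the hdfs:// scheme.
// Assumes beam-sdks-java-io-hadoop-file-system is on the classpath.
HadoopFileSystemOptions options = PipelineOptionsFactory.fromArgs(args)
        .withValidation()
        .as(HadoopFileSystemOptions.class);

Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://localhost:9000"); // fs.default.name is the deprecated key
options.setHdfsConfiguration(Collections.singletonList(conf));

Pipeline p = Pipeline.create(options);
p.apply("ReadLines", TextIO.read().from("hdfs://localhost:9000/tmp/words"))
 .apply(TextIO.write().to("hdfs://localhost:9000/tmp/hdfsout"));
p.run();
{code}

This fragment depends on the Beam and Hadoop libraries being on the classpath, so it is a configuration sketch rather than a standalone program.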

It works in Flink local mode with this command:

{code:java}
mvn exec:java -Dexec.mainClass=com.joe.FlinkWithHDFS -Pflink-runner \
    -Dexec.args="--runner=FlinkRunner \
    --filesToStage=target/flinkBeam-2.2.0-SNAPSHOT-shaded.jar"
{code}

but it does not work in cluster mode:

{code:java}
mvn exec:java -Dexec.mainClass=com.joe.FlinkWithHDFS -Pflink-runner \
    -Dexec.args="--runner=FlinkRunner \
    --filesToStage=target/flinkBeam-2.2.0-SNAPSHOT-shaded.jar \
    --flinkMaster=localhost:6123"
{code}

It seems the Flink cluster treats HDFS paths as belonging to the local file system.
The input log from flink-jobmanager.log is:

{code:java}
2017-09-27 20:17:37,962 INFO  org.apache.flink.runtime.jobmanager.JobManager             
  - Successfully ran initialization on master in 136 ms.
2017-09-27 20:17:37,968 INFO  org.apache.beam.sdk.io.FileBasedSource                     
  - Filepattern hdfs://localhost:9000/tmp/words2 matched 0 files with total size 0
2017-09-27 20:17:37,968 INFO  org.apache.beam.sdk.io.FileBasedSource                     
  - Splitting filepattern hdfs://localhost:9000/tmp/words2 into bundles of size 0 took 0 ms
and produced 0 files and 0 bundles

{code}

The output error message is:

{code:java}
Caused by: java.lang.ClassCastException: org.apache.beam.sdk.io.hdfs.HadoopResourceId
cannot be cast to org.apache.beam.sdk.io.LocalResourceId
        at org.apache.beam.sdk.io.LocalFileSystem.create(LocalFileSystem.java:77)
        at org.apache.beam.sdk.io.FileSystems.create(FileSystems.java:256)
        at org.apache.beam.sdk.io.FileSystems.create(FileSystems.java:243)
        at org.apache.beam.sdk.io.FileBasedSink$Writer.open(FileBasedSink.java:922)
        at org.apache.beam.sdk.io.FileBasedSink$Writer.openUnwindowed(FileBasedSink.java:884)
        at org.apache.beam.sdk.io.WriteFiles.finalizeForDestinationFillEmptyShards(WriteFiles.java:909)
        at org.apache.beam.sdk.io.WriteFiles.access$900(WriteFiles.java:110)
        at org.apache.beam.sdk.io.WriteFiles$2.processElement(WriteFiles.java:858)

{code}
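Reading the stack trace: {{FileSystems.create}} dispatches on the scheme of the {{ResourceId}}, and here {{LocalFileSystem.create}} received a {{HadoopResourceId}}. That suggests the scheme-to-FileSystem mapping in the cluster JVM never registered {{HadoopFileSystem}} for the {{hdfs}} scheme, so writes fell back to the local file system. A hedged sketch of the kind of call that (re)builds that registry from options; {{FileSystems.setDefaultPipelineOptions}} is a real Beam 2.x API, but whether invoking it resolves this particular cluster setup is an assumption:

{code:java}
// Assumption: 'options' carries HadoopFileSystemOptions with the HDFS
// configuration set. This call rebuilds the scheme -> FileSystem registry
// in the current JVM, so hdfs:// paths resolve to HadoopFileSystem
// instead of falling back to LocalFileSystem.
FileSystems.setDefaultPipelineOptions(options);
{code}

This is a one-line fragment that depends on the Beam SDK being on the classpath, shown here only to illustrate where the registration happens.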

Can somebody help me? I've tried every way I know and just can't work it out [cry]
A possibly related issue: https://issues.apache.org/jira/browse/BEAM-2457






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
