flink-user mailing list archives

From Márton Balassi <mbala...@apache.org>
Subject Re: HDFS Clustering
Date Tue, 24 Feb 2015 10:21:48 GMT
Hey,

Just add the right prefix pointing to your HDFS file path:

bin/flink run -v flink-java-examples-*-WordCount.jar \
  hdfs://hostname:port/PATH/TO/INPUT hdfs://hostname:port/PATH/TO/OUTPUT
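If the input file only exists on the local filesystem, it has to be staged into HDFS first. A minimal sketch of the full workflow is below; the `namenode` hostname, port 9000, and all paths are placeholders (9000 is a common default for `fs.default.name` in Hadoop 1.x, but check your core-site.xml):

```shell
# Copy the local input file into HDFS (paths are illustrative).
hadoop fs -mkdir /user/myuser/testWordCount
hadoop fs -put hamlet.txt /user/myuser/testWordCount/hamlet.txt

# Verify the file landed where expected.
hadoop fs -ls /user/myuser/testWordCount

# Run the job against the HDFS paths; adjust hostname and port to your setup.
bin/flink run -v flink-java-examples-*-WordCount.jar \
  hdfs://namenode:9000/user/myuser/testWordCount/hamlet.txt \
  hdfs://namenode:9000/user/myuser/testWordCount/output.txt
```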

Best,

Marton

On Tue, Feb 24, 2015 at 11:13 AM, Giacomo Licari <giacomo.licari@gmail.com>
wrote:

> Hi guys,
> I'm Giacomo from Italy, I'm a newbie with Flink.
>
> I set up a cluster with Hadoop 1.2 and Flink.
>
> I would like to ask you how to run the WordCount example, taking the
> input file from HDFS (for example myuser/testWordCount/hamlet.txt) and
> putting the output also inside HDFS (for example
> myuser/testWordCount/output.txt).
>
> I successfully ran the example on my local filesystem; I would like to
> test it with HDFS.
>
> Thanks a lot guys,
> Giacomo
>
