flink-user mailing list archives

From "Matthias J. Sax" <mj...@informatik.hu-berlin.de>
Subject Re: question on flink-storm-examples
Date Tue, 01 Sep 2015 20:10:27 GMT
Hi Jerry,

WordCount-StormTopology uses a hard-coded degree of parallelism (dop) of 4. If you start
Flink in local mode (bin/start-local-streaming.sh), you need to increase
the number of task slots to at least 4 in conf/flink-conf.yaml before
starting Flink -> taskmanager.numberOfTaskSlots
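
The corresponding entry in conf/flink-conf.yaml looks like this (4 is the
minimum for this example; a larger value works as well):

taskmanager.numberOfTaskSlots: 4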

You should actually see the following exception in
log/flink-...-jobmanager-...log

> NoResourceAvailableException: Not enough free slots available to run the job. You can
> decrease the operator parallelism or increase the number of slots per TaskManager in the configuration.

WordCount-StormTopology does use StormWordCountRemoteBySubmitter
internally. So, you do use it already ;)
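
In case it helps, that example boils down to roughly the following (a sketch
from memory, so double check against your checkout: the package names below
are the 0.9 ones and were renamed in 0.10-SNAPSHOT, and the spout/bolt
classes are placeholders for the ones used in the example):

import backtype.storm.Config;
import backtype.storm.tuple.Fields;
import org.apache.flink.stormcompatibility.api.FlinkSubmitter;
import org.apache.flink.stormcompatibility.api.FlinkTopologyBuilder;

public class SubmitterSketch {
  public static void main(String[] args) throws Exception {
    // Build the topology with the regular Storm builder API.
    FlinkTopologyBuilder builder = new FlinkTopologyBuilder();
    builder.setSpout("source", new MySentenceSpout());                        // placeholder spout
    builder.setBolt("tokenizer", new MyTokenizerBolt())                       // placeholder bolt
           .shuffleGrouping("source");
    builder.setBolt("counter", new MyCounterBolt())                           // placeholder bolt
           .fieldsGrouping("tokenizer", new Fields("word"));

    Config conf = new Config();
    conf.put(Config.NIMBUS_HOST, "localhost");    // JobManager host instead of Nimbus
    conf.put(Config.NIMBUS_THRIFT_PORT, 6123);    // JobManager port

    // Submits the Storm topology as a Flink job to the cluster configured above.
    FlinkSubmitter.submitTopology("WordCount", conf, builder.createTopology());
  }
}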

I am not sure what you mean by "get rid of KafkaSource"? It is still in
the code base. Which version do you use? In flink-0.10-SNAPSHOT it is
located in the submodule "flink-connector-kafka" (which is a submodule of
"flink-streaming-connector-parent" -- which is a submodule of
"flink-streaming-parent").


-Matthias


On 09/01/2015 09:40 PM, Jerry Peng wrote:
> Hello,
> 
> I have some questions regarding how to run one of the
> flink-storm-examples, the WordCountTopology.  How should I run the job? 
> On GitHub it says I should just execute
> bin/flink run example.jar but when I execute:
> 
> bin/flink run WordCount-StormTopology.jar 
> 
> nothing happens.  What am I doing wrong, and how can I run the
> WordCountTopology via StormWordCountRemoteBySubmitter?
> 
> Also, why did you guys get rid of the KafkaSource class?  What is the API
> now for subscribing to a Kafka source?
> 
> Best,
> 
> Jerry

