From: Daniel Blazevski
Subject: "Not enough free slots available to run the job" for word count example
Date: Sun, 13 Sep 2015 10:40:16 -0400
To: user@flink.apache.org

Hello,

I am new to Flink. I set up a Flink cluster on 4 m4.large Amazon EC2 instances and set the following in flink-conf.yaml:

jobmanager.heap.mb: 4000
taskmanager.heap.mb: 5000
taskmanager.numberOfTaskSlots: 2
parallelism.default: 8

In the dashboard on port 8081, it shows 4 Task Managers and 5 Processing Slots (I'm not sure whether "5" is OK here?).
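For reference, here is my understanding of the slot arithmetic (my own sanity check, not a Flink utility; the numbers are taken from my configuration above):

```shell
# My own sanity check, not a Flink tool: with 4 TaskManagers and
# taskmanager.numberOfTaskSlots: 2, I would expect the dashboard to
# report 4 * 2 = 8 processing slots rather than the 5 it shows.
taskmanagers=4
slots_per_taskmanager=2   # taskmanager.numberOfTaskSlots
echo $(( taskmanagers * slots_per_taskmanager ))   # prints 8
```

So either one TaskManager registered with fewer slots than configured, or one worker did not pick up the updated flink-conf.yaml.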
I then tried to execute:

./bin/flink run ./examples/flink-java-examples-0.9.1-WordCount.jar

and got the following error message:

Error: java.lang.IllegalStateException: Could not schedule consumer vertex CHAIN Reduce (SUM(1), at main(WordCount.java:72) -> FlatMap (collect()) (7/8)
	at org.apache.flink.runtime.executiongraph.Execution$3.call(Execution.java:482)
	at org.apache.flink.runtime.executiongraph.Execution$3.call(Execution.java:472)
	at akka.dispatch.Futures$$anonfun$future$1.apply(Future.scala:94)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
	at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job. You can decrease the operator parallelism or increase the number of slots per TaskManager in the configuration. Task to schedule: < Attempt #0 (CHAIN Reduce (SUM(1), at main(WordCount.java:72) -> FlatMap (collect()) (7/8)) @ (unassigned) - [SCHEDULED] > with groupID < 6adebf08c73e7f3adb6ea20f8950d627 > in sharing group < SlotSharingGroup [02cac542946daf808c406c2b18e252e0, d883aa4274b6cef49ab57aaf3078147c, 6adebf08c73e7f3adb6ea20f8950d627] >.
Resources available to scheduler: Number of instances=4, total number of slots=5, available slots=0
	at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:251)
	at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:126)
	at org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:271)
	at org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:430)
	at org.apache.flink.runtime.executiongraph.Execution$3.call(Execution.java:478)
	... 9 more

More details about my setup: I am running Ubuntu on the master node and 3 data nodes. If it matters, I already had Hadoop 2.7.1 running, and I downloaded and installed the latest version of Flink, which is technically built for Hadoop 2.7.0.

Thanks,
Dan
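P.S. As a quick sanity check before resubmitting, I sketched the comparison the scheduler appears to be making (my own script, not part of Flink; I believe the -p flag of bin/flink run overrides the default job parallelism):

```shell
# My own sketch, not part of Flink: compare the configured default
# parallelism (8) against the slots the scheduler reported (5).
available_slots=5
parallelism=8
if [ "$parallelism" -gt "$available_slots" ]; then
  echo "parallelism $parallelism exceeds $available_slots slots"
  # A possible workaround: resubmit with a lower parallelism, e.g.
  # ./bin/flink run -p 5 ./examples/flink-java-examples-0.9.1-WordCount.jar
fi
```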