Subject: Re: Too few memory segments provided exception
From: Ufuk Celebi
Date: Mon, 20 Jul 2015 16:20:50 +0200
To: user@flink.apache.org

BTW we should add an entry for this to the FAQ and point to the configuration or FAQ entry in the exception message.

On 20 Jul 2015, at 15:15, Vasiliki Kalavri wrote:

> Hi Shivani,
>
> why are you using a vertex-centric iteration to compute the approximate Adamic-Adar? It's not an iterative computation :)
>
> In fact, it should be as complex (in terms of operators) as the exact Adamic-Adar, only more efficient because of the different neighborhood representation. Are you having the same problem with the exact computation?
>
> Cheers,
> Vasia.
>
> On 20 July 2015 at 14:41, Maximilian Michels wrote:
> Hi Shivani,
>
> The issue is that, by the time the hash join is executed, the MutableHashTable cannot allocate enough memory segments. That means your other operators are occupying them. It is expected that this also occurs on Travis, because the workers there have limited memory as well.
>
> Till suggested changing the memory fraction through the ExecutionEnvironment. Can you try that?
>
> Cheers,
> Max
>
> On Mon, Jul 20, 2015 at 2:23 PM, Shivani Ghatge wrote:
> Hello Maximilian,
>
> Thanks for the suggestion. I will use it to check the program. But when I create a PR for the same implementation with a test, I get the same error on the Travis build as well. What would be the solution for that?
>
> Here is my PR: https://github.com/apache/flink/pull/923
> And here is the Travis build status: https://travis-ci.org/apache/flink/builds/71695078
>
> Also, in the IDE it works fine in collection execution mode.
>
> Thanks and Regards,
> Shivani
>
> On Mon, Jul 20, 2015 at 2:14 PM, Maximilian Michels wrote:
> Hi Shivani,
>
> Flink doesn't have enough memory to perform a hash join. You need to provide Flink with more memory. You can either increase the "taskmanager.heap.mb" config variable or set "taskmanager.memory.fraction" to some value greater than 0.7 and smaller than 1.0. The first config variable allocates more overall memory for Flink; the latter changes the ratio between Flink-managed memory (e.g. for the hash join) and user memory (for your functions and Gelly's code).
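For the archives, here is a minimal sketch of what "changing the memory fraction through the ExecutionEnvironment" can look like for a local run. This assumes the 0.9-era API (Configuration, ConfigConstants, createLocalEnvironment(Configuration)); the 0.8f value and the class name are only examples, not a verified recipe.

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.ConfigConstants;
import org.apache.flink.configuration.Configuration;

public class ManagedMemoryFractionSketch {

    public static void main(String[] args) throws Exception {
        // Give Flink's managed memory (used by hash joins, sorts, etc.) a
        // larger share of the TaskManager heap. 0.8f is an example value:
        // greater than the 0.7 default, smaller than 1.0, as Max suggests.
        Configuration config = new Configuration();
        config.setFloat(ConfigConstants.TASK_MANAGER_MEMORY_FRACTION_KEY, 0.8f);

        // Local environment that picks up the custom configuration.
        ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment(config);

        // ... build and run the Gelly/Adamic-Adar program on `env` ...
        env.fromElements(1L, 2L, 3L).print();
    }
}

The same key ("taskmanager.memory.fraction") can also be set in conf/flink-conf.yaml for a standalone setup, which is what the next paragraph is about.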
> If you run this inside an IDE, the memory is configured automatically and you don't have control over that at the moment. You could, however, start a local cluster (./bin/start-local.sh) after adjusting your flink-conf.yaml, and run your programs against that configured cluster. You can do that either from your IDE using a RemoteEnvironment or by submitting the packaged JAR to the local cluster using the command-line tool (./bin/flink).

(A rough sketch of this local-cluster/RemoteEnvironment route is at the end of this mail.)

> Hope that helps.
>
> Cheers,
> Max
>
> On Mon, Jul 20, 2015 at 2:04 PM, Shivani Ghatge wrote:
> Hello,
>
> I am working on a problem which implements the Adamic-Adar algorithm using Gelly. I am running into this exception for all the joins (including the ones that are part of the reduceOnNeighbors function):
>
> Too few memory segments provided. Hash Join needs at least 33 memory segments.
>
> The problem persists even when I comment out some of the joins.
>
> Even after using
>
> edg = edg.join(graph.getEdges(), JoinOperatorBase.JoinHint.BROADCAST_HASH_SECOND).where(0, 1).equalTo(0, 1).with(new JoinEdge());
>
> as suggested by @AndraLungu, the problem persists.
>
> The code is:
>
> DataSet<Tuple2<Long, Long>> degrees = graph.getDegrees();
>
> // get the neighbors of each vertex into a HashSet as its value
> computedNeighbors = graph.reduceOnNeighbors(new GatherNeighbors(), EdgeDirection.ALL);
>
> // get vertices with updated values for the final graph, which will be used to get the Adamic-Adar edges
> Vertices = computedNeighbors.join(degrees, JoinOperatorBase.JoinHint.BROADCAST_HASH_FIRST).where(0).equalTo(0).with(new JoinNeighborDegrees());
>
> Graph<Long, Tuple2<HashSet<Long>, List<Edge<Long, Double>>>, Double> updatedGraph =
>     Graph.fromDataSet(Vertices, edges, env);
>
> // configure the vertex-centric iteration
> VertexCentricConfiguration parameters = new VertexCentricConfiguration();
> parameters.setName("Find Adamic Adar Edge Weights");
> parameters.setDirection(EdgeDirection.ALL);
>
> // run the vertex-centric iteration to get the Adamic-Adar edges into the vertex value
> updatedGraph = updatedGraph.runVertexCentricIteration(new GetAdamicAdarEdges(), new NeighborsMessenger(), 1, parameters);
>
> // extract the vertices of the updated graph
> DataSet<Vertex<Long, Tuple2<HashSet<Long>, List<Edge<Long, Double>>>>> vertices = updatedGraph.getVertices();
>
> // extract the list of edges from the vertex values
> DataSet<Edge<Long, Double>> edg = vertices.flatMap(new GetAdamicList());
>
> // partial weights for the edges are added
> edg = edg.groupBy(0, 1).reduce(new AdamGroup());
>
> // the graph is updated with the Adamic-Adar edges
> edg = edg.join(graph.getEdges(), JoinOperatorBase.JoinHint.BROADCAST_HASH_SECOND).where(0, 1).equalTo(0, 1).with(new JoinEdge());
>
> Any idea how I could tackle this exception?
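And, to close the loop on Max's other suggestion above: a rough sketch of running the same program against a locally started, explicitly configured cluster via a RemoteEnvironment. The host, port, and jar path are placeholders for your setup (6123 is the default JobManager RPC port), and the flink-conf.yaml values in the comment are example numbers, not tuned recommendations.

import org.apache.flink.api.java.ExecutionEnvironment;

public class RemoteSubmissionSketch {

    public static void main(String[] args) throws Exception {
        // Assumes a local cluster started with ./bin/start-local.sh after
        // editing conf/flink-conf.yaml, e.g.:
        //   taskmanager.heap.mb: 2048
        //   taskmanager.memory.fraction: 0.8
        //
        // "localhost", 6123, and the jar path are placeholders.
        ExecutionEnvironment env = ExecutionEnvironment.createRemoteEnvironment(
                "localhost", 6123, "target/adamic-adar-job.jar");

        // Build the Gelly program on `env` as usual; execution then happens
        // on the configured local cluster instead of the IDE-embedded one.
        env.fromElements(1L, 2L, 3L).print();
    }
}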