From: Vasiliki Kalavri
Date: Thu, 6 Oct 2016 15:22:19 +0200
Subject: Re: Flink Gelly
To: dev@flink.apache.org

Hi Dennis,

can you give us some details about your setup? e.g. where you are running
your job, your input size, the configured memory, etc. It would also be
helpful if you could share your code. Getting an out of memory error with
just 100 nodes seems weird.

Best,
-Vasia.
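A note on the configured memory: when the job runs from the IDE, the iteration
only gets the managed memory of the embedded mini-cluster, and that pool is
where the solution set lives. Below is a minimal sketch of how a local run
could be given more of it; the key name taskmanager.memory.size and the
createLocalEnvironment(Configuration) overload are assumptions taken from the
1.x configuration docs, not from Dennis' code, so please verify them for the
Flink version in use.

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class LocalGellySetup {

    // Hypothetical helper: builds a local environment whose embedded task
    // manager gets a fixed amount of managed memory.
    public static ExecutionEnvironment createLocalEnvironmentWithMoreMemory() {
        Configuration conf = new Configuration();
        // Managed memory in MB; this is the pool that backs the iteration's
        // solution-set hash table (key name per the 1.x docs, to be checked).
        conf.setLong("taskmanager.memory.size", 512);
        return ExecutionEnvironment.createLocalEnvironment(conf);
    }
}

On a standalone cluster the same key would go into flink-conf.yaml instead,
next to taskmanager.heap.mb for the overall task manager heap.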
On 6 October 2016 at 13:29, wrote:

> Dear ladies and gentlemen,
>
> I have run into a problem using Gelly in Flink. Currently I am loading a
> Virtuoso graph into Flink's Gelly, and I want to analyze it for the
> different paths one can take to link the different nodes. For this I am
> using the ScatterGatherIteration. However, my code only works with about
> ten to twenty nodes. When I try to load a hundred nodes, the following
> error occurs:
>
> Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply$mcV$sp(JobManager.scala:822)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>     at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$8.apply(JobManager.scala:768)
>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>     at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>     at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.RuntimeException: Memory ran out. Compaction failed.
> numPartitions: 32 minPartition: 1 maxPartition: 431 number of overflow
> segments: 0 bucketSize: 251 Overall memory: 45613056 Partition memory:
> 33685504 Message: null
>     at org.apache.flink.runtime.operators.hash.CompactingHashTable.insertRecordIntoPartition(CompactingHashTable.java:457)
>     at org.apache.flink.runtime.operators.hash.CompactingHashTable.insertOrReplaceRecord(CompactingHashTable.java:392)
>     at org.apache.flink.runtime.iterative.io.SolutionSetUpdateOutputCollector.collect(SolutionSetUpdateOutputCollector.java:54)
>     at org.apache.flink.graph.spargel.GatherFunction.setNewVertexValue(GatherFunction.java:123)
>     at org.apache.flink.quickstart.PathRank$PathUpdateFunction.updateVertex(PathRank.java:357)
>     at org.apache.flink.graph.spargel.ScatterGatherIteration$GatherUdfSimpleVV.coGroup(ScatterGatherIteration.java:389)
>     at org.apache.flink.runtime.operators.CoGroupWithSolutionSetSecondDriver.run(CoGroupWithSolutionSetSecondDriver.java:218)
>     at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:486)
>     at org.apache.flink.runtime.iterative.task.AbstractIterativeTask.run(AbstractIterativeTask.java:146)
>     at org.apache.flink.runtime.iterative.task.IterationTailTask.run(IterationTailTask.java:107)
>     at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:351)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:584)
>     at java.lang.Thread.run(Thread.java:745)
>
> I tried to google it a bit, and this problem seems to occur often when
> using Gelly. I hope you have some ideas or approaches for how I can
> handle this error.
>
> Thank you in advance!
> All the best,
> Dennis
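The "Memory ran out. Compaction failed." message comes from the
CompactingHashTable that holds the solution set of the delta iteration backing
the scatter-gather run, i.e. the managed memory reserved for it filled up.
Besides giving the task managers more memory, a workaround often suggested for
Gelly iterations is to keep the solution set on the heap instead. Below is a
minimal sketch, assuming the Flink 1.1 Gelly API: the ScatterGatherConfiguration
overload of runScatterGatherIteration, its argument order, and the
setSolutionSetUnmanagedMemory switch should all be checked against the docs of
the version in use, and the scatter/gather arguments merely stand in for
Dennis' own PathRank classes.

import org.apache.flink.graph.Graph;
import org.apache.flink.graph.spargel.GatherFunction;
import org.apache.flink.graph.spargel.ScatterFunction;
import org.apache.flink.graph.spargel.ScatterGatherConfiguration;

public class UnmanagedSolutionSetExample {

    // Hypothetical helper: runs a scatter-gather iteration with the solution
    // set kept as an object map on the heap instead of in managed memory.
    public static <K, VV, EV, M> Graph<K, VV, EV> runWithUnmanagedSolutionSet(
            Graph<K, VV, EV> graph,
            ScatterFunction<K, VV, M, EV> scatter,
            GatherFunction<K, VV, M> gather,
            int maxIterations) {

        ScatterGatherConfiguration parameters = new ScatterGatherConfiguration();
        // Avoid the fixed-size CompactingHashTable that reported
        // "Compaction failed", at the price of heap usage and GC pressure.
        parameters.setSolutionSetUnmanagedMemory(true);

        // Overload that takes a configuration object; scatter-before-gather
        // argument order as in the 1.1 docs (please double-check).
        return graph.runScatterGatherIteration(scatter, gather, maxIterations, parameters);
    }
}

This only trades managed memory for heap, so for larger graphs raising the
task managers' managed memory remains the more sustainable fix.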