giraph-user mailing list archives

From José Luis Larroque <larroques...@gmail.com>
Subject Re: Problem Running Giraph application in a cluster
Date Fri, 17 Feb 2017 12:18:16 GMT
Maybe your problem isn't related to Giraph and YARN; take a look at this:
http://stackoverflow.com/questions/22489398/unsupported-major-minor-version-52-0

You should check which Java version you used to compile your Giraph application and
compare it with the Java version that the cluster uses at runtime.
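
For example, a quick way to check this (just a sketch; you can get the same
information with "javap -verbose YourClass.class | grep major") is to read the
class-file version of the compiled BetweennessComputation and compare it with
the JVM that the node managers run. Major version 52 means the class was
compiled for Java 8, 51 means Java 7:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Minimal sketch: print a class file's major/minor version and the
    // version of the JVM that runs this check.
    public class ClassVersionCheck {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                in.readInt();                        // magic number 0xCAFEBABE
                int minor = in.readUnsignedShort();
                int major = in.readUnsignedShort();
                System.out.println("class file version: " + major + "." + minor);
            }
            System.out.println("runtime java.version: " + System.getProperty("java.version"));
        }
    }

If the class file says 52 but the cluster's JVM is Java 7, recompiling with
-source/-target 1.7 (or moving the cluster to Java 8) should make the
UnsupportedClassVersionError go away.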

Bye

-- 
*José Luis Larroque*
University Programmer Analyst - Facultad de Informática - UNLP
Java and .NET Developer at LIFIA

2017-02-17 2:56 GMT-03:00 Sai Ganesh Muthuraman <saiganeshpsn@gmail.com>:

> Hi Jose,
>
> In fact, this is the running status of the application:
>
> 17/02/16 21:52:27 INFO yarn.GiraphYarnClient: Giraph:
> hu.elte.inf.mbalassi.msc.giraph.betweenness.BetweennessComputation,
> Elapsed: 0.86 secs
> 17/02/16 21:52:27 INFO yarn.GiraphYarnClient:
> appattempt_1487310728133_0001_000001, State: ACCEPTED, Containers used: 1
> 17/02/16 21:52:31 INFO yarn.GiraphYarnClient: Giraph:
> hu.elte.inf.mbalassi.msc.giraph.betweenness.BetweennessComputation,
> Elapsed: 4.87 secs
> 17/02/16 21:52:31 INFO yarn.GiraphYarnClient:
> appattempt_1487310728133_0001_000001, State: RUNNING, Containers used: 3
> 17/02/16 21:52:35 INFO yarn.GiraphYarnClient: Cleaning up HDFS distributed
> cache directory for Giraph job.
> 17/02/16 21:52:35 INFO yarn.GiraphYarnClient: Completed Giraph:
> hu.elte.inf.mbalassi.msc.giraph.betweenness.BetweennessComputation: FAILED,
> total running time: 0 minutes, 8 seconds.
>
> What I had sent before were the logs.
>
>
>  Sai Ganesh
>
>
>
> On Feb 17, 2017, at 10:53 AM, Sai Ganesh Muthuraman <
> saiganeshpsn@gmail.com> wrote:
>
> Hi Jose,
>
>
> As I said before, I am using the XSEDE Comet cluster, which has the
> following specifications:
>
> *Number of cores per node - 24*
> *Memory per node  - 128 GB*
> The file system is NFS, so there is no notion of a number of disks per
> machine.
> I went through the previous discussions, but I could not get any clarity
> with respect to my current needs.
> I followed this link:
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.3/bk_installing_manually_book/content/determine-hdp-memory-config.html
> but I still got an exception like this (found in the userlogs):
>
> 2017-02-16 20:50:19,002 ERROR [main] yarn.GiraphYarnTask
> (GiraphYarnTask.java:main(187)) - GiraphYarnTask threw a top-level
> exception, failing task
> java.lang.UnsupportedClassVersionError:
> hu/elte/inf/mbalassi/msc/giraph/betweenness/BetweennessComputation :
> Unsupported major.minor version 52.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>         at java.security.SecureClassLoader.defineClass(
> SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>         at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:278)
>         at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(
> Configuration.java:2013)
>         at org.apache.hadoop.conf.Configuration.getClassByName(
> Configuration.java:1978)
>         at org.apache.hadoop.conf.Configuration.getClass(
> Configuration.java:2072)
>         at org.apache.hadoop.conf.Configuration.getClass(
> Configuration.java:2098)
>         at org.apache.giraph.conf.ClassConfOption.get(
> ClassConfOption.java:128)
>         at org.apache.giraph.utils.ConfigurationUtils.getTypesHolderClass(
> ConfigurationUtils.java:178)
>         at org.apache.giraph.conf.GiraphTypes.readFrom(
> GiraphTypes.java:103)
>         at org.apache.giraph.conf.GiraphClasses.<init>(
> GiraphClasses.java:161)
>         at org.apache.giraph.conf.ImmutableClassesGiraphConfigur
> ation.<init>(ImmutableClassesGiraphConfiguration.java:138)
>         at org.apache.giraph.yarn.GiraphYarnTask.<init>(
> GiraphYarnTask.java:76)
>         at org.apache.giraph.yarn.GiraphYarnTask.main(
> GiraphYarnTask.java:182)
>
> In the YARN node manager logs, this is what I found:
>
> INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
> Memory usage of ProcessTree 4819 for container-id
> container_1487306992058_0001_01_000002: 51.3 MB of 10 GB physical memory
> used; 1.8 GB of 40 GB virtual memory used
> 2017-02-16 20:50:19,027 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exit code from container container_1487306992058_0001_01_000002 is : 2
> 2017-02-16 20:50:19,028 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor:
> Exception from container-launch with container ID:
> container_1487306992058_0001_01_000002 and exit code: 2
> ExitCodeException exitCode=2:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
>         at org.apache.hadoop.util.Shell.run(Shell.java:455)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
> Shell.java:715)
>         at org.apache.hadoop.yarn.server.nodemanager.
> DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:
> 211)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.
> launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> 2017-02-16 20:50:19,031 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
> Exception from container-launch.
>
> I really have no idea what the exact problem is.
>
> Sai Ganesh
>
>
>
> On Feb 17, 2017, at 6:18 AM, José Luis Larroque <user@giraph.apache.org>
> wrote:
>
> Hi Sai, your question is basically "the question" when it comes to using Giraph.
>
> Those resources depend on how much memory you have on each node, on whether
> the cluster is being used by other users at the same time, on the type of
> program you are running, and so on. Virtual memory can easily be increased,
> but the physical memory limit is indeed a problem.
>
> I recommend that you post how much memory each node of your cluster has
> available to YARN; then maybe someone can give you more precise advice on
> how to tune those parameters.
>
> You should look at some old discussions about those values, like this one:
> https://www.mail-archive.com/user@giraph.apache.org/msg02628.html
>
> Bye
>
> --
> *José Luis Larroque*
> University Programmer Analyst - Facultad de Informática - UNLP
> Java and .NET Developer at LIFIA
>
> 2017-02-16 7:32 GMT-03:00 Sai Ganesh Muthuraman <saiganeshpsn@gmail.com>:
>
> Hi,
>
> I am trying to run a Giraph application (computing betweenness centrality)
> on the XSEDE Comet cluster, but every time I get an error related to
> container launch: either the virtual memory or the physical memory runs out.
>
>
> To avoid this, it looks like the following parameters have to be set:
> i) The maximum memory yarn can utilize on every node
> ii) Breakup of total resources available into containers
> iii) Physical RAM limit for each Map and Reduce task
> iv) The JVM heap size limit for each task
> v) The amount of virtual memory each task will get
>
> If I were to use *N nodes* for computation and I want to use *W workers*,
> what should the following parameters be? (A rough sketch of how I think
> these values relate follows the two lists below.)
>
> In mapred-site.xml
> mapreduce.map.memory.mb
> mapreduce.reduce.memory.mb
> mapreduce.map.cpu.vcores
> mapreduce.reduce.cpu.vcores
>
> In yarn-site.xml
> yarn.nodemanager.resource.memory-mb
> yarn.scheduler.minimum-allocation-mb
> yarn.scheduler.minimum-allocation-vcores
> yarn.scheduler.maximum-allocation-vcores
> yarn.nodemanager.resource.cpu-vcores
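>
> As a rough, untested sketch of how I currently understand these values
> relating to each other (the reserved-memory and containers-per-node numbers
> below are my own assumptions, not anything measured on Comet):
>
>     // Back-of-the-envelope calculator for the properties listed above.
>     // Heuristics: reserve some memory for the OS, split the rest into
>     // equally sized containers, keep each JVM heap below its container.
>     public class YarnMemorySketch {
>         public static void main(String[] args) {
>             int nodeMemoryMb = 128 * 1024;   // 128 GB per node
>             int coresPerNode = 24;
>             int reservedOsMb = 16 * 1024;    // assumption: ~16 GB for OS/NFS
>             int containersPerNode = 8;       // assumption, just a placeholder
>
>             int yarnMemoryMb = nodeMemoryMb - reservedOsMb;     // memory YARN may use
>             int containerMb = yarnMemoryMb / containersPerNode; // per-container limit
>             int heapMb = (int) (containerMb * 0.8);             // JVM heap below the limit
>             int vcores = coresPerNode / containersPerNode;
>
>             System.out.println("yarn.nodemanager.resource.memory-mb  = " + yarnMemoryMb);
>             System.out.println("yarn.nodemanager.resource.cpu-vcores = " + coresPerNode);
>             System.out.println("yarn.scheduler.minimum-allocation-mb = " + containerMb);
>             System.out.println("yarn.scheduler.maximum-allocation-mb = " + yarnMemoryMb);
>             System.out.println("mapreduce.map.memory.mb     = " + containerMb);
>             System.out.println("mapreduce.reduce.memory.mb  = " + containerMb);
>             System.out.println("mapreduce.map.cpu.vcores    = " + vcores);
>             System.out.println("mapreduce.reduce.cpu.vcores = " + vcores);
>             System.out.println("JVM heap per task (-Xmx)    = " + heapMb + "m");
>         }
>     }
>
> I am not sure whether 8 containers per node is a sensible split for Giraph
> workers; that number is only a placeholder.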
>
> Sai Ganesh
>
>
>
>
>
>
>
>
