incubator-s4-user mailing list archives

From Matthieu Morel <mmo...@apache.org>
Subject Re: GC overhead limit exceeded
Date Thu, 18 Oct 2012 09:32:20 GMT
On 10/18/12 10:15 AM, JiHyoun Park wrote:
> Hi, Matthieu
>
> Can I ask how the integration with Yarn is going?
> When can we expect to see the complete patch?


We are working on that right now and we plan to have a first version soon.

The relevant ticket is https://issues.apache.org/jira/browse/S4-25

Matthieu

>
> Best Regards
> Jihyoun Park
>
>
>
> On Thu, Oct 18, 2012 at 3:56 PM, Matthieu Morel <mmorel@apache.org> wrote:
>
>     On 10/18/12 9:27 AM, Dingyu Yang wrote:
>
>         I don't know how and where to define the partitions and clusters.
>         Every time, I need to start the zkServer and create the clusters like this:
>         ./s4 newCluster -c=testCluster1 -nbTasks=5 -flp=12000 -zk=myMaster
>
>
>     --> testCluster1 is the name of the cluster and nbTasks is the
>     number of partitions.
>
>
>         ./s4 newCluster -c=testCluster2 -nbTasks=2 -flp=13000 -zk=myMaster
>         then start all the nodes (seven nodes)
>         ./s4 node -c=testCluster1 -zk=myMaster
>         ./s4 node -c=testCluster1 -zk=myMaster
>         ...
>         ./s4 node -c=testCluster2 -p=s4.adapter.output.stream=datarow -zk=myMaster
>         ..
>         And deploy the s4r programs:
>         ./s4 deploy -s4r=../build/libs/app.s4r -c=testCluster1 -appName=app -zk=myMaster
>         ...
>
>         Then, when I want to change the app because I modified the program,
>         I have to repeat all the previous steps.
>
>
>     Yes, and I suggest scripting the operations. Note that we are also
>     working on integrating with Yarn (the new Hadoop), in order to ease
>     deployment (provided you run a Yarn cluster).
>
>     Matthieu
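The suggestion to script the operations could look like the sketch below. The cluster names, ports, and paths are taken from the commands earlier in this thread, not S4 defaults; the `run` helper defaults to a dry run that only prints each command, so the script can be inspected before it touches a real cluster.

```shell
#!/bin/sh
# Sketch of a setup/redeploy script for the steps discussed above.
# Cluster names, ports, and the s4r path are assumptions from this thread.
ZK=myMaster
: "${S4_DRY_RUN:=1}"   # defaults to a dry run; unset to actually execute

run() {
  if [ -n "$S4_DRY_RUN" ]; then
    echo "$@"          # dry run: print the command instead of executing it
  else
    "$@"
  fi
}

# 1. define the logical cluster (name + number of partitions)
run ./s4 newCluster -c=testCluster1 -nbTasks=5 -flp=12000 -zk=$ZK

# 2. start one S4 node per partition (in practice each node blocks,
#    so you would background them or start them on separate machines)
i=0
while [ $i -lt 5 ]; do
  run ./s4 node -c=testCluster1 -zk=$ZK
  i=$((i + 1))
done

# 3. deploy the packaged app
run ./s4 deploy -s4r=../build/libs/app.s4r -c=testCluster1 -appName=app -zk=$ZK
```

Rerunning the script after rebuilding the s4r is then a single command instead of repeating each step by hand.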
>
>
>
>
>         So, how can I define the partitions?
>
>
>         2012/10/18 Matthieu Morel <mmorel@apache.org>
>
>
>
>              On Thu, Oct 18, 2012 at 8:16 AM, Dingyu Yang
>              <yangdingyu@gmail.com> wrote:
>
>                  Hi, all
>                  When my adapter sends large data to the app, the adapter
>                  node raises an error like the one below.
>                  Maybe the memory is limited and the following processing
>                  is slow?
>
>
>              you'd have to be more specific about your app. What is the
>              approximate size of your messages? What is the available
>         memory in
>              the JVM? How many messages are you creating per second and per
>              adapter node?
>              Note that the culprit may not be the serialization but
>         could also be
>              intermediate objects created by Netty, the comm layer
>         library. It
>              will be useful to get more feedback on your issue.
>
>
>                  How can I configure a real cluster?
>                  I found it complex to create the topology on 20 nodes.
>
>
>              I'm not sure what complexity you are referring to. To use
>              20 nodes, you just need to define a logical cluster with 20
>              partitions, and start 20 S4 nodes that point to that cluster
>              configuration (Zookeeper ensemble + cluster name).
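Concretely, that might look like the sketch below. The cluster name and port are made up for illustration (`-flp` and `-zk` follow the earlier commands in this thread), and the `run` helper defaults to printing the commands rather than executing them.

```shell
#!/bin/sh
# Sketch: one logical cluster with 20 partitions, then 20 nodes joining it.
# Cluster name and port are illustrative, not prescribed by S4.
ZK=myMaster
: "${S4_DRY_RUN:=1}"   # defaults to a dry run that only prints the commands

run() { if [ -n "$S4_DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

# one logical cluster, 20 partitions
run ./s4 newCluster -c=cluster20 -nbTasks=20 -flp=14000 -zk=$ZK

# 20 nodes; each picks up a free partition from the cluster configuration
# (in practice, one per machine or backgrounded)
i=1
while [ $i -le 20 ]; do
  run ./s4 node -c=cluster20 -zk=$ZK
  i=$((i + 1))
done
```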
>
>
>                  And how can I remove an application from a cluster?
>
>
>              Currently you simply clean up zookeeper and kill the S4
>         nodes. (We
>              plan to add a more convenient way, like a command you issue to
>              Zookeeper).
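That cleanup could look like the sketch below. The znode path and the process-matching pattern are hypothetical, so inspect your ZooKeeper tree with `ls` before deleting anything (note that `rmr` is the command in ZooKeeper CLIs of that era; newer releases use `deleteall`). The `run` helper defaults to printing the commands only.

```shell
#!/bin/sh
# Sketch: remove an app by wiping its znodes, then killing the node JVMs.
# The /s4/... path and the pkill pattern are assumptions for illustration.
ZK=myMaster:2181
: "${S4_DRY_RUN:=1}"   # defaults to a dry run that only prints the commands

run() { if [ -n "$S4_DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

# inspect the tree first, then delete the cluster's subtree
run zkCli.sh -server $ZK ls /s4
run zkCli.sh -server $ZK rmr /s4/clusters/testCluster1   # hypothetical path

# stop the S4 node processes for that cluster
run pkill -f "s4 node -c=testCluster1"
```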
>
>
>                  I saw that S4 0.3 has a configuration file "clusters.xml",
>                  but S4 0.5 does not.
>
>
>              This is not needed. In S4 0.5, you define a minimal set of
>              parameters for the cluster (number of partitions, name) and
>              you start S4 nodes independently.
>
>              Matthieu
>
>
>                  -----error-------------
>                  Oct 18, 2012 1:50:04 PM org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink
>                  WARNING: Failed to accept a connection.
>                  java.lang.OutOfMemoryError: GC overhead limit exceeded
>                          at java.util.HashMap.newKeyIterator(HashMap.java:840)
>                          at java.util.HashMap$KeySet.iterator(HashMap.java:874)
>                          at java.util.HashSet.iterator(HashSet.java:153)
>                          at sun.nio.ch.SelectorImpl.processDeregisterQueue(SelectorImpl.java:127)
>                          at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:69)
>                          at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
>                          at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
>                          at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:240)
>                          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>                          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>                          at java.lang.Thread.run(Thread.java:619)
>                  Exception in thread "Thread-7" java.lang.OutOfMemoryError: GC overhead limit exceeded
>                          at java.lang.reflect.Array.get(Native Method)
>                          at com.esotericsoftware.kryo.serialize.ArraySerializer.writeArray(ArraySerializer.java:110)
>                          at com.esotericsoftware.kryo.serialize.ArraySerializer.writeObjectData(ArraySerializer.java:88)
>                          at com.esotericsoftware.kryo.Serializer.writeObject(Serializer.java:43)
>                          at com.esotericsoftware.kryo.serialize.FieldSerializer.writeObjectData(FieldSerializer.java:182)
>                          at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:489)
>                          at com.esotericsoftware.kryo.ObjectBuffer.writeClassAndObject(ObjectBuffer.java:230)
>                          at org.apache.s4.comm.serialize.KryoSerDeser.serialize(KryoSerDeser.java:90)
>                          at org.apache.s4.comm.tcp.TCPEmitter.send(TCPEmitter.java:178)
>                          at org.apache.s4.core.RemoteSender.send(RemoteSender.java:44)
>                          at org.apache.s4.core.RemoteSenders.send(RemoteSenders.java:81)
>                          at org.apache.s4.core.RemoteStream.put(RemoteStream.java:74)
>                          at OLAAdapter.Adapter$Dequeuer.run(Adapter.java:64)
>                          at java.lang.Thread.run(Thread.java:619)
>
>
>
>
>

