giraph-user mailing list archives

From Carmen Manzulli <carmenmanzu...@gmail.com>
Subject Re: Couldn't instantiate
Date Wed, 02 Jul 2014 13:53:58 GMT
I've read on the web that a "Child Error" can mean this:
Possible reason: the memory allocated for the task trackers' child JVMs (the sum of the
mapred.*.child.java.opts settings in mapred-site.xml) is more than the node's actual
memory.
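
If that is the cause, the usual fix is to lower the child-task heap in mapred-site.xml. A minimal sketch with purely illustrative values (size -Xmx so that the heap per child JVM times the number of task slots fits in the node's RAM):

  <!-- mapred-site.xml: illustrative values only -->
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>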


2014-07-02 15:52 GMT+02:00 Carmen Manzulli <carmenmanzulli@gmail.com>:

> OK, of course :)!
>
> java.lang.Throwable: Child Error
> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
> and from the command line:
>
>
> /../lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/libexec/../lib/commons-cli-1.2.jar:/usr/local/hadoop/libexec/../lib/commons-codec-1.4.jar:/usr/local/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/usr/local/hadoop/libexec/../lib/commons-configuration-1.6.jar:/usr/local/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/usr/local/hadoop/libexec/../lib/commons-digester-1.8.jar:/usr/local/hadoop/libexec/../lib/commons-el-1.0.jar:/usr/local/hadoop/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/local/hadoop/libexec/../lib/commons-io-2.1.jar:/usr/local/hadoop/libexec/../lib/commons-lang-2.4.jar:/usr/local/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/usr/local/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/local/hadoop/libexec/../lib/commons-math-2.1.jar:/usr/local/hadoop/libexec/../lib/commons-net-3.1.jar:/usr/local/hadoop/libexec/../lib/core-3.1.1.jar:/usr/local/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.2.1.jar:/usr/local/hadoop/libexec/../lib/hadoop-fairscheduler-1.2.1.jar:/usr/local/hadoop/libexec/../lib/hadoop-thriftfs-1.2.1.jar:/usr/local/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/local/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/local/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/local/hadoop/libexec/../lib/jdeb-0.8.jar:/usr/local/hadoop/libexec/../lib/jersey-core-1.8.jar:/usr/local/hadoop/libexec/../lib/jersey-json-1.8.jar:/usr/local/hadoop/libexec/../lib/jersey-server-1.8.jar:/usr/local/hadoop/libexec/../lib/jets3t-0.6.1.jar:/usr/local/hadoop/libexec/../lib/jetty-6.1.26.jar:/usr/local/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/usr/local/hadoop/libexec/../lib/jsch-0.1.42.jar:/usr/local/hadoop/libexec/../lib/junit-4.5.jar:/usr/local/hadoop/libexec/../lib/kfs-0.2.2.jar:/usr/local/hadoop/libexec/../lib/log4j-1.2.15.jar:/usr/local/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/usr/local/hadoop/libexec/../lib/oro-2.0.8.jar:/usr/local/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/local/hadoop/libexec/../lib/slf4j-api-1.4.3.jar:/usr/local/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/local/hadoop/libexec/../lib/xmlenc-0.52.jar:/usr/local/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/local/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar
> 2014-07-02 15:49:17,492 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/local/hadoop/libexec/../lib/native/Linux-amd64-64:/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201407021315_0003/attempt_201407021315_0003_m_000000_0/work
> 2014-07-02 15:49:17,492 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201407021315_0003/attempt_201407021315_0003_m_000000_0/work/tmp
> 2014-07-02 15:49:17,492 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
> 2014-07-02 15:49:17,492 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux
> 2014-07-02 15:49:17,492 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64
> 2014-07-02 15:49:17,492 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=3.11.0-24-generic
> 2014-07-02 15:49:17,493 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=hduser
> 2014-07-02 15:49:17,493 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/hduser
> 2014-07-02 15:49:17,493 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201407021315_0003/attempt_201407021315_0003_m_000000_0/work
> 2014-07-02 15:49:17,493 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection,
connectString=carmen-HP-Pavilion-Sleekbook-15:22181 sessionTimeout=60000 watcher=org.apache.giraph.master.BspServiceMaster@465962c4
> 2014-07-02 15:49:17,509 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection
to server carmen-HP-Pavilion-Sleekbook-15/127.0.1.1:22181. Will not attempt to authenticate
using SASL (unknown error)
> 2014-07-02 15:49:17,509 INFO org.apache.zookeeper.ClientCnxn: Socket connection established
to carmen-HP-Pavilion-Sleekbook-15/127.0.1.1:22181, initiating session
> 2014-07-02 15:49:17,515 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete
on server carmen-HP-Pavilion-Sleekbook-15/127.0.1.1:22181, sessionid = 0x146f756106b0001,
negotiated timeout = 600000
> 2014-07-02 15:49:17,516 INFO org.apache.giraph.bsp.BspService: process: Asynchronous
connection complete.
> 2014-07-02 15:49:17,530 INFO org.apache.giraph.graph.GraphTaskManager: map: No need to
do anything when not a worker
> 2014-07-02 15:49:17,530 INFO org.apache.giraph.graph.GraphTaskManager: cleanup: Starting
for MASTER_ZOOKEEPER_ONLY
> 2014-07-02 15:49:17,561 INFO org.apache.giraph.bsp.BspService: getJobState: Job state
already exists (/_hadoopBsp/job_201407021315_0003/_masterJobState)
> 2014-07-02 15:49:17,568 INFO org.apache.giraph.master.BspServiceMaster: becomeMaster:
First child is '/_hadoopBsp/job_201407021315_0003/_masterElectionDir/carmen-HP-Pavilion-Sleekbook-15_00000000000'
and my bid is '/_hadoopBsp/job_201407021315_0003/_masterElectionDir/carmen-HP-Pavilion-Sleekbook-15_00000000000'
> 2014-07-02 15:49:17,570 INFO org.apache.giraph.bsp.BspService: getApplicationAttempt:
Node /_hadoopBsp/job_201407021315_0003/_applicationAttemptsDir already exists!
> 2014-07-02 15:49:17,625 INFO org.apache.giraph.comm.netty.NettyServer: NettyServer: Using
execution group with 8 threads for requestFrameDecoder.
> 2014-07-02 15:49:17,674 INFO org.apache.giraph.comm.netty.NettyServer: start: Started
server communication server: carmen-HP-Pavilion-Sleekbook-15/127.0.1.1:30000 with up to 16
threads on bind attempt 0 with sendBufferSize = 32768 receiveBufferSize = 524288
> 2014-07-02 15:49:17,679 INFO org.apache.giraph.comm.netty.NettyClient: NettyClient: Using
execution handler with 8 threads after request-encoder.
> 2014-07-02 15:49:17,682 INFO org.apache.giraph.master.BspServiceMaster: becomeMaster:
I am now the master!
> 2014-07-02 15:49:17,684 INFO org.apache.giraph.bsp.BspService: getApplicationAttempt:
Node /_hadoopBsp/job_201407021315_0003/_applicationAttemptsDir already exists!
> 2014-07-02 15:49:17,717 ERROR org.apache.giraph.master.MasterThread: masterThread: Master
algorithm failed with NullPointerException
> java.lang.NullPointerException
> 	at org.apache.giraph.master.BspServiceMaster.generateInputSplits(BspServiceMaster.java:330)
> 	at org.apache.giraph.master.BspServiceMaster.createInputSplits(BspServiceMaster.java:619)
> 	at org.apache.giraph.master.BspServiceMaster.createVertexInputSplits(BspServiceMaster.java:686)
> 	at org.apache.giraph.master.MasterThread.run(MasterThread.java:108)
> 2014-07-02 15:49:17,718 FATAL org.apache.giraph.graph.GraphMapper: uncaughtException:
OverrideExceptionHandler on thread org.apache.giraph.master.MasterThread, msg = java.lang.NullPointerException,
exiting...
> java.lang.IllegalStateException: java.lang.NullPointerException
> 	at org.apache.giraph.master.MasterThread.run(MasterThread.java:193)
> Caused by: java.lang.NullPointerException
> 	at org.apache.giraph.master.BspServiceMaster.generateInputSplits(BspServiceMaster.java:330)
> 	at org.apache.giraph.master.BspServiceMaster.createInputSplits(BspServiceMaster.java:619)
> 	at org.apache.giraph.master.BspServiceMaster.createVertexInputSplits(BspServiceMaster.java:686)
> 	at org.apache.giraph.master.MasterThread.run(MasterThread.java:108)
> 2014-07-02 15:49:17,722 INFO org.apache.giraph.zk.ZooKeeperManager: run: Shutdown hook
started.
> 2014-07-02 15:49:17,727 WARN org.apache.giraph.zk.ZooKeeperManager: onlineZooKeeperServers:
Forced a shutdown hook kill of the ZooKeeper process.
> 2014-07-02 15:49:18,049 INFO org.apache.zookeeper.ClientCnxn: Unable to read additional
data from server sessionid 0x146f756106b0001, likely server has closed socket, closing socket
connection and attempting reconnect
> 2014-07-02 15:49:18,050 INFO org.apache.giraph.zk.ZooKeeperManager: onlineZooKeeperServers:
ZooKeeper process exited with 143 (note that 143 typically means killed).
>
>
>
>
> 2014-07-02 13:52 GMT+02:00 John Yost <soozandjohnyost@gmail.com>:
>
> Hi Carmen,
>>
>> Please post more of the exception stack trace; there's not enough here for me to
>> figure anything out. :)
>>
>> Thanks
>>
>> --John
>>
>>
>> On Wed, Jul 2, 2014 at 7:33 AM, <soozandjohnyost@gmail.com> wrote:
>>
>>> Hi Carmen,
>>>
>>> Glad that one problem is fixed, and I can take a look at this one as
>>> well.
>>>
>>> --John
>>>
>>> Sent from my iPhone
>>>
>>> On Jul 2, 2014, at 6:50 AM, Carmen Manzulli <carmenmanzulli@gmail.com>
>>> wrote:
>>>
>>>
>>> OK, I've done what you told me... but now I've got this problem:
>>>
>>> java.lang.Throwable: Child Error
>>> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
>>> Caused by: java.io.IOException: Task process exit with nonzero status of 1.
>>> 	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>>>
>>> this is my Computation code:
>>> import org.apache.giraph.GiraphRunner;
>>> import org.apache.giraph.graph.BasicComputation;
>>> import org.apache.giraph.graph.Vertex;
>>> import org.apache.giraph.edge.Edge;
>>>
>>> import org.apache.hadoop.io.Text;
>>> import org.apache.hadoop.io.NullWritable;
>>> import org.apache.hadoop.util.ToolRunner;
>>>
>>> public class SimpleSelectionComputation
>>>     extends BasicComputation<Text, NullWritable, Text, NullWritable> {
>>>
>>>   @Override
>>>   public void compute(Vertex<Text, NullWritable, Text> vertex,
>>>       Iterable<NullWritable> messages) {
>>>     Text source = new Text("http://dbpedia.org/resource/1040s");
>>>
>>>     if (getSuperstep() == 0) {
>>>       // Text ids must be compared with equals(), not ==
>>>       if (vertex.getId().equals(source)) {
>>>         System.out.println("the subject " + vertex.getId()
>>>             + " has the following predicates and objects:");
>>>         for (Edge<Text, Text> e : vertex.getEdges()) {
>>>           System.out.println(e.getValue() + "\t" + e.getTargetVertexId());
>>>         }
>>>       }
>>>       vertex.voteToHalt();
>>>     }
>>>   }
>>>
>>>   public static void main(String[] args) throws Exception {
>>>     System.exit(ToolRunner.run(new GiraphRunner(), args));
>>>   }
>>> }
>>>
>>>
>>
>
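
For context, a class like the SimpleSelectionComputation above (whose main() just delegates to GiraphRunner via ToolRunner) is normally launched with hadoop jar, passing the computation class plus a vertex input format and input path. A sketch only, where the jar name, the input format class, and the HDFS path are placeholders rather than values from this thread:

  hadoop jar simple-selection-with-dependencies.jar org.apache.giraph.GiraphRunner \
    SimpleSelectionComputation \
    -vif your.package.YourTextVertexInputFormat \
    -vip /user/hduser/input/graph \
    -w 1

Since the master failed while generating input splits, double-checking that a real VertexInputFormat class and input path actually reach the job through -vif and -vip may be a reasonable first step.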
