hadoop-general mailing list archives

From "Gao, Jie (Kyrie, HPIT-DS-CDC)" <jie....@hp.com>
Subject cannot run c++ programs on hadoop 2.2.0
Date Tue, 25 Mar 2014 06:23:32 GMT
Hi guys,

I am currently trying to run a C++ application (using the Pipes API) on Hadoop 2.2.0, but I got some Java errors. I am not very familiar with Java, so I am stuck on them. I hope someone can help me figure this out. :)

My Linux version is:
uname -a
Linux Ubuntu 3.11.0-15-generic #25~precise1-Ubuntu SMP Thu Jan 30 17:39:31 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

The distributor ID is Ubuntu and the release is 12.04.

First I downloaded the Hadoop 2.2.0 binary package, which is 32-bit. I configured mapred-site.xml, hdfs-site.xml and core-site.xml, and I can run start-dfs.sh and start-yarn.sh successfully.

Then I built my own program following this tutorial: http://cs.smith.edu/dftwiki/index.php/Hadoop_Tutorial_2.2_--_Running_C%2B%2B_Programs_on_Hadoop
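
For reference, the job is a Hadoop Pipes C++ program (the maxTemperature binary that shows up in the log below). My source follows the tutorial; a minimal sketch of a Pipes program of this shape, based on the standard MaxTemperature example (class names and field offsets here are illustrative, not necessarily my exact code), looks like this:

// Minimal Hadoop Pipes sketch (illustrative; based on the standard
// MaxTemperature example, not my exact source).
#include <algorithm>
#include <climits>
#include <string>

#include "Pipes.hh"
#include "TemplateFactory.hh"
#include "StringUtils.hh"

class MaxTemperatureMapper : public HadoopPipes::Mapper {
public:
  MaxTemperatureMapper(HadoopPipes::TaskContext& context) {}
  void map(HadoopPipes::MapContext& context) {
    // Each input value is one line of weather data; emit (year, temperature).
    std::string line = context.getInputValue();
    std::string year = line.substr(15, 4);
    std::string airTemperature = line.substr(87, 5);
    context.emit(year, airTemperature);
  }
};

class MaxTemperatureReducer : public HadoopPipes::Reducer {
public:
  MaxTemperatureReducer(HadoopPipes::TaskContext& context) {}
  void reduce(HadoopPipes::ReduceContext& context) {
    // Keep the maximum temperature seen for each year (the key).
    int maxValue = INT_MIN;
    while (context.nextValue()) {
      maxValue = std::max(maxValue, HadoopUtils::toInt(context.getInputValue()));
    }
    context.emit(context.getInputKey(), HadoopUtils::toString(maxValue));
  }
};

int main(int argc, char* argv[]) {
  // Hands control to the Pipes framework, which communicates with the Java
  // task over a socket and drives the mapper/reducer defined above.
  return HadoopPipes::runTask(
      HadoopPipes::TemplateFactory<MaxTemperatureMapper, MaxTemperatureReducer>());
}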

It compiled successfully, but when I ran the job I got a runtime error. See below:

DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

14/03/23 05:46:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
14/03/23 05:46:26 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/03/23 05:46:26 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker,
sessionId=
14/03/23 05:46:26 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker,
sessionId= - already initialized
14/03/23 05:46:26 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not
be found. See Job or Job#setJar(String).
14/03/23 05:46:26 INFO mapred.FileInputFormat: Total input paths to process : 1
14/03/23 05:46:26 INFO mapreduce.JobSubmitter: number of splits:1
14/03/23 05:46:26 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated.
Instead, use mapreduce.job.cache.files.filesizes
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead,
use mapreduce.job.cache.files
14/03/23 05:46:26 INFO Configuration.deprecation: hadoop.pipes.java.recordreader is deprecated.
Instead, use mapreduce.pipes.isjavarecordreader
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.output.value.class is deprecated.
Instead, use mapreduce.job.output.value.class
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.mapoutput.value.class is deprecated.
Instead, use mapreduce.map.output.value.class
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead,
use mapreduce.input.fileinputformat.inputdir
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead,
use mapreduce.output.fileoutputformat.outputdir
14/03/23 05:46:26 INFO Configuration.deprecation: hadoop.pipes.java.recordwriter is deprecated.
Instead, use mapreduce.pipes.isjavarecordwriter
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead,
use mapreduce.job.maps
14/03/23 05:46:26 INFO Configuration.deprecation: hadoop.pipes.partitioner is deprecated.
Instead, use mapreduce.pipes.partitioner
14/03/23 05:46:26 INFO Configuration.deprecation: hadoop.pipes.executable is deprecated. Instead,
use mapreduce.pipes.executable
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated.
Instead, use mapreduce.job.cache.files.timestamps
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead,
use mapreduce.job.output.key.class
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.mapoutput.key.class is deprecated.
Instead, use mapreduce.map.output.key.class
14/03/23 05:46:26 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead,
use mapreduce.job.working.dir
14/03/23 05:46:26 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1002331250_0001
14/03/23 05:46:27 WARN conf.Configuration: file:/home/hduser/hadoop/tmp/hadoop-hduser/mapred/staging/hduser1002331250/.staging/job_local1002331250_0001/job.xml:an
attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/23 05:46:27 WARN conf.Configuration: file:/home/hduser/hadoop/tmp/hadoop-hduser/mapred/staging/hduser1002331250/.staging/job_local1002331250_0001/job.xml:an
attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/23 05:46:28 INFO mapred.LocalDistributedCacheManager: Creating symlink: /home/hduser/hadoop/tmp/hadoop-hduser/mapred/local/1395578787432/maxTemperature
<- /home/hduser/hadoop/maxTemperature
14/03/23 05:46:28 INFO mapred.LocalDistributedCacheManager: Localized hdfs://localhost:8010/capp/maxTemperature
as file:/home/hduser/hadoop/tmp/hadoop-hduser/mapred/local/1395578787432/maxTemperature
14/03/23 05:46:28 INFO Configuration.deprecation: mapred.cache.localFiles is deprecated. Instead,
use mapreduce.job.cache.local.files
14/03/23 05:46:28 WARN conf.Configuration: file:/home/hduser/hadoop/tmp/hadoop-hduser/mapred/local/localRunner/hduser/job_local1002331250_0001/job_local1002331250_0001.xml:an
attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
14/03/23 05:46:28 WARN conf.Configuration: file:/home/hduser/hadoop/tmp/hadoop-hduser/mapred/local/localRunner/hduser/job_local1002331250_0001/job_local1002331250_0001.xml:an
attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
14/03/23 05:46:28 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
14/03/23 05:46:28 INFO mapreduce.Job: Running job: job_local1002331250_0001
14/03/23 05:46:28 INFO mapred.LocalJobRunner: OutputCommitter set in config null
14/03/23 05:46:28 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
14/03/23 05:46:28 INFO mapred.LocalJobRunner: Waiting for map tasks
14/03/23 05:46:28 INFO mapred.LocalJobRunner: Starting task: attempt_local1002331250_0001_m_000000_0
14/03/23 05:46:28 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
14/03/23 05:46:28 INFO mapred.MapTask: Processing split: hdfs://localhost:8010/capp/input:0+77
14/03/23 05:46:28 INFO mapred.MapTask: numReduceTasks: 1
14/03/23 05:46:28 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
14/03/23 05:46:28 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
14/03/23 05:46:28 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
14/03/23 05:46:28 INFO mapred.MapTask: soft limit at 83886080
14/03/23 05:46:28 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
14/03/23 05:46:28 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
14/03/23 05:46:29 INFO mapred.LocalJobRunner: Map task executor complete.
14/03/23 05:46:29 WARN mapred.LocalJobRunner: job_local1002331250_0001
java.lang.Exception: java.lang.NullPointerException
                at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:403)
Caused by: java.lang.NullPointerException
                at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:104)
                at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
                at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
                at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
                at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
                at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
                at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
                at java.util.concurrent.FutureTask.run(FutureTask.java:138)
                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
                at java.lang.Thread.run(Thread.java:662)
14/03/23 05:46:29 INFO mapreduce.Job: Job job_local1002331250_0001 running in uber mode :
false
14/03/23 05:46:29 INFO mapreduce.Job:  map 0% reduce 0%
14/03/23 05:46:29 INFO mapreduce.Job: Job job_local1002331250_0001 failed with state FAILED
due to: NA
14/03/23 05:46:29 INFO mapreduce.Job: Counters: 0
Exception in thread "main" java.io.IOException: Job failed!
                at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
                at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:264)
                at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:503)
                at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:518)

I suspected this was because my Linux system is 64-bit while the Hadoop package is 32-bit, and that there might be some conflict between them. But then I built a 64-bit Hadoop installation from source and got the same error. So now I really don't know what to try next. If anyone can give me some suggestions, I would be very grateful.


Thanks,
Kyrie


Gao, Jie (Kyrie)
BIP, DS CDC, HPIT
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Email: jie.gao@hp.com

