hadoop-user mailing list archives

From Kiru Pakkirisamy <kirupakkiris...@yahoo.com>
Subject Re: lzo error while running mr job
Date Wed, 28 Oct 2015 20:33:07 GMT
Harsh,

Thank you very much for your valuable/assertive suggestion :-) I was able to identify the problem and fix it. Elsewhere in the code, we were setting a different mapred-site.xml in the configuration. I still do not know why it is using the DefaultCodec for compression (instead of the one I set, SnappyCodec), but I am hopeful I will get there. Thanks again.

Regards,
- kiru
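A minimal sketch of pinning Snappy on the job explicitly before submit, assuming the MRv2 property names; the class name and job name below are illustrative, not from this thread:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SnappyJobSetup {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Compress intermediate map output with Snappy, so a stray
            // mapred-site.xml cannot silently fall back to DefaultCodec.
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                          SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "snappy-example");
            // Compress the final job output with Snappy as well.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
        }
    }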
From: Harsh J <harsh@cloudera.com>
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Sent: Tuesday, October 27, 2015 8:34 AM
Subject: Re: lzo error while running mr job
The stack trace is pretty certain you do, as it clearly tries to load a class that does not belong within Apache Hadoop. Try looking at the XML files the application uses? Perhaps you've missed one of the spots.

If I had to guess, given the JobSubmitter entry in the trace, it'd be in the submitting host's /etc/hadoop/conf/* files, or in the dir pointed to by $HADOOP_CONF_DIR (if that's specifically set). Alternatively, it'd be in the code.
If you have control over the code, you can also make it dump the effective XML before submit via job.getConfiguration().writeXml(System.out). The XML dump will carry the source of each property along with its value.
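A minimal sketch of that debugging step, assuming a standard MRv2 Job; only the writeXml call is from this thread, the surrounding scaffolding is illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class DumpJobConf {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "conf-dump");
            // Print the fully resolved configuration as XML; per the note
            // above, each property carries the resource it was loaded from,
            // which shows where an unexpected codec entry crept in.
            job.getConfiguration().writeXml(System.out);
        }
    }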

On Tue, Oct 27, 2015 at 8:52 PM Kiru Pakkirisamy <kirupakkirisamy@yahoo.com> wrote:

| Harsh, we don't have lzo in the io.compression.codecs list. That is what is puzzling me.
| Regards, Kiru

|  From:"Harsh J" <harsh@cloudera.com>
Date:Mon, Oct 26, 2015 at 11:39 PM
Subject:Re: lzo error while running mr job



|  Every codec in the io.compression.codecs list of classes will be initialised, regardless
of actual further use. Since the Lzo*Codec classes require the native library to initialise,
the failure is therefore expected.
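A minimal sketch of why merely listing a codec triggers loading, assuming a vanilla Hadoop 2.x classpath; the probe class itself is illustrative:

    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecListProbe {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // List the codec classes the effective config declares.
            List<Class<? extends CompressionCodec>> classes =
                CompressionCodecFactory.getCodecClasses(conf);
            for (Class<? extends CompressionCodec> c : classes) {
                System.out.println(c.getName());
            }
            // Constructing the factory instantiates every class above, so a
            // listed LzoCodec pulls in its native library at this point even
            // if no file is ever read or written with LZO.
            new CompressionCodecFactory(conf);
        }
    }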
On Tue, Oct 27, 2015 at 11:42 AM Kiru Pakkirisamy <kirupakkirisamy@yahoo.com> wrote:

I am seeing a weird error after we moved to the new Hadoop MapReduce Java packages in 2.4. We are not using lzo (as in io.compression.codecs), but we still get this error. Does it mean we have to have lzo installed even though we are not using it? Thanks.
Regards, - kiru
2015-10-27 00:18:57,994 ERROR com.hadoop.compression.lzo.GPLNativeCodeLoader | Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886) ~[?:1.7.0_85]
    at java.lang.Runtime.loadLibrary0(Runtime.java:849) ~[?:1.7.0_85]
    at java.lang.System.loadLibrary(System.java:1088) ~[?:1.7.0_85]
    at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:31) [flow-trunk.242-470787.jar:?]
    at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:60) [flow-trunk.242-470787.jar:?]
    at java.lang.Class.forName0(Native Method) [?:1.7.0_85]
    at java.lang.Class.forName(Class.java:278) [?:1.7.0_85]
    at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:1834) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1799) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:128) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:175) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.isSplitable(CombineFileInputFormat.java:159) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getMoreSplits(CombineFileInputFormat.java:283) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat.getSplits(CombineFileInputFormat.java:243) [flow-trunk.242-470787.jar:?]
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
 Regards, - kiru

