Subject: Re: native snappy library not available: this version of libhadoop was built without snappy support.
From: Wei-Chiu Chuang <weichiu@cloudera.com>
To: Uthayan Suthakar <uthayan.suthakar@gmail.com>
Cc: user@hadoop.apache.org
Date: Tue, 4 Oct 2016 13:48:49 -0700

It seems to me this issue is the direct result of MAPREDUCE-6577.
Since you're on a CDH cluster, I would suggest you move up to CDH 5.7.2 or later, where this bug is fixed.
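If upgrading right away is not an option, one stopgap that sometimes helps Spark jobs hitting this error is to point the driver and executors at the Hadoop native library directory explicitly. This is only a sketch, untested on your cluster; the path below is taken from your checknative output:

    spark-submit \
      --conf spark.driver.extraLibraryPath=/usr/lib/hadoop/lib/native \
      --conf spark.executor.extraLibraryPath=/usr/lib/hadoop/lib/native \
      <your usual arguments>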

Best,
Wei-Chiu Chuang

On Oct 4, 2016, at 1:26 PM, Wei-Chiu Chuang <weichiu@cloudera.com> wrote:

I see. Sorry for the confusion.

It seems to me the warning message is a bit misleading. This message may also be printed if libhadoop cannot be loaded for any reason.
Can you turn on debug logging and see whether the log contains either "Loaded the native-hadoop library" or "Failed to load native-hadoop with error"?
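For example, something along these lines should surface those messages; NativeCodeLoader is the class that logs them at DEBUG level:

    # One-off check on the command line:
    HADOOP_ROOT_LOGGER=DEBUG,console hadoop checknative -a

    # Or enable DEBUG just for the native loader in log4j.properties:
    log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG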


Wei-Chiu Chuang

On Oct 4, 2016, at 1:12 PM, Uthayan Suthakar <uthayan.suthakar@gmail.com> wrote:

Hi Wei-Chiu,

My Hadoop version is Hadoop 2.6.0-cdh5.7.0.

But when I checked the native libraries, the output shows that Snappy is installed:

hadoop checknative
16/10/04 21:01:30 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/10/04 21:01:30 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

Thanks.

Uthay


On 4 October 2016 at 21:05, Wei-Chiu Chuang <weichiu@cloudera.com> wrote:
Hi Uthayan,
What's the version of Hadoop you have? The Hadoop 2.7.3 binary does not ship with Snappy precompiled. If this is the version you have, you may have to rebuild Hadoop yourself to include it.
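For reference, a native rebuild with Snappy support looks roughly like the following; the flags are documented in Hadoop's BUILDING.txt, and the snappy.lib path is an assumption for a typical RHEL-style layout:

    mvn clean package -Pdist,native -DskipTests -Dtar \
        -Drequire.snappy -Dsnappy.lib=/usr/lib64 -Dbundle.snappy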

Wei-Chiu = Chuang

On Oct 4, 2016, at 12:59 PM, Uthayan Suthakar <uthayan.suthakar@gmail.com> wrote:

Hello guys,
I have a job that reads compressed (Snappy) data, but when I run the job it throws the error "native snappy library not available: this version of libhadoop was built without snappy support".
I followed the instructions here, but they did not resolve the issue:
https://community.hortonworks.com/questions/18903/this-version-of-libhadoop-was-built-without-snappy.html

The checknative output shows that Snappy is installed:
hadoop checknative
16/10/04 21:01:30 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/10/04 21:01:30 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so

I also have code in the job to check whether native Snappy is loaded, and it returns true.
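The check is roughly the following (a simplified reconstruction, not the exact code from the job):

    import org.apache.hadoop.util.NativeCodeLoader;

    public class SnappyCheck {
        public static void main(String[] args) {
            // True if libhadoop.so was found and loaded into this JVM.
            boolean nativeLoaded = NativeCodeLoader.isNativeCodeLoaded();
            System.out.println("native hadoop loaded: " + nativeLoaded);
            // buildSupportsSnappy() is itself a native call, so only
            // invoke it once libhadoop has actually been loaded.
            System.out.println("build supports snappy: "
                    + (nativeLoaded && NativeCodeLoader.buildSupportsSnappy()));
        }
    }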

Now, I have no idea why I'm getting this error. I also had no issue reading Snappy data with a MapReduce job on the same cluster. Could anyone tell me what is wrong?



Thank you.

Stack:


java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
        at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
        at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:193)
        at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:178)
        at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:111)
        at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
        at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
        at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)



