Subject: Re: native snappy library not available: this version of libhadoop was built without snappy support.
From: Wei-Chiu Chuang
Date: Tue, 4 Oct 2016 13:05:16 -0700
Cc: user@hadoop.apache.org
To: Uthayan Suthakar

Hi Uthayan,

What's the version of Hadoop you have? The Hadoop 2.7.3 binary does not ship with Snappy precompiled. If that is the version you have, you may need to rebuild Hadoop yourself to include it.

Wei-Chiu Chuang

> On Oct 4, 2016, at 12:59 PM, Uthayan Suthakar <uthayan.suthakar@gmail.com> wrote:
>
> Hello guys,
>
> I have a job that reads compressed (Snappy) data, but when I run it, it throws the error "native snappy library not available: this version of libhadoop was built without snappy support".
>
> I followed these instructions, but they did not resolve the issue:
> https://community.hortonworks.com/questions/18903/this-version-of-libhadoop-was-built-without-snappy.html
>
> The checknative command shows that Snappy is installed:
>
> hadoop checknative
> 16/10/04 21:01:30 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
> 16/10/04 21:01:30 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
> Native library checking:
> hadoop:  true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
> zlib:    true /lib64/libz.so.1
> snappy:  true /usr/lib/hadoop/lib/native/libsnappy.so.1
> lz4:     true revision:99
> bzip2:   true /lib64/libbz2.so.1
> openssl: true /usr/lib64/libcrypto.so
>
> I also have code in the job that checks whether native Snappy is loaded, and it returns true.
>
> So I have no idea why I'm getting this error. Also, I had no issue reading Snappy data with a MapReduce job on the same cluster. Could anyone tell me what is wrong?
>
> Thank you.
>
> Stack trace:
>
> java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
>     at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:65)
>     at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:193)
>     at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:178)
>     at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:111)
>     at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>     at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:237)
>     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
>     at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>     at org.apache.spark.scheduler.Task.run(Task.scala:89)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
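Along the lines of Wei-Chiu's suggestion, a rebuild that compiles Snappy support into libhadoop typically looks like the sketch below. The source directory and the `-Dsnappy.lib` path are placeholders; the exact profiles and flags should be checked against the BUILDING.txt shipped with the Hadoop release being built:

```shell
# Sketch: build a Hadoop distribution with native Snappy support.
# Assumes snappy and snappy-devel are installed, along with the usual
# native build toolchain (gcc, cmake, protobuf 2.5.0 for Hadoop 2.x).
cd hadoop-2.7.3-src          # placeholder: your Hadoop source checkout
mvn clean package -Pdist,native -DskipTests -Dtar \
    -Drequire.snappy \
    -Dsnappy.lib=/usr/lib64  # placeholder: directory containing libsnappy

# Afterwards, the rebuilt lib/native/libhadoop.so should report
# "snappy: true" under "hadoop checknative".
```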
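For reference, the fix described in the linked Hortonworks thread (which Uthayan reports having tried without success) amounts to making the Hadoop native library directory visible to both the Spark driver and the executor JVMs. A typical invocation might look like the following sketch; the jar and class names are placeholders, while the library path is taken from the checknative output above:

```shell
# Sketch: point driver and executors at the Hadoop native libraries
# (libhadoop.so, libsnappy.so). Jar/class names are placeholders.
spark-submit \
  --class com.example.MyJob \
  --driver-library-path /usr/lib/hadoop/lib/native \
  --conf spark.executor.extraLibraryPath=/usr/lib/hadoop/lib/native \
  myjob.jar
```

Since the poster says this did not resolve the issue, it may also be worth verifying that the executors load the same libhadoop.so that checknative inspected, rather than a copy bundled without Snappy support.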