hadoop-common-issues mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec
Date Fri, 25 Sep 2020 18:52:00 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=491359&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-491359 ]

ASF GitHub Bot logged work on HADOOP-17125:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 25/Sep/20 18:51
            Start Date: 25/Sep/20 18:51
    Worklog Time Spent: 10m 
      Work Description: sunchao commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r495158362



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -291,9 +283,17 @@ public long getBytesWritten() {
   public void end() {
   }
 
-  private native static void initIDs();
-
-  private native int compressBytesDirect();
-
-  public native static String getLibraryName();
+  private int compressDirectBuf() throws IOException {
+    if (uncompressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `uncompressedDirectBuf` for reading
+      uncompressedDirectBuf.limit(uncompressedDirectBufLen).position(0);
+      int size = Snappy.compress((ByteBuffer) uncompressedDirectBuf,
+              (ByteBuffer) compressedDirectBuf);
+      uncompressedDirectBufLen = 0;
+      uncompressedDirectBuf.limit(uncompressedDirectBuf.capacity()).position(0);

Review comment:
       nit: this seems unnecessary as `clear` is called shortly after at the call site? 

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##########
@@ -276,10 +268,20 @@ public void end() {
     // do nothing
   }
 
-  private native static void initIDs();
+  private int decompressDirectBuf() throws IOException {
+    if (compressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `compressedDirectBuf` for reading
+      compressedDirectBuf.limit(compressedDirectBufLen).position(0);
+      int size = Snappy.uncompress((ByteBuffer) compressedDirectBuf,
+              (ByteBuffer) uncompressedDirectBuf);
+      compressedDirectBufLen = 0;
+      compressedDirectBuf.limit(compressedDirectBuf.capacity()).position(0);

Review comment:
       nit: can we just call `compressedDirectBuf.clear()`?
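For context on both `clear()` nits above: a minimal stdlib-only sketch (class name is illustrative, not from the patch) showing that `ByteBuffer.clear()` is a one-call equivalent of resetting the limit to the capacity and the position to 0, which is exactly what the two-step `limit(capacity()).position(0)` in the diff does:

```java
import java.nio.ByteBuffer;

public class ClearVsLimitPosition {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(64);
        buf.put(new byte[]{1, 2, 3}); // position advances to 3
        buf.limit(16);                // shrink the limit, as after a partial read

        // clear() resets position to 0 and limit to capacity in one call;
        // it does not erase the buffer's contents.
        buf.clear();

        System.out.println(buf.position()); // 0
        System.out.println(buf.limit());    // 64 (== capacity)
    }
}
```

Note that `clear()` only resets the buffer's indices, so calling it is safe even when the buffer still holds old bytes.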

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
##########
@@ -446,4 +442,49 @@ public void doWork() throws Exception {
 
     ctx.waitFor(60000);
   }
+
+  @Test
+  public void testSnappyCompatibility() throws Exception {
+    // HADOOP-17125. Using snappy-java in SnappyCodec. These strings are raw data and compressed data
+    // using previous native Snappy codec. We use updated Snappy codec to decode it and check if it
+    // matches.
+    String rawData = "010a06030a040a0c0109020c0a010204020d02000b010701080605080b090902060a08050206" +
+            "0a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d060907020a0" +
+            "30a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b060e030e0a07" +
+            "050d06050d";
+    String compressed = "8001f07f010a06030a040a0c0109020c0a010204020d02000b010701080605080b0909020" +
+            "60a080502060a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d" +
+            "060907020a030a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b0" +
+            "60e030e0a07050d06050d";
+
+    byte[] rawDataBytes = Hex.decodeHex(rawData);
+    byte[] compressedBytes = Hex.decodeHex(compressed);
+
+    ByteBuffer inBuf = ByteBuffer.allocateDirect(compressedBytes.length);
+    inBuf.put(compressedBytes, 0, compressedBytes.length);
+    inBuf.flip();
+
+    ByteBuffer outBuf = ByteBuffer.allocateDirect(rawDataBytes.length);
+    ByteBuffer expected = ByteBuffer.wrap(rawDataBytes);
+
+    SnappyDecompressor.SnappyDirectDecompressor decompressor = new SnappyDecompressor.SnappyDirectDecompressor();

Review comment:
       nit: long lines (80 chars).

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
##########
@@ -495,19 +479,16 @@ public String getName() {
     Compressor compressor = pair.compressor;
 
     if (compressor.getClass().isAssignableFrom(Lz4Compressor.class)
-            && (NativeCodeLoader.isNativeCodeLoaded()))

Review comment:
       nit: unrelated changes :)

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyDecompressor.java
##########
@@ -45,30 +46,21 @@
   private int userBufOff = 0, userBufLen = 0;
   private boolean finished;
 
-  private static boolean nativeSnappyLoaded = false;
-
-  static {
-    if (NativeCodeLoader.isNativeCodeLoaded() &&
-        NativeCodeLoader.buildSupportsSnappy()) {
-      try {
-        initIDs();
-        nativeSnappyLoaded = true;
-      } catch (Throwable t) {
-        LOG.error("failed to load SnappyDecompressor", t);
-      }
-    }
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-    return nativeSnappyLoaded;
-  }
-  
   /**
    * Creates a new compressor.
    *
    * @param directBufferSize size of the direct buffer to be used.
    */
   public SnappyDecompressor(int directBufferSize) {
+    // `snappy-java` is provided scope. We need to check if it is available.
+    try {
+      SnappyLoader.getVersion();

Review comment:
       nit: `SnappyLoader` is marked as "internal use-only" though, so not sure if there is a better alternative here.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 491359)
    Time Spent: 18h 10m  (was: 18h)

> Using snappy-java in SnappyCodec
> --------------------------------
>
>                 Key: HADOOP-17125
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17125
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: common
>    Affects Versions: 3.3.0
>            Reporter: DB Tsai
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 18h 10m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libraries for the Snappy codec, which has several disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system *LD_LIBRARY_PATH*, and they have to be installed separately on each node of the cluster, in container images, and in local test environments, which adds considerable complexity from a deployment point of view. In some environments it requires compiling the native libraries from source, which is non-trivial. This approach is also platform dependent: a binary built for one platform may not work on another, so it requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the native libraries, which raises application deployment and maintenance costs for users.
> Projects such as *Spark* and *Parquet* use [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the jar file and can automatically load them into the JVM from the jar without any setup. If a native implementation cannot be found for a platform, it falls back to a pure-Java Snappy implementation based on [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

