hadoop-common-issues mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [hadoop] viirya commented on a change in pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec
Date Fri, 25 Sep 2020 13:25:25 GMT

viirya commented on a change in pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#discussion_r494069586



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
##########
@@ -446,4 +442,43 @@ public void doWork() throws Exception {
 
     ctx.waitFor(60000);
   }
+
+  @Test
+  public void testSnappyCompatibility() throws Exception {
+    // HADOOP-17125. Using snappy-java in SnappyCodec. These strings are raw data and compressed data
+    // using previous native Snappy codec. We use updated Snappy codec to decode it and check if it
+    // matches.
+    String rawData = "010a06030a040a0c0109020c0a010204020d02000b010701080605080b090902060a080502060a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d060907020a030a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b060e030e0a07050d06050d";

Review comment:
       The string is there to make the test as simple as possible. Maybe I can shorten it further?
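
A minimal sketch of the shape of this check, for readers following along. Unlike the real test, which decodes hard-coded bytes produced by the old native codec, this sketch only round-trips through snappy-java, and the hex fragment is a hypothetical shortened stand-in:

```java
import java.io.IOException;
import java.util.Arrays;
import org.xerial.snappy.Snappy;

public class SnappyCompatSketch {
  // Decode a hex string such as "010a06" into raw bytes.
  private static byte[] fromHex(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
      out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
  }

  public static void main(String[] args) throws IOException {
    byte[] raw = fromHex("010a06030a040a0c");  // hypothetical shortened data
    // The real test hard-codes the compressed bytes; here we produce them.
    byte[] compressed = Snappy.compress(raw);
    byte[] decompressed = Snappy.uncompress(compressed);
    System.out.println(Arrays.equals(raw, decompressed));  // expect: true
  }
}
```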

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -291,9 +282,17 @@ public long getBytesWritten() {
   public void end() {
   }
 
-  private native static void initIDs();
-
-  private native int compressBytesDirect();
-
-  public native static String getLibraryName();
+  private int compressBytesDirect() throws IOException {

Review comment:
       These `compressBytesDirect` and `decompressBytesDirect` names are basically carried over
       from the original method names. `compressDirectBuf` and `decompressDirectBuf` look good
       to me.

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -48,30 +49,20 @@
   private long bytesRead = 0L;
   private long bytesWritten = 0L;
 
-  private static boolean nativeSnappyLoaded = false;
-  
-  static {
-    if (NativeCodeLoader.isNativeCodeLoaded() &&
-        NativeCodeLoader.buildSupportsSnappy()) {
-      try {
-        initIDs();
-        nativeSnappyLoaded = true;
-      } catch (Throwable t) {
-        LOG.error("failed to load SnappyCompressor", t);
-      }
-    }
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-    return nativeSnappyLoaded;
-  }
-  
   /**
    * Creates a new compressor.
    *
    * @param directBufferSize size of the direct buffer to be used.
    */
   public SnappyCompressor(int directBufferSize) {
+    // `snappy-java` is provided scope. We need to check if its availability.

Review comment:
       Oops, thanks. 

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -48,30 +49,20 @@
   private long bytesRead = 0L;
   private long bytesWritten = 0L;
 
-  private static boolean nativeSnappyLoaded = false;
-  
-  static {
-    if (NativeCodeLoader.isNativeCodeLoaded() &&
-        NativeCodeLoader.buildSupportsSnappy()) {
-      try {
-        initIDs();
-        nativeSnappyLoaded = true;
-      } catch (Throwable t) {
-        LOG.error("failed to load SnappyCompressor", t);
-      }
-    }
-  }
-  
-  public static boolean isNativeCodeLoaded() {
-    return nativeSnappyLoaded;
-  }
-  
   /**
    * Creates a new compressor.
    *
    * @param directBufferSize size of the direct buffer to be used.
    */
   public SnappyCompressor(int directBufferSize) {
+    // `snappy-java` is provided scope. We need to check if its availability.
+    try {
+      SnappyLoader.getVersion();
+    } catch (Throwable t) {
+      throw new RuntimeException("native snappy library not available: " +

Review comment:
       It is the snappy-java jar; yeah, I will revise the message.
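
A minimal sketch of the guard under discussion, with the message reworded to name the snappy-java jar; the exact wording that lands in the PR may differ:

```java
import org.xerial.snappy.SnappyLoader;

public class SnappyAvailabilityCheck {
  static void checkSnappyJavaAvailable() {
    // snappy-java is a provided-scope dependency, so it may be missing at
    // runtime; touching the loader forces it to resolve or fail early.
    try {
      SnappyLoader.getVersion();
    } catch (Throwable t) {
      throw new RuntimeException(
          "snappy-java (snappy-java.jar) is not available: " + t, t);
    }
  }

  public static void main(String[] args) {
    checkSnappyJavaAvailable();
    System.out.println("snappy-java " + SnappyLoader.getVersion() + " loaded");
  }
}
```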

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/snappy/TestSnappyCompressorDecompressor.java
##########
@@ -446,4 +442,43 @@ public void doWork() throws Exception {
 
     ctx.waitFor(60000);
   }
+
+  @Test
+  public void testSnappyCompatibility() throws Exception {
+    // HADOOP-17125. Using snappy-java in SnappyCodec. These strings are raw data and compressed data
+    // using previous native Snappy codec. We use updated Snappy codec to decode it and check if it
+    // matches.
+    String rawData = "010a06030a040a0c0109020c0a010204020d02000b010701080605080b090902060a080502060a0d06070908080a0c0105030904090d05090800040c090c0d0d0804000d00040b0b0d010d060907020a030a0c0900040905080107040d0c01060a0b09070a04000b01040b09000e0e00020b06050b060e030e0a07050d06050d";

Review comment:
       Ok, I split the long string. Thanks.
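
The split presumably looks something like the following; only the first fragments of the string above are shown:

```java
// Adjacent string literals are concatenated at compile time, so splitting
// the long hex literal does not change the test data.
String rawData = "010a06030a040a0c0109020c0a010204020d0200"
    + "0b010701080605080b090902060a080502060a0d"
    + "06070908080a0c0105030904090d05090800040c"; // remaining fragments elided
```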

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/CompressDecompressTester.java
##########
@@ -432,7 +412,11 @@ public void assertCompression(String name, Compressor compressor,
               joiner.join(name, "byte arrays not equals error !!!"),
               originalRawData, decompressOut.toByteArray());
         } catch (Exception ex) {
-          fail(joiner.join(name, ex.getMessage()));
+          if (ex.getMessage() != null) {
+            fail(joiner.join(name, ex.getMessage()));
+          } else {
+            fail(joiner.join(name, ExceptionUtils.getStackTrace(ex)));

Review comment:
       When I first took over this change, the test failed with an NPE and no details. That is
       because the thrown exception returns null from `getMessage()`, and
       `joiner.join(name, null)` throws the NPE. So I changed it to print the stack trace when
       `getMessage()` returns null; it is better for debugging.
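
A minimal sketch of the failure mode and the fix, assuming Guava's `Joiner` and Commons Lang 3's `ExceptionUtils` as in the hunk above; the codec name in the output is illustrative:

```java
import com.google.common.base.Joiner;
import org.apache.commons.lang3.exception.ExceptionUtils;

public class JoinerNpeSketch {
  public static void main(String[] args) {
    Joiner joiner = Joiner.on(": ");
    Exception ex = new IllegalStateException();  // getMessage() returns null

    try {
      joiner.join("SnappyCodec", ex.getMessage());  // Joiner rejects nulls
    } catch (NullPointerException npe) {
      System.out.println("NPE hides the original failure: " + npe);
    }

    // The fix: fall back to the stack trace when there is no message.
    String detail = ex.getMessage() != null
        ? ex.getMessage()
        : ExceptionUtils.getStackTrace(ex);
    System.out.println(joiner.join("SnappyCodec", detail));
  }
}
```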

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/snappy/SnappyCompressor.java
##########
@@ -291,9 +283,17 @@ public long getBytesWritten() {
   public void end() {
   }
 
-  private native static void initIDs();
-
-  private native int compressBytesDirect();
-
-  public native static String getLibraryName();
+  private int compressDirectBuf() throws IOException {
+    if (uncompressedDirectBufLen == 0) {
+      return 0;
+    } else {
+      // Set the position and limit of `uncompressedDirectBuf` for reading
+      uncompressedDirectBuf.limit(uncompressedDirectBufLen).position(0);
+      int size = Snappy.compress((ByteBuffer) uncompressedDirectBuf,
+              (ByteBuffer) compressedDirectBuf);
+      uncompressedDirectBufLen = 0;
+      uncompressedDirectBuf.limit(directBufferSize).position(0);

Review comment:
       Done, thanks.
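
For reference, a minimal sketch of the snappy-java direct-buffer API used in the hunk above; the input string and buffer sizing are illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

public class DirectBufferSketch {
  public static void main(String[] args) throws IOException {
    byte[] input = "hello hello hello snappy".getBytes(StandardCharsets.UTF_8);

    // snappy-java requires direct buffers for this API.
    ByteBuffer src = ByteBuffer.allocateDirect(input.length);
    ByteBuffer dst =
        ByteBuffer.allocateDirect(Snappy.maxCompressedLength(input.length));

    src.put(input);
    src.flip(); // limit = data length, position = 0, as in the hunk above

    int compressedSize = Snappy.compress(src, dst);
    // After the call, the compressed bytes occupy [position, limit) of dst.
    System.out.println("compressed " + input.length + " -> "
        + compressedSize + " bytes");
  }
}
```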




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org

