hadoop-hdfs-issues mailing list archives

From "Manoj Govindassamy (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10871) DiskBalancerWorkItem should not import jackson relocated by htrace
Date Mon, 19 Sep 2016 21:51:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15504793#comment-15504793 ]

Manoj Govindassamy commented on HDFS-10871:
-------------------------------------------

I am able to reproduce the problem when compiling against upstream htrace. Below are two fix proposals; both appear to solve the compilation failure against the latest upstream htrace.

1. Change the dependency to {{com.fasterxml.jackson.core}}

* Add {{com.fasterxml.jackson.core}} as a dependent artifact in the {{hadoop-hdfs-project/hadoop-hdfs-client/}} project. Note that the parent {{hadoop-hdfs-project/}} project already depends on {{com.fasterxml.jackson.core}}.
** hadoop-hdfs-project/hadoop-hdfs-client/pom.xml {code}
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
    </dependency>
  </dependencies> {code}

* In {{DiskBalancerWorkItem}} and {{MoveStep}}, replace the import {{org.apache.htrace.fasterxml.jackson}}
with {{com.fasterxml.jackson}} {code}
-import org.apache.htrace.fasterxml.jackson.annotation.JsonInclude;
+import com.fasterxml.jackson.annotation.JsonInclude;
{code}

* No other {{JsonInclude}} annotation changes are needed; see the standalone sketch below.
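
To illustrate what the {{NON_DEFAULT}} inclusion means once the annotation comes from {{com.fasterxml.jackson}}, here is a minimal, hypothetical sketch. It is not the real {{DiskBalancerWorkItem}} (field names are illustrative only), and it uses the fasterxml {{ObjectMapper}} for the demo, whereas the HDFS class itself still uses the codehaus mapper. Properties that still hold their default values are dropped from the serialized JSON. {code}
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical standalone sketch, not the HDFS class.
@JsonInclude(JsonInclude.Include.NON_DEFAULT)
class WorkItemSketch {
  private long bytesCopied;   // defaults to 0
  private long bandwidth;     // defaults to 0

  public long getBytesCopied() { return bytesCopied; }
  public void setBytesCopied(long v) { bytesCopied = v; }
  public long getBandwidth() { return bandwidth; }
  public void setBandwidth(long v) { bandwidth = v; }

  public static void main(String[] args) throws Exception {
    WorkItemSketch item = new WorkItemSketch();
    item.setBytesCopied(4096);   // only this property changes from its default
    // Prints {"bytesCopied":4096}; bandwidth stays at its default and is omitted.
    System.out.println(new ObjectMapper().writeValueAsString(item));
  }
} {code}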



2. Make use of {{org.codehaus.jackson.map.annotate}} instead of {{org.apache.htrace.fasterxml.jackson}}

* The {{DiskBalancerWorkItem}} {{ObjectMapper}} can be given an explicit serialization inclusion instead of a class-level annotation; a standalone sketch of this mapper-level inclusion follows after the diffs below. {code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
index 592a89facf16bb3d046e0f87c83f571c2d68443a..edd2801c072968f5a2dc46941e847bd8cba9157a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancerWorkItem.java
@@ -19,12 +19,13 @@
 
 package org.apache.hadoop.hdfs.server.datanode;
 
+import com.fasterxml.jackson.annotation.JsonInclude;
 import com.google.common.base.Preconditions;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
-import org.apache.htrace.fasterxml.jackson.annotation.JsonInclude;
 import org.codehaus.jackson.map.ObjectMapper;
 import org.codehaus.jackson.map.ObjectReader;
+import org.codehaus.jackson.map.annotate.JsonSerialize.Inclusion;
 
 import java.io.IOException;
 
@@ -33,7 +34,6 @@
  */
 @InterfaceAudience.Private
 @InterfaceStability.Unstable
-@JsonInclude(JsonInclude.Include.NON_DEFAULT)
 public class DiskBalancerWorkItem {
   private static final ObjectMapper MAPPER = new ObjectMapper();
   private static final ObjectReader READER =
@@ -52,6 +52,13 @@
   private long bandwidth;
 
   /**
+   * Initialization block for static members
+   */
+  static {
+    MAPPER.setSerializationInclusion(Inclusion.NON_DEFAULT);
+  }
+
+  /**
    * Empty constructor for Json serialization.
    */
   public DiskBalancerWorkItem() {
{code}
* The {{MoveStep}} annotation can be removed as there are no Object Mappers for this class. Wondering whether to/from JSON string support is needed for this class at all? {code}
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
index b5f68fd8ad3ee8a803f039c422c23145935e589d..97fd650808d2b3e29c5d8132c755ef9096fc68de 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/planner/MoveStep.java
@@ -17,20 +17,10 @@
 
 package org.apache.hadoop.hdfs.server.diskbalancer.planner;
 
 import org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume;
 import org.apache.hadoop.util.StringUtils;
-import org.apache.htrace.fasterxml.jackson.annotation.JsonInclude;
 
-/**
- * Ignore fields with default values. In most cases Throughtput, diskErrors
- * tolerancePercent and bandwidth will be the system defaults.
- * So we will avoid serializing them into JSON.
- */
-@JsonInclude(JsonInclude.Include.NON_DEFAULT)
 /**
  * Move step is a step that planner can execute that will move data from one
  * volume to another.
{code}
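
For comparison, here is a minimal, hypothetical sketch of proposal 2's approach (again not the real HDFS class; the actual patch sets the inclusion in a static initializer): the {{NON_DEFAULT}} behavior is configured on the codehaus {{ObjectMapper}} itself, so the class needs no annotation and no fasterxml import at all. {code}
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.annotate.JsonSerialize.Inclusion;

// Hypothetical standalone sketch, not the HDFS class.
class MapperInclusionSketch {
  private long bytesCopied;   // defaults to 0
  private long bandwidth;     // defaults to 0

  public long getBytesCopied() { return bytesCopied; }
  public void setBytesCopied(long v) { bytesCopied = v; }
  public long getBandwidth() { return bandwidth; }
  public void setBandwidth(long v) { bandwidth = v; }

  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    mapper.setSerializationInclusion(Inclusion.NON_DEFAULT);
    MapperInclusionSketch item = new MapperInclusionSketch();
    item.setBytesCopied(4096);
    // Prints {"bytesCopied":4096}; bandwidth is still at its default and is omitted.
    System.out.println(mapper.writeValueAsString(item));
  }
} {code}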

[~eddyxu], [~anu], any thoughts on the above proposals?



> DiskBalancerWorkItem should not import jackson relocated by htrace
> ------------------------------------------------------------------
>
>                 Key: HDFS-10871
>                 URL: https://issues.apache.org/jira/browse/HDFS-10871
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Masatake Iwasaki
>            Assignee: Manoj Govindassamy
>
> Compiling trunk against upstream htrace fails since it does not bundle the {{org.apache.htrace.fasterxml.jackson.annotation.JsonInclude}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
