hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "HadoopJavaVersions" by SteveLoughran
Date Fri, 25 Oct 2013 09:21:33 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "HadoopJavaVersions" page has been changed by SteveLoughran:
https://wiki.apache.org/hadoop/HadoopJavaVersions?action=diff&rev1=28&rev2=29

Comment:
add more java7 versions, including openjdk-7

  Hadoop is built and tested on Oracle JDKs. Here are the known JDKs that are in use or have been
tested, and their status:
  
   ||'''Version''' ||'''Status''' ||'''Reported By''' ||
-  ||1.6.0_16 ||Avoid (1)||Cloudera ||
+  ||oracle 1.6.0_16 ||Avoid (1)||Cloudera ||
-  ||1.6.0_18 ||Avoid ||Many ||
+  ||oracle 1.6.0_18 ||Avoid ||Many ||
-  ||1.6.0_19 ||Avoid ||Many ||
+  ||oracle 1.6.0_19 ||Avoid ||Many ||
-  ||1.6.0_20 ||Good (2) || !LinkedIn, Cloudera ||
+  ||oracle 1.6.0_20 ||Good (2) || !LinkedIn, Cloudera ||
-  ||[[http://www.oracle.com/technetwork/java/javase/downloads/jdk6-jsp-136632.html|1.6.0_21]]
||Good (2)||Yahoo!, Cloudera ||
+  ||oracle [[http://www.oracle.com/technetwork/java/javase/downloads/jdk6-jsp-136632.html|1.6.0_21]]
||Good (2)||Yahoo!, Cloudera ||
-  ||1.6.0_24 || Good || Cloudera ||
+  ||oracle 1.6.0_24 || Good || Cloudera ||
-  ||1.6.0_26 || Good(2) || Hortonworks, Cloudera ||
+  ||oracle 1.6.0_26 || Good(2) || Hortonworks, Cloudera ||
-  ||1.6.0_28 || Good || !LinkedIn ||
+  ||oracle 1.6.0_28 || Good || !LinkedIn ||
-  ||1.6.0_31 || Good(3) || Cloudera, Hortonworks ||
+  ||oracle 1.6.0_31 || Good(3, 4) || Cloudera, Hortonworks ||
-  ||1.7.0_15 || Good || Cloudera ||
+  ||oracle 1.7.0_15 || Good || Cloudera ||
+  ||oracle 1.7.0_21 || Good (4)|| Hortonworks ||
+  ||openjdk 1.7.0_09-icedtea|| Good (5) || Hortonworks ||
+ 
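
To match a node against the table above, the update number can be pulled out of the first line of `java -version` output. A minimal sketch — the version string below is a canned example, not taken from this page; on a live node substitute `version=$(java -version 2>&1 | head -n 1)`:

```shell
# Canned example of the first line of `java -version` output; on a real node
# use: version=$(java -version 2>&1 | head -n 1)
version='java version "1.6.0_31"'
# Pull out the update number (the digits after the underscore) for matching
# against the table above.
update=$(printf '%s\n' "$version" | sed 's/.*_\([0-9][0-9]*\)".*/\1/')
printf '%s\n' "$update"
```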
  
  1 - Hadoop works well with update 16; however, there is a bug in JDK versions before update
19 that has been seen on HBase. See [[https://issues.apache.org/jira/browse/HBASE-4367|HBASE-4367]]
for details.
  
  2 - If the grid is running in secure mode with MIT Kerberos 1.8 and higher, the Java version
should be 1.6.0_27 or higher in order to avoid [[http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6979329|Java
bug 6979329]].
  
- 3 - Hortonworks has certified JDK 1.6.0_31 under RHEL5/CentOS5, RHEL6/CentOS6, and SLES11,
with Hadoop, HBase, Pig, Hive, HCatalog, Oozie, Sqoop, and Ambari.
+ 3 - Hortonworks has certified JDK 1.6.0_31 under RHEL5/CentOS5, RHEL6/CentOS6, and SLES11,
with Hadoop 1.x, HBase, Pig, Hive, HCatalog, Oozie, Sqoop, and Ambari.
+ 
+ 4 - Hortonworks has certified JDK 1.6.0_31 and Oracle JDK 1.7.0_21 under RHEL5/CentOS5, RHEL6/CentOS6,
and SLES11, with Hadoop 2.2.0, HBase 0.96, Pig, Hive, HCatalog, Oozie, Sqoop, and Ambari.

+ 
+ 5 - Hortonworks has certified openjdk 1.7.0_09-icedtea on RHEL6 with Hadoop 2.2.0, HBase
0.96, Pig, Hive, HCatalog, Oozie, Sqoop, and Ambari.
+ 
+ 
+ === Compressed Object Pointers and Java 6 ===
  
  The Sun JVM has 32-bit and 64-bit modes. In a large cluster the NameNode and JobTracker
need to run in 64-bit mode to keep all their data structures in memory. The workers can be
set up for either 32-bit or 64-bit operation, depending upon preferences and how much memory
the individual tasks need.
  
- Using the Compressed Object References JVM feature (-XX:+UseCompressedOops) reduces memory
consumed and increases performance on 64 bit Sun JVMs.  This feature was first introduced
in 1.6.0_14 but problems have been reported with its use on versions prior to 1.6.0_20.  Several
have reported success using it on 1.6.0_21 and above.  It is the default in 1.6.0_24 and above
on 64 bit JVMs.
+ Using the Compressed Object References JVM feature (-XX:+UseCompressedOops) reduces memory
consumed and increases performance on 64-bit Sun JVMs.  This feature was first introduced
in 1.6.0_14, but problems were reported with its use on versions prior to 1.6.0_20.  Several
users have reported success using it on 1.6.0_21 and above.  It is the default in 1.6.0_24 and above
on 64-bit JVMs, and now appears to be stable.
+ 
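On JDKs older than 1.6.0_24 the flag can be passed to the 64-bit master daemons explicitly. A minimal, untested sketch of a conf/hadoop-env.sh fragment — the variable names follow the stock Hadoop 1.x hadoop-env.sh, and nothing here comes from this page itself:

```shell
# conf/hadoop-env.sh -- illustrative fragment, not a certified configuration.
# Enable compressed object pointers on the 64-bit NameNode and JobTracker
# JVMs; harmless on 1.6.0_24+, where it is already the default.
export HADOOP_NAMENODE_OPTS="-XX:+UseCompressedOops ${HADOOP_NAMENODE_OPTS}"
export HADOOP_JOBTRACKER_OPTS="-XX:+UseCompressedOops ${HADOOP_JOBTRACKER_OPTS}"
```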
  
  Useful tips for discovering and inspecting Sun JVM configuration flags are in the following
blog post: [[http://q-redux.blogspot.com/2011/01/inspecting-hotspot-jvm-options.html|inspecting-hotspot-jvm-options]]
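
As a quick illustration of the technique described in that post, the final value of every HotSpot flag can be dumped from the command line and filtered for the compressed-oops flag discussed above — a sketch, assuming a Sun/Oracle JDK recent enough (1.6.0_21 or later) to accept -XX:+PrintFlagsFinal:

```shell
# Dump the final value of every HotSpot flag and pick out the compressed-oops
# entry; -XX:+PrintFlagsFinal is accepted by Sun/Oracle JDKs from 1.6.0_21 on.
if command -v java >/dev/null 2>&1; then
  java -XX:+PrintFlagsFinal -version 2>/dev/null | grep UseCompressedOops \
    || echo "UseCompressedOops not reported by this JVM"
else
  echo "no java on PATH"
fi
```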
  
  == OpenJDK ==
- Hadoop does build and run on OpenJDK (OpenJDK is based on the Sun JDK).
  
+ OpenJDK has been used to qualify Hadoop 2.2, and the rest of the Hortonworks bundle, on
RHEL6. No problems were noted.
- OpenJDK is handy to have on a development system as it has more source for you to step into
when debugging something. OpenJDK and Sun JDK mainly differ in (native?) rendering/AWT/Swing
code, which is not relevant for any MapReduce Jobs that aren't creating images as part of
their work.
- 
- Note*: OpenJDK6 has some open bugs w.r.t handling of generics (https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/611284,
https://bugs.launchpad.net/ubuntu/+source/openjdk-6/+bug/716959), so OpenJDK cannot be used
to compile hadoop mapreduce code in branch-0.23 and beyond, please use other JDKs.
  
  == Oracle JRockit ==
  Oracle's JRockit JVM is not the same as the Sun JVM: it has very different heap and memory
management behavior. Hadoop has been used on JRockit, though not at "production" scale.
@@ -51, +59 @@

  
  Hadoop 0.20.20x, 0.21.0, and trunk use a few Sun-specific APIs which IBM Java does not provide.
[[https://issues.apache.org/jira/browse/HADOOP-6941|HADOOP-6941]] and [[https://issues.apache.org/jira/browse/HADOOP-7211|HADOOP-7211]]
have been filed to support non-Sun/Oracle Java.
  
+ == OSX Java 6 ==
+ 
+ Apple Java 6 on OS/X has been used for development and local-machine testing, but has never
been qualified for production use, for a simple reason: OS/X isn't a supported OS for production
systems, especially on Hadoop 2.
+ 
+ 
  == A request for help from JVM/JDK developers ==
  We would strongly encourage anyone who produces a JVM/JDK to test compiling and running
Hadoop with it. It makes for a fantastic performance and stress test. As Hadoop is becoming
a key back-end datacenter application, good Hadoop support matters.
  
