hadoop-common-dev mailing list archives

From Trevor Robinson <tre...@scurrilous.com>
Subject Re: Hadoop native builds fail on ARM due to -m32
Date Mon, 23 May 2011 17:09:15 GMT
Hi Bharath,

Sorry if I was unclear: I meant that tests/benchmarks run much faster
on the Sun JRE than with OpenJDK, not necessarily faster than on other
processors. Of course, this is not surprising given that the Sun JRE
has a full JIT compiler, and OpenJDK for ARM just has the Zero C++
interpreter.

There are other, higher-performance plug-in VMs for OpenJDK (CACAO,
JamVM, Shark), but they currently have stability issues in some Linux
distributions. I believe that CACAO and JamVM are code-copying JITs,
so they place a lot of constraints on the compiler used to build them.
Shark uses the LLVM JIT, which has some serious bugs on non-x86
processors (including ARM and PPC); I believe these have not been
fixed yet because effort is focused on building a new JIT engine (the
"MC JIT") that shares more of the static compilation code:
http://blog.llvm.org/2010/04/intro-to-llvm-mc-project.html

All that said, once we have server-grade ARM hardware in the lab, I'll
certainly be looking for and sharing any performance advantages I can
find. Given the low-power, scale-out focus of these processors, it's
unlikely that we'll see higher single-thread performance than a
power-hungry x86, but we certainly expect better performance per
watt and performance per hardware cost.

Regards,
Trevor

On Mon, May 23, 2011 at 10:46 AM, Bharath Mundlapudi
<bharathwork@yahoo.com> wrote:
>
> Adding ARM processor support to Hadoop is great. Reducing power consumption on
> Hadoop grids is a plus.
>
>
> Hi Trevor,
>
> You have mentioned that "other tests/benchmarks have run much faster". This
> information is good to know. Can you please tell us in which areas you are seeing
> improvements on ARM compared to other processors? Is this public information?
>
>
> -Bharath
>
>
>
> ________________________________
> From: Eli Collins <eli@cloudera.com>
> To: common-dev@hadoop.apache.org
> Sent: Sunday, May 22, 2011 8:38 PM
> Subject: Re: Hadoop native builds fail on ARM due to -m32
>
> Hey Trevor,
>
> Thanks for all the info.  I took a quick look at HADOOP-7276 and
> HDFS-1920, haven't gotten a chance for a full review yet but they
> don't look like they'll be a burden, and if they get Hadoop running on
> ARM that's great!
>
> Thanks,
> Eli
>
> On Fri, May 20, 2011 at 4:27 PM, Trevor Robinson <trevor@scurrilous.com> wrote:
> > Hi Eli,
> >
> > On Thu, May 19, 2011 at 1:39 PM, Eli Collins <eli@cloudera.com> wrote:
> >> Thanks for contributing.  Supporting ARM on Hadoop will require a
> >> number of different changes, right? E.g., given that Hadoop currently
> >> depends on some Sun-specific classes and requires a Sun-compatible
> >> JVM, you'll have to work around this dependency somehow; there's not a
> >> Sun JVM for ARM, right?
> >
> > Actually, there is a Sun JVM for ARM, and it works quite well:
> >
> > http://www.oracle.com/technetwork/java/embedded/downloads/index.html
> >
> > Currently, it's just a JRE, so you have to use another JDK for javac,
> > etc., but I'm optimistic that we'll see a Sun Java SE JDK for ARM
> > servers one of these days, given all the ARM server activity from
> > Calxeda [http://www.theregister.co.uk/2011/03/14/calxeda_arm_server/],
> > Marvell, and nVidia
> > [http://www.channelregister.co.uk/2011/01/05/nvidia_arm_pc_server_chip/].
> >
> > With the patches I submitted, Hadoop builds completely and nearly all
> > of the Commons and HDFS unit tests pass with OpenJDK on ARM. (Some of
> > the Map/Reduce unit tests have some crashes due to a bug in the
> > OpenJDK build I'm using.) I need to re-run the unit tests with the Sun
> > JRE and see if they pass; other tests/benchmarks have run much faster
> > and more reliably with the Sun JRE, so I anticipate better results.
> > I've run tests like TestDFSIO with the Sun JRE and have had no
> > problems.
> >
> >> If there's a handful of additional changes then let's make an umbrella
> >> jira for Hadoop ARM support and make the issues you've already filed
> >> sub-tasks. You can ping me off-line on how to do that if you want.
> >> Supporting non-x86 processors and non-gcc compilers is an additional
> >> maintenance burden on the project so it would be helpful to have an
> >> end-game figured out so these patches don't bitrot in the meantime.
> >
> > I really don't anticipate any additional changes at this point. No
> > Java or C++ code changes have been necessary; it's simply removing
> > -m32 from CFLAGS/LDFLAGS and adding ARM to the list of processors in
> > apsupport.m4 (which contains lots of other unsupported processors
> > anyway). And just to be clear, pretty much everyone uses gcc for
> > compilation on ARM, so supporting another compiler is unnecessary for
> > this.
> >
> > I certainly don't want to increase maintenance burden at this point,
> > especially given that data center-grade ARM servers are still in the
> > prototype stage. OTOH, these changes seem pretty trivial to me, and
> > allow other developers (particularly those evaluating ARM and those
> > involved in the Ubuntu ARM Server 11.10 release this fall:
> > https://blueprints.launchpad.net/ubuntu/+spec/server-o-arm-server) to
> > get Hadoop up and running without having to patch the build.
> >
> > I'll follow up offline though, so I can better understand any concerns
> > you may still have.
> >
> > Thanks,
> > Trevor
> >
> >> On Tue, May 10, 2011 at 5:13 PM, Trevor Robinson <trevor@scurrilous.com> wrote:
> >>> Is the native build failing on ARM (where gcc doesn't support -m32) a
> >>> known issue, and is there a workaround or fix pending?
> >>>
> >>> $ ant -Dcompile.native=true
> >>> ...
> >>>      [exec] make  all-am
> >>>      [exec] make[1]: Entering directory
> >>> `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
> >>>      [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc
> >>> -DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native
> >>> -I/usr/lib/jvm/java-6-openjdk/include
> >>> -I/usr/lib/jvm/java-6-openjdk/include/linux
> >>> -I/home/trobinson/dev/hadoop-common/src/native/src
> >>> -Isrc/org/apache/hadoop/io/compress/zlib
> >>> -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
> >>> -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
> >>> .deps/ZlibCompressor.Tpo -c -o ZlibCompressor.lo `test -f
> >>> 'src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c' || echo
> >>> '/home/trobinson/dev/hadoop-common/src/native/'`src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
> >>>      [exec] libtool: compile:  gcc -DHAVE_CONFIG_H -I.
> >>> -I/home/trobinson/dev/hadoop-common/src/native
> >>> -I/usr/lib/jvm/java-6-openjdk/include
> >>> -I/usr/lib/jvm/java-6-openjdk/include/linux
> >>> -I/home/trobinson/dev/hadoop-common/src/native/src
> >>> -Isrc/org/apache/hadoop/io/compress/zlib
> >>> -Isrc/org/apache/hadoop/security -Isrc/org/apache/hadoop/io/nativeio/
> >>> -g -Wall -fPIC -O2 -m32 -g -O2 -MT ZlibCompressor.lo -MD -MP -MF
> >>> .deps/ZlibCompressor.Tpo -c
> >>> /home/trobinson/dev/hadoop-common/src/native/src/org/apache/hadoop/io/compress/zlib/ZlibCompressor.c
> >>>  -fPIC -DPIC -o .libs/ZlibCompressor.o
> >>>      [exec] make[1]: Leaving directory
> >>> `/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32'
> >>>      [exec] cc1: error: unrecognized command line option "-m32"
> >>>      [exec] make[1]: *** [ZlibCompressor.lo] Error 1
> >>>      [exec] make: *** [all] Error 2
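[Editorial aside: one generic way around this class of failure is to probe whether the local compiler actually accepts -m32 before adding it to CFLAGS, rather than assuming an x86 host. The `check_m32` helper below is a hypothetical sketch, not part of the actual Hadoop build scripts.]

```shell
#!/bin/sh
# Hypothetical helper: emit "-m32" only if the local gcc actually
# accepts it. On ARM toolchains (which reject -m32, as in the error
# above) the compile probe fails and nothing is printed.
check_m32() {
  tmp=$(mktemp -d)
  echo 'int main(void){return 0;}' > "$tmp/conftest.c"
  if gcc -m32 -o "$tmp/conftest" "$tmp/conftest.c" 2>/dev/null; then
    echo "-m32"
  fi
  rm -rf "$tmp"
}

# Build CFLAGS with the flag included only when supported.
CFLAGS="-g -Wall -fPIC -O2 $(check_m32)"
echo "CFLAGS=$CFLAGS"
```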
> >>>
> >>> The closest issue I can find is
> >>> https://issues.apache.org/jira/browse/HADOOP-6258 (Native compilation
> >>> assumes gcc), as well as other issues regarding where and how to
> >>> specify -m32/64. However, there doesn't seem to be a specific issue
> >>> covering build failure on systems using gcc where the gcc target does
> >>> not support -m32/64 (such as ARM).
> >>>
> >>> I've attached a patch that disables specifying -m$(JVM_DATA_MODEL)
> >>> when $host_cpu starts with "arm". (For instance, host_cpu = armv7l for
> >>> my system.) To any maintainers on this list, please let me know if
> >>> you'd like me to open a new issue and/or attach this patch to an
> >>> issue.
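[Editorial aside: the patch's logic can be sketched in plain shell. This mirrors the described apsupport.m4 change; `data_model_flag` is an illustrative helper, not the actual macro.]

```shell
#!/bin/sh
# Sketch of the described fix: skip the -m$(JVM_DATA_MODEL) flag when
# the host CPU is an ARM variant (host_cpu = armv7l, armv6l, etc.),
# since gcc targeting ARM rejects -m32/-m64.
data_model_flag() {
  host_cpu=$1     # e.g. from config.guess or `uname -m`
  data_model=$2   # JVM data model: 32 or 64
  case "$host_cpu" in
    arm*) echo "" ;;               # ARM gcc has no -m32/-m64
    *)    echo "-m${data_model}" ;;
  esac
}

data_model_flag armv7l 32   # empty on ARM
data_model_flag x86_64 64   # prints -m64
```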
> >>>
> >>> Thanks,
> >>> Trevor
> >>>
> >>
> >
