hadoop-common-user mailing list archives

From Gregor Willemsen <gregor.willem...@googlemail.com>
Subject Re: Hadoop - Solaris
Date Sun, 17 Oct 2010 10:18:33 GMT
I have no recent experience with Solaris or with Hadoop on Solaris,
but there is a blog, http://blogs.sun.com/george/category/Hadoop, with
three posts dedicated to Hadoop (although they may be a bit outdated).
In 2007 or 2008, GNU and Solaris did not play together very well where
tools such as tar and gcc were concerned, but those issues should be
gone by now.

Sun Grid Engine might be a starting point to look at, as Hadoop integration is built in.

2010/10/16 Allen Wittenauer <awittenauer@linkedin.com>:
> On Oct 16, 2010, at 1:08 PM, Bruce Williams wrote:
>> I am doing a student Independent Study Project, and Harvey Mudd has given
>> me 13 Sun Netra X1s I can use as a dedicated Hadoop cluster. Right now
>> they are without an OS.
>> If anyone with experience with Hadoop and Solaris could contact me off
>> list, even just to say "I am doing it and it is OK", it would be
>> appreciated.
>        That's my cue! :)
>        We have a few grids that are running Solaris. It mostly works out of
> the box as long as you are aware of three things:
>                - There are some settings in hadoop-env.sh and in the path
> that need to be dealt with.  Rather than re-quote, these were added to the
> Hadoop FAQ a week or two ago, so definitely take a look at that.
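For reference, the hadoop-env.sh and PATH adjustments mentioned above might look something like the sketch below. The exact paths and values here are assumptions for illustration, not the actual FAQ entries, so check the FAQ for the authoritative settings:

```shell
# Hypothetical hadoop-env.sh fragment for Solaris.
# Paths below are examples only, not the FAQ-recommended values.

# Point at the JDK installed on the Solaris box (example location)
export JAVA_HOME=/usr/jdk/latest

# Put the GNU userland tools ahead of the Solaris ones, since some of the
# Hadoop shell scripts expect GNU behavior from tools like tar and sed
export PATH=/usr/gnu/bin:/usr/sfw/bin:$PATH
```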
>                - The native compression libraries will need to be compiled.
> Depending upon what you are doing and how performant the machines are, this
> may or may not make a big difference.  Compiling under Solaris with gcc
> will work fine (but it is gcc... ugh!).  Only a few minor changes are
> required to compile it with SUNWspro.  [I have patches lying around here
> somewhere if anyone wants to play with them.]
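A rough sketch of what building the native libraries with gcc might involve, based on the Ant-driven Hadoop builds of that era. The checkout path is hypothetical, and the exact targets can differ by Hadoop version, so consult build.xml for your release:

```shell
# Sketch: compiling Hadoop's native compression libraries on Solaris
# with gcc (paths and target names are assumptions; check build.xml)
cd /path/to/hadoop-checkout   # hypothetical source tree
export CC=gcc

# Build the native code alongside the Java compile
ant -Dcompile.native=true compile-native

# The resulting shared objects typically end up under build/native/
```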
>                - The Solaris JRE is a mixed-mode implementation, so keep in
> mind that -d32 and -d64 have meaning and do work as advertised.  You'll
> likely want to pick a bitsize and use it for all your Hadoop daemons and
> tasks, especially if you plan on using any JNI such as the compression
> libraries.
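To make the bitsize point concrete: on a mixed-mode Solaris JRE you can ask the launcher for a specific data model, and one plausible way to pin everything to a single bitsize is via the JVM option variables in hadoop-env.sh. The variable names below are a sketch for illustration; verify them against your Hadoop version's hadoop-env.sh:

```shell
# Check which data models the installed Solaris JRE actually provides
java -d32 -version   # runs the 32-bit VM, if installed
java -d64 -version   # runs the 64-bit VM, if installed

# Hypothetical hadoop-env.sh fragment pinning the daemons to 64-bit, so
# JNI libraries like the native compressors only need one build bitsize
export HADOOP_OPTS="-d64 $HADOOP_OPTS"
```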
