hadoop-common-dev mailing list archives

From Allen Wittenauer <awittena...@linkedin.com>
Subject Re: Hadoop on Solaris 10 and in OSGi bundles
Date Wed, 28 Apr 2010 19:10:44 GMT

On Apr 28, 2010, at 9:30 AM, kovachev wrote:
> we are trying to set up Hadoop to run on Solaris 10 within Containers.
> However, we are encountering many problems.
> Could you please write down here all the extra settings needed to run
> Hadoop on Solaris?

The two big ones (a quick sketch of both follows the list):

	- whoami needs to be in the PATH, because Hadoop is too stupid to figure out who is using it without shelling out
	- For HADOOP_IDENT_STRING we use `/usr/xpg4/bin/id -u -n`
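
Here's roughly what that looks like in conf/hadoop-env.sh. This is a sketch, not our exact file; the whoami location (/usr/ucb on stock Solaris 10) and the PATH ordering are assumptions you should verify on your own boxes.

    # conf/hadoop-env.sh additions for Solaris 10 (sketch)

    # Hadoop shells out to whoami to identify the current user; on Solaris 10
    # it lives in /usr/ucb, which is not on the default PATH.
    export PATH=/usr/ucb:/usr/xpg4/bin:$PATH

    # Use the XPG4 id(1) so -u -n yields the user *name*, not the numeric uid.
    export HADOOP_IDENT_STRING=`/usr/xpg4/bin/id -u -n`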

Also note that Hadoop can get confused by pooled storage.  In the case of ZFS, creating
filesystems such as:

Filesystem             size   used  avail capacity  Mounted on
u007                   914G    24K   345G     1%    /mnt/u007
u007/mapred            200G   7.2G   193G     4%    /mnt/u007/mapred
u007/dfs-data          914G   369G   345G    52%    /mnt/u007/dfs-data

will cause HDFS to miscalculate the total capacity of the datanode and, by extension, the
entire cluster.  So just remember there is less storage there than it tells you; one
workaround is sketched below.  [I think there is a JIRA on this somewhere, but because so
few of us use pooled storage, it is unlikely to get fixed.]
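
One thing that should help (untested on our end, but it follows from how ZFS accounting
works) is putting a quota, plus a reservation if you want the space guaranteed, on each
dataset, so df reports a fixed size per filesystem instead of the pool's shared free space:

    # Sketch: give each dataset a fixed apparent size so df (and therefore
    # HDFS's capacity math) stops double-counting the pool's shared free space.
    # Dataset names are from the listing above; the sizes are just examples.
    zfs set quota=200g u007/mapred
    zfs set reservation=200g u007/mapred
    zfs set quota=500g u007/dfs-data
    zfs set reservation=500g u007/dfs-data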

	- If you use the Cloudera dist, you'll likely need to back out some of their changes in
order for it to work properly.  [We don't use it anymore, so dunno if that is still the case.]