hadoop-common-user mailing list archives

From Greg Roelofs <roel...@yahoo-inc.com>
Subject Re: Java->native .so->seg fault->core dump file?
Date Mon, 24 Jan 2011 21:58:09 GMT
Keith Wiley wrote:

> On Jan 21, 2011, at 16:47 , Greg Roelofs wrote:

>> No clue about 0.19, but does the owner of the process(es) in question
>> have permission to write to the directory in question?  We've seen a
>> similar issue in which root or ops or somebody owns the HADOOP_HOME
>> dir (which, IIRC, is where many of the processes get started), so
>> neither mapred nor hdfs has permission to write anything there.

> Hmmm, I wouldn't have expected task core dumps to have
> any dependence on HADOOP_HOME.  I believe HADOOP_HOME is
> primarily accessed by the driver, while the tasks primarily
> use hadoop.tmp.dir, dfs.name.dir, and dfs.data.dir.

Our focus was more on the JT and NN, which are invoked via a script that
does "cd $HADOOP_HOME" at the outset (hadoop-daemon.sh).  But TTs and DNs
are started the same way, aren't they?  (It's been several months since
I looked, so I might be misremembering.)  In any case, tasks run within
TTs (or are forked from them or whatever), so I suspect they're subject
to the same restrictions.
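
(For reference, one quick way to see what directory a task actually ends
up in, and whether it can write there -- an untested sketch, not anything
from the Hadoop source, with the class name made up for illustration; run
it as a standalone main from inside a task or a streaming wrapper:)

    import java.io.File;
    import java.io.IOException;

    public class CwdWriteCheck {
        public static void main(String[] args) {
            // The kernel drops a core file in the process's cwd by default,
            // so this is the directory that matters for task core dumps.
            File cwd = new File(System.getProperty("user.dir"));
            System.out.println("cwd      = " + cwd.getAbsolutePath());
            System.out.println("writable = " + cwd.canWrite());
            try {
                // canWrite() can be optimistic on some filesystems,
                // so attempt a real write as well.
                File probe = File.createTempFile("core-probe", ".tmp", cwd);
                System.out.println("write test OK: " + probe.getName());
                probe.delete();
            } catch (IOException e) {
                System.out.println("write test failed: " + e);
            }
        }
    }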

Should be trivial to test in any case...
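
(Something like the following ought to exercise the core-dump path end to
end -- again an untested sketch with a made-up class name, assuming a Unix
shell on the task nodes and a nonzero hard core-file limit.  It forces a
SIGSEGV in a child process, then looks for the resulting core file in the
cwd:)

    import java.io.File;

    public class CoreDumpProbe {
        public static void main(String[] args) throws Exception {
            File cwd = new File(System.getProperty("user.dir"));
            // Raise the child's core limit, then have it kill itself
            // with SIGSEGV; the core (if any) lands in the inherited cwd.
            Process p = Runtime.getRuntime().exec(
                new String[] { "sh", "-c", "ulimit -c unlimited; kill -SEGV $$" });
            p.waitFor();
            System.out.println("child exit = " + p.exitValue());  // 128+11 = 139 on Linux
            // Default names are "core" or "core.<pid>"; adjust the match
            // if kernel.core_pattern rewrites the name on your nodes.
            String[] entries = cwd.list();
            if (entries != null) {
                for (String name : entries) {
                    if (name.startsWith("core")) {
                        System.out.println("found core file: " + name);
                    }
                }
            }
        }
    }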

Greg
