hadoop-common-dev mailing list archives

From "Segel, Mike" <mse...@navteq.com>
Subject RE: IOException: Owner 'mapred' for path XY not match expected owner 'AB'
Date Tue, 26 Oct 2010 13:06:26 GMT
Yeah...

You need to go through each node and make sure all of the ownership and permission
settings are correct.
It's a pain in the ass, but look on the bright side. You only have to do it once. :-)
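
If it helps, the per-node check can be scripted. The sketch below is only an
illustration: the directory list is an assumption (based on the path in the stack
trace further down), and the expected owner/group/permissions are the ones Patrick
suggests below (mapred:hadoop, drwxr-xr-x). Adjust both to the actual
mapred.local.dir entries on each node.

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.*;
import java.util.Set;

// Sketch of a per-node check: compares each assumed mapred.local.dir entry
// against the ownership and permissions suggested in this thread
// (mapred:hadoop, drwxr-xr-x).
public class LocalDirCheck {
    public static void main(String[] args) throws IOException {
        // Assumption: replace with the node's real mapred.local.dir entries.
        String[] localDirs = { "/hadoop/hdfs5/tmp" };
        Set<PosixFilePermission> expectedPerms =
                PosixFilePermissions.fromString("rwxr-xr-x");

        for (String dir : localDirs) {
            PosixFileAttributes a = Files.readAttributes(
                    Paths.get(dir), PosixFileAttributes.class);
            boolean ok = a.owner().getName().equals("mapred")
                    && a.group().getName().equals("hadoop")
                    && a.permissions().equals(expectedPerms);
            System.out.printf("%s  %s:%s  %s  ->  %s%n",
                    dir,
                    a.owner().getName(),
                    a.group().getName(),
                    PosixFilePermissions.toString(a.permissions()),
                    ok ? "OK" : "MISMATCH");
        }
    }
}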

-Mike


-----Original Message-----
From: patrickangeles@gmail.com [mailto:patrickangeles@gmail.com] On Behalf Of Patrick Angeles
Sent: Tuesday, October 26, 2010 8:04 AM
To: common-dev@hadoop.apache.org
Subject: Re: IOException: Owner 'mapred' for path XY not match expected owner 'AB'

Hi Mathias,

Best I can guess, you have inconsistent permissions on some of your
mapred.local.dir directories, causing tasks that use those directories to fail.
Check that they are all owned by user:group 'mapred:hadoop' and have
drwxr-xr-x permissions.
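
For reference, the check that throws in the stack trace below essentially
compares a file's actual owner with the user the task expects to own it. A
simplified illustration only (not the actual SecureIOUtils code):

import java.io.IOException;
import java.nio.file.*;

// Simplified illustration of the kind of owner check that is failing:
// read the file's owner and compare it with the expected user.
public class OwnerCheckDemo {
    static void checkOwner(Path path, String expectedOwner) throws IOException {
        String actualOwner = Files.getOwner(path).getName();
        if (!actualOwner.equals(expectedOwner)) {
            throw new IOException("Owner '" + actualOwner + "' for path " + path
                    + " did not match expected owner '" + expectedOwner + "'");
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo against a temp file owned by the current user, so this passes.
        // In the failing tasks, the spill file is owned by 'mapred' while
        // 'mathias.walter' is expected, so the same kind of check throws.
        Path tmp = Files.createTempFile("owner-check", ".tmp");
        checkOwner(tmp, System.getProperty("user.name"));
        System.out.println("Owner check passed for " + tmp);
        Files.delete(tmp);
    }
}

If directories under mapred.local.dir end up owned by the wrong user on some
nodes, tasks scheduled onto those nodes trip this check while tasks on correctly
configured nodes succeed, which is consistent with only some of your tasks failing.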

Regards,

- Patrick

On Tue, Oct 26, 2010 at 3:34 AM, Mathias Walter <mathias.walter@gmx.net> wrote:

> Hi Guys,
>
> I recently upgraded to the latest Cloudera Hadoop distribution. It contains
> hadoop-core-0.20.2+737.jar. When I now run my map job, I
> get the following exception for a few tasks:
>
> java.io.IOException: Owner 'mapred' for path /hadoop/hdfs5/tmp/taskTracker/mathias.walter/jobcache/job_201010210928_0005/attempt_201010210928_0005_m_000000_0/output/spill437.out.index did not match expected owner 'mathias.walter'
>         at org.apache.hadoop.io.SecureIOUtils.checkStat(SecureIOUtils.java:182)
>         at org.apache.hadoop.io.SecureIOUtils.openForRead(SecureIOUtils.java:108)
>         at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:62)
>         at org.apache.hadoop.mapred.SpillRecord.<init>(SpillRecord.java:55)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:1480)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1172)
>         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:574)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:641)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:315)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1063)
>         at org.apache.hadoop.mapred.Child.main(Child.java:211)
>
> A total of 8 tasks were running in parallel. They finished after about 8 hours,
> but some of them (19) crashed with the above exception.
>
> Why did so many tasks crash, while others did not?
>
> --
> Kind regards,
> Mathias
>
>


