hadoop-yarn-issues mailing list archives

From "Jason Lowe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit
Date Wed, 05 Jul 2017 21:27:00 GMT

    [ https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16075457#comment-16075457 ]

Jason Lowe commented on YARN-6708:
----------------------------------

Thanks for updating the patch!  We're almost there, just some cleanup needed in the unit test.

A prior review comment was missed in the patch update:
bq. There should be an After method that deletes basedir so we don't leave cruft on the filesystem if a unit test fails.

On a related note, the unit test should use {{basedir}} rather than making up its own path under {{target}}, so that it benefits from that cleanup.  Otherwise the unit test leaves cruft on the filesystem after it runs.
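
For reference, a minimal sketch of that cleanup (assuming {{basedir}} is a {{File}} field in the test class; {{FileUtil}} is the hadoop-common helper):
{code}
import org.apache.hadoop.fs.FileUtil;
import org.junit.After;

@After
public void cleanupBaseDir() {
  // Recursively delete basedir so a failed test doesn't leave cruft behind.
  FileUtil.fullyDelete(basedir);
}
{code}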

Also, the unit test passes for me even without the code change.  It will only fail if the umask of the user running the test is more restrictive than 022, which is a typical default.  One way to work around that is to explicitly create one of the parent directories with the wrong permissions first, e.g. filecache/0 with permissions 0700, then call the localizer and verify the permissions were fixed afterwards.
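
A sketch of that setup, assuming {{basedir}} roots the cache (the localizer invocation is elided since it depends on the test's existing scaffolding, and 0755 below is just a placeholder for whatever permissions the patch enforces):
{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;
import static org.junit.Assert.assertEquals;

// Pre-create a parent directory with deliberately wrong permissions (0700)
// so the test fails regardless of the running user's umask.
Path parent = basedir.toPath().resolve("filecache/0");
Files.createDirectories(parent);
Files.setPosixFilePermissions(parent,
    PosixFilePermissions.fromString("rwx------"));

// ... run the localizer here, then verify it repaired the permissions
// (rwxr-xr-x is an assumed target, not necessarily what the patch sets):
assertEquals(PosixFilePermissions.fromString("rwxr-xr-x"),
    Files.getPosixFilePermissions(parent));
{code}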


> Nodemanager container crash after ext3 folder limit
> ---------------------------------------------------
>
>                 Key: YARN-6708
>                 URL: https://issues.apache.org/jira/browse/YARN-6708
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: Bibin A Chundatt
>            Assignee: Bibin A Chundatt
>            Priority: Critical
>         Attachments: YARN-6708.001.patch, YARN-6708.002.patch, YARN-6708.003.patch, YARN-6708.004.patch, YARN-6708.005.patch, YARN-6708.006.patch
>
>
> Configure the umask as *027* for the nodemanager service user and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 4 *private* dir localizations, the next directory will be *0/14*.
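>
> For reference, a sketch of the reproduction config in {{yarn-site.xml}} (the *027* umask is set separately, in the service user's shell environment):
> {code}
> <!-- Lower the per-directory file limit so the cache rolls into subdirs quickly. -->
> <property>
>   <name>yarn.nodemanager.local-cache.max-files-per-directory</name>
>   <value>40</value>
> </property>
> {code}
>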
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*.
> The nodemanager user is therefore unable to check whether a localization path exists.
> {{LocalResourcesTrackerImpl}}
> {code}
>     case REQUEST:
>       if (rsrc != null && (!isResourcePresent(rsrc))) {
>         LOG.info("Resource " + rsrc.getLocalPath()
>             + " is missing, localizing it again");
>         removeResource(req);
>         rsrc = null;
>       }
>       if (null == rsrc) {
>         rsrc = new LocalizedResource(req, dispatcher);
>         localrsrc.put(req, rsrc);
>       }
>       break;
> {code}
> *isResourcePresent* will always return false, so the same resource will be localized again and again, from {{0}} on to each next unique directory number.
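>
> A standalone sketch of the underlying failure mode (illustrative only, not the NM code path): once a directory loses the execute (search) bit for the caller, entries beneath it can no longer be stat'd, so {{exists()}} reports false even though the file is there:
> {code}
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.attribute.PosixFilePermissions;
>
> public class ExistsDemo {
>   public static void main(String[] args) throws Exception {
>     Path parent = Files.createTempDirectory("perm-demo");
>     Path child = Files.createFile(parent.resolve("resource"));
>
>     // Drop the execute (search) bit on the parent; run as a non-root user.
>     Files.setPosixFilePermissions(parent,
>         PosixFilePermissions.fromString("rw-------"));
>
>     // The child is still on disk, but it can no longer be stat'd.
>     System.out.println(child.toFile().exists());  // prints false
>
>     // Restore permissions so the temp directory can be cleaned up.
>     Files.setPosixFilePermissions(parent,
>         PosixFilePermissions.fromString("rwx------"));
>   }
> }
> {code}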



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
