hadoop-mapreduce-user mailing list archives

From hitarth trivedi <t.hita...@gmail.com>
Subject yarn cache settings
Date Tue, 27 Jan 2015 23:46:02 GMT

We have yarn.nodemanager.local-dirs set to
/var/lib/hadoop/tmp/nm-local-dir. This is the directory where MapReduce
jobs store temporary data; on restart of the NodeManager, the contents of
the directory are deleted. I see the following definitions for
yarn.nodemanager.localizer.cache.target-size-mb (default 10240 MB) and
yarn.nodemanager.localizer.cache.cleanup.interval-ms (default 600000 ms,
i.e. 10 minutes):

·  *yarn.nodemanager.localizer.cache.target-size-mb*: This sets the
maximum disk space to be used for localized resources. (At present there
is no individual limit for the PRIVATE / APPLICATION / PUBLIC caches; see
YARN-882 <https://issues.apache.org/jira/browse/YARN-882>.) Once the total
disk size of the cache exceeds this value, the deletion service will try
to remove files that are not used by any running containers. There is
currently no per-cache quota for the user / public / private caches, and
the limit applies to all disks in aggregate, not on a per-disk basis.

·  *yarn.nodemanager.localizer.cache.cleanup.interval-ms*: After this
interval, the resource localization service will try to delete unused
resources if the total cache size exceeds the configured maximum. Unused
resources are those not referenced by any running container. Every time a
container requests a resource, the container is added to that resource's
reference list, where it remains until the container finishes; this
prevents accidental deletion of a resource that is still in use. As part
of container cleanup (when the container finishes), the container is
removed from the resource's reference list, so a resource whose reference
count drops to zero becomes an ideal candidate for deletion. Resources are
then deleted on an LRU basis until the current cache size drops below the
target size.
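For reference, both properties described above are set in yarn-site.xml; a minimal sketch with the default values mentioned earlier (the comments are mine):

```xml
<!-- yarn-site.xml: localizer cache settings (defaults shown) -->
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <!-- target size of the localized-resource cache, in MB, summed across all disks -->
  <value>10240</value>
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <!-- how often the cleanup service checks the cache size: 600000 ms = 10 minutes -->
  <value>600000</value>
</property>
```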

My */var/lib/hadoop/tmp/nm-local-dir* has an allocated size of 5 GB. So
what I wanted to do, for testing purposes, was set
yarn.nodemanager.localizer.cache.target-size-mb to a lower value of 1 GB
and let the service delete the cached contents when that limit is reached.
I was expecting the deletion service to remove the contents once the cache
crosses the limit. But I see the size growing beyond the limit on every
run of the MapReduce jobs, and the service is not kicking in to delete the
contents. The jobs succeed and complete. Do I need to do something else?
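For completeness, this is what my test configuration looks like in yarn-site.xml (the 1 GB value is the one I set; the local-dirs path matches the directory above):

```xml
<!-- yarn-site.xml: test setup with the cache target lowered to 1 GB -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/var/lib/hadoop/tmp/nm-local-dir</value>
</property>
<property>
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <!-- lowered from the 10240 MB default to 1024 MB (1 GB) for testing -->
  <value>1024</value>
</property>
```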
