hadoop-yarn-issues mailing list archives

From "Varun Vasudev (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (YARN-90) NodeManager should identify failed disks becoming good back again
Date Wed, 24 Sep 2014 21:14:35 GMT

     [ https://issues.apache.org/jira/browse/YARN-90?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Vasudev updated YARN-90:
    Attachment: apache-yarn-90.5.patch

Uploaded a new patch to address [~jlowe]'s comments.

It's a bit odd to have a hash map mapping disk error types to lists of directories and fill
them all in, when in practice we only look at one type in the map, DISK_FULL. It'd be simpler
(and faster, and less space since there's no hashmap involved) to just track full disks as a
separate collection, like we already do for localDirs and failedDirs.

Fixed. I renamed failedDirs to errorDirs and added a list for fullDirs. The getFailedDirs()
function returns a union of the two.
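
To illustrate the split (a rough sketch; DirTracker and the field names here are hypothetical,
not the actual DirectoryCollection code):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class DirTracker {
      // Dirs that failed because of I/O or permission errors.
      private final List<String> errorDirs = new ArrayList<>();
      // Dirs that are otherwise healthy but over the utilization cutoff.
      private final List<String> fullDirs = new ArrayList<>();

      // Callers that only care about "unusable" dirs see both lists combined.
      List<String> getFailedDirs() {
        List<String> failed = new ArrayList<>(errorDirs);
        failed.addAll(fullDirs);
        return Collections.unmodifiableList(failed);
      }
    }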

Nit: DISK_ERROR_CAUSE should be DiskErrorCause (if we keep the enum) to match the style of
other enum types in the code.


In verifyDirUsingMkdir, if an error occurs during the finally clause, that exception will
mask the original exception.
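
For illustration, the masking pattern being described looks roughly like this (a minimal
sketch with hypothetical names, not the patch code itself); a common fix is to catch and log
failures inside the finally block instead of letting them propagate:

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;

    // If the try block throws (e.g. mkdir fails), the probe dir was never created, so
    // Files.delete() in the finally block throws too -- and that later exception replaces
    // the original one, hiding the real cause of the failed directory check.
    static void verifyDirUsingMkdir(File dir) throws IOException {
      File probe = new File(dir, "probe-" + System.nanoTime());
      try {
        if (!probe.mkdir()) {
          throw new IOException("Could not create probe dir " + probe);
        }
      } finally {
        Files.delete(probe.toPath()); // can mask the exception from the try block
      }
    }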


isDiskUsageUnderPercentageLimit is named backwards. Disk usage being under the configured
limit shouldn't be a full-disk error, and the error message is inconsistent with the method
name (the method talks about being under, but the error message says it's above).
        if (isDiskUsageUnderPercentageLimit(testDir)) {
          msg =
              "used space above threshold of "
                  + diskUtilizationPercentageCutoff
                  + "%, removing from the list of valid directories.";

Yep, thanks for catching it. Fixed.
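
For reference, the corrected shape could be something like the following (a hedged sketch;
the actual method name chosen in the patch may differ):

    import java.io.File;

    class DiskCheck {
      private final float diskUtilizationPercentageCutoff;

      DiskCheck(float cutoff) {
        this.diskUtilizationPercentageCutoff = cutoff;
      }

      // Hypothetical rename so the method name, condition, and message agree:
      // the check now reads "usage is over the limit", and the message says so.
      boolean isDiskUsageOverPercentageLimit(File dir) {
        float usedPercentage =
            100.0F - ((float) dir.getUsableSpace() / dir.getTotalSpace()) * 100.0F;
        return usedPercentage > diskUtilizationPercentageCutoff;
      }
    }

With that rename, the call site and the "used space above threshold" message both talk about
being above the limit.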

We should only call getDisksHealthReport() once in the following code:
+    String report = getDisksHealthReport();
+    if (!report.isEmpty()) {
+      LOG.info("Disk(s) failed. " + getDisksHealthReport());
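
A minimal sketch of the suggested change, reusing the already-fetched value:

    String report = getDisksHealthReport();
    if (!report.isEmpty()) {
      LOG.info("Disk(s) failed. " + report); // reuse the value instead of calling again
    }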

Should updateDirsAfterTest always say "Disk(s) failed" if the report isn't empty? Thinking
of the case where two disks go bad and one is later restored: the health report will still
have something, but that last update is a disk turning good, not failing. Before, this code
was only called when a new disk failed, and now that's not always the case. Maybe it should
just be something like "Disk health update: " instead?

I've changed it to "Disk(s) health report: ". My only concern with this is that there might
be scripts looking for the "Disk(s) failed" log line for monitoring. What do you think?

Is it really necessary to stat a directory before we try to delete it? Seems like we can just
try to delete it.

Just wanted to avoid an unnecessary attempt. If a disk comes back as good while a container
is running, it won't have the container directories, so the delete would be unnecessary.
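
A rough sketch of that guard (cleanupIfPresent is a hypothetical helper; FileUtil.fullyDelete
is Hadoop's recursive-delete utility):

    import java.io.File;
    import org.apache.hadoop.fs.FileUtil;

    // Skip the delete when the directory is absent, e.g. a disk that just came back
    // healthy and has no leftover container directories yet.
    static void cleanupIfPresent(File containerDir) {
      if (containerDir.exists()) {
        FileUtil.fullyDelete(containerDir); // recursively remove leftovers
      }
    }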

The idiom of getting the directories and adding the full directories seems pretty common.
Might be good to have dirhandler methods that already do this, like getLocalDirsForCleanup
or getLogDirsForCleanup.
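
A hedged sketch of what such a convenience method might look like (getLocalDirs and
getFullDirs are assumed accessors, not necessarily the real ones):

    import java.util.ArrayList;
    import java.util.List;

    // Cleanup should cover both the currently good local dirs and the full (but otherwise
    // readable) ones, so callers don't have to union the two lists themselves.
    List<String> getLocalDirsForCleanup() {
      List<String> dirs = new ArrayList<>(getLocalDirs()); // healthy dirs
      dirs.addAll(getFullDirs());                          // full-but-usable dirs
      return dirs;
    }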


I'm a bit worried that getInitializedLocalDirs could potentially try to delete an entire directory
tree for a disk. If this fails in some sector-specific way but other containers are currently
using their files from other sectors just fine on the same disk, removing these files from
underneath active containers could be very problematic and difficult to debug.

Fixed. Directories are only cleaned up during startup. The code tests for the existence of
the directories and for the correct permissions. This does mean that container directories
left behind for any reason won't get cleaned up until the NodeManager is restarted. Is that
ok?
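
Roughly, the startup check being described might look like this (an illustrative sketch, not
the patch code):

    import java.io.File;

    // A local dir is re-initialized only if it is missing or has the wrong permissions;
    // an existing, correctly-permissioned tree is left alone so files belonging to running
    // containers are never deleted out from under them.
    static boolean needsReinit(File localDir) {
      if (!localDir.exists()) {
        return true; // never created, or wiped externally
      }
      return !(localDir.canRead() && localDir.canWrite() && localDir.canExecute());
    }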

> NodeManager should identify failed disks becoming good back again
> -----------------------------------------------------------------
>                 Key: YARN-90
>                 URL: https://issues.apache.org/jira/browse/YARN-90
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: nodemanager
>            Reporter: Ravi Gummadi
>            Assignee: Varun Vasudev
>         Attachments: YARN-90.1.patch, YARN-90.patch, YARN-90.patch, YARN-90.patch, YARN-90.patch,
apache-yarn-90.0.patch, apache-yarn-90.1.patch, apache-yarn-90.2.patch, apache-yarn-90.3.patch,
apache-yarn-90.4.patch, apache-yarn-90.5.patch
> MAPREDUCE-3121 makes the NodeManager identify disk failures. But once a disk goes down,
it is marked as failed forever. To reuse that disk (after it becomes good), the NodeManager
needs a restart. This JIRA is to improve the NodeManager to reuse good disks (which could
have been bad some time back).

This message was sent by Atlassian JIRA
