hadoop-hdfs-issues mailing list archives

From "Wei-Chiu Chuang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11711) DN should not delete the block On "Too many open files" Exception
Date Wed, 07 Jun 2017 03:28:18 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16040096#comment-16040096 ]

Wei-Chiu Chuang commented on HDFS-11711:

+1 too. Thanks for the patch; I think the fix is good. But I wish there were a more portable
way to check for the "Too many open files" error.
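The portability concern comes from the fact that Java surfaces EMFILE/ENFILE only as text inside an {{IOException}} message, and that text is produced by the OS and JVM. Below is a minimal sketch of that kind of message-based check; the class and method names are illustrative and not taken from the actual HDFS-11711 patch.

```java
import java.io.IOException;

public class FdExhaustionCheck {

    // Illustrative helper (not from the HDFS-11711 patch): detect file
    // descriptor exhaustion by matching the exception message. This is
    // exactly why a truly portable check is hard -- the message text
    // comes from the OS/JVM and may differ across platforms and locales.
    static boolean isTooManyOpenFiles(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Too many open files");
    }

    public static void main(String[] args) {
        IOException e = new IOException("/data/blk_123: Too many open files");
        System.out.println(isTooManyOpenFiles(e)); // prints true
    }
}
```

A DN that recognizes this condition can treat the read failure as transient (the block on disk is fine) rather than evidence of a corrupt or missing replica that should be deleted.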

> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>                 Key: HDFS-11711
>                 URL: https://issues.apache.org/jira/browse/HDFS-11711
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>         Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch, HDFS-11711-004.patch,
HDFS-11711-branch-2-002.patch, HDFS-11711.patch
>  *Seen the following scenario in one of our customer environment* 
> * while jobclient writing {{"job.xml"}} there are pipeline failures and written to only
one DN.
> * when mapper reading the {{"job.xml"}}, DN got {{"Too many open files"}} (as system
exceed limit) and block got deleted. Hence mapper failed to read and job got failed.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
