hadoop-hdfs-issues mailing list archives

From "Brahma Reddy Battula (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11711) DN should not delete the block On "Too many open files" Exception
Date Wed, 07 Jun 2017 01:36:18 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HDFS-11711:
----------------------------------------
    Attachment: HDFS-11711-004.patch

Updated the patch to increase the value of the third parameter.

> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>
>                 Key: HDFS-11711
>                 URL: https://issues.apache.org/jira/browse/HDFS-11711
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>         Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch, HDFS-11711-004.patch, HDFS-11711-branch-2-002.patch, HDFS-11711.patch
>
>
>  *We have seen the following scenario in one of our customer environments:*
> * While the job client was writing {{"job.xml"}}, there were pipeline failures, so the file ended up written to only one DN.
> * When the mapper read {{"job.xml"}}, the DN hit {{"Too many open files"}} (the system exceeded its open-file limit) and the block got deleted. Hence the mapper failed to read the file and the job failed.
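
For context, a minimal sketch of the intended behavior change (class and method names here are hypothetical, invented for illustration, not taken from the attached patches): before treating a failed block read as corruption and deleting the replica, the DN would first check whether the IOException is really environmental file-descriptor exhaustion.

{code:java}
import java.io.IOException;

public class BlockReadFailureHandler {

  /** Heuristic: true when the failure is fd exhaustion, not a bad block. */
  static boolean isFileDescriptorExhaustion(IOException e) {
    String msg = e.getMessage();
    return msg != null && msg.contains("Too many open files");
  }

  /**
   * Decide what to do after a block read fails. Deleting the replica is
   * only safe when the block itself is corrupt; on fd exhaustion the
   * on-disk replica is intact and must be kept so readers can retry.
   */
  void onBlockReadFailure(long blockId, IOException e) {
    if (isFileDescriptorExhaustion(e)) {
      // Environmental problem: keep the replica, just log it.
      System.err.println("Read of block " + blockId
          + " failed with fd exhaustion; NOT deleting the replica.");
      return;
    }
    // Genuine corruption path: report the replica for invalidation.
    System.err.println("Block " + blockId
        + " appears corrupt; reporting it for deletion.");
    // ... invalidate the replica here ...
  }
}
{code}

Matching on the exception message is crude, but the point stands: on fd exhaustion the replica on disk is intact, and deleting it turns a transient resource problem into permanent data loss for a single-replica block like the {{"job.xml"}} above.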



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


