hadoop-hdfs-dev mailing list archives

From "Brahma Reddy Battula (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-11711) DN should not delete the block On "Too many open files" Exception
Date Thu, 27 Apr 2017 14:05:04 GMT
Brahma Reddy Battula created HDFS-11711:
-------------------------------------------

             Summary: DN should not delete the block On "Too many open files" Exception
                 Key: HDFS-11711
                 URL: https://issues.apache.org/jira/browse/HDFS-11711
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: datanode
            Reporter: Brahma Reddy Battula
            Assignee: Brahma Reddy Battula


 *Seen the following scenario in one of our customer environments:* 


* While the job client was writing {{"job.xml"}}, pipeline failures occurred and the block was written to only one DN.
* When the mapper read {{"job.xml"}}, the DN hit {{"Too many open files"}} (the system file-descriptor limit was exceeded) and the block got deleted. Hence the mapper failed to read the block and the job failed.
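
The core of the issue is that the DN treats a transient resource error the same as block corruption. A minimal sketch of the distinction, assuming a hypothetical helper (the class and method names below are illustrative, not the actual DataNode API):

```java
import java.io.IOException;

// Hypothetical sketch: before invalidating a replica after a read failure,
// check whether the IOException reflects transient file-descriptor
// exhaustion (EMFILE, surfaced as "Too many open files") rather than
// genuine block corruption. In the transient case the block data on disk
// is fine, so deleting the (possibly only) replica is the wrong response.
public class TransientErrorCheck {

    // Returns true when the failure is a resource limit, not bad data.
    static boolean isTransientResourceError(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Too many open files");
    }

    public static void main(String[] args) {
        IOException fdExhausted =
            new IOException("/data/blk_123 (Too many open files)");
        IOException corrupt =
            new IOException("Checksum error in block blk_123");

        // Only a non-transient failure should lead to block invalidation;
        // the FD-exhaustion case should be retried or surfaced instead.
        System.out.println(isTransientResourceError(fdExhausted)); // true
        System.out.println(isTransientResourceError(corrupt));     // false
    }
}
```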




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org

