hadoop-hdfs-issues mailing list archives

From "Brahma Reddy Battula (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11711) DN should not delete the block On "Too many open files" Exception
Date Wed, 07 Jun 2017 09:14:18 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Brahma Reddy Battula updated HDFS-11711:
----------------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.2
                   2.8.1
                   3.0.0-alpha4
                   2.9.0
           Status: Resolved  (was: Patch Available)

Committed to {{trunk}}, {{branch-2}}, {{branch-2.8}}, and {{branch-2.8.1}}. The test failures are unrelated.

Thanks to all for your great reviews.

bq. "Must throw File not found" would be better updated to "Must throw FileNotFoundException".
bq. "Should throw too many open" would be better updated to "Should throw too many open files".

Addressed these minor nits while committing.

bq. But I wish there were a more portable way to check for the "Too many open files" error.

Do you mean a more portable check already exists?
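
For context, a minimal sketch of the message-string check under discussion (illustrative only, not the committed patch; the class and method names are invented). The JDK surfaces the OS-level EMFILE error from opening a file as a {{FileNotFoundException}} whose message contains "Too many open files", so matching the message text is the usual, if not very portable, way to detect it:

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

public class OpenFileErrors {

  // Returns true when the exception looks like fd exhaustion (EMFILE).
  // The message text is OS/JDK dependent, hence the portability concern.
  static boolean isTooManyOpenFiles(IOException e) {
    String msg = e.getMessage();
    return msg != null && msg.contains("Too many open files");
  }

  public static void main(String[] args) {
    IOException emfile =
        new FileNotFoundException("/data/blk_1073741825 (Too many open files)");
    IOException missing =
        new FileNotFoundException("/data/blk_1073741825 (No such file or directory)");
    System.out.println(isTooManyOpenFiles(emfile));   // true
    System.out.println(isTooManyOpenFiles(missing));  // false
  }
}
{code}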

> DN should not delete the block On "Too many open files" Exception
> -----------------------------------------------------------------
>
>                 Key: HDFS-11711
>                 URL: https://issues.apache.org/jira/browse/HDFS-11711
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Brahma Reddy Battula
>            Assignee: Brahma Reddy Battula
>            Priority: Critical
>             Fix For: 2.9.0, 3.0.0-alpha4, 2.8.1, 2.8.2
>
>         Attachments: HDFS-11711-002.patch, HDFS-11711-003.patch, HDFS-11711-004.patch, HDFS-11711-branch-2-002.patch, HDFS-11711-branch-2-003.patch, HDFS-11711.patch
>
>
>  *We have seen the following scenario in one of our customer environments:* 
> * While the job client was writing {{"job.xml"}}, there were pipeline failures, and the file ended up written to only one DN.
> * When the mapper later read {{"job.xml"}}, the DN hit a {{"Too many open files"}} exception (the system exceeded its open-file limit) and the block was deleted. The mapper therefore could not read the file, and the job failed.
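
For illustration, a rough sketch (assumed names, not the committed change) of the guard this scenario calls for on the DataNode read side: a replica should only be invalidated when its block file is verifiably absent from disk, because an EMFILE failure leaves the file intact, and deleting it here destroys the last copy:

{code:java}
import java.io.File;
import java.io.IOException;

public class ReplicaGuard {

  // Returns true only when it is safe to report the replica as missing;
  // a hypothetical helper, not the actual HDFS code path.
  static boolean shouldInvalidate(File blockFile, IOException cause) {
    String msg = cause.getMessage();
    if (msg != null && msg.contains("Too many open files")) {
      return false; // transient fd exhaustion: the data is still on disk
    }
    // Re-check the filesystem; only a truly absent file justifies deletion.
    return !blockFile.exists();
  }

  public static void main(String[] args) {
    File blockFile = new File("/data/current/blk_1073741825"); // hypothetical path
    IOException emfile = new IOException("blk_1073741825 (Too many open files)");
    System.out.println(shouldInvalidate(blockFile, emfile)); // false: keep the block
  }
}
{code}

With a guard like this, the DN surfaces the exception to the client instead of deleting the replica, so the read can presumably be retried once file descriptors free up.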



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


