hadoop-hdfs-issues mailing list archives

From "Shashikant Banerjee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient
Date Sat, 25 Aug 2018 10:18:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16592540#comment-16592540 ]

Shashikant Banerjee commented on HDDS-247:

Thanks [~msingh], for the review. Patch v11 addresses your review comments.

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>                 Key: HDDS-247
>                 URL: https://issues.apache.org/jira/browse/HDDS-247
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Blocker
>             Fix For: 0.2.1
>         Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch, HDDS-247.03.patch,
> HDDS-247.04.patch, HDDS-247.05.patch, HDDS-247.06.patch, HDDS-247.07.patch, HDDS-247.08.patch,
> HDDS-247.09.patch, HDDS-247.10.patch, HDDS-247.11.patch
> In case of ongoing writes by the Ozone client to a container, the container might get closed
> on the Datanodes because of node loss, out-of-space issues, etc. In such cases, the write
> will fail with a CLOSED_CONTAINER_IO exception. The Ozone client should then fetch the
> committed length of the block from the Datanodes and update the OM accordingly. This Jira
> aims to address this issue.
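The recovery flow described in the issue can be sketched roughly as below. This is a minimal illustrative sketch, not the actual Ozone client implementation: the exception class, the `OmStub`, and the `writeChunk`/`writeWithRecovery` methods are all hypothetical stand-ins for the real client, datanode, and Ozone Manager interactions.

```java
// Hypothetical sketch of the CLOSED_CONTAINER_IO recovery flow.
// All names here are illustrative stand-ins, not the real Ozone client API.
public class ClosedContainerRetrySketch {

    // Stand-in for the CLOSED_CONTAINER_IO error surfaced by a datanode,
    // carrying the block length the datanode actually committed.
    static class ClosedContainerIOException extends Exception {
        final long committedLength;
        ClosedContainerIOException(long committedLength) {
            this.committedLength = committedLength;
        }
    }

    // Stand-in for the Ozone Manager: records the committed block length.
    static class OmStub {
        long recordedLength = -1;
        void updateCommittedLength(long len) { recordedLength = len; }
    }

    // Simulated write that fails mid-stream because the container was
    // closed after only half of the requested bytes were committed.
    static long writeChunk(long requested) throws ClosedContainerIOException {
        throw new ClosedContainerIOException(requested / 2);
    }

    // The flow from the issue description: on CLOSED_CONTAINER_IO, take the
    // committed length from the datanode, update the OM, and report how many
    // bytes still need to be rewritten (e.g. to a block in a new container).
    static long writeWithRecovery(long requested, OmStub om) {
        try {
            return writeChunk(requested);
        } catch (ClosedContainerIOException e) {
            om.updateCommittedLength(e.committedLength);
            return requested - e.committedLength; // bytes left to rewrite
        }
    }

    public static void main(String[] args) {
        OmStub om = new OmStub();
        long remaining = writeWithRecovery(1024, om);
        System.out.println("committed=" + om.recordedLength
                + " remaining=" + remaining);
    }
}
```

The key point the patch series works toward is that the client must not treat the partially written data as lost: the committed prefix stays valid once the OM knows its length, and only the remainder is retried.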

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
