hadoop-hdfs-issues mailing list archives

From "Shashikant Banerjee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDDS-247) Handle CLOSED_CONTAINER_IO exception in ozoneClient
Date Fri, 24 Aug 2018 13:20:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDDS-247:
-------------------------------------
    Attachment: HDDS-247.10.patch

> Handle CLOSED_CONTAINER_IO exception in ozoneClient
> ---------------------------------------------------
>
>                 Key: HDDS-247
>                 URL: https://issues.apache.org/jira/browse/HDDS-247
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Blocker
>             Fix For: 0.2.1
>
>         Attachments: HDDS-247.00.patch, HDDS-247.01.patch, HDDS-247.02.patch, HDDS-247.03.patch,
>                      HDDS-247.04.patch, HDDS-247.05.patch, HDDS-247.06.patch, HDDS-247.07.patch,
>                      HDDS-247.08.patch, HDDS-247.09.patch, HDDS-247.10.patch
>
>
> While the Ozone client has ongoing writes to a container, the container might get closed
> on the Datanodes because of node loss, out-of-space conditions, etc. In such cases, the write
> will fail with a CLOSED_CONTAINER_IO exception. When this happens, the Ozone client should
> fetch the committed length of the block from the Datanodes and update the OM accordingly.
> This Jira aims to address this issue.
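>
> A minimal sketch of the intended handling, assuming hypothetical helper types
> (DatanodeClient, OmClient, BlockOutputStreamEntry) and method names chosen only to
> illustrate the flow described above, not the actual Ozone client API:
>
> {code:java}
> import java.io.IOException;
>
> // Illustrative sketch only: every type and method name here is a placeholder
> // for the behaviour described in this issue, not the real Ozone client API.
> public class ClosedContainerHandlingSketch {
>
>   interface DatanodeClient {
>     long getCommittedBlockLength(String blockId) throws IOException;
>   }
>
>   interface OmClient {
>     void updateCommittedBlockLength(String blockId, long committedLength) throws IOException;
>   }
>
>   interface BlockOutputStreamEntry {
>     String getBlockId();
>     // Rewrites any buffered data past the committed offset to a newly
>     // allocated block in an open container.
>     void rewriteDataBeyond(long committedLength) throws IOException;
>   }
>
>   // Called when a write fails with CLOSED_CONTAINER_IO because the container
>   // was closed on the Datanodes (node loss, out of space, etc.).
>   static void handleClosedContainer(BlockOutputStreamEntry entry,
>                                     DatanodeClient dnClient,
>                                     OmClient omClient) throws IOException {
>     // Ask the Datanodes how much of the block was actually committed.
>     long committedLength = dnClient.getCommittedBlockLength(entry.getBlockId());
>
>     // Report the committed length to the Ozone Manager (OM) so the key
>     // metadata reflects only the data that really reached the container.
>     omClient.updateCommittedBlockLength(entry.getBlockId(), committedLength);
>
>     // Data buffered beyond the committed length must be rewritten to a new block.
>     entry.rewriteDataBeyond(committedLength);
>   }
> }
> {code}
>
> In the actual client this logic would sit in the key/block output stream write path,
> followed by allocating a new block in an open container and retrying the buffered writes.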



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


