hadoop-hdfs-issues mailing list archives

From "Li Bo (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8704) Erasure Coding: client fails to write large file when one datanode fails
Date Thu, 02 Jul 2015 03:06:05 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611387#comment-14611387 ]

Li Bo commented on HDFS-8704:
-----------------------------

Oh, I see. HDFS-8254 discussed this problem before it was committed. The client write logic
is complex, and there are many situations that can cause a write to fail. I think we should
fix each situation in a separate jira rather than handle them all in HDFS-8383.

> Erasure Coding: client fails to write large file when one datanode fails
> ------------------------------------------------------------------------
>
>                 Key: HDFS-8704
>                 URL: https://issues.apache.org/jira/browse/HDFS-8704
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Li Bo
>            Assignee: Li Bo
>         Attachments: HDFS-8704-000.patch
>
>
> I tested the current code on a 5-node cluster using RS(3,2). When a datanode fails, the
> client succeeds in writing a file smaller than a block group but fails to write a larger
> one. {{TestDFSStripedOutputStreamWithFailure}} only tests files smaller than a block
> group; this jira will add more test cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
