hadoop-hdfs-issues mailing list archives

From "Zhe Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-9342) Erasure coding: client should update and commit block based on acknowledged size
Date Mon, 16 Nov 2015 22:53:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-9342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007561#comment-15007561 ]

Zhe Zhang commented on HDFS-9342:
---------------------------------

Thanks Walter. The fix looks OK for closing the file and allocating new blocks. How about {{updatePipeline}}?

> Erasure coding: client should update and commit block based on acknowledged size
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-9342
>                 URL: https://issues.apache.org/jira/browse/HDFS-9342
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: erasure-coding
>    Affects Versions: 3.0.0
>            Reporter: Zhe Zhang
>            Assignee: Walter Su
>         Attachments: HDFS-9342.01.patch
>
>
> For non-EC files, we have:
> {code}
> protected ExtendedBlock block; // its length is number of bytes acked
> {code}
> For EC files, the size of {{DFSStripedOutputStream#currentBlockGroup}} is incremented
> in {{writeChunk}} without waiting for acks. And both {{updatePipeline}} and {{commitBlock}}
> are based on the size of {{currentBlockGroup}}.
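
For illustration only (this is not the attached HDFS-9342.01.patch; the class and method names below are hypothetical, not HDFS APIs): one way to bound the committed length of a striped block group by acknowledged data is to derive it from the per-streamer acked byte counts, rather than from the bytes accumulated in {{writeChunk}} before acks arrive. A minimal, simplified sketch:

{code}
import java.util.Arrays;

/**
 * Illustrative sketch only: derive a conservative "acknowledged" block-group
 * size from the bytes acked by each internal data-block streamer, instead of
 * using the bytes written so far by writeChunk.
 */
public class AckedBlockGroupSize {

  /**
   * Given the bytes acknowledged by each data streamer of a striped block
   * group, return a conservative group length that only counts data every
   * streamer has acked. (Simplified: ignores partial last stripes.)
   */
  static long ackedGroupSize(long[] ackedBytesPerDataStreamer) {
    // A stripe is only safe to commit once every data streamer has acked its
    // share, so the minimum acked length bounds the committed group size.
    long minAcked = Arrays.stream(ackedBytesPerDataStreamer).min().orElse(0L);
    return minAcked * ackedBytesPerDataStreamer.length;
  }

  public static void main(String[] args) {
    // Example: 6 data streamers; streamer 3 lags behind the others.
    long[] acked = {1048576, 1048576, 1048576, 524288, 1048576, 1048576};
    System.out.println("acked group size = " + ackedGroupSize(acked));
    // Committing based on writeChunk progress could instead report
    // 6 * 1048576 bytes, more than the datanodes have actually acked.
  }
}
{code}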



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
