hadoop-hdfs-issues mailing list archives

From "Mukul Kumar Singh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-419) ChunkInputStream bulk read api does not read from all the chunks
Date Tue, 11 Sep 2018 06:09:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610142#comment-16610142 ]

Mukul Kumar Singh commented on HDDS-419:

Thanks for reviewing the patch [~xyao]. The Chunk(Input/Output)Stream classes are responsible
for reading/writing the entire block in Ozone.
However, with the current code, we were skipping the next few chunks in ChunkInputStream.

Also, if we look at ChunkGroupInputStream:
      int readLen = Math.min(len, (int)current.getRemaining());
The line above expects ChunkInputStream to read across all the chunks of the block.
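The fix, in essence, is for the bulk read to loop across chunk boundaries instead of returning after the current chunk. A minimal sketch of that looping pattern follows; the class and field names here are illustrative only, not the actual Ozone code:

```java
import java.util.List;

// Hypothetical sketch: a block backed by a sequence of chunks, whose bulk
// read must span chunk boundaries. Names are illustrative, not Ozone's API.
class ChunkedStream {
    private final List<byte[]> chunks; // the block's chunks, in order
    private int chunkIndex = 0;        // current chunk
    private int posInChunk = 0;        // read position within current chunk

    ChunkedStream(List<byte[]> chunks) {
        this.chunks = chunks;
    }

    // A buggy version would copy only from the current chunk and return;
    // this version keeps looping until len bytes are read or the block ends.
    int read(byte[] b, int off, int len) {
        int total = 0;
        while (len > 0 && chunkIndex < chunks.size()) {
            byte[] chunk = chunks.get(chunkIndex);
            int available = chunk.length - posInChunk;
            int n = Math.min(len, available);
            System.arraycopy(chunk, posInChunk, b, off, n);
            posInChunk += n;
            off += n;
            len -= n;
            total += n;
            if (posInChunk == chunk.length) { // exhausted: advance to next chunk
                chunkIndex++;
                posInChunk = 0;
            }
        }
        return total == 0 ? -1 : total; // -1 on end-of-block, like InputStream
    }
}
```

With this loop, a single read(buf, 0, 5) against chunks of sizes 3 and 2 fills all 5 bytes, which is what the Math.min(len, current.getRemaining()) calculation in ChunkGroupInputStream assumes.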

> ChunkInputStream bulk read api does not read from all the chunks
> ----------------------------------------------------------------
>                 Key: HDDS-419
>                 URL: https://issues.apache.org/jira/browse/HDDS-419
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Client
>    Affects Versions: 0.2.1
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Blocker
>             Fix For: 0.2.1
>         Attachments: HDDS-419.001.patch
> After enabling bulk reads with HDDS-408, testDataValidate started failing because
> the bulk read api does not read all the chunks from the block.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
