hadoop-hdfs-issues mailing list archives

From "Hanisha Koneru (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-1529) BlockInputStream: Avoid buffer copy if the whole chunk is being read
Date Wed, 10 Jul 2019 20:39:00 GMT

    [ https://issues.apache.org/jira/browse/HDDS-1529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16882434#comment-16882434 ]

Hanisha Koneru commented on HDDS-1529:
--------------------------------------

[~hgadre], HDDS-1496 adds support for reading partial chunks. When chunks are read from disk,
they are stored in a local buffer and then the required part of the chunk is copied to the
client buffer. This is needed when the chunk boundary to be read does not coincide with the
checksum boundary. But when we are reading the whole chunk, we do not need to do a double
copy, i.e., copy from disk to the local buffer and then to the client buffer. We can copy the
data directly from disk to the client buffer.
Please let me know if this makes sense or if you have any questions.
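
To illustrate the two paths, here is a rough, self-contained sketch. It is not the actual
ChunkInputStream code: the chunk is modelled as an in-memory byte[], and the checksum block
size and helper structure are made up purely for illustration.

import java.nio.ByteBuffer;

public class ChunkReadSketch {

  // Hypothetical size of the range covered by one checksum entry.
  static final int CHECKSUM_BLOCK_SIZE = 16;

  // Read len bytes starting at offset of the chunk into clientBuffer.
  static void readChunk(byte[] chunkData, int offset, int len, ByteBuffer clientBuffer) {
    if (offset == 0 && len == chunkData.length) {
      // Whole chunk requested: a single copy, "disk" -> client buffer.
      clientBuffer.put(chunkData, 0, len);
      // Checksum verification would run directly over clientBuffer here.
    } else {
      // Partial read: expand the range to checksum boundaries, stage it in a
      // local buffer, then copy only the requested slice (the double copy).
      int start = (offset / CHECKSUM_BLOCK_SIZE) * CHECKSUM_BLOCK_SIZE;
      int end = Math.min(chunkData.length,
          ((offset + len + CHECKSUM_BLOCK_SIZE - 1) / CHECKSUM_BLOCK_SIZE) * CHECKSUM_BLOCK_SIZE);
      ByteBuffer local = ByteBuffer.allocate(end - start);
      local.put(chunkData, start, end - start);
      // Checksum verification would run over the local buffer here.
      local.flip();
      local.position(offset - start);
      local.limit(offset - start + len);
      clientBuffer.put(local);
    }
  }

  public static void main(String[] args) {
    byte[] chunk = new byte[64];
    for (int i = 0; i < chunk.length; i++) chunk[i] = (byte) i;

    ByteBuffer whole = ByteBuffer.allocate(64);
    readChunk(chunk, 0, 64, whole);      // single-copy path
    ByteBuffer partial = ByteBuffer.allocate(10);
    readChunk(chunk, 5, 10, partial);    // double-copy path
  }
}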

> BlockInputStream: Avoid buffer copy if the whole chunk is being read
> --------------------------------------------------------------------
>
>                 Key: HDDS-1529
>                 URL: https://issues.apache.org/jira/browse/HDDS-1529
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Hanisha Koneru
>            Assignee: Hrishikesh Gadre
>            Priority: Major
>
> Currently, BlockInputStream reads chunk data from DNs and puts it in a local buffer and
> then copies the data to the client's buffer. This is required for partial chunk reads, where
> more chunk data than requested might have to be read so that checksum verification can be done.
> But if the whole chunk is being read, we can copy the data directly into the client buffer and
> avoid double buffer copies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
