hadoop-hdfs-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (HDDS-1496) Support partial chunk reads and checksum verification
Date Thu, 06 Jun 2019 20:45:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1496?focusedWorklogId=255422&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255422 ]

ASF GitHub Bot logged work on HDDS-1496:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Jun/19 20:44
            Start Date: 06/Jun/19 20:44
    Worklog Time Spent: 10m 
      Work Description: hanishakoneru commented on pull request #804: HDDS-1496. Support partial
chunk reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804#discussion_r291363760
 
 

 ##########
 File path: hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/storage/ChunkInputStream.java
 ##########
 @@ -0,0 +1,531 @@
+package org.apache.hadoop.hdds.scm.storage;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import org.apache.hadoop.fs.Seekable;
+import org.apache.hadoop.hdds.client.BlockID;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ChunkInfo;
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ReadChunkResponseProto;
+import org.apache.hadoop.hdds.scm.XceiverClientReply;
+import org.apache.hadoop.hdds.scm.XceiverClientSpi;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.common.Checksum;
+import org.apache.hadoop.ozone.common.ChecksumData;
+import org.apache.ratis.thirdparty.com.google.protobuf.ByteString;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ExecutionException;
+
+/**
+ * An {@link InputStream} used by the REST service in combination with the
+ * SCMClient to read the value of a key from a sequence of container chunks.
+ * All bytes of the key value are stored in container chunks. Each chunk may
+ * contain multiple underlying {@link ByteBuffer} instances.  This class
 
 Review comment:
   Updated
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 255422)
    Time Spent: 9h 20m  (was: 9h 10m)

> Support partial chunk reads and checksum verification
> -----------------------------------------------------
>
>                 Key: HDDS-1496
>                 URL: https://issues.apache.org/jira/browse/HDDS-1496
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>            Reporter: Hanisha Koneru
>            Assignee: Hanisha Koneru
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> BlockInputStream#readChunkFromContainer() reads the whole chunk from disk even if the client needs only a part of it.
> This Jira aims to improve readChunkFromContainer so that only the part of the chunk file needed by the client is read, plus the additional bytes of the chunk file required to verify the checksum.
> For example, let's say the client is reading from index 120 to 450 in the chunk, and a checksum is stored for every 100 bytes, i.e. the first checksum covers bytes 0 to 99, the next covers bytes 100 to 199, and so on. To verify bytes 120 to 450, we would need to read bytes 100 to 499 so that checksum verification can be done.
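The boundary arithmetic in the example above can be sketched as follows. This is a minimal illustration, not code from the HDDS-1496 patch; the class and method names are hypothetical:

```java
// Hypothetical sketch: round a requested byte range outward to
// checksum-block boundaries, so every checksum unit that overlaps
// the request is read in full and can be verified.
public class ChecksumAlignedRange {

    // First byte to read: start of the checksum block containing readStart.
    static long alignedStart(long readStart, int bytesPerChecksum) {
        return (readStart / bytesPerChecksum) * bytesPerChecksum;
    }

    // Last byte to read: end of the checksum block containing readEnd.
    static long alignedEnd(long readEnd, int bytesPerChecksum) {
        return (readEnd / bytesPerChecksum + 1) * bytesPerChecksum - 1;
    }

    public static void main(String[] args) {
        // The example from the issue: reading indices 120..450 with a
        // checksum every 100 bytes requires reading indices 100..499.
        System.out.println(alignedStart(120, 100) + ".." + alignedEnd(450, 100));
    }
}
```

Running this prints `100..499`, matching the example in the description.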



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

