hadoop-hdfs-issues mailing list archives

From "Shashikant Banerjee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-12794) Ozone: Parallelize ChunkOutputSream Writes to container
Date Mon, 20 Nov 2017 19:35:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shashikant Banerjee updated HDFS-12794:
    Attachment: HDFS-12794-HDFS-7240.003.patch

Thanks, [~anu], for the review comments.
As per the discussion with [~anu], here are a few conclusions:
{code}
// make sure all the data in the ChunkOutputStreams is written to the
// container
semaphore.availablePermits() == getMaxOutstandingChunks());
{code}

# While doing close on the ChunkGroupOutputStream, we call chunkOutputStream.close(), which does future.get()
on the response obtained after the write completes on the xceiver server. This makes sure the
response has been received from the xceiver server. When closing the group stream, the number of
available semaphore permits should therefore equal the maximum number of outstanding chunks
allowed at any given point of time.
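A minimal sketch of that permit accounting (class and constant names here are illustrative, not from the patch): a semaphore sized to the maximum number of outstanding chunks gates each async write, every completed write returns its permit, and close can then check that all permits are back.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;

// Illustrative sketch only: BoundedChunkWriter and MAX_OUTSTANDING_CHUNKS
// are hypothetical names, not from the patch.
class BoundedChunkWriter {
  static final int MAX_OUTSTANDING_CHUNKS = 4;
  private final Semaphore semaphore = new Semaphore(MAX_OUTSTANDING_CHUNKS);

  CompletableFuture<Void> writeChunkAsync(byte[] chunk) {
    // blocks once MAX_OUTSTANDING_CHUNKS writes are already in flight
    semaphore.acquireUninterruptibly();
    return CompletableFuture
        .runAsync(() -> { /* send chunk to the container */ })
        .whenComplete((v, t) -> semaphore.release()); // permit returned on completion
  }

  int availablePermits() {
    return semaphore.availablePermits();
  }

  void close() {
    // all outstanding writes must have completed before close returns
    if (semaphore.availablePermits() != MAX_OUTSTANDING_CHUNKS) {
      throw new IllegalStateException("unflushed chunk writes at close");
    }
  }
}
```

After joining each write's future, every permit has been released, so the close-time check passes only when no writes remain outstanding.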

# Hardcoding an exception when the writeChunkToContainer call completes on the xceiver server shows that the exception is caught in the ChunkGroupOutputStream.close() path, which is expected:
{code}
throw new CompletionException(
    "Unexpected Storage Container Exception: " + e.toString(), e);
{code}

{code}
try {
  String requestID =
      traceID + chunkIndex + ContainerProtos.Type.WriteChunk.name();
  // add the chunk write traceID to the queue
  LOG.warn("calling async");
  response =
      writeChunkAsync(xceiverClient, chunk, key, data, requestID);
  response = response.thenApply(reply -> {
    try {
      throw new IOException("Exception while validating response");
      // ContainerProtocolCalls.validateContainerResponse(reply);
      // return reply;
    } catch (IOException e) {
      LOG.info("coming here to throw exception");
      throw new CompletionException(
          "Unexpected Storage Container Exception: " + e.toString(), e);
    }
  });
{code}
{code}
java.io.IOException: Unexpected Storage Container Exception: java.util.concurrent.ExecutionException:
java.io.IOException: Exception while validating response

  at org.apache.hadoop.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:174)
  at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:468)
  at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:291)
{code}

This is as expected. The idea was to write a mock test around the validateContainerResponse call,
which is a static method of a final class; this requires PowerMockRunner, which leads to
issues while bringing up the MiniOzoneCluster. A unit test to verify the same will be addressed
later in a different JIRA.
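The propagation path in the trace above can be reproduced with a small self-contained sketch (class and method names here are hypothetical; note that java.util.concurrent.CompletionException only exposes its Throwable-only constructor publicly, so this sketch wraps the IOException directly rather than using the message-plus-cause form shown in the patch snippet):

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Hypothetical sketch of the failure path: validation inside thenApply fails,
// the IOException is wrapped in a CompletionException, and the wrapped
// exception resurfaces when the close path waits on the future
// (future.get() reports it as an ExecutionException, as in the trace above).
class ResponseValidationSketch {
  static CompletableFuture<String> validated(CompletableFuture<String> response) {
    return response.thenApply(reply -> {
      try {
        // stand-in for ContainerProtocolCalls.validateContainerResponse(reply)
        throw new IOException("Exception while validating response");
      } catch (IOException e) {
        // CompletionException(String, Throwable) is protected in the JDK,
        // so the sketch uses the public Throwable-only constructor
        throw new CompletionException(e);
      }
    });
  }
}
```

Joining the returned future rethrows the CompletionException with the original IOException as its cause, which is why the close path observes the wrapped validation failure.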

Patch v3 addresses the remaining review comments.
[~anu]/others, please have a look.

> Ozone: Parallelize ChunkOutputSream Writes to container
> -------------------------------------------------------
>                 Key: HDFS-12794
>                 URL: https://issues.apache.org/jira/browse/HDFS-12794
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>             Fix For: HDFS-7240
>         Attachments: HDFS-12794-HDFS-7240.001.patch, HDFS-12794-HDFS-7240.002.patch,
> The ChunkOutputStream writes are synchronous in nature. Once one chunk of data gets written,
> the next chunk write is blocked until the previous chunk is written to the container.
> The ChunkOutputStream writes should be made async, and close on the OutputStream
> should ensure flushing of all dirty buffers to the container.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
