hadoop-hdfs-issues mailing list archives

From "Shashikant Banerjee (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-12794) Ozone: Parallelize ChunkOutputSream Writes to container
Date Mon, 20 Nov 2017 19:58:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259728#comment-16259728 ]

Shashikant Banerjee edited comment on HDFS-12794 at 11/20/17 7:57 PM:
----------------------------------------------------------------------

Thanks [~anu], for the review comments.
As per the discussion with [~anu], here are a few conclusions:
1)
{code}
// Make sure all the data in the ChunkOutputStreams is written to the
// container.
Preconditions.checkArgument(
    semaphore.availablePermits() == getMaxOutstandingChunks());
{code}

While closing the groupOutputStream we call chunkOutputStream.close(), which does a future.get()
on the response obtained after the write completes, ensuring the response has actually been
received from the xceiver server. So by the time the group stream is closed, the number of
available semaphore permits should equal the maximum number of outstanding chunks allowed
at any given point in time.
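
In other words, the accounting is: a permit is acquired when a chunk write is issued and released when its response completes, so once every chunk stream's close() has waited for its responses, all permits are back. A minimal sketch of that accounting (writeChunkToContainer here is an assumed name for whatever issues the async write; this is illustrative, not the patch itself):

{code}
// Hypothetical sketch of the permit accounting behind the check above.
private final Semaphore semaphore = new Semaphore(getMaxOutstandingChunks());

void writeChunkAsync(ByteBuffer chunk) throws InterruptedException {
  semaphore.acquire();                 // one permit per in-flight chunk
  writeChunkToContainer(chunk)         // assumed: returns a CompletableFuture
      .whenComplete((reply, err) -> semaphore.release());
}

// Once close() has waited for every pending response, availablePermits()
// is back to getMaxOutstandingChunks(), so the Preconditions check holds.
{code}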


2)
{code}
throw new CompletionException(
    "Unexpected Storage Container Exception: " + e.toString(), e);
{code}
Hardcoding an exception where the writeChunkToContainer call completes on the xceiver server
shows that the exception is caught in the ChunkGroupOutputStream.close() path, which is expected.
{code}
      response = response.thenApply(reply -> {
        try {
          // Hardcoded failure in place of the real validation:
          throw new IOException("Exception while validating response");
          // ContainerProtocolCalls.validateContainerResponse(reply);
          // return reply;
        } catch (IOException e) {
          throw new CompletionException(
              "Unexpected Storage Container Exception: " + e.toString(), e);
        }
      });
{code}

The resulting failure surfaces on the close path:
{code}
java.io.IOException: Unexpected Storage Container Exception: java.util.concurrent.ExecutionException: java.io.IOException: Exception while validating response

  at org.apache.hadoop.scm.storage.ChunkOutputStream.close(ChunkOutputStream.java:174)
  at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream$ChunkOutputStreamEntry.close(ChunkGroupOutputStream.java:468)
  at org.apache.hadoop.ozone.client.io.ChunkGroupOutputStream.close(ChunkGroupOutputStream.java:291)
{code}
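
The nesting in that message (an IOException whose text embeds an ExecutionException) is consistent with the chunk stream's close() blocking on the response future and re-wrapping the failure. Purely as an assumed sketch of that path (field and message names are illustrative):

{code}
// Assumed sketch of how close() surfaces a failed write-chunk response.
public void close() throws IOException {
  try {
    // Blocks until the write-chunk response (or its failure) arrives.
    responseFuture.get();
  } catch (InterruptedException | ExecutionException e) {
    // The failure injected in thenApply() arrives here wrapped in an
    // ExecutionException and is re-wrapped, producing the nested message
    // seen in the stack trace above.
    throw new IOException(
        "Unexpected Storage Container Exception: " + e.toString(), e);
  }
}
{code}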

This is as expected. The idea was to write a mock test around the validateContainerResponse call,
but it is a static method of a final class, so mocking it requires PowerMockRunner, which leads to
issues while bringing up the MiniOzoneCluster. Will add a unit test to verify the same
later in a different JIRA.
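
For reference, mocking that static call would need roughly the shape below, which is exactly the PowerMockRunner dependency that conflicts with the MiniOzoneCluster setup. This is an untested sketch: the exact signature of validateContainerResponse, the import path for ContainerProtocolCalls, and the surrounding test wiring are assumptions, not the planned test.

{code}
import java.io.IOException;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
// plus ContainerProtocolCalls (import path assumed from the scm.storage package)

@RunWith(PowerMockRunner.class)
@PrepareForTest(ContainerProtocolCalls.class)
public class TestChunkStreamValidationFailure {

  @Test
  public void validationFailurePropagatesToClose() throws Exception {
    PowerMockito.mockStatic(ContainerProtocolCalls.class);
    // Make the static validation call fail for any response.
    PowerMockito
        .doThrow(new IOException("Exception while validating response"))
        .when(ContainerProtocolCalls.class, "validateContainerResponse",
            Mockito.any());
    // ... write through the chunk stream and close(); the injected failure
    // should surface from close(), as in the stack trace above.
  }
}
{code}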

Patch v3 addresses the remaining review comments.
[~anu]/others, please have a look.



> Ozone: Parallelize ChunkOutputSream Writes to container
> -------------------------------------------------------
>
>                 Key: HDFS-12794
>                 URL: https://issues.apache.org/jira/browse/HDFS-12794
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>             Fix For: HDFS-7240
>
>         Attachments: HDFS-12794-HDFS-7240.001.patch, HDFS-12794-HDFS-7240.002.patch, HDFS-12794-HDFS-7240.003.patch
>
>
> The ChunkOutputStream writes are sync in nature. Once one chunk of data gets written, the next chunk write is blocked until the previous chunk is written to the container.
> The ChunkOutputStream writes should be made async, and close() on the OutputStream should ensure flushing of all dirty buffers to the container.
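
A rough sketch of the async pattern being proposed (names such as writeChunkToContainer are assumptions for illustration): each chunk write returns a future immediately instead of blocking, the futures are collected, and close() flushes by waiting for all of them.

{code}
// Illustrative only: async chunk writes with a flush-on-close barrier.
private final List<CompletableFuture<Void>> pendingWrites = new ArrayList<>();

void writeChunk(ByteBuffer chunk) {
  // Returns immediately; the container write proceeds in the background.
  pendingWrites.add(writeChunkToContainer(chunk));   // assumed async call
}

public void close() throws IOException {
  try {
    // Flush: wait for every outstanding chunk before declaring the stream closed.
    CompletableFuture
        .allOf(pendingWrites.toArray(new CompletableFuture[0]))
        .get();
  } catch (InterruptedException | ExecutionException e) {
    throw new IOException("Failed to flush outstanding chunk writes", e);
  }
}
{code}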



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

