hadoop-hdfs-issues mailing list archives

From "Ajay Kumar (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDDS-297) Add pipeline actions in Ozone
Date Tue, 21 Aug 2018 19:34:00 GMT

https://issues.apache.org/jira/browse/HDDS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16587908#comment-16587908

Ajay Kumar commented on HDDS-297:

[~msingh] thanks for updating the patch. LGTM. A few minor comments and questions:
* ContainerMapping#handlePipelineClose: should we log the pipeline id if it's not found?
* ContainerStateMachine L119: rename server to ratisServer.
* HDDSConfigKeys: just curious about the limit of 20. For a medium to big cluster this may be too low.
* RatisHelper L58: is this "_" in peerId generated by the Ratis API? If yes, then maybe we should replace it with an internal config constant (to avoid any potential breakage by a future Ratis change).
* StateContext L275: shall we add unordered() before limit()? This is what the javadoc says about it:
{code}Using an unordered
{code}Using an unordered
     * stream source (such as {@link #generate(Supplier)}) or removing the
     * ordering constraint with {@link #unordered()} may result in significant
     * speedups of {@code limit()} in parallel pipelines, if the semantics of
     * your situation permit.  If consistency with encounter order is required,
     * and you are experiencing poor performance or memory utilization with
     * {@code limit()} in parallel pipelines, switching to sequential execution
     * with {@link #sequential()} may improve performance.{code}
* StorageContainerDatanodeProtocol.proto: could you please share why ClosePipelineInfo in PipelineAction is optional? In PipelineEventHandler it seems to be treated as a required field.
* PipelineEventHandler: rename the class to PipelineActionEventHandler?
* TestNodeFailure#testPipelineFail: shall we assert that the ratisContainer1 pipeline is open before we shut down the datanode in that pipeline? Also, shall we test failure for both the leader and a follower?
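For context on the unordered()/limit() point above, the behavior the javadoc describes can be seen in a minimal standalone sketch (class and variable names here are illustrative, not from the patch):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class UnorderedLimitDemo {
    public static void main(String[] args) {
        // limit() on an ordered parallel stream must preserve encounter
        // order, which can be expensive in parallel pipelines.
        List<Integer> ordered = IntStream.range(0, 1_000_000)
                .parallel()
                .boxed()
                .limit(5)
                .collect(Collectors.toList());

        // Dropping the ordering constraint with unordered() lets limit()
        // return any five elements, which the javadoc notes can yield
        // significant speedups in parallel pipelines.
        List<Integer> anyFive = IntStream.range(0, 1_000_000)
                .parallel()
                .boxed()
                .unordered()
                .limit(5)
                .collect(Collectors.toList());

        System.out.println(ordered);        // always [0, 1, 2, 3, 4]
        System.out.println(anyFive.size()); // always 5; elements may vary
    }
}
```

The trade-off only matters when the stream is parallel and the caller does not care which elements survive the limit; for a sequential stream the two forms behave identically.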
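On the optional-vs-required question, a hypothetical proto2 sketch of the shape under discussion (field names and numbers are my assumptions, not taken from the actual StorageContainerDatanodeProtocol.proto):

```proto
// Hypothetical sketch in proto2 syntax; the actual message layout in
// the patch may differ.
message ClosePipelineInfo {
  required PipelineID pipelineID = 1;
  optional string detailedReason = 2;
}

message PipelineAction {
  enum Action { CLOSE = 1; }
  required Action action = 1;
  // Declared optional here, yet a handler processing a CLOSE action
  // may dereference it unconditionally -- hence the review question.
  optional ClosePipelineInfo closePipeline = 2;
}
```

In proto2 it is a common convention to keep message-typed fields optional and validate them in the handler, since a required field can never be removed in a wire-compatible way; that may be the rationale, but it is worth confirming.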

> Add pipeline actions in Ozone
> -----------------------------
>                 Key: HDDS-297
>                 URL: https://issues.apache.org/jira/browse/HDDS-297
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: SCM
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Major
>             Fix For: 0.2.1
>         Attachments: HDDS-297.001.patch, HDDS-297.002.patch, HDDS-297.003.patch
> Pipelines in Ozone are created from a group of nodes depending upon the replication
> factor and type. These pipelines provide a transport protocol for data transfer.
> In order to detect any pipeline failure, SCM should receive pipeline reports from Datanodes
> and process them to identify the various Raft rings.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
