Subject: [jira] [Updated] (HDFS-12159) Ozone: SCM: Add create replication pipeline RPC
From: "Anu Engineer (JIRA)"
To: hdfs-issues@hadoop.apache.org
Date: Mon, 14 Aug 2017 18:50:00 +0000 (UTC)

[ https://issues.apache.org/jira/browse/HDFS-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer updated HDFS-12159:
--------------------------------
    Attachment: HDFS-12159-HDFS-7240.004.patch

[~xyao] Thank you for the detailed review. I have fixed all the issues and all checkstyle warnings that are applicable. Please take a look at patch 4. Details of the changes are below.

bq. Ozone.proto: Line 78: Do we need a separate STAND_ALONE?
Stand alone is used for test purposes today, and we can remove it in the future if needed.

bq. Do we need a separate STAND_ALONE?
I was hoping to develop an HDFS-like pipeline in the future. That is what Chained meant. Stand alone is used only for test purposes.

bq. StorageContainerLocationProtocol.proto: Line 121: should we name it nodepool to be consistent?
Fixed.

bq. StorageContainerLocationProtocol.java Line 38-39: We are changing to use protobuf types like OzoneProtos.ReplicationFactor/ReplicationType
I don't see much value in converting PB types to enums. For more complex types, I do get that protobuf-generated code does not feel very natural. I am open to doing a Java enum and then converting it to protobuf for RPC purposes, but that is just an enum with a switch case for the conversion; see the sketch below.
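As an illustration of the switch-based conversion mentioned above, here is a minimal sketch of a Java enum mirrored onto a protobuf enum. The class name, the RATIS value, and the import path are assumptions for the example, not necessarily what patch 4 contains.

{code:java}
// Minimal sketch, not the actual patch: a Java enum converted to and
// from its protobuf counterpart with a plain switch in each direction.
// The OzoneProtos import path and the RATIS value are assumed here.
import org.apache.hadoop.ozone.protocol.proto.OzoneProtos;

public enum ReplicationType {
  STAND_ALONE, CHAINED, RATIS;

  /** Convert this Java enum value to the protobuf enum used on the RPC. */
  public OzoneProtos.ReplicationType toProto() {
    switch (this) {
    case STAND_ALONE:
      return OzoneProtos.ReplicationType.STAND_ALONE;
    case CHAINED:
      return OzoneProtos.ReplicationType.CHAINED;
    case RATIS:
      return OzoneProtos.ReplicationType.RATIS;
    default:
      throw new IllegalStateException("Unknown replication type: " + this);
    }
  }

  /** Convert a protobuf enum value received over the RPC back to Java. */
  public static ReplicationType fromProto(OzoneProtos.ReplicationType type) {
    switch (type) {
    case STAND_ALONE:
      return STAND_ALONE;
    case CHAINED:
      return CHAINED;
    case RATIS:
      return RATIS;
    default:
      throw new IllegalArgumentException("Unknown replication type: " + type);
    }
  }
}
{code}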
bq. ContainerOperationClient.java NIT: line 108-109 java doc needs update
Fixed.

bq. ScmClient.java NIT: line 102 java doc incomplete
Fixed.

bq. StorageContainerLocationProtocolClientSideTranslatorPB.java Line 219: NIT: should we call allocatePipeline instead of createReplicationPipeline to be consistent?
Fixed.

bq. StorageContainerLocationProtocol.proto Line 166: based on the comments here, we seem to let SCM talk to datanodes to create the pipeline
Fixed the wrong comment.

bq. DatanodeStateMachine.java Line 91-94: these can be removed.
Fixed.

bq. StateContext.java Line 102: NIT: can we define something like INVALID_PORT = -1 somewhere like OzoneConsts.java?
Fixed.

bq. XceiverServerRatis.java Line 98-100: can be removed or change output to debug log
Fixed.

bq. Line 111: should we have special configuration keys for remapping of the storageDir for Ratis in MiniOzoneCluster mode?
Not really; the localPort is taken care of by the DatanodeID. If we want, we can map the storage directory under the datanode ID or the localPort permanently, so we don't need any special casing.

bq. OzoneContainer.java Line 196: use OzoneConsts#INVALID_PORT?
Fixed.

bq. ContainerMapping.java Line 165: NIT: unnecessary line break
Fixed.

bq. PipelineManager.java: Line 41: do we assume the Pipeline returned is in OPERATIONAL state?
Currently we assume all pipelines are operational, since we will only return STAND_ALONE pipelines for now. When Ratis pipelines are fully operational we will have to handle other states too. Added a comment to clarify this.

bq. Line 43: NIT: can we rename getReplicationPipeline to getPipeline?
Fixed.

bq. Line 63: NIT: can we call it getMembers? That matches well with the javadoc above it.
Fixed.

bq. Line 68: NIT: rename to updatePipeline()? we may also need updatePipeline with states from PipelineManager.
Fixed; a rough sketch of the resulting interface shape follows this comment.

bq. Line 59: do we need more than one datanode on standalone replication? I would suggest we make this a ChainedManagerImpl.java and support standalone
You are right, we should probably remove stand-alone replication in the long run. Right now this is the code path that is functional, and we need it to enable testing of the pluggable framework. Chained replication will at some point replicate to other machines; stand alone is what we have now, without the ability to replicate.
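To make the PipelineManager renames above concrete, here is a rough sketch of the interface shape being discussed (getPipeline, getMembers, updatePipeline). The exact signatures, parameter types, and packages are assumptions for illustration, not the code in patch 4.

{code:java}
// Rough sketch only: the PipelineManager surface after the renames
// discussed above. Pipeline, DatanodeID and OzoneProtos are project
// types; the signatures shown here are assumed, not taken from patch 4.
import java.io.IOException;
import java.util.List;

public interface PipelineManager {

  /**
   * Returns a pipeline for the requested replication factor. Today only
   * STAND_ALONE pipelines are returned and they are assumed to be
   * operational; other states will need handling once Ratis pipelines
   * are fully wired in.
   */
  Pipeline getPipeline(OzoneProtos.ReplicationFactor factor) throws IOException;

  /** Returns the datanodes that are members of the named pipeline. */
  List<DatanodeID> getMembers(String pipelineID) throws IOException;

  /** Updates the membership (and eventually the state) of a pipeline. */
  void updatePipeline(String pipelineID, List<DatanodeID> newMembers)
      throws IOException;
}
{code}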
> Ozone: SCM: Add create replication pipeline RPC
> -----------------------------------------------
>
> Key: HDFS-12159
> URL: https://issues.apache.org/jira/browse/HDFS-12159
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Anu Engineer
> Assignee: Anu Engineer
> Fix For: HDFS-7240
>
> Attachments: HDFS-12159-HDFS-7240.001.patch, HDFS-12159-HDFS-7240.002.patch, HDFS-12159-HDFS-7240.003.patch, HDFS-12159-HDFS-7240.004.patch
>
> Add an API that allows users to create replication pipelines using SCM.