From dev-return-53132-archive-asf-public=cust-asf.ponee.io@phoenix.apache.org Fri Jul 20 04:59:05 2018
From: "ASF GitHub Bot (JIRA)"
To: dev@phoenix.apache.org
Reply-To: dev@phoenix.apache.org
Date: Fri, 20 Jul 2018 02:59:00 +0000 (UTC)
Subject: [jira] [Commented] (PHOENIX-3817) VerifyReplication using SQL

    [ https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550154#comment-16550154 ]

ASF GitHub Bot commented on PHOENIX-3817:
-----------------------------------------

Github user karanmehta93 commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/309#discussion_r203927054

    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixMapReduceUtil.java ---
    @@ -157,6 +192,192 @@ public static void setOutput(final Job job, final String tableName,final String
             PhoenixConfigurationUtil.setUpsertColumnNames(configuration,columns.split(","));
         }

    +    /**
    +     * Generate a query plan for a MapReduce job query.
    +     * @param configuration The MapReduce job configuration
    +     * @return Query plan for the MapReduce job
    +     * @throws SQLException If the plan cannot be generated
    +     */
    +    public static QueryPlan getQueryPlan(final Configuration configuration)
    +            throws SQLException {
    +        return getQueryPlan(configuration, false);
    +    }
    +
    +    /**
    +     * Generate a query plan for a MapReduce job query.
    +     * @param configuration The MapReduce job configuration
    +     * @param isTargetConnection Whether the query plan is for the target HBase cluster
    +     * @return Query plan for the MapReduce job
    +     * @throws SQLException If the plan cannot be generated
    +     */
    +    public static QueryPlan getQueryPlan(final Configuration configuration,
    +            boolean isTargetConnection) throws SQLException {
    +        Preconditions.checkNotNull(configuration);
    +        final String txnScnValue = configuration.get(PhoenixConfigurationUtil.TX_SCN_VALUE);
    +        final String currentScnValue = configuration.get(PhoenixConfigurationUtil.CURRENT_SCN_VALUE);
    +        final Properties overridingProps = new Properties();
    +        if (txnScnValue == null && currentScnValue != null) {
    +            overridingProps.put(PhoenixRuntime.CURRENT_SCN_ATTRIB, currentScnValue);
    +        }
    +        final Connection connection;
    +        final String selectStatement;
    +        if (isTargetConnection) {
    +            String targetTable = PhoenixConfigurationUtil.getInputTargetTableName(configuration);
    +            if (!Strings.isNullOrEmpty(targetTable)) {
    +                // different table on the same cluster
    +                connection = ConnectionUtil.getInputConnection(configuration, overridingProps);
    +                selectStatement = PhoenixConfigurationUtil.getSelectStatement(configuration, true);
    +            } else {
    +                // same table on a different cluster
    +                connection =
    +                        ConnectionUtil.getTargetInputConnection(configuration, overridingProps);
    +                selectStatement = PhoenixConfigurationUtil.getSelectStatement(configuration);
    +            }
    +        } else {
    +            connection = ConnectionUtil.getInputConnection(configuration, overridingProps);
    +            selectStatement = PhoenixConfigurationUtil.getSelectStatement(configuration);
    +        }
    +
    +        Preconditions.checkNotNull(selectStatement);
    +        final Statement statement = connection.createStatement();
    +        final PhoenixStatement pstmt = statement.unwrap(PhoenixStatement.class);
    +        // Optimize the query plan so that we potentially use secondary indexes
    +        final QueryPlan queryPlan = pstmt.optimizeQuery(selectStatement);
    +        final Scan scan = queryPlan.getContext().getScan();
    +        // Since we can't set an SCN on transactional connections, set the TX_SCN
    +        // attribute so that the max time range is set by BaseScannerRegionObserver
    +        if (txnScnValue != null) {
    +            scan.setAttribute(BaseScannerRegionObserver.TX_SCN, Bytes.toBytes(Long.valueOf(txnScnValue)));
    +        }
    +        // Initialize the query plan so it sets up the parallel scans
    +        queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
    +        return queryPlan;
    +    }
    +
    +    /**
    +     * Generates the input splits for a MapReduce job.
    +     * @param qplan Query plan for the job
    +     * @param splits The key range splits for the job
    +     * @param config The job configuration
    +     * @return Input splits for the job
    +     * @throws IOException If the region information for the splits cannot be retrieved
    +     */
    +    public static List generateSplits(final QueryPlan qplan,
    +            final List splits, Configuration config) throws IOException {
    +        Preconditions.checkNotNull(qplan);
    +        Preconditions.checkNotNull(splits);
    +
    +        // Get the RegionSizeCalculator
    +        org.apache.hadoop.hbase.client.Connection connection = ConnectionFactory.createConnection(config);

    --- End diff --

    Any particular reason why you removed the try-with-resources block here?
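For context, the pattern the review comment refers to is Java's try-with-resources. A minimal, self-contained sketch of why it matters here (the `FakeConnection` and `generateSplits` below are hypothetical stand-ins, not Phoenix or HBase code):

```java
// Sketch only: illustrates that try-with-resources closes the resource
// even when the body throws, which is what prevents the connection leak
// described in PHOENIX-4489. FakeConnection stands in for the real
// org.apache.hadoop.hbase.client.Connection.
public class TryWithResourcesSketch {
    static final class FakeConnection implements AutoCloseable {
        boolean closed = false;

        @Override
        public void close() {
            // A real HBase Connection would release sockets and threads here.
            closed = true;
        }
    }

    static FakeConnection lastConnection;

    static void generateSplits(boolean fail) {
        // The connection is closed automatically when this block exits,
        // whether normally or via an exception.
        try (FakeConnection connection = new FakeConnection()) {
            lastConnection = connection;
            if (fail) {
                throw new IllegalStateException("split generation failed");
            }
        }
    }
}
```

Without the try-with-resources block, the exception path would skip `close()` and leak the underlying resources, which is the failure mode the linked JIRA describes.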
I added it because of a memory leak that I had found here:
https://issues.apache.org/jira/browse/PHOENIX-4489

> VerifyReplication using SQL
> ---------------------------
>
>                 Key: PHOENIX-3817
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3817
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Alex Araujo
>            Assignee: Akshita Malhotra
>            Priority: Minor
>             Fix For: 4.15.0
>
>         Attachments: PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch, PHOENIX-3817.v3.patch, PHOENIX-3817.v4.patch, PHOENIX-3817.v5.patch, PHOENIX-3817.v6.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different table or cluster. For example, application topologies may map data for specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an SQL query, a target table, and an optional target cluster. The tool would compare the data returned by the query on the different tables and update various result counters (similar to HBase's VerifyReplication).

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)