phoenix-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-3817) VerifyReplication using SQL
Date Fri, 20 Jul 2018 02:59:00 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550154#comment-16550154 ]

ASF GitHub Bot commented on PHOENIX-3817:
-----------------------------------------

Github user karanmehta93 commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/309#discussion_r203927054
  
    --- Diff: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixMapReduceUtil.java
---
    @@ -157,6 +192,192 @@ public static void setOutput(final Job job, final String tableName,final String
             PhoenixConfigurationUtil.setUpsertColumnNames(configuration,columns.split(","));
         }
     
    +    /**
    +     * Generate a query plan for a MapReduce job query.
    +     * @param configuration The MapReduce job configuration
    +     * @return Query plan for the MapReduce job
    +     * @throws SQLException If the plan cannot be generated
    +     */
    +    public static QueryPlan getQueryPlan(final Configuration configuration)
    +            throws SQLException {
    +        return getQueryPlan(configuration, false);
    +    }
    +
    +    /**
    +     * Generate a query plan for a MapReduce job query.
    +     * @param configuration The MapReduce job configuration
    +     * @param isTargetConnection Whether the query plan is for the target HBase cluster
    +     * @return Query plan for the MapReduce job
    +     * @throws SQLException If the plan cannot be generated
    +     */
    +    public static QueryPlan getQueryPlan(final Configuration configuration,
    +            boolean isTargetConnection) throws SQLException {
    +        Preconditions.checkNotNull(configuration);
    +        final String txnScnValue = configuration.get(PhoenixConfigurationUtil.TX_SCN_VALUE);
    +        final String currentScnValue = configuration.get(PhoenixConfigurationUtil.CURRENT_SCN_VALUE);
    +        final Properties overridingProps = new Properties();
    +        if(txnScnValue==null && currentScnValue!=null) {
    +            overridingProps.put(PhoenixRuntime.CURRENT_SCN_ATTRIB, currentScnValue);
    +        }
    +        final Connection connection;
    +        final String selectStatement;
    +        if (isTargetConnection) {
    +            String targetTable = PhoenixConfigurationUtil.getInputTargetTableName(configuration);
    +            if (!Strings.isNullOrEmpty(targetTable)) {
    +                // different table on same cluster
    +                connection = ConnectionUtil.getInputConnection(configuration, overridingProps);
    +                selectStatement = PhoenixConfigurationUtil.getSelectStatement(configuration, true);
    +            } else {
    +                // same table on different cluster
    +                connection =
    +                        ConnectionUtil.getTargetInputConnection(configuration, overridingProps);
    +                selectStatement = PhoenixConfigurationUtil.getSelectStatement(configuration);
    +            }
    +        } else {
    +            connection = ConnectionUtil.getInputConnection(configuration, overridingProps);
    +            selectStatement = PhoenixConfigurationUtil.getSelectStatement(configuration);
    +        }
    +        Preconditions.checkNotNull(selectStatement);
    +        final Statement statement = connection.createStatement();
    +        final PhoenixStatement pstmt = statement.unwrap(PhoenixStatement.class);
    +        // Optimize the query plan so that we potentially use secondary indexes
    +        final QueryPlan queryPlan = pstmt.optimizeQuery(selectStatement);
    +        final Scan scan = queryPlan.getContext().getScan();
    +        // Since we can't set an SCN on connections with a txn, set the TX_SCN attribute so that the max time range is set by BaseScannerRegionObserver
    +        if (txnScnValue!=null) {
    +            scan.setAttribute(BaseScannerRegionObserver.TX_SCN, Bytes.toBytes(Long.valueOf(txnScnValue)));
    +        }
    +        // Initialize the query plan so it sets up the parallel scans
    +        queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
    +        return queryPlan;
    +    }
    +
    +    /**
    +     * Generates the input splits for a MapReduce job.
    +     * @param qplan Query plan for the job
    +     * @param splits The key range splits for the job
    +     * @param config The job configuration
    +     * @return Input splits for the job
    +     * @throws IOException If the region information for the splits cannot be retrieved
    +     */
    +    public static List<InputSplit> generateSplits(final QueryPlan qplan,
    +            final List<KeyRange> splits, Configuration config) throws IOException {
    +        Preconditions.checkNotNull(qplan);
    +        Preconditions.checkNotNull(splits);
    +
    +        // Get the RegionSizeCalculator
    +        org.apache.hadoop.hbase.client.Connection connection = ConnectionFactory.createConnection(config);
    --- End diff --
    
    Any particular reason why you removed the try-with-resources block here?
    I added it because of a memory leak I had found: https://issues.apache.org/jira/browse/PHOENIX-4489
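For context, here is a minimal stdlib-only sketch (using a hypothetical `Resource` class, not the HBase client API) of why the try-with-resources block matters: `close()` runs even when the body throws, which is the guarantee that guards against leaving a connection open on an exception path, as described in PHOENIX-4489.

```java
// Minimal illustration of try-with-resources semantics. Resource is a stand-in
// for org.apache.hadoop.hbase.client.Connection; the log records the ordering.
public class TryWithResourcesDemo {
    static final StringBuilder log = new StringBuilder();

    static class Resource implements AutoCloseable {
        @Override
        public void close() {
            log.append("closed;"); // runs before any catch block executes
        }
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            log.append("work;");
            throw new IllegalStateException("simulated failure");
        } catch (IllegalStateException e) {
            log.append("caught;");
        }
        System.out.println(log); // work;closed;caught;
    }
}
```

Without the try-with-resources block, an exception between `createConnection` and a manual `close()` would leak the connection.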


> VerifyReplication using SQL
> ---------------------------
>
>                 Key: PHOENIX-3817
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3817
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Alex Araujo
>            Assignee: Akshita Malhotra
>            Priority: Minor
>             Fix For: 4.15.0
>
>         Attachments: PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch, PHOENIX-3817.v3.patch, PHOENIX-3817.v4.patch, PHOENIX-3817.v5.patch, PHOENIX-3817.v6.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different table or cluster. For example, application topologies may map data for specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an SQL query, a target table, and an optional target cluster. The tool would compare data returned by the query on the different tables and update various result counters (similar to HBase's VerifyReplication).
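The comparison described in the issue can be sketched in self-contained form. The `compare` helper below is hypothetical (it is not code from the patch): it stands in for walking source and target query results keyed by primary key, bumping counters analogous to HBase VerifyReplication's GOODROWS/BADROWS.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the core verification loop: rows are modeled as a
// primary-key -> row-content map; real code would stream two ResultSets instead.
public class VerifyCountersSketch {
    public static int[] compare(Map<String, String> source, Map<String, String> target) {
        int good = 0, bad = 0;
        for (Map.Entry<String, String> e : source.entrySet()) {
            String t = target.get(e.getKey());
            if (t != null && t.equals(e.getValue())) {
                good++; // row present and identical on both sides
            } else {
                bad++;  // missing on target, or content differs
            }
        }
        // Rows present only on the target also count as mismatches.
        for (String key : target.keySet()) {
            if (!source.containsKey(key)) {
                bad++;
            }
        }
        return new int[] { good, bad };
    }

    public static void main(String[] args) {
        Map<String, String> src = new LinkedHashMap<>();
        src.put("k1", "a");
        src.put("k2", "b");
        Map<String, String> tgt = new LinkedHashMap<>();
        tgt.put("k1", "a");
        tgt.put("k2", "x");
        tgt.put("k3", "c");
        int[] counters = compare(src, tgt);
        System.out.println(counters[0] + " good, " + counters[1] + " bad"); // 1 good, 2 bad
    }
}
```

In the MapReduce setting each mapper would perform this comparison over its key-range split, with the counters aggregated by the framework.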



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
