phoenix-issues mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] karanmehta93 commented on a change in pull request #419: PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on…
Date Wed, 09 Jan 2019 22:58:28 GMT
karanmehta93 commented on a change in pull request #419: PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on…
URL: https://github.com/apache/phoenix/pull/419#discussion_r246578618
 
 

 ##########
 File path: phoenix-core/src/main/java/org/apache/phoenix/schema/stats/MapperStatisticsCollector.java
 ##########
 @@ -0,0 +1,135 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.schema.stats;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.Store;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.schema.SortOrder;
+import org.apache.phoenix.schema.types.PLong;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+/**
+ * Implementation for DefaultStatisticsCollector when running inside Hadoop MR job mapper
+ * Triggered via UpdateStatisticsTool class
+ */
+public class MapperStatisticsCollector extends DefaultStatisticsCollector {
+
+    private static final Log LOG = LogFactory.getLog(MapperStatisticsCollector.class);
+    private PhoenixConnection connection;
+
+    public MapperStatisticsCollector(PhoenixConnection connection, Configuration conf, Region region, String tableName,
+                                     long clientTimeStamp, byte[] family, byte[] gp_width_bytes,
+                                     byte[] gp_per_region_bytes) {
+        super(conf, region, tableName,
+                clientTimeStamp, family, gp_width_bytes, gp_per_region_bytes);
+        this.connection = connection;
+    }
+
+    @Override
+    protected void initStatsWriter() throws IOException, SQLException {
+        this.statsWriter = StatisticsWriter.newWriter(connection, tableName, clientTimeStamp, guidePostDepth);
+    }
+
+    @Override
+    protected long getGuidePostDepthFromSystemCatalog() throws IOException, SQLException {
 
 Review comment:
   They look almost the same; however, there is a subtle difference in how we get the `Table`
object used to access the `SYSTEM.CATALOG` table. This difference arises because the mapper
establishes its connection to the HBase cluster differently than the region server does.
   
   I totally agree with you that more refactoring is better; it's just that this is a single
unit of code that I didn't want to partition, hence I kept it that way. I understand that
it is possible to standardize access to the `Table` API and vary it only across the different implementations.
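   The standardization idea could, as a purely hypothetical sketch, look like hiding the two connection paths behind a common interface so the shared collector logic is identical in both environments. None of these names are real Phoenix or HBase APIs; they only illustrate the shape of the refactoring:

```java
// Hypothetical sketch: abstract how the SYSTEM.CATALOG table handle is
// obtained so shared collector code does not care which path created it.
// All names here are illustrative, not actual Phoenix classes.
interface CatalogTableSource {
    String describe();
}

// In the MR mapper, the handle would come from a client-side connection.
class ClientConnectionSource implements CatalogTableSource {
    public String describe() { return "client connection (mapper)"; }
}

// Inside the region server, it would come from the coprocessor environment.
class CoprocessorEnvSource implements CatalogTableSource {
    public String describe() { return "coprocessor environment (region server)"; }
}

public class TableAccessSketch {
    // Shared logic depends only on the interface, not on how the
    // underlying table handle was established.
    static String readGuidePostDepth(CatalogTableSource source) {
        return "reading SYSTEM.CATALOG via " + source.describe();
    }

    public static void main(String[] args) {
        System.out.println(readGuidePostDepth(new ClientConnectionSource()));
        System.out.println(readGuidePostDepth(new CoprocessorEnvSource()));
    }
}
```

   With this shape, only the two small `CatalogTableSource` implementations differ per environment, which is the "change it only for different implementations" point above.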

   
   We have a similar issue with the `StatisticsWriter` class as well, where two separate
static methods instantiate `newWriter`. We could combine them into one and improve things there
too; it's just something I chose not to do, as I felt those changes could be carried
out in a new Jira.
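   The `StatisticsWriter` consolidation could likewise be sketched as overloaded static factories delegating to one private constructor. Again, these signatures are illustrative assumptions, not the real `StatisticsWriter.newWriter` signatures:

```java
// Hypothetical sketch of collapsing two separate static factory methods
// into overloads that share a single private constructor, so the common
// construction logic lives in one place. Names are illustrative only.
public class WriterFactorySketch {
    private final String source;
    private final long timestamp;

    // Single private constructor holds the shared construction logic.
    private WriterFactorySketch(String source, long timestamp) {
        this.source = source;
        this.timestamp = timestamp;
    }

    // Overload for the mapper path, which supplies a client connection URL.
    static WriterFactorySketch newWriter(String connectionUrl, long ts) {
        return new WriterFactorySketch("connection:" + connectionUrl, ts);
    }

    // Overload for the region-server path, which has no client connection.
    static WriterFactorySketch newWriter(long ts) {
        return new WriterFactorySketch("coprocessor-env", ts);
    }

    String source() { return source; }
    long timestamp() { return timestamp; }
}
```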

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
