phoenix-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down
Date Mon, 11 Apr 2016 20:19:25 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235898#comment-15235898 ]

ASF GitHub Bot commented on PHOENIX-2743:
-----------------------------------------

Github user joshelser commented on a diff in the pull request:

    https://github.com/apache/phoenix/pull/155#discussion_r59273613
  
    --- Diff: phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixResultWritable.java
---
    @@ -0,0 +1,215 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.phoenix.hive.mapreduce;
    +
    +import java.io.DataInput;
    +import java.io.DataOutput;
    +import java.io.IOException;
    +import java.sql.PreparedStatement;
    +import java.sql.ResultSet;
    +import java.sql.ResultSetMetaData;
    +import java.sql.SQLException;
    +import java.util.List;
    +import java.util.Map;
    +
    +import org.apache.commons.logging.Log;
    +import org.apache.commons.logging.LogFactory;
    +import org.apache.hadoop.conf.Configurable;
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.io.Writable;
    +import org.apache.hadoop.mapreduce.lib.db.DBWritable;
    +import org.apache.phoenix.hive.PhoenixRowKey;
    +import org.apache.phoenix.hive.constants.PhoenixStorageHandlerConstants;
    +import org.apache.phoenix.hive.util.PhoenixStorageHandlerUtil;
    +import org.apache.phoenix.hive.util.PhoenixUtil;
    +import org.apache.phoenix.util.ColumnInfo;
    +
    +import com.google.common.collect.Lists;
    +import com.google.common.collect.Maps;
    +
    +/**
    + * Writable class used by the Phoenix SerDe to carry a row between Hive and Phoenix.
    + */
    +public class PhoenixResultWritable implements Writable, DBWritable, Configurable {
    +
    +    private static final Log LOG = LogFactory.getLog(PhoenixResultWritable.class);
    +
    +    private List<ColumnInfo> columnMetadataList;
    +    private List<Object> valueList;    // for output
    +    private Map<String, Object> rowMap = Maps.newHashMap();  // for input
    +
    +    private int columnCount = -1;
    +
    +    private Configuration config;
    +    private boolean isTransactional;
    +    private Map<String, Object> rowKeyMap = Maps.newLinkedHashMap();
    +    private List<String> primaryKeyColumnList;
    +
    +    public PhoenixResultWritable() {
    +    }
    +
    +    public PhoenixResultWritable(Configuration config) throws IOException {
    +        setConf(config);
    +    }
    +
    +    public PhoenixResultWritable(Configuration config, List<ColumnInfo> columnMetadataList) throws IOException {
    +        this(config);
    +        this.columnMetadataList = columnMetadataList;
    +
    +        valueList = Lists.newArrayListWithExpectedSize(columnMetadataList.size());
    +    }
    +
    +    @Override
    +    public void write(DataOutput out) throws IOException {
    +    }
    --- End diff --
    
    No need to implement these?
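    For context on the reviewer's question: Hadoop's `Writable` contract expects `write(DataOutput)` and a matching `readFields(DataInput)` so the object can be serialized across the framework; an empty body is only safe if the object is never actually serialized on that path. Below is a minimal sketch of the usual pattern, using a hypothetical `RowSketch` class with plain `java.io` streams (not Hadoop's classes) so it is self-contained; the real `PhoenixResultWritable` would additionally have to encode each value's Phoenix column type.

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInput;
    import java.io.DataInputStream;
    import java.io.DataOutput;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical stand-in for PhoenixResultWritable's value list,
    // showing the write/readFields round-trip pattern Writable expects.
    public class RowSketch {

        private final List<String> values = new ArrayList<>();

        public void add(String v) {
            values.add(v);
        }

        public List<String> getValues() {
            return values;
        }

        // Serialize: write the element count first so readFields knows
        // how many elements to read back.
        public void write(DataOutput out) throws IOException {
            out.writeInt(values.size());
            for (String v : values) {
                out.writeUTF(v);
            }
        }

        // Deserialize in exactly the order the fields were written.
        public void readFields(DataInput in) throws IOException {
            values.clear();
            int n = in.readInt();
            for (int i = 0; i < n; i++) {
                values.add(in.readUTF());
            }
        }

        public static void main(String[] args) throws IOException {
            RowSketch row = new RowSketch();
            row.add("k1");
            row.add("v1");

            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            row.write(new DataOutputStream(buf));

            RowSketch copy = new RowSketch();
            copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
            System.out.println(copy.getValues());  // expect [k1, v1]
        }
    }
    ```

    If the writable only ever travels through the JDBC path (as a `DBWritable`), leaving the `Writable` methods empty can work, but that assumption is worth stating in a comment rather than leaving the methods silently blank.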


> HivePhoenixHandler for big-big join with predicate push down
> ------------------------------------------------------------
>
>                 Key: PHOENIX-2743
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2743
>             Project: Phoenix
>          Issue Type: New Feature
>    Affects Versions: 4.5.0, 4.6.0
>         Environment: hive-1.2.1
>            Reporter: JeongMin Ju
>              Labels: features, performance
>         Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join and sort-merge join, but big-big joins are not handled well.
> Therefore, another approach, like Hive's, is needed.
> I implemented hive-phoenix-handler, which can access an Apache Phoenix table on HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it applies predicate push down.
> I am publishing the source code to GitHub for contribution; it should be complete by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
