phoenix-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-5258) Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool
Date Tue, 14 May 2019 11:58:00 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16839344#comment-16839344 ]

Hadoop QA commented on PHOENIX-5258:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12968647/PHOENIX-5258-4.x-HBase-1.4.001.patch
  against 4.x-HBase-1.4 branch at commit 746bf1c275461dc5e6622fc004b74044b7ff1b38.
  ATTACHMENT ID: 12968647

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 release audit{color}.  The applied patch generated 1 release audit warning (more than the master's current 0 warnings).

    {color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
    +            stmt.execute("CREATE TABLE S.TABLE14 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR, \"type\" VARCHAR)");
    +            stmt.execute("CREATE TABLE S.TABLE15 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR, \"type\" VARCHAR, \"category\" VARCHAR)");
    +            stmt.execute("CREATE TABLE S.TABLE16 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR, \"type\" VARCHAR)");
    +            stmt.execute("CREATE TABLE S.TABLE17 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR, \"type\" VARCHAR)");
    +                        "Headers in provided input files are different. Headers must be unique for all input files"
    +            stmt.execute("CREATE TABLE S.TABLE18 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR, \"type\" VARCHAR)");
    +            stmt.execute("CREATE TABLE S.TABLE19 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"name\" VARCHAR, \"type\" VARCHAR)");
    +            stmt.execute("CREATE TABLE S.TABLE20 (\"id\" INTEGER NOT NULL PRIMARY KEY, \"cf1\".\"name\" VARCHAR, \"cf2\".\"type\" VARCHAR, \"cf1\".\"category\" VARCHAR)");
    +            try (ResultSet rs = stmt.executeQuery("SELECT \"id\",\"cf1\".\"name\", \"cf2\".\"type\", \"cf1\".\"category\" FROM S.TABLE20")) {
    +    static final Option SKIP_HEADER_OPT = new Option("k", "skip-header", false, "Skip the first line of CSV files (the header)");

     {color:red}-1 core tests{color}.  The patch failed these unit tests:
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexRebuildTaskIT
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.join.HashJoinMoreIT
     ./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpgradeIT

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/2580//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/2580//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/2580//console

This message is automatically generated.

> Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool
> ----------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-5258
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5258
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Prashant Vithani
>            Assignee: Prashant Vithani
>            Priority: Minor
>             Fix For: 4.15.0, 5.1.0
>
>         Attachments: PHOENIX-5258-4.x-HBase-1.4.001.patch, PHOENIX-5258-4.x-HBase-1.4.patch, PHOENIX-5258-master.001.patch, PHOENIX-5258-master.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading the header from the input CSV and expects
> the content of the CSV to match the table schema. Support for the header can be added to
> dynamically map the schema to the header.
> The proposed solution is to introduce another option for the tool, `--parse-header`.
> If this option is passed, the input column list is constructed by reading the first line
> of the input CSV file.
>  * If there is only one file, read the header from the first line and generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from all the files, and throw an error
> if the headers across files do not match.
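
For illustration, here is a minimal sketch of the header-resolution logic the description above implies. The class and method names are assumptions, not the actual patch: the real tool would build Phoenix ColumnInfo objects and honour the configured delimiter, quote, and escape characters rather than doing a naive comma split. The error text is taken from the patch's long-line report above.

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class ParseHeaderSketch {

    // Read the first line of each input CSV and require every file to
    // declare the same header; the shared header becomes the column list.
    static List<String> resolveColumns(List<Path> inputs) throws IOException {
        List<String> header = null;
        for (Path input : inputs) {
            try (BufferedReader reader = Files.newBufferedReader(input)) {
                String firstLine = reader.readLine();
                if (firstLine == null) {
                    throw new IllegalArgumentException("Empty input file: " + input);
                }
                // Naive comma split for the sketch only; a real implementation
                // must honour the tool's delimiter/quote/escape settings.
                List<String> columns = Arrays.asList(firstLine.split(","));
                if (header == null) {
                    header = columns; // first file defines the column list
                } else if (!header.equals(columns)) {
                    // Error message taken from the patch snippet above.
                    throw new IllegalArgumentException(
                        "Headers in provided input files are different. "
                            + "Headers must be unique for all input files");
                }
            }
        }
        return header;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical input paths, for demonstration only.
        List<Path> inputs = Arrays.asList(Paths.get("part1.csv"), Paths.get("part2.csv"));
        System.out.println("Input columns: " + resolveColumns(inputs));
    }
}
{code}

Presumably, when `--parse-header` is in effect the first line of each file would also have to be skipped during the actual load, analogous to what the SKIP_HEADER_OPT shown in the line-length report does.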



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
