hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-18582) MSCK REPAIR TABLE Throw MetaException
Date Wed, 31 Jan 2018 21:07:02 GMT

    [ https://issues.apache.org/jira/browse/HIVE-18582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347611#comment-16347611 ]

Hive QA commented on HIVE-18582:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908522/HIVE-18582.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 23 failed/errored test(s), 12964 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl] (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes] (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded] (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded] (batchId=206)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap (batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8952/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8952/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8952/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 23 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12908522 - PreCommit-HIVE-Build

>  MSCK REPAIR TABLE Throw MetaException
> --------------------------------------
>
>                 Key: HIVE-18582
>                 URL: https://issues.apache.org/jira/browse/HIVE-18582
>             Project: Hive
>          Issue Type: Bug
>          Components: Query Planning
>    Affects Versions: 2.1.1
>            Reporter: liubangchen
>            Assignee: liubangchen
>            Priority: Major
>         Attachments: HIVE-18582.patch
>
>
> While executing the query MSCK REPAIR TABLE tablename, I got this exception:
> {code:java}
> org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1847)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:402)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197)
> at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161)
> at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
> --
> Caused by: MetaException(message:Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt))
> at org.apache.hadoop.hive.metastore.Warehouse.makeValsFromName(Warehouse.java:385)
> at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1845)
> {code}
> The table is PARTITIONED BY (log_date, vgameid).
> The directories on HDFS are:
>  
> {code:java}
> /usr/hive/warehouse/a.db/tablename/log_date=2015063023
> drwxr-xr-x - root supergroup 0 2018-01-26 09:41 /usr/hive/warehouse/a.db/tablename/log_date=2015121309/vgameid=lyjt
> {code}
> The log_date=2015063023 directory is empty (it has no vgameid subdirectory).
> If I set hive.msck.path.validation=ignore, then MSCK REPAIR TABLE runs OK.
> Then I found code like this:
> {code:java}
> private int msck(Hive db, MsckDesc msckDesc) {
>   CheckResult result = new CheckResult();
>   List<String> repairOutput = new ArrayList<String>();
>   try {
>     HiveMetaStoreChecker checker = new HiveMetaStoreChecker(db);
>     String[] names = Utilities.getDbTableName(msckDesc.getTableName());
>     checker.checkMetastore(names[0], names[1], msckDesc.getPartSpecs(), result);
>     List<CheckResult.PartitionResult> partsNotInMs = result.getPartitionsNotInMs();
>     if (msckDesc.isRepairPartitions() && !partsNotInMs.isEmpty()) {
>      //I think bug is here
>       AbstractList<String> vals = null;
>       String settingStr = HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_MSCK_PATH_VALIDATION);
>       boolean doValidate = !("ignore".equals(settingStr));
>       boolean doSkip = doValidate && "skip".equals(settingStr);
>       // The default setting is "throw"; assume doValidate && !doSkip means throw.
>       if (doValidate) {
>         // Validate that we can add partition without escaping. Escaping was originally intended
>         // to avoid creating invalid HDFS paths; however, if we escape the HDFS path (that we
>         // deem invalid but HDFS actually supports - it is possible to create HDFS paths with
>         // unprintable characters like ASCII 7), metastore will create another directory instead
>         // of the one we are trying to "repair" here.
>         Iterator<CheckResult.PartitionResult> iter = partsNotInMs.iterator();
>         while (iter.hasNext()) {
>           CheckResult.PartitionResult part = iter.next();
>           try {
>             vals = Warehouse.makeValsFromName(part.getPartitionName(), vals);
>           } catch (MetaException ex) {
>             throw new HiveException(ex);
>           }
>           for (String val : vals) {
>             String escapedPath = FileUtils.escapePathName(val);
>             assert escapedPath != null;
>             if (escapedPath.equals(val)) continue;
>             String errorMsg = "Repair: Cannot add partition " + msckDesc.getTableName()
>                 + ':' + part.getPartitionName() + " due to invalid characters in the name";
>             if (doSkip) {
>               repairOutput.add(errorMsg);
>               iter.remove();
>             } else {
>               throw new HiveException(errorMsg);
>             }
>           }
>         }
>       }
> {code}
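> If I understand Warehouse.makeValsFromName correctly, it validates the partition name against the size of the list passed in as its second argument. Because vals is declared outside the loop, the empty log_date=2015063023 directory first fills it with a single value, and the two-component name that follows then fails the size check. Here is a standalone sketch of the two calls I believe the loop ends up making (this assumes the checker reports the empty directory as a one-component partition):
> {code:java}
> import java.util.AbstractList;
>
> import org.apache.hadoop.hive.metastore.Warehouse;
> import org.apache.hadoop.hive.metastore.api.MetaException;
>
> public class MsckValsReuse {
>   public static void main(String[] args) throws MetaException {
>     AbstractList<String> vals = null;
>     // First partition found on HDFS: only one component, because
>     // log_date=2015063023 has no vgameid subdirectory.
>     vals = Warehouse.makeValsFromName("log_date=2015063023", vals);
>     // vals now holds one value and is reused for the next partition.
>     // The two-component name no longer matches vals.size(), so this call
>     // throws MetaException:
>     // "Expected 1 components, got 2 (log_date=2015121309/vgameid=lyjt)"
>     vals = Warehouse.makeValsFromName("log_date=2015121309/vgameid=lyjt", vals);
>   }
> }
> {code}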
> I think declaring {{AbstractList<String> vals = null;}} inside the loop, right after the {{iter.hasNext()}} check, will fix it.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
