hive-issues mailing list archives

From "Vineet Garg (JIRA)" <>
Subject [jira] [Commented] (HIVE-19248) REPL LOAD couldn't copy file from source CM path and also doesn't throw error if file copy fails.
Date Mon, 07 May 2018 18:46:00 GMT


Vineet Garg commented on HIVE-19248:

[~sankarh] Is this a blocker for the 3.0 release?

> REPL LOAD couldn't copy file from source CM path and also doesn't throw error if file copy fails.
> -------------------------------------------------------------------------------------------------
>                 Key: HIVE-19248
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2, repl
>    Affects Versions: 3.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>              Labels: DR, pull-request-available, replication
>             Fix For: 3.0.0, 3.1.0
>         Attachments: HIVE-19248.01.patch, HIVE-19248.02.patch
> Hive replication uses Hadoop distcp to copy files from the primary to the replica warehouse. If the HDFS block size is different across the clusters, it causes file copy failures.
> {code:java}
> 2018-04-09 14:32:06,690 ERROR [main] Failure
in copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
> File copy failed: hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0
--> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at
>  at
>  at
>  at
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(
>  at
>  at org.apache.hadoop.mapred.YarnChild$
>  at Method)
>  at
>  at
>  at org.apache.hadoop.mapred.YarnChild.main(
> Caused by: Couldn't run retriable-command: Copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0
to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0
>  at
>  at
>  ... 10 more
> Caused by: Check-sum mismatch between hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0
and hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_000000_0.
Source and target differ in block-size. Use -pb to preserve block-sizes during copy. Alternatively,
skip checksum-checks altogether, using -skipCrc. (NOTE: By skipping checksums, one runs the
risk of masking data-corruption during file-transfer.)
>  at
>  at
>  at
>  at
>  ... 11 more
> {code}
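The error message itself points at the distcp-side workaround: passing -pb preserves block sizes during the copy, so the source and target checksums can match even when the two clusters use different default block sizes. A hedged illustration of that invocation, reusing the paths from the log above:

```shell
# Preserve block sizes (-pb) so the post-copy checksum comparison
# succeeds across clusters with different default block sizes.
hadoop distcp -pb \
  hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 \
  hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/
```

The alternative the log mentions, -skipcrccheck, avoids the failure but masks any corruption introduced during transfer, so -pb is the safer choice.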
> Distcp failed because the CM path for the file doesn't point to the source file system. So, the fully qualified CM root URI needs to be included with the files listed in the dump.
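A minimal sketch of what "qualified CM root URI" means here, using plain java.net.URI in place of Hadoop's FileSystem/Path machinery. The class and method names below are invented for illustration, not Hive's actual code:

```java
import java.net.URI;

public class QualifyCmRoot {
    // Hypothetical helper: resolve a scheme-less CM-root path against the
    // source filesystem URI, so the replica cluster can locate the file
    // on the *source* cluster instead of its own.
    static URI qualify(URI sourceFs, String cmRootPath) {
        // An absolute path resolved against the source URI keeps the
        // source scheme and authority (e.g. hdfs://chelsea).
        return sourceFs.resolve(cmRootPath);
    }

    public static void main(String[] args) {
        URI sourceFs = URI.create("hdfs://chelsea/");
        URI qualified = qualify(sourceFs, "/cmroot/000259_0");
        System.out.println(qualified); // hdfs://chelsea/cmroot/000259_0
    }
}
```

Without this qualification, a bare path like /cmroot/000259_0 in the dump is interpreted relative to whatever filesystem reads it, which is exactly the failure described above.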
> Also, REPL LOAD returns success even if the distcp jobs failed.
> CopyUtils.doCopyRetry doesn't throw an error if the copy fails even after the maximum number of attempts.
> So, three things need to be done.
>  # If the copy of multiple files fails for some reason, then retry with the same set of files, but set the CM path if the original source file is missing or modified (based on checksum). Let distcp skip the files that were already copied properly; FileUtil.copy will always overwrite the files.
>  # If the source path was moved to the CM path, then delete the incorrectly copied files.
>  # If the copy still fails after the maximum number of attempts, then throw an error.
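The three steps above can be sketched as follows. This is a hypothetical illustration using local java.nio paths in place of Hive's CopyUtils and FileSystem API; the class and method names are invented for the sketch, not Hive's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RetryCopySketch {
    // Retry a copy, falling back to the CM path when the original source
    // is gone, cleaning up partial targets, and surfacing the failure
    // (rather than silently succeeding) after the last attempt.
    static void copyWithRetry(Path source, Path cmFallback, Path target,
                              int maxAttempts) throws IOException {
        IOException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            // Step 1 (skip): a file that already exists at the target was
            // copied properly on an earlier attempt; don't copy it again.
            if (Files.exists(target)) {
                return;
            }
            // Step 1 (fallback): use the CM path if the original source
            // file is missing (in Hive this check also covers checksum
            // mismatch, i.e. a modified source).
            Path from = Files.exists(source) ? source : cmFallback;
            try {
                Files.copy(from, target, StandardCopyOption.REPLACE_EXISTING);
                return;
            } catch (IOException e) {
                lastFailure = e;
                // Step 2: delete the incorrectly/partially copied target
                // before the next attempt.
                Files.deleteIfExists(target);
            }
        }
        // Step 3: after the maximum number of attempts, throw instead of
        // letting REPL LOAD report success.
        throw new IOException(
                "copy failed after " + maxAttempts + " attempts", lastFailure);
    }
}
```

The key behavioral change versus the buggy code is the final throw: exhausting the retries is an error the caller must see, not a condition to swallow.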

This message was sent by Atlassian JIRA
