hive-issues mailing list archives

From "Rui Li (JIRA)" <>
Subject [jira] [Commented] (HIVE-13278) Avoid FileNotFoundException when map/reduce.xml is not available
Date Thu, 15 Dec 2016 07:51:58 GMT


Rui Li commented on HIVE-13278:

Hi [~xuefuz], I just think it'll be even simpler to go the RS-checking way: we can confine
the fix to a single method, {{HiveOutputFormatImpl.checkOutputSpecs}}, rather than making changes
to all these different tasks. Besides, with the flag it seems we'd be adding an extra burden
on ourselves to keep the logic consistent during plan generation.

On the other hand, if we decide to add the flag, I have one suggestion: we can make {{}}
default to false, and set them to true respectively in {{Utilities::setMapWork/setReduceWork}}.
The logic behind this is that if you haven't set a work with the JobConf, you shouldn't try to
get one from it. Does this make sense?
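A minimal sketch of that flag idea, with plain stand-ins for Hive's actual classes (the property names, the conf map, and the cache here are all hypothetical, not the real {{JobConf}}/{{Utilities}} API): the setter records that a work object was registered, and the getter consults that flag before attempting any file lookup, so a missing {{map.xml}}/{{reduce.xml}} is never probed for.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the "flag defaults to false" approach.
// Property names and data structures are illustrative stand-ins,
// not Hive's real JobConf or Utilities API.
public class WorkCacheSketch {
    static final String HAS_MAP_WORK = "sketch.has.map.work"; // assumed name

    final Map<String, String> conf = new HashMap<>();  // stands in for JobConf
    final Map<String, Object> cache = new HashMap<>(); // stands in for the plan cache

    // Mirrors Utilities::setMapWork in spirit: flip the flag only
    // when a work object is actually registered with the conf.
    void setMapWork(Object work) {
        cache.put("map", work);
        conf.put(HAS_MAP_WORK, "true");
    }

    // Getter checks the flag first: if no work was ever set, skip the
    // lookup entirely instead of probing for a map.xml that isn't there.
    Object getMapWork() {
        if (!"true".equals(conf.getOrDefault(HAS_MAP_WORK, "false"))) {
            return null; // flag defaults to false: nothing was set, so don't fetch
        }
        return cache.get("map");
    }

    public static void main(String[] args) {
        WorkCacheSketch s = new WorkCacheSketch();
        System.out.println(s.getMapWork()); // null: flag still at its false default
        s.setMapWork("mapWork");
        System.out.println(s.getMapWork()); // the registered work object
    }
}
```

The point of the sketch is the invariant the comment describes: the only code path that sets the flag is the one that also stores the work, so get and set cannot drift apart.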

> Avoid FileNotFoundException when map/reduce.xml is not available
> ----------------------------------------------------------------
>                 Key: HIVE-13278
>                 URL:
>             Project: Hive
>          Issue Type: Bug
>         Environment: Hive on Spark engine
> Found based on :
> Apache Hive 2.0.0
> Apache Spark 1.6.0
>            Reporter: Xin Hao
>            Assignee: Chao Sun
>            Priority: Minor
>         Attachments: HIVE-13278.1.patch, HIVE-13278.2.patch, HIVE-13278.3.patch
> Many redundant 'File not found' messages appear in the container log during query execution
with Hive on Spark.
> They don't prevent the query from running successfully, so this is marked as Minor.
> Error message example:
> {noformat}
> 16/03/14 01:45:06 INFO exec.Utilities: File not found: File does not exist: /tmp/hive/hadoop/2d378538-f5d3-493c-9276-c62dd6634fb4/hive_2016-03-14_01-44-16_835_623058724409492515-6/-mr-10010/0a6d0cae-1eb3-448c-883b-590b3b198a73/reduce.xml
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(
>         at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(
>         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getBlockLocations(
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
>         at org.apache.hadoop.ipc.RPC$
>         at org.apache.hadoop.ipc.Server$Handler$
>         at org.apache.hadoop.ipc.Server$Handler$
>         at Method)
>         at
>         at
>         at org.apache.hadoop.ipc.Server$
> {noformat}

This message was sent by Atlassian JIRA
