hive-issues mailing list archives

From "Eugene Koifman (JIRA)" <j...@apache.org>
Subject [jira] [Reopened] (HIVE-13392) disable speculative execution for ACID Compactor
Date Fri, 08 Jul 2016 20:21:11 GMT

     [ https://issues.apache.org/jira/browse/HIVE-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman reopened HIVE-13392:
-----------------------------------

This needs to go into 2.1.1 as well.

> disable speculative execution for ACID Compactor
> ------------------------------------------------
>
>                 Key: HIVE-13392
>                 URL: https://issues.apache.org/jira/browse/HIVE-13392
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.0.0
>            Reporter: Eugene Koifman
>            Assignee: Eugene Koifman
>             Fix For: 1.3.0, 2.2.0
>
>         Attachments: HIVE-13392.2.patch, HIVE-13392.3.patch, HIVE-13392.4.patch, HIVE-13392.patch
>
>
> https://developer.yahoo.com/hadoop/tutorial/module4.html
> Speculative execution is enabled by default. You can disable speculative execution for the
> mappers and reducers by setting the mapred.map.tasks.speculative.execution and
> mapred.reduce.tasks.speculative.execution JobConf options to false, respectively.
> CompactorMR is currently not set up to handle speculative execution, which can lead to failures like
> {code}
> 2016-02-08 22:56:38,256 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to CREATE_FILE /apps/hive/warehouse/service_logs_v2/ds=2016-01-20/_tmp_6cf08b9f-c2e2-4182-bc81-e032801b147f/base_13858600/bucket_00004 for DFSClient_attempt_1454628390210_27756_m_000001_1_131224698_1 on 172.18.129.12 because this file lease is currently owned by DFSClient_attempt_1454628390210_27756_m_000001_0_-2027182532_1 on 172.18.129.18
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2937)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2562)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2451)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2335)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:688)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2151)
> {code}
> Short term: disable speculative execution for this job.
> Longer term: perhaps make each task write to a dir with a UUID...
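
For reference, below is a minimal sketch, not the actual CompactorMR code, of how the two mapred.* options quoted above can be turned off when building the compaction job's JobConf. The class name and job name here are hypothetical and only illustrate the setting.

{code}
// Minimal sketch, not the actual CompactorMR implementation: turning off
// speculative execution for a compaction-style MR job via JobConf.
// Class and job names are illustrative only.
import org.apache.hadoop.mapred.JobConf;

public class CompactorSpeculationExample {
  public static JobConf buildJobConf() {
    JobConf job = new JobConf();
    job.setJobName("example-compactor-job");

    // Same effect as setting mapred.map.tasks.speculative.execution=false
    job.setMapSpeculativeExecution(false);
    // Same effect as setting mapred.reduce.tasks.speculative.execution=false
    job.setReduceSpeculativeExecution(false);

    // Equivalent raw-key form:
    // job.setBoolean("mapred.map.tasks.speculative.execution", false);
    // job.setBoolean("mapred.reduce.tasks.speculative.execution", false);
    return job;
  }
}
{code}

With speculative execution disabled, only one task attempt writes each bucket file, which avoids the AlreadyBeingCreatedException lease conflict shown in the stack trace above.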



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
