Date: Thu, 7 Jul 2016 00:24:11 +0000 (UTC)
From: "Eugene Koifman (JIRA)"
To: issues@hive.apache.org
Reply-To: dev@hive.apache.org
Subject: [jira] [Updated] (HIVE-14179) Too many delta files causes select queries on the table to fail with OOM
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/HIVE-14179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eugene Koifman updated HIVE-14179:
----------------------------------
    Affects Version/s: 1.2.0
     Target Version/s: 1.3.0, 2.2.0

> Too many delta files causes select queries on the table to fail with OOM
> ------------------------------------------------------------------------
>
>                 Key: HIVE-14179
>                 URL: https://issues.apache.org/jira/browse/HIVE-14179
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 1.2.0
>            Reporter: Deepesh Khandelwal
>
> When a large number of delta files get generated during ACID operations, a select query on the ACID table fails with OOM.
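> For illustration, a workload of roughly this shape can produce the failure (the table name and values below are hypothetical; each committed transaction writes a new delta directory under the table location until the compactor merges them):
> {noformat}
> CREATE TABLE acid_test (id INT, msg STRING)
> CLUSTERED BY (id) INTO 2 BUCKETS
> STORED AS ORC
> TBLPROPERTIES ('transactional'='true');
>
> -- Each single-row INSERT commits its own transaction and creates a new
> -- delta_x_y directory; with compaction disabled or lagging, thousands
> -- of deltas accumulate.
> INSERT INTO acid_test VALUES (1, 'a');
> INSERT INTO acid_test VALUES (2, 'b');
> -- ...repeated many times...
>
> -- Reading the table then has to merge every delta:
> SELECT COUNT(*) FROM acid_test;
> {noformat}
> The select then fails as follows: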
> {noformat}
> ERROR [main]: SessionState (SessionState.java:printError(942)) - Vertex failed, vertexName=Map 1, vertexId=vertex_1465431842106_0014_1_00, diagnostics=[Task failed, taskId=task_1465431842106_0014_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Direct buffer memory
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:159)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
> 	at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347)
> 	at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
> 	at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
> 	at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
> 	at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
> 	at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
> 	at java.nio.Bits.reserveMemory(Bits.java:693)
> 	at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
> 	at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
> 	at org.apache.hadoop.util.DirectBufferPool.getBuffer(DirectBufferPool.java:72)
> 	at org.apache.hadoop.hdfs.BlockReaderLocal.createDataBufIfNeeded(BlockReaderLocal.java:260)
> 	at org.apache.hadoop.hdfs.BlockReaderLocal.readWithBounceBuffer(BlockReaderLocal.java:601)
> 	at org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:569)
> 	at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:789)
> 	at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:845)
> 	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:905)
> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:953)
> 	at java.io.DataInputStream.readFully(DataInputStream.java:195)
> 	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:377)
> 	at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:323)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:238)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:462)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1372)
> 	at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1264)
> 	at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251)
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193)
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.<init>(TezGroupedSplitsInputFormat.java:135)
> 	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:101)
> 	at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149)
> 	at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80)
> 	at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:650)
> 	at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:621)
> 	at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145)
> 	at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109)
> 	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:405)
> 	at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:124)
> 	at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:149)
> 	... 14 more
> ], TaskAttempt 1 failed, TaskAttempt 2 failed, TaskAttempt 3 failed [each with a stack trace identical to TaskAttempt 0, elided here]], Vertex did not succeed due to OWN_TASK_FAILURE, failedTasks:1 killedTasks:1, Vertex vertex_1465431842106_0014_1_00 [Map 1] killed/failed due to:OWN_TASK_FAILURE]
> 2016-06-09 02:20:01,797 ERROR [main]: SessionState (SessionState.java:printError(942)) - Vertex killed, vertexName=Reducer 2, vertexId=vertex_1465431842106_0014_1_01, diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not succeed due to OTHER_VERTEX_FAILURE, failedTasks:0 killedTasks:1, Vertex vertex_1465431842106_0014_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]
> 2016-06-09 02:20:01,797 ERROR [main]: SessionState (SessionState.java:printError(942)) - DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:1
> 2016-06-09 02:20:01,797 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(166)) -
> 2016-06-09 02:20:01,936 INFO [main]: hooks.ATSHook (ATSHook.java:<init>(90)) - Created ATS Hook
> 2016-06-09 02:20:01,937 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogBegin(138)) -
> 2016-06-09 02:20:01,953 INFO [ATS Logger 0]: hooks.ATSHook (ATSHook.java:createPostHookEvent(193)) - Received post-hook notification for :hrt_qa_20160609021725_d63299c5-a805-4d37-830a-972a3c3bf9dd
> 2016-06-09 02:20:01,955 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(166)) -
> 2016-06-09 02:20:01,966 ERROR [main]: ql.Driver (SessionState.java:printError(942)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
> Vertex failed, vertexName=Map 1, vertexId=vertex_1465431842106_0014_1_00, diagnostics=[Task failed, taskId=task_1465431842106_0014_1_00_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Direct buffer memory
> [the Driver error then repeats the same Map 1 vertex failure and stack trace shown above, elided here]
> {noformat}
> If we cannot avoid the failure itself, we should at least produce a better error message here, explaining what likely caused it.
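> Judging from the stack trace, OrcRawRecordMerger opens an ORC reader for the base and for every delta file, and each short-circuit HDFS read takes a direct bounce buffer from DirectBufferPool (sized by dfs.client.read.shortcircuit.buffer.size, 1 MB by default if I read the HDFS client defaults correctly), so thousands of deltas can exhaust the JVM's direct memory. Until the error message (or the reader) is improved, a workaround sketch (the size values below are illustrative, not recommendations):
> {noformat}
> -- Rewrite base + deltas into a single new base so the reader no longer
> -- opens one ORC reader per delta:
> ALTER TABLE acid_test COMPACT 'major';
> SHOW COMPACTIONS;
>
> -- And/or give the Tez task JVMs more direct memory for the ORC readers:
> SET hive.tez.container.size=4096;
> SET hive.tez.java.opts=-Xmx3276m -XX:MaxDirectMemorySize=1024m;
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)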