From: "Reynold Xin (JIRA)"
To: issues@spark.apache.org
Date: Tue, 3 Nov 2015 15:34:28 +0000 (UTC)
Subject: [jira] [Commented] (SPARK-10929) Tungsten fails to acquire memory writing to HDFS

    [ https://issues.apache.org/jira/browse/SPARK-10929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14987473#comment-14987473 ]

Reynold Xin commented on SPARK-10929:
-------------------------------------

[~nadenf] it is a pretty big change with risk of regression, so unfortunately we can't backport this into branch-1.5.

> Tungsten fails to acquire memory writing to HDFS
> ------------------------------------------------
>
>              Key: SPARK-10929
>              URL: https://issues.apache.org/jira/browse/SPARK-10929
>          Project: Spark
>       Issue Type: Bug
>       Components: SQL
> Affects Versions: 1.5.0, 1.5.1
>         Reporter: Naden Franciscus
>         Assignee: Davies Liu
>         Priority: Blocker
>          Fix For: 1.6.0
>
>
> We are executing 20 Spark SQL jobs in parallel using Spark Job Server and hitting the following issue pretty routinely.
> 40GB heap x 6 nodes. We have tried adjusting shuffle.memoryFraction from 0.2 to 0.1 with no difference.
> {code}
> .16): org.apache.spark.SparkException: Task failed while writing rows.
> at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:250)
> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
> at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:88)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Unable to acquire 16777216 bytes of memory
> at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPage(UnsafeExternalSorter.java:351)
> at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.<init>(UnsafeExternalSorter.java:138)
> at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.create(UnsafeExternalSorter.java:106)
> at org.apache.spark.sql.execution.UnsafeExternalRowSorter.<init>(UnsafeExternalRowSorter.java:68)
> at org.apache.spark.sql.execution.TungstenSort.org$apache$spark$sql$execution$TungstenSort$$preparePartition$1(sort.scala:146)
> at org.apache.spark.sql.execution.TungstenSort$$anonfun$doExecute$3.apply(sort.scala:169)
> at org.apache.spark.sql.execution.TungstenSort$$anonfun$doExecute$3.apply(sort.scala:169)
> at org.apache.spark.rdd.MapPartitionsWithPreparationRDD.prepare(MapPartitionsWithPreparationRDD.scala:50)
> at org.apache.spark.rdd.ZippedPartitionsBaseRDD$$anonfun$tryPrepareParents$1.applyOrElse(ZippedPartitionsRDD.scala:83)
> at org.apache.spark.rdd.ZippedPartitionsBaseRDD$$anonfun$tryPrepareParents$1.applyOrElse(ZippedPartitionsRDD.scala:82)
> at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
> at scala.collection.TraversableLike$$anonfun$collect$1.apply(TraversableLike.scala:278)
> at scala.collection.immutable.List.foreach(List.scala:318)
> at scala.collection.TraversableLike$class.collect(TraversableLike.scala:278)
> at scala.collection.AbstractTraversable.collect(Traversable.scala:105)
> at org.apache.spark.rdd.ZippedPartitionsBaseRDD.tryPrepareParents(ZippedPartitionsRDD.scala:82)
> at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:97)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
> {code}
> I have tried setting spark.buffer.pageSize to both 1MB and 64MB (in spark-defaults.conf) and it makes no difference.
> It also tries to acquire 33554432 bytes of memory in both cases.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org
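For reference, a minimal sketch, assuming Spark 1.5.x, of the tuning attempts the quoted report describes (lowering spark.shuffle.memoryFraction and shrinking spark.buffer.pageSize), expressed via SparkConf rather than spark-defaults.conf. The application name is a hypothetical placeholder; per the report neither setting avoided the allocation failure, and the actual fix shipped in 1.6.0.

{code}
// Hedged illustration only: mirrors the configuration changes described in the report.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("tungsten-hdfs-write-repro") // hypothetical app name
  // Lowered from the 0.2 default, as described in the report (no difference observed).
  .set("spark.shuffle.memoryFraction", "0.1")
  // The report also tried 64m via spark-defaults.conf (no difference observed).
  .set("spark.buffer.pageSize", "1m")

val sc = new SparkContext(conf)
{code}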