Date: Fri, 14 Jul 2017 19:32:00 +0000 (UTC)
From: "Matthew Scheifer (JIRA)"
To: issues@spark.apache.org
Subject: [jira] [Commented] (SPARK-20086) issue with pyspark 2.1.0 window function

    [ https://issues.apache.org/jira/browse/SPARK-20086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087862#comment-16087862 ]

Matthew Scheifer commented on SPARK-20086:
------------------------------------------

Is there a way I can work around this if I'm stuck on Spark 2.1.0?
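From the description below it sounds like inserting an unrelated projection between the two window operations sidesteps the failure, so this is the stopgap I'm thinking of trying on 2.1.0. The `_dummy` column name is just my own placeholder, and I haven't confirmed this is a supported fix rather than an accident of query planning; the Fix For field suggests upgrading to 2.1.1 is the real answer.

{code}
import pyspark
import pyspark.sql.functions as sf
from pyspark.sql import window

sc = pyspark.SparkContext()
sqlc = pyspark.SQLContext(sc)

df = sqlc.createDataFrame(
    sc.parallelize([(1, 2.0), (1, 3.0), (1, 1.0), (1, -2.0), (1, -1.0)]),
    ["x", "AmtPaid"])

win_spec_max = (window.Window
                .partitionBy(['x'])
                .rowsBetween(window.Window.unboundedPreceding, 0))

# first window operation: running sum
df = df.withColumn('AmtPaidCumSum',
                   sf.sum(sf.col('AmtPaid')).over(win_spec_max))

# workaround guess: add a throwaway projection before the second window
# operation, mirroring the reporter's observation that this avoids the
# "Binding attribute" error on 2.1.0
df = df.withColumn('_dummy', sf.lit(0))

# second window operation: running max over the running sum
df = df.withColumn('AmtPaidCumSumMax',
                   sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))

df = df.drop('_dummy')
df.show()
{code}

If anyone knows whether relying on that intermediate projection is safe, or whether there is a cleaner workaround short of upgrading, I'd appreciate a pointer.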
Thanks,
Matthew

> issue with pyspark 2.1.0 window function
> ----------------------------------------
>
> Key: SPARK-20086
> URL: https://issues.apache.org/jira/browse/SPARK-20086
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.1.0
> Reporter: mandar uapdhye
> Assignee: Herman van Hovell
> Fix For: 2.1.1, 2.2.0
>
>
> original post at
> [stackoverflow | http://stackoverflow.com/questions/43007433/pyspark-2-1-0-error-when-working-with-window-function]
> I get an error when working with a pyspark window function. Here is some example code:
> {code:borderStyle=solid}
> import pyspark
> import pyspark.sql.functions as sf
> import pyspark.sql.types as sparktypes
> from pyspark.sql import window
>
> sc = pyspark.SparkContext()
> sqlc = pyspark.SQLContext(sc)
> rdd = sc.parallelize([(1, 2.0), (1, 3.0), (1, 1.), (1, -2.), (1, -1.)])
> df = sqlc.createDataFrame(rdd, ["x", "AmtPaid"])
> df.show()
> {code}
> gives:
> | x|AmtPaid|
> | 1| 2.0|
> | 1| 3.0|
> | 1| 1.0|
> | 1| -2.0|
> | 1| -1.0|
> next, compute the cumulative sum:
> {code:title=test.py|borderStyle=solid}
> win_spec_max = (window.Window
>                 .partitionBy(['x'])
>                 .rowsBetween(window.Window.unboundedPreceding, 0))
> df = df.withColumn('AmtPaidCumSum',
>                    sf.sum(sf.col('AmtPaid')).over(win_spec_max))
> df.show()
> {code}
> gives:
> | x|AmtPaid|AmtPaidCumSum|
> | 1| 2.0| 2.0|
> | 1| 3.0| 5.0|
> | 1| 1.0| 6.0|
> | 1| -2.0| 4.0|
> | 1| -1.0| 3.0|
> next, compute the cumulative max:
> {code}
> df = df.withColumn('AmtPaidCumSumMax', sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))
> df.show()
> {code}
> which gives this error log:
> {noformat}
> Py4JJavaError: An error occurred while calling o2609.showString.
> with traceback:
> Py4JJavaErrorTraceback (most recent call last)
> in ()
> ----> 1 df.show()
> /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
> 316 """
> 317 if isinstance(truncate, bool) and truncate:
> --> 318 print(self._jdf.showString(n, 20))
> 319 else:
> 320 print(self._jdf.showString(n, int(truncate)))
> /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
> 1131 answer = self.gateway_client.send_command(command)
> 1132 return_value = get_return_value(
> -> 1133 answer, self.gateway_client, self.target_id, self.name)
> 1134
> 1135 for temp_arg in temp_args:
> /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
> 61 def deco(*a, **kw):
> 62 try:
> ---> 63 return f(*a, **kw)
> 64 except py4j.protocol.Py4JJavaError as e:
> 65 s = e.java_exception.toString()
> /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
> 317 raise Py4JJavaError(
> 318 "An error occurred while calling {0}{1}{2}.\n".
> --> 319 format(target_id, ".", name), value)
> 320 else:
> 321 raise Py4JError(
> {noformat}
> but interestingly enough, if I introduce another change before the second window operation, say inserting a column, then it does not give that error:
> {code}
> df = df.withColumn('MaxBound', sf.lit(6.))
> df.show()
> {code}
> | x|AmtPaid|AmtPaidCumSum|MaxBound|
> | 1| 2.0| 2.0| 6.0|
> | 1| 3.0| 5.0| 6.0|
> | 1| 1.0| 6.0| 6.0|
> | 1| -2.0| 4.0| 6.0|
> | 1| -1.0| 3.0| 6.0|
> {code}
> # then apply the second window operation
> df = df.withColumn('AmtPaidCumSumMax', sf.max(sf.col('AmtPaidCumSum')).over(win_spec_max))
> df.show()
> {code}
> | x|AmtPaid|AmtPaidCumSum|MaxBound|AmtPaidCumSumMax|
> | 1| 2.0| 2.0| 6.0| 2.0|
> | 1| 3.0| 5.0| 6.0| 5.0|
> | 1| 1.0| 6.0| 6.0| 6.0|
> | 1| -2.0| 4.0| 6.0| 6.0|
> | 1| -1.0| 3.0| 6.0| 6.0|
> I do not understand this behaviour.
> Well, so far so good, but then I try another operation and again get a similar error:
> {code}
> def _udf_compare_cumsum_sll(x):
>     if x['AmtPaidCumSumMax'] >= x['MaxBound']:
>         output = 0
>     else:
>         output = x['AmtPaid']
>     return output
>
> udf_compare_cumsum_sll = sf.udf(_udf_compare_cumsum_sll, sparktypes.FloatType())
> df = df.withColumn('AmtPaidAdjusted', udf_compare_cumsum_sll(sf.struct([df[x] for x in df.columns])))
> df.show()
> {code}
> gives:
> {noformat}
> Py4JJavaErrorTraceback (most recent call last)
> in ()
> ----> 1 df.show()
> /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/dataframe.pyc in show(self, n, truncate)
> 316 """
> 317 if isinstance(truncate, bool) and truncate:
> --> 318 print(self._jdf.showString(n, 20))
> 319 else:
> 320 print(self._jdf.showString(n, int(truncate)))
> /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
> 1131 answer = self.gateway_client.send_command(command)
> 1132 return_value = get_return_value(
> -> 1133 answer, self.gateway_client, self.target_id, self.name)
> 1134
> 1135 for temp_arg in temp_args:
> /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw)
> 61 def deco(*a, **kw):
> 62 try:
> ---> 63 return f(*a, **kw)
> 64 except py4j.protocol.Py4JJavaError as e:
> 65 s = e.java_exception.toString()
> /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
> 317 raise Py4JJavaError(
> 318 "An error occurred while calling {0}{1}{2}.\n".
> --> 319 format(target_id, ".", name), value)
> 320 else:
> 321 raise Py4JError(
> Py4JJavaError: An error occurred while calling o91.showString.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 36.0 failed 1 times, most recent failure: Lost task 0.0 in stage 36.0 (TID 645, localhost, executor driver): org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: AmtPaidCumSum#10
> {noformat}
> I wonder if someone could reproduce this behaviour ...
> here is the complete log:
> {noformat} > Py4JJavaErrorTraceback (most recent call last) > in () > ----> 1 df.show() > /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/dataframe.pyc in show(self, n, truncate) > 316 """ > 317 if isinstance(truncate, bool) and truncate: > --> 318 print(self._jdf.showString(n, 20)) > 319 else: > 320 print(self._jdf.showString(n, int(truncate))) > /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args) > 1131 answer = self.gateway_client.send_command(command) > 1132 return_value = get_return_value( > -> 1133 answer, self.gateway_client, self.target_id, self.name) > 1134 > 1135 for temp_arg in temp_args: > /Users/<>/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.pyc in deco(*a, **kw) > 61 def deco(*a, **kw): > 62 try: > ---> 63 return f(*a, **kw) > 64 except py4j.protocol.Py4JJavaError as e: > 65 s = e.java_exception.toString() > /Users/<>/.virtualenvs/<>/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name) > 317 raise Py4JJavaError( > 318 "An error occurred while calling {0}{1}{2}.\n". > --> 319 format(target_id, ".", name), value) > 320 else: > 321 raise Py4JError( > Py4JJavaError: An error occurred while calling o703.showString. > : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 119.0 failed 1 times, most recent failure: Lost task 0.0 in stage 119.0 (TID 1817, localhost, executor driver): org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: AmtPaidCumSum#2076 > at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288) > at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70) > at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5$$anonfun$apply$11.apply(TreeNode.scala:360) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.immutable.List.foreach(List.scala:381) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.immutable.List.map(List.scala:285) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:358) > at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188) > at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329) > at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:293) > at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277) > at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87) > at 
org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38) > at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.AbstractTraversable.map(Traversable.scala:104) > at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.bind(GenerateMutableProjection.scala:38) > at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:44) > at org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:353) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:203) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:202) > at org.apache.spark.sql.execution.window.AggregateProcessor$.apply(AggregateProcessor.scala:98) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2.org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1(WindowExec.scala:198) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:225) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:222) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) > at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.(WindowExec.scala:318) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:290) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:289) > at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796) > at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796) > at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) > at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) > at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) > at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) > at org.apache.spark.scheduler.Task.run(Task.scala:99) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.RuntimeException: Couldn't find AmtPaidCumSum#2076 in [sum#2299,max#2300,x#2066L,AmtPaid#2067] > at scala.sys.package$.error(package.scala:27) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88) > at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52) > ... 62 more > Driver stacktrace: > at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435) > at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423) > at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422) > at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422) > at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) > at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802) > at scala.Option.foreach(Option.scala:257) > at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802) > at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650) > at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605) > at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594) > at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) > at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628) > at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918) > at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931) > at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944) > at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:333) > at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38) > at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371) > at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57) > at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765) > at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370) > at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377) > at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113) > at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112) > at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795) > at 
org.apache.spark.sql.Dataset.head(Dataset.scala:2112) > at org.apache.spark.sql.Dataset.take(Dataset.scala:2327) > at org.apache.spark.sql.Dataset.showString(Dataset.scala:248) > at sun.reflect.GeneratedMethodAccessor83.invoke(Unknown Source) > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) > at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) > at py4j.Gateway.invoke(Gateway.java:280) > at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) > at py4j.commands.CallCommand.execute(CallCommand.java:79) > at py4j.GatewayConnection.run(GatewayConnection.java:214) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree: null > at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:88) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1.applyOrElse(BoundAttribute.scala:87) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$3.apply(TreeNode.scala:288) > at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70) > at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:287) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:293) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5$$anonfun$apply$11.apply(TreeNode.scala:360) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.immutable.List.foreach(List.scala:381) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.immutable.List.map(List.scala:285) > at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$5.apply(TreeNode.scala:358) > at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:188) > at org.apache.spark.sql.catalyst.trees.TreeNode.transformChildren(TreeNode.scala:329) > at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:293) > at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:277) > at org.apache.spark.sql.catalyst.expressions.BindReferences$.bindReference(BoundAttribute.scala:87) > at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38) > at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$$anonfun$bind$1.apply(GenerateMutableProjection.scala:38) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.AbstractTraversable.map(Traversable.scala:104) > at 
org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.bind(GenerateMutableProjection.scala:38) > at org.apache.spark.sql.catalyst.expressions.codegen.GenerateMutableProjection$.generate(GenerateMutableProjection.scala:44) > at org.apache.spark.sql.execution.SparkPlan.newMutableProjection(SparkPlan.scala:353) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:203) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1$1.apply(WindowExec.scala:202) > at org.apache.spark.sql.execution.window.AggregateProcessor$.apply(AggregateProcessor.scala:98) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2.org$apache$spark$sql$execution$window$WindowExec$$anonfun$$processor$1(WindowExec.scala:198) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:225) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$windowFrameExpressionFactoryPairs$2$$anonfun$6.apply(WindowExec.scala:222) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1$$anonfun$16.apply(WindowExec.scala:318) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234) > at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) > at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234) > at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.(WindowExec.scala:318) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:290) > at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14.apply(WindowExec.scala:289) > at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796) > at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796) > at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) > at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) > at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) > at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) > at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) > at org.apache.spark.scheduler.Task.run(Task.scala:99) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282) > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > ... 
1 more > Caused by: java.lang.RuntimeException: Couldn't find AmtPaidCumSum#2076 in [sum#2299,max#2300,x#2066L,AmtPaid#2067] > at scala.sys.package$.error(package.scala:27) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:94) > at org.apache.spark.sql.catalyst.expressions.BindReferences$$anonfun$bindReference$1$$anonfun$applyOrElse$1.apply(BoundAttribute.scala:88) > at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52) > ... 62 more > {noformat}