livy-user mailing list archives

From Saisai Shao <sai.sai.s...@gmail.com>
Subject Re: Livy running into OOM after several hours. What's the best way to diagnose and fix?
Date Wed, 11 Apr 2018 06:47:50 GMT
This is most likely a Spark issue, not a Livy issue (https://issues.apache.org/jira/browse/SPARK-23682).
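
Until you can move to a Spark build with that fix, one stopgap -- a sketch
only, assuming the leak is the driver-side status-store growth that ticket
describes -- is to give the driver more headroom and cap the in-memory
UI/status store that shows up in the element-tracking-store-worker trace
below. For example, in spark-defaults.conf (standard Spark 2.3 property
names; the values are only starting points to tune):

    # Mitigation only, not a fix for the leak itself.
    # More headroom for the driver hosting the streaming queries:
    spark.driver.memory              6g
    # Cap the in-memory status store that AppStatusListener cleans up:
    spark.ui.retainedJobs            100
    spark.ui.retainedStages          100
    spark.ui.retainedTasks           10000
    spark.sql.ui.retainedExecutions  50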

2018-04-11 9:53 GMT+08:00 kant kodali <kanth909@gmail.com>:

> Hi All,
>
> Livy has been running into OOM errors after running a few long-running
> streaming queries (<10 queries) for a while. It happens after several
> hours. I am trying to figure out why it happens before I tweak any
> parameters.
>
> Currently I set spark.executor.memory = 3g and spark.driver.memory = 3g,
> and I wonder whether I still need to set these, given that the Spark
> documentation also has spark.memory.fraction and
> spark.dynamicAllocation.enabled?
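>
> (For context, my reading of the docs is that spark.memory.fraction only
> divides up the heap each JVM has already been given, and
> spark.dynamicAllocation.enabled changes how many executors run rather
> than how big each one is, which is why I am unsure whether the explicit
> sizes are still needed. As a sketch, this is roughly where those sizes
> travel when a session is created through Livy's POST /sessions API;
> field names per the Livy REST docs, values as above:)
>
>     POST /sessions
>     {
>         "kind": "spark",
>         "driverMemory": "3g",
>         "executorMemory": "3g",
>         "conf": { "spark.memory.fraction": "0.6" }
>     }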
>
> I am using Spark 2.3.0 and running Livy in client mode.
>
> *Also, how do I scale Livy? Should I have one session for every long-running
> streaming query? Should I have multiple sessions?*
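>
> (My mental model, which may be wrong: each Livy interactive session is
> a separate Spark application with its own driver JVM, so one session
> per query would isolate each query's heap at the cost of one driver
> apiece. A sketch of that layout, with a hypothetical host name:)
>
>     POST http://<livy-host>:8998/sessions
>     { "kind": "spark", "name": "streaming-query-1", "driverMemory": "3g" }
>
>     POST http://<livy-host>:8998/sessions
>     { "kind": "spark", "name": "streaming-query-2", "driverMemory": "3g" }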
>
> Stacktrace #1
>
> {
>     "id": 0,
>     "from": 102,
>     "total": 202,
>     "log": [
>         "18/04/08 19:13:40 ERROR MicroBatchExecution: Query [id = 7aedaf72-41e0-4be5-8ea6-e374bfbf0ae7, runId = b85bacee-d54e-421e-b453-450591e128c9] terminated with error",
>         "java.lang.OutOfMemoryError: GC overhead limit exceeded",
>         "\tat java.lang.StringCoding$StringEncoder.encode(StringCoding.java:300)",
>         "\tat java.lang.StringCoding.encode(StringCoding.java:344)",
>         "\tat java.lang.String.getBytes(String.java:918)",
>         "\tat java.io.UnixFileSystem.getLength(Native Method)",
>         "\tat java.io.File.length(File.java:974)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:626)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)",
>         "\tat org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1309)",
>         "\tat org.apache.hadoop.fs.DelegateToFileSystem.renameInternal(DelegateToFileSystem.java:197)",
>         "\tat org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:748)",
>         "\tat org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:236)",
>         "\tat org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:678)",
>         "\tat org.apache.hadoop.fs.FileContext.rename(FileContext.java:958)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileContextManager.rename(HDFSMetadataLog.scala:374)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog.org$apache$spark$sql$execution$streaming$HDFSMetadataLog$$writeBatch(HDFSMetadataLog.scala:160)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$$anonfun$add$1.apply$mcZ$sp(HDFSMetadataLog.scala:112)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$$anonfun$add$1.apply(HDFSMetadataLog.scala:110)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$$anonfun$add$1.apply(HDFSMetadataLog.scala:110)",
>         "\tat scala.Option.getOrElse(Option.scala:121)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog.add(HDFSMetadataLog.scala:110)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch$1.apply$mcV$sp(MicroBatchExecution.scala:339)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch$1.apply(MicroBatchExecution.scala:338)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch$1.apply(MicroBatchExecution.scala:338)",
>         "\tat org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)",
>         "\tat org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch(MicroBatchExecution.scala:338)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:128)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)",
>         "\tat org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)",
>         "Exception in thread \"stream execution thread for [id = 7aedaf72-41e0-4be5-8ea6-e374bfbf0ae7, runId = b85bacee-d54e-421e-b453-450591e128c9]\" java.lang.OutOfMemoryError: GC overhead limit exceeded",
>         "\tat java.lang.StringCoding$StringEncoder.encode(StringCoding.java:300)",
>         "\tat java.lang.StringCoding.encode(StringCoding.java:344)",
>         "\tat java.lang.String.getBytes(String.java:918)",
>         "\tat java.io.UnixFileSystem.getLength(Native Method)",
>         "\tat java.io.File.length(File.java:974)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:626)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)",
>         "\tat org.apache.hadoop.fs.FileSystem.rename(FileSystem.java:1309)",
>         "\tat org.apache.hadoop.fs.DelegateToFileSystem.renameInternal(DelegateToFileSystem.java:197)",
>         "\tat org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:748)",
>         "\tat org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:236)",
>         "\tat org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:678)",
>         "\tat org.apache.hadoop.fs.FileContext.rename(FileContext.java:958)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileContextManager.rename(HDFSMetadataLog.scala:374)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog.org$apache$spark$sql$execution$streaming$HDFSMetadataLog$$writeBatch(HDFSMetadataLog.scala:160)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$$anonfun$add$1.apply$mcZ$sp(HDFSMetadataLog.scala:112)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$$anonfun$add$1.apply(HDFSMetadataLog.scala:110)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog$$anonfun$add$1.apply(HDFSMetadataLog.scala:110)",
>         "\tat scala.Option.getOrElse(Option.scala:121)",
>         "\tat org.apache.spark.sql.execution.streaming.HDFSMetadataLog.add(HDFSMetadataLog.scala:110)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch$1.apply$mcV$sp(MicroBatchExecution.scala:339)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch$1.apply(MicroBatchExecution.scala:338)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch$1.apply(MicroBatchExecution.scala:338)",
>         "\tat org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)",
>         "\tat org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$constructNextBatch(MicroBatchExecution.scala:338)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:128)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)",
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:121)",
>         "\tat org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:271)",
>         "Exception in thread \"dag-scheduler-event-loop\" java.lang.OutOfMemoryError: GC overhead limit exceeded",
>         "\tat java.lang.Class.getDeclaredMethods0(Native Method)",
>         "\tat java.lang.Class.privateGetDeclaredMethods(Class.java:2701)",
>         "\tat java.lang.Class.getDeclaredMethod(Class.java:2128)",
>         "\tat java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1575)",
>         "\tat java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:79)",
>         "\tat java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:508)",
>         "\tat java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:482)",
>         "\tat java.security.AccessController.doPrivileged(Native Method)",
>         "\tat java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:482)",
>         "\tat java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:379)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)",
>         "\tat java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)"
>     ]
> }
>
> Stacktrace #2
>
>
> {
>     "id": 0,
>     "from": 102,
>     "total": 202,
>     "log": [
>         "\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:117)",
>         "\tat org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)",
>         "\tat org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)",
>         "Caused by: java.lang.IllegalStateException: Error reading delta file file:/tmp/7a32968b1cea96f54c771da72784ae21/state/0/1/1.delta of HDFSStateStoreProvider[id = (op=0,part=1),dir = file:/tmp/7a32968b1cea96f54c771da72784ae21/state/0/1]: file:/tmp/7a32968b1cea96f54c771da72784ae21/state/0/1/1.delta does not exist",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:371)",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$loadMap$1.apply$mcVJ$sp(HDFSBackedStateStoreProvider.scala:333)",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:332)",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$$anonfun$loadMap$1.apply(HDFSBackedStateStoreProvider.scala:332)",
>         "\tat scala.collection.immutable.NumericRange.foreach(NumericRange.scala:73)",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.loadMap(HDFSBackedStateStoreProvider.scala:332)",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.getStore(HDFSBackedStateStoreProvider.scala:196)",
>         "\tat org.apache.spark.sql.execution.streaming.state.StateStore$.get(StateStore.scala:369)",
>         "\tat org.apache.spark.sql.execution.streaming.state.StateStoreRDD.compute(StateStoreRDD.scala:74)",
>         "\tat org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)",
>         "\tat org.apache.spark.rdd.RDD.iterator(RDD.scala:288)",
>         "\tat org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)",
>         "\tat org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)",
>         "\tat org.apache.spark.rdd.RDD.iterator(RDD.scala:288)",
>         "\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)",
>         "\tat org.apache.spark.scheduler.Task.run(Task.scala:109)",
>         "\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)",
>         "\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
>         "\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
>         "\tat java.lang.Thread.run(Thread.java:748)",
>         "Caused by: java.io.FileNotFoundException: File file:/tmp/7a32968b1cea96f54c771da72784ae21/state/0/1/1.delta does not exist",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)",
>         "\tat org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)",
>         "\tat org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)",
>         "\tat org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:142)",
>         "\tat org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)",
>         "\tat org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)",
>         "\tat org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider.org$apache$spark$sql$execution$streaming$state$HDFSBackedStateStoreProvider$$updateFromDeltaFile(HDFSBackedStateStoreProvider.scala:368)",
>         "\t... 19 more",
>         "Exception in thread \"dispatcher-event-loop-3\" java.lang.OutOfMemoryError: GC overhead limit exceeded",
>         "\tat java.lang.Class.getDeclaredMethods0(Native Method)",
>         "\tat java.lang.Class.privateGetDeclaredMethods(Class.java:2701)",
>         "\tat java.lang.Class.getDeclaredMethod(Class.java:2128)",
>         "\tat java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1475)",
>         "\tat java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:72)",
>         "\tat java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:498)",
>         "\tat java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)",
>         "\tat java.security.AccessController.doPrivileged(Native Method)",
>         "\tat java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:472)",
>         "\tat java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:441)",
>         "\tat org.apache.spark.broadcast.TorrentBroadcast$$anonfun$writeObject$1.apply$mcV$sp(TorrentBroadcast.scala:204)",
>         "\tat org.apache.spark.broadcast.TorrentBroadcast$$anonfun$writeObject$1.apply(TorrentBroadcast.scala:202)",
>         "\tat org.apache.spark.broadcast.TorrentBroadcast$$anonfun$writeObject$1.apply(TorrentBroadcast.scala:202)",
>         "\tat org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1343)",
>         "\tat org.apache.spark.broadcast.TorrentBroadcast.writeObject(TorrentBroadcast.scala:202)",
>         "\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)",
>         "\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)",
>         "\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)",
>         "\tat java.lang.reflect.Method.invoke(Method.java:498)",
>         "\tat java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1028)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)",
>         "\tat java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)",
>         "\tat java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)",
>         "\tat java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)",
>         "\tat java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)",
>         "\tat java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)",
>         "\tat org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:43)",
>         "18/04/10 03:21:09 ERROR Utils: Uncaught exception in thread element-tracking-store-worker",
>         "java.lang.OutOfMemoryError: GC overhead limit exceeded",
>         "\tat org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)",
>         "\tat org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)",
>         "\tat org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$0(InMemoryStore.java:203)",
>         "\tat org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$24/1059294725.compare(Unknown Source)",
>         "\tat java.util.TimSort.binarySort(TimSort.java:296)",
>         "\tat java.util.TimSort.sort(TimSort.java:239)",
>         "\tat java.util.Arrays.sort(Arrays.java:1512)",
>         "\tat java.util.ArrayList.sort(ArrayList.java:1454)",
>         "\tat java.util.Collections.sort(Collections.java:175)",
>         "\tat org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.iterator(InMemoryStore.java:203)",
>         "\tat scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:54)",
>         "\tat scala.collection.IterableLike$class.foreach(IterableLike.scala:72)",
>         "\tat scala.collection.AbstractIterable.foreach(Iterable.scala:54)",
>         "\tat org.apache.spark.status.AppStatusListener$$anonfun$org$apache$spark$status$AppStatusListener$$cleanupStages$1.apply(AppStatusListener.scala:891)",
>         "\tat org.apache.spark.status.AppStatusListener$$anonfun$org$apache$spark$status$AppStatusListener$$cleanupStages$1.apply(AppStatusListener.scala:871)",
>         "\tat scala.collection.immutable.List.foreach(List.scala:381)",
>         "\tat org.apache.spark.status.AppStatusListener.org$apache$spark$status$AppStatusListener$$cleanupStages(AppStatusListener.scala:871)",
>         "\tat org.apache.spark.status.AppStatusListener$$anonfun$3.apply$mcVJ$sp(AppStatusListener.scala:84)",
>         "\tat org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1$$anonfun$apply$mcV$sp$1.apply(ElementTrackingStore.scala:109)",
>         "\tat org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1$$anonfun$apply$mcV$sp$1.apply(ElementTrackingStore.scala:107)",
>         "\tat scala.collection.immutable.List.foreach(List.scala:381)",
>         "\tat org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1.apply$mcV$sp(ElementTrackingStore.scala:107)",
>         "\tat org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1.apply(ElementTrackingStore.scala:105)",
>         "\tat org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1.apply(ElementTrackingStore.scala:105)",
>         "\tat org.apache.spark.util.Utils$.tryLog(Utils.scala:2001)",
>         "\tat org.apache.spark.status.ElementTrackingStore$$anon$1.run(ElementTrackingStore.scala:91)",
>         "\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)",
>         "\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)",
>         "\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
>         "\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
>         "\tat java.lang.Thread.run(Thread.java:748)"
>     ]
> }
>
>
> Thanks!
>
