spark-dev mailing list archives

From Reynold Xin <>
Subject Re: graceful shutdown in external data sources
Date Wed, 16 Mar 2016 22:36:01 GMT
There is no way to really know that, because users might run queries at any
given point.

BTW why can't your threads be just daemon threads?
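Reynold's suggestion amounts to marking the connector's long-lived threads as daemon threads, so the JVM can exit without waiting for them. A minimal standalone sketch (the thread name is illustrative; in the real connector this would have to be wired into the Kudu client's Netty thread factory):

```java
import java.util.concurrent.ThreadFactory;

public class DaemonThreads {
    // A thread factory that produces daemon threads. JVM shutdown does
    // not wait for daemon threads, so they cannot hang the Spark shell.
    static ThreadFactory daemonFactory(String name) {
        return runnable -> {
            Thread t = new Thread(runnable, name);
            t.setDaemon(true); // must be set before start()
            return t;
        };
    }

    public static void main(String[] args) {
        Thread t = daemonFactory("kudu-netty-worker").newThread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // simulate a long-lived I/O thread
            } catch (InterruptedException ignored) {
            }
        });
        t.start();
        // main returns here; the JVM exits despite the sleeping daemon thread
    }
}
```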

On Wed, Mar 16, 2016 at 3:29 PM, Dan Burkert <> wrote:

> Hi Reynold,
> Is there any way to know when an executor will no longer have any tasks?
> It seems to me there is no timeout that is both long enough to ensure that
> no more tasks will be scheduled on the executor and short enough to be
> reasonable to wait on during an interactive shell shutdown.
> - Dan
> On Wed, Mar 16, 2016 at 2:40 PM, Reynold Xin <> wrote:
>> Maybe just add a watchdog thread and close the connection after some
>> timeout?
>> On Wednesday, March 16, 2016, Dan Burkert <> wrote:
>>> Hi all,
>>> I'm working on the Spark connector for Apache Kudu, and I've run into an
>>> issue that is a bit beyond my Spark knowledge. The Kudu connector
>>> internally holds an open connection to the Kudu cluster
>>> <>,
>>> which in turn holds a Netty context with non-daemon threads. When using the
>>> Spark shell with the Kudu connector, exiting the shell via <ctrl>-D causes
>>> the shell to hang, and a thread dump reveals it's waiting for these
>>> non-daemon threads.  Registering a JVM shutdown hook to close the Kudu
>>> client does not do the trick, as it seems that the shutdown hooks are not
>>> fired on <ctrl>-D.
>>> I see that there is an internal Spark API for handling shutdown
>>> <>,
>>> is there something similar available for cleaning up external data sources?
>>> - Dan
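The hang Dan describes comes down to JVM exit semantics: the JVM terminates only after every non-daemon thread has finished, so a single non-daemon Netty worker is enough to keep the shell alive. A standalone sketch that enumerates the offending threads (the thread name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class NonDaemonHang {
    // Names of live non-daemon threads -- exactly the threads that keep
    // the JVM (and therefore the Spark shell) from exiting.
    static List<String> blockingThreads() {
        List<String> names = new ArrayList<>();
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.isAlive() && !t.isDaemon()) {
                names.add(t.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a connector's I/O thread: non-daemon and long-lived.
        Thread ioThread = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) {
            }
        }, "kudu-nio-worker");
        ioThread.start();

        System.out.println(blockingThreads()); // includes "kudu-nio-worker"

        ioThread.interrupt(); // without this, main's return would not end the JVM
        ioThread.join();
    }
}
```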
