spark-commits mailing list archives

Subject [spark] branch branch-3.1 updated: [SPARK-33874][K8S][FOLLOWUP] Handle long lived sidecars - clean up logging
Date Tue, 05 Jan 2021 21:49:39 GMT
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.1
in repository

The following commit(s) were added to refs/heads/branch-3.1 by this push:
     new a1c066c  [SPARK-33874][K8S][FOLLOWUP] Handle long lived sidecars - clean up logging
a1c066c is described below

commit a1c066c7db08dae091a6e71197bc14994be0cb18
Author: Holden Karau <>
AuthorDate: Tue Jan 5 13:48:52 2021 -0800

    [SPARK-33874][K8S][FOLLOWUP] Handle long lived sidecars - clean up logging
    ### What changes were proposed in this pull request?

    Switch log level from warn to debug when the Spark container is not present in the pod's container statuses.

    ### Why are the changes needed?

    There are many non-critical situations where the Spark container may not be present, and the warning log level is too high.

    ### Does this PR introduce _any_ user-facing change?

    Log message change.

    ### How was this patch tested?

    Closes #31047 from holdenk/SPARK-33874-follow-up.
    Authored-by: Holden Karau <>
    Signed-off-by: Dongjoon Hyun <>
    (cherry picked from commit 171db85aa2cdacf39caeb26162569275076fd52f)
    Signed-off-by: Dongjoon Hyun <>
 .../apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala    | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala
index 71355c7..37aaca7 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala
@@ -93,9 +93,10 @@ object ExecutorPodsSnapshot extends Logging {
                   case _ =>
-              // If we can't find the Spark container status, fall back to the pod status
+              // If we can't find the Spark container status, fall back to the pod status. This is
+              // expected to occur during pod startup and other situations.
               case _ =>
-                logWarning(s"Unable to find container ${sparkContainerName} in pod ${pod} " +
+                logDebug(s"Unable to find container ${sparkContainerName} in pod ${pod} " +
                   "defaulting to entire pod status (running).")
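The fallback path touched by this diff can be sketched as follows. This is a minimal stand-alone Scala sketch, not the actual ExecutorPodsSnapshot code: `ContainerStatus`, `executorState`, and the plain-string states are simplified stand-ins for the Kubernetes model classes the real snapshot logic matches on.

```scala
// Hypothetical, simplified illustration of the fallback logic: look up the
// named Spark container's status, and fall back to the whole-pod status when
// it is absent (expected during pod startup, hence debug rather than warn).
object SnapshotSketch {
  case class ContainerStatus(name: String, state: String)

  def executorState(
      statuses: Seq[ContainerStatus],
      sparkContainerName: String,
      podPhase: String): String = {
    statuses.find(_.name == sparkContainerName) match {
      case Some(status) =>
        // Spark container status found: report its state directly.
        status.state
      case None =>
        // Container status missing: this is a normal, transient condition
        // (e.g. the pod is still starting), so log at debug level only.
        println(s"DEBUG: Unable to find container ${sparkContainerName}, " +
          "defaulting to entire pod status.")
        podPhase
    }
  }

  def main(args: Array[String]): Unit = {
    val statuses = Seq(ContainerStatus("spark-kubernetes-executor", "running"))
    println(executorState(statuses, "spark-kubernetes-executor", "Pending"))
    println(executorState(Nil, "spark-kubernetes-executor", "Pending"))
  }
}
```

With a status present the container's own state wins; with no statuses yet, the pod phase is used, which mirrors why a warning here was too noisy.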
