spark-reviews mailing list archives

From yifeih <...@git.apache.org>
Subject [GitHub] spark pull request #22146: [SPARK-24434][K8S] pod template files
Date Fri, 24 Aug 2018 13:57:33 GMT
Github user yifeih commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22146#discussion_r212633621
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -775,4 +787,183 @@ specific to Spark on Kubernetes.
        This sets the major Python version of the docker image used to run the driver and
executor containers. Can either be 2 or 3. 
       </td>
     </tr>
    +<tr>
    +  <td><code>spark.kubernetes.driver.containerName</code></td>
    +  <td><code>"spark-kubernetes-driver"</code></td>
    +  <td>
    +   This sets the driver container name. If you are specifying a driver [pod template](#pod-template),
    +   you can match this name to the driver container name set in the template.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.containerName</code></td>
    +  <td><code>"spark-kubernetes-executor"</code></td>
    +  <td>
    +   This sets the executor container name. If you are specifying an executor [pod template](#pod-template),
    +   you can match this name to the executor container name set in the template.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.driver.podTemplateFile</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Specify the local file that contains the driver [pod template](#pod-template). For example,
    +   <code>spark.kubernetes.driver.podTemplateFile=/path/to/driver-pod-template.yaml</code>
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.kubernetes.executor.podTemplateFile</code></td>
    +  <td>(none)</td>
    +  <td>
    +   Specify the local file that contains the executor [pod template](#pod-template). For example,
    +   <code>spark.kubernetes.executor.podTemplateFile=/path/to/executor-pod-template.yaml</code>
    +  </td>
    +</tr>
    +</table>
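    +
    +For example, a minimal driver pod template file might look like the following (the
    +label, volume name, and mount path are illustrative only; the container name matches
    +the default value of <code>spark.kubernetes.driver.containerName</code>):
    +
    +```yaml
    +apiVersion: v1
    +kind: Pod
    +metadata:
    +  labels:
    +    my-label: my-value
    +spec:
    +  containers:
    +    - name: spark-kubernetes-driver   # matches spark.kubernetes.driver.containerName
    +      volumeMounts:
    +        - name: scratch
    +          mountPath: /tmp/scratch
    +  volumes:
    +    - name: scratch
    +      emptyDir: {}
    +```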
    +
    +### Pod template properties
    +
    +See the table below for the full list of pod specifications that will be overwritten by Spark.
    +
    +#### Pod Metadata
    +
    +<table class="table">
    +<tr><th>Pod metadata key</th><th>Modified value</th><th>Description</th></tr>
    +<tr>
    +  <td>name</td>
    +  <td>Value of <code>spark.kubernetes.driver.pod.name</code></td>
    +  <td>
    +    The driver pod name will be overwritten with either the configured or default value of
    +    <code>spark.kubernetes.driver.pod.name</code>. The executor pod names will be unaffected.
    +  </td>
    +</tr>
    +<tr>
    +  <td>namespace</td>
    +  <td>Value of <code>spark.kubernetes.namespace</code></td>
    +  <td>
    +    Spark makes strong assumptions about the driver and executor namespaces. Both driver
    +    and executor namespaces will be replaced by this Spark configuration value.
    +  </td>
    +</tr>
    +<tr>
    +  <td>labels</td>
    +  <td>Adds the labels from <code>spark.kubernetes.{driver,executor}.label.*</code></td>
    +  <td>
    +    Spark will add additional labels specified by the Spark configuration.
    +  </td>
    +</tr>
    +<tr>
    +  <td>annotations</td>
    +  <td>Adds the annotations from <code>spark.kubernetes.{driver,executor}.annotation.*</code></td>
    +  <td>
    +    Spark will add additional annotations specified by the Spark configuration.
    +  </td>
    +</tr>
    +</table>
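    +
    +As a sketch, the label and annotation properties above are supplied through the Spark
    +configuration at submit time and merged into the template's metadata (the key and
    +value below are illustrative only):
    +
    +```
    +--conf spark.kubernetes.driver.label.my-label=my-value \
    +--conf spark.kubernetes.executor.annotation.my-annotation=my-value
    +```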
    +
    +#### Pod Spec
    +
    +<table class="table">
    +<tr><th>Pod spec key</th><th>Modified value</th><th>Description</th></tr>
    +<tr>
    +  <td>imagePullSecrets</td>
    +  <td>Adds image pull secrets from <code>spark.kubernetes.container.image.pullSecrets</code></td>
    +  <td>
    +    Additional pull secrets will be added from the Spark configuration to both driver
    +    and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td>nodeSelector</td>
    +  <td>Adds node selectors from <code>spark.kubernetes.node.selector.*</code></td>
    +  <td>
    +    Additional node selectors will be added from the Spark configuration to both driver
    +    and executor pods.
    +  </td>
    +</tr>
    +<tr>
    +  <td>restartPolicy</td>
    +  <td><code>"never"</code></td>
    +  <td>
    +    Spark assumes that both drivers and executors never restart.
    +  </td>
    +</tr>
    +<tr>
    +  <td>serviceAccount</td>
    +  <td>Value of <code>spark.kubernetes.authenticate.driver.serviceAccountName</code></td>
    +  <td>
    +    Spark will override <code>serviceAccount</code> with the value of the
spark configuration for only
    --- End diff --
    
    I'm pretty sure this does not: https://github.com/apache/spark/blob/master/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/DriverKubernetesCredentialsFeatureStep.scala#L74.
I'll update the docs to clarify!

