From: GitBox
To: reviews@spark.apache.org
Subject: [GitHub] [spark] dongjoon-hyun commented on a change in pull request #29897: [SPARK-33006][K8S][DOCS] Add dynamic PVC usage example into K8s doc
Message-ID: <160144870523.32230.14261403081417924624.asfpy@gitbox.apache.org>
Date: Wed, 30 Sep 2020 06:51:45 -0000

dongjoon-hyun commented on a change in pull request #29897:
URL: https://github.com/apache/spark/pull/29897#discussion_r497279320

##########
File path: docs/running-on-kubernetes.md
##########

@@ -307,7 +307,18 @@ And, the claim name of a `persistentVolumeClaim` with volume name `checkpointpvc`
```
spark.kubernetes.driver.volumes.persistentVolumeClaim.checkpointpvc.options.claimName=check-point-pvc-claim
```
-The configuration properties for mounting volumes into the executor pods use prefix `spark.kubernetes.executor.` instead of `spark.kubernetes.driver.`. For a complete list of available options for each supported type of volumes, please refer to the [Spark Properties](#spark-properties) section below.
+The configuration properties for mounting volumes into the executor pods use prefix `spark.kubernetes.executor.` instead of `spark.kubernetes.driver.`.
+
+For example, you can mount a dynamically-created persistent volume claim per executor by using `OnDemand` as a claim name and `storageClass` and `sizeLimit` options like the following. This is useful in case of [Dynamic Allocation](configuration.html#dynamic-allocation).

Review comment:

@dbtsai, what do you mean by the following?

> Currently, this doesn't support DA yet.

Since Apache Spark 3.0.0, dynamic allocation on K8s has been supported via shuffle data tracking. Moreover, this feature was developed for both the additional-large-disk requirement and the dynamic allocation scenario. For example, under dynamic allocation, executor IDs increase monotonically and indefinitely, so users cannot prepare pre-populated PVCs for them.
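To make the discussion concrete, here is a hedged sketch of how the documented `OnDemand` properties might be combined with dynamic allocation on a submission command line. The volume name `data`, the storage class `gp2`, the size `500Gi`, the API-server placeholder, and the example jar path are all illustrative choices, not values from this thread:

```shell
# Sketch only: each executor pod gets its own dynamically-created PVC,
# so no claims need to exist before submission.
# "data" is an arbitrary volume name; gp2/500Gi are example values.
spark-submit \
  --master k8s://https://<k8s-apiserver>:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=OnDemand \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.storageClass=gp2 \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.sizeLimit=500Gi \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.readOnly=false \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  local:///opt/spark/examples/jars/spark-examples.jar
```

Because `claimName=OnDemand` provisions a fresh claim per executor pod, this keeps working even as executor IDs grow without bound under dynamic allocation, which is exactly why pre-populated PVCs cannot be prepared in that scenario.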