openwhisk-commits mailing list archives

From dgr...@apache.org
Subject [incubator-openwhisk-deploy-kube] branch master updated: Enable persistence by default (#347)
Date Mon, 12 Nov 2018 01:30:08 GMT
This is an automated email from the ASF dual-hosted git repository.

dgrove pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-openwhisk-deploy-kube.git


The following commit(s) were added to refs/heads/master by this push:
     new fe16c55  Enable persistence by default (#347)
fe16c55 is described below

commit fe16c55f100e452e9cb24184370be6d125171d5d
Author: David Grove <dgrove-oss@users.noreply.github.com>
AuthorDate: Sun Nov 11 20:29:57 2018 -0500

    Enable persistence by default (#347)
    
    Consolidate options around persistence and enable it by default.
    Kubernetes on Docker for Mac has good support for dynamic provisioning of
    persistent volumes (via the hostpath provisioner), so enabling this by
    default is the less surprising option.  In particular, it allows an
    OpenWhisk deployment to survive controlled host system reboots and still
    be usable.
    
    Major reorganization of setup/install instructions to group by the
    different kinds of Kubernetes clusters.  I think this should result in
    a simpler install experience, since the Kubernetes specific details are
    better organized and distinct from the main flow which is common to all
    clusters.
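The dynamic-provisioning behavior described above is easy to verify before deploying: persistence works seamlessly only when the cluster has a designated default StorageClass. A quick check, assuming `kubectl` is already pointed at the target cluster:

```shell
# List StorageClasses; the default class is marked "(default)" next to its name.
# On Kubernetes in Docker for Mac this is typically the hostpath provisioner.
kubectl get storageclass
```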
---
 README.md                                          | 107 ++-----
 docs/configurationChoices.md                       |  50 ++--
 docs/ingress.md                                    | 333 ---------------------
 docs/k8s-aws.md                                    | 108 +++++++
 docs/k8s-dind-cluster.md                           |  45 ++-
 docs/k8s-docker-for-mac.md                         |   9 +-
 docs/k8s-google.md                                 |  74 +++++
 docs/k8s-ibm-public.md                             | 167 +++++++++++
 docs/k8s-minikube.md                               |   9 +-
 docs/k8s-technical-requirements.md                 |  23 +-
 docs/troubleshooting.md                            |   8 +
 helm/openwhisk/Chart.yaml                          |   2 +-
 helm/openwhisk/templates/couchdb-pod.yaml          |   4 +-
 helm/openwhisk/templates/couchdb-pvc.yaml          |  10 +-
 helm/openwhisk/templates/kafka-pod.yaml            |   4 +-
 helm/openwhisk/templates/kafka-pvc.yaml            |  10 +-
 helm/openwhisk/templates/redis-pod.yaml            |   6 +-
 helm/openwhisk/templates/redis-pvc.yaml            |   8 +-
 helm/openwhisk/templates/zookeeper-pod.yaml        |  25 +-
 helm/openwhisk/templates/zookeeper-pvc-data.yaml   |  19 ++
 .../openwhisk/templates/zookeeper-pvc-datalog.yaml |  19 ++
 helm/openwhisk/values.yaml                         |  26 +-
 tools/travis/build-helm.sh                         |   5 +
 23 files changed, 570 insertions(+), 501 deletions(-)

diff --git a/README.md b/README.md
index d4d4947..e00b5c4 100644
--- a/README.md
+++ b/README.md
@@ -24,9 +24,9 @@
 [![Join Slack](https://img.shields.io/badge/join-slack-9B69A0.svg)](http://slack.openwhisk.org/)
 
 This repository can be used to deploy OpenWhisk to Kubernetes.
-It contains Helm charts, documentation, and supporting
-configuration files and scripts that can be used to deploy OpenWhisk
-to both single-node and multi-node Kubernetes clusters.
+It contains Helm charts, documentation, and other supporting artifacts
+that can be used to deploy OpenWhisk to both single-node and
+multi-node Kubernetes clusters.
 
 # Table of Contents
 
@@ -80,19 +80,22 @@ provide detailed setup instructions for Windows.
 
 Minikube provides a Kubernetes cluster running inside a virtual
 machine (for example VirtualBox). It can be used on MacOS, Linux, or
-Windows to run OpenWhisk, but is somewhat more finicky than the
+Windows to run OpenWhisk, but is somewhat less flexible than the
 docker-in-docker options described above. For details on setting up
-Minikube, see these [instructions](docs/k8s-minikube.md).
+Minikube, see these [setup instructions](docs/k8s-minikube.md).
 
 ### Using a Kubernetes cluster from a cloud provider
 
 You can also provision a Kubernetes cluster from a cloud provider,
 subject to the cluster meeting the [technical
-requirements](docs/k8s-technical-requirements.md).  Managed
-Kubernetes services from IBM (IKS), Google (GKE), and Amazon (EKS) are
-known to work for running OpenWhisk and are all documented and
-supported by this project.  We would welcome contributions of
-documentation for Azure (AKS) and any other public cloud providers.
+requirements](docs/k8s-technical-requirements.md).  We have
+detailed documentation on using Kubernetes clusters from the following
+major cloud providers:
+* [IBM (IKS)](docs/k8s-ibm-public.md)
+* [Google (GKE)](docs/k8s-google.md)
+* [Amazon (EKS)](docs/k8s-aws.md)
+
+We would welcome contributions of documentation for Azure (AKS) and any other public cloud providers.
 
 ## Helm
 
@@ -106,7 +109,7 @@ For detailed instructions on installing Helm, see these [instructions](docs/helm
 
 In short if you already have the `helm` cli installed on your development machine,
 you will need to execute these two commands and wait a few seconds for the
-`tiller-deploy` pod to be in the `Running` state.
+`tiller-deploy` pod in the `kube-system` namespace to be in the `Running` state.
 ```shell
 helm init
 kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
@@ -124,7 +127,7 @@ There are four deployment steps that are described in more
 detail below in the rest of this section.
 1. [Initial cluster setup](#initial-setup). You will create a
 Kubernetes namespace into which to deploy OpenWhisk and label the
-Kubernetes worker nodes to be used to execute user actions.
+Kubernetes worker nodes to indicate their intended usage by OpenWhisk.
 2. [Customize the deployment](#customize-the-deployment). You will
 create a `mycluster.yaml` that specifies key facts about your
 Kubernetes cluster and the OpenWhisk configuration you wish to
@@ -155,79 +158,33 @@ you want to be an invoker, execute
 $ kubectl label nodes <INVOKER_NODE_NAME> openwhisk-role=invoker
 ```
 
+For optimal scheduling of pods on a multi-node cluster, you can
+optionally label non-invoker worker nodes with `openwhisk-role=core`
+to indicate nodes which should run the OpenWhisk controller, kafka,
+zookeeper, etc. and `openwhisk-role=edge` to indicate the node which
+should run the `nginx` frontdoor to OpenWhisk.
+
 ## Customize the Deployment
 
 You must create a `mycluster.yaml` file to record key aspects of your
 Kubernetes cluster that are needed to configure the deployment of
-OpenWhisk to your cluster. Most of the needed configuration is related
-to networking and is described in the [ingress discussion](./docs/ingress.md).
-
-Beyond specifying the ingress, the `mycluster.yaml` file is also used
+OpenWhisk to your cluster. For details, see the documentation
+appropriate to your Kubernetes cluster:
+* [Docker for Mac](docs/k8s-docker-for-mac.md#configuring-openwhisk)
+* [kubeadm-dind-cluster](docs/k8s-dind-cluster.md#configuring-openwhisk)
+* [Minikube](docs/k8s-minikube.md#configuring-openwhisk)
+* [IBM (IKS)](docs/k8s-ibm-public.md#configuring-openwhisk)
+* [Google (GKE)](docs/k8s-google.md#configuring-openwhisk)
+* [Amazon (EKS)](docs/k8s-aws.md#configuring-openwhisk)
+
+Beyond the Kubernetes cluster specific configuration information,
+the `mycluster.yaml` file is also used
 to customize your OpenWhisk deployment by enabling optional features
 and controlling the replication factor of the various microservices
 that make up the OpenWhisk implementation. See the [configuration
 choices documentation](./docs/configurationChoices.md) for a
 discussion of the primary options.
 
-### Sample mycluster.yaml for Docker for Mac
-
-Here is a sample file for a Docker for Mac deployment where
-`kubectl describe nodes | grep InternalIP` returns 192.168.65.3 and port 31001 is available to
-be used on your host machine.
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 192.168.65.3
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-```
-
-### Sample mycluster.yaml for kubeadm-dind-cluster.sh
-
-Here is a sample file for a kubeadm-dind-cluster where `kubectl describe node kube-node-1 |
-grep InternalIP` returns 10.192.0.3 and port 31001 is available to
-be used on your host machine.
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 10.192.0.3
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-
-invoker:
-  containerFactory:
-    dind: true
-```
-
-Note the stanza setting `invoker.containerFactory.dind` to `true`.
-This stanza is required; failure to override the default of `false` inherited
-from `helm/openwhisk/values.yaml` will result in a deployment
-of OpenWhisk with no healthy invokers (and thus a deployment that
-will not execute any user actions).
-
-### Sample mycluster.yaml for Minikube
-
-Here is a sample file appropriate for a Minikube cluster where
-`minikube ip` returns `192.168.99.100` and port 31001 is available to
-be used on your host machine.
-
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 192.168.99.100
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-```
-
 ## Deploy With Helm
 
 Deployment can be done by using the following single command:
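The command itself lies just outside this hunk; for context, the Helm v2-era deploy command used by this repository was approximately as follows (chart path and release name are recalled from the README of this period and may differ in your checkout):

```shell
# Approximate single-command deployment (Helm v2 syntax; verify against README.md):
helm install ./helm/openwhisk --namespace=openwhisk --name=owdev -f mycluster.yaml
```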
diff --git a/docs/configurationChoices.md b/docs/configurationChoices.md
index 1ef8204..9b2a847 100644
--- a/docs/configurationChoices.md
+++ b/docs/configurationChoices.md
@@ -113,33 +113,35 @@ Optionally, if including this chart as a dependency of another chart where kafka
 
 ### Persistence
 
-The couchdb, zookeeper, kafka, and redis microservices can each be
-configured to use persistent volumes to store their data. Enabling
-persistence may allow the system to survive failures/restarts of these
-components without a complete loss of application state. By default,
-none of these services is configured to use persistent volumes.  To
-enable persistence, you can add stanzas like the following to your
-`mycluster.yaml` to enable persistence and to request an appropriately
-sized volume.
-
+Several of the OpenWhisk components that are deployed by the Helm
+chart utilize PersistentVolumes to store their data.  This enables
+that data to survive failures/restarts of those components without a
+complete loss of application state.  To support this, the
+couchdb, zookeeper, kafka, and redis deployments all generate
+PersistentVolumeClaims that must be satisfied to enable their pods to
+be scheduled.  If your Kubernetes cluster is properly configured to support
+[Dynamic Volume Provision](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/),
+including having a DefaultStorageClass admission controller and a
+designated default StorageClass, then this will all happen seamlessly.
+
+If your cluster is not properly configured, then you will need to
+manually create the necessary PersistentVolumes when deploying the
+Helm chart. In this case, you should also disable the use of dynamic
+provisioning by the Helm chart by adding the following stanza to your
+mycluster.yaml:
 ```yaml
-redis:
+k8s:
+  persistence:
+    useDynamicProvisioning: false
+```
+
+You may disable persistence entirely by adding the following stanza to
+your mycluster.yaml:
+```yaml
+k8s:
   persistence:
-    enabled: true
-    size: 256Mi
-    storageClass: default
+    enabled: false
 ```
-If you are deploying to `minikube`, use the storageClass `standard`.
-If you are deploying on a managed Kubernetes cluster, check the cloud
-provider's documentation to determine the appropriate `storageClass`
-and `size` to request.
-
-Note that the Helm charts do not explicitly create the
-PersistentVolumes to satisfy the PersistentVolumeClaims they
-instantiate. We assume that either your cluster is configured to
-support [Dynamic Volume Provision](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)
-or that you will manually create any necessary PersistentVolumes when
-deploying the Helm chart.
 
 ### Invoker Container Factory
 
diff --git a/docs/ingress.md b/docs/ingress.md
deleted file mode 100644
index 34f26f0..0000000
--- a/docs/ingress.md
+++ /dev/null
@@ -1,333 +0,0 @@
-<!--
-#
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
--->
-
-Ingress
--------
-
-Defining a Kubernetes Ingress is what makes the OpenWhisk system you
-are going to deploy available outside of your Kubernetes cluster. When
-you select an ingress method, you are determining what values to use
-for the `whisk.ingress` stanza of your `mycluster.yaml` file that you
-will use in the `helm install` command.  You will need to define
-values for at least `whisk.ingress.type` and `whisk.ingress.apiHostName`
-and `whisk.ingress.apiHostPort`.
-
-Unfortunately, the exact details of configuring an Ingress can vary
-across cloud providers.  The detailed instructions describe multiple
-possible Ingress configurations with specific details for some public
-cloud providers.  We welcome contributions from the community to
-describe how to configure Ingress for additional cloud providers.
-
-If you are deploying on minikube, use the NodePort instructions below.
-
-# NodePort
-
-NodePort is the simplest type of Ingress and is suitable for use with
-minikube and single node clusters that do not support more advanced
-ingress options.  Deploying a NodePort ingress will expose a port on
-each Kubernetes worker node for OpenWhisk's nginx service.
-
-In this Ingress, TLS termination will be handled by OpenWhisk's
-`nginx` service and will use self-signed certificates.  You will need
-to invoke `wsk` with the `-i` command line argument to bypass
-certificate checking.
-
-## Setting up NodePort on minikube
-
-First,  obtain the IP address of the single Kubernetes worker node.
-```shell
-minikube ip
-```
-This will return an ip address, for example `192.168.99.100`.
-
-Next pick an unassigned port (eg 31001) and define `mycluster.yaml` as
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 192.168.99.100
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-```
-
-## Setting up NodePort on Kubernetes in Docker for Mac
-
-First,  obtain the IP address of the single Kubernetes worker node.
-```shell
-kubectl describe nodes | grep InternalIP
-```
-This should produce output like: `InternalIP:  192.168.65.3`
-
-Next pick an unassigned port (eg 31001) and define `mycluster.yaml` as
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 192.168.65.3
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-```
-
-## Setting up NodePort using kubadm-dind-cluster
-
-Obtain the IP address of one of the two Kubernetes worker nodes using
-the command below.  Although the nginx NodePort service is actually
-available on both of the nodes, by using the node which you labelled
-with `openwhisk-role=core` as your api-host you can cut 1 hop
-out of the network path. So, if you label `kube-node-1` as your
-core node, pick `kube-node-1` as your api_host.
-```shell
-kubectl describe node kube-node-1 | grep InternalIP
-```
-This should produce output like: `InternalIP:  10.192.0.3`
-
-Next pick an unassigned port (eg 31001) and define `mycluster.yaml` as
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: 10.192.0.3
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-```
-
-## Setting up NodePort on an IBM Cloud Lite cluster
-
-The only available ingress method for an IBM Cloud Lite cluster is to
-use a NodePort. Obtain the Public IP address of the sole worker node
-by using the command
-```shell
-bx cs workers <my-cluster>
-```
-Then define `mycluster.yaml` as
-```yaml
-whisk:
-  ingress:
-    type: NodePort
-    apiHostName: YOUR_WORKERS_PUBLIC_IP_ADDR
-    apiHostPort: 31001
-
-nginx:
-  httpsNodePort: 31001
-```
-
-# Standard
-
-Many cloud providers will support creating a Kubernetes Ingress that
-may offer additional capabilities features such as TLS termination,
-load balancing, and other advanced features. We will call this a
-`standard` ingress and provide a parameterized ingress.yaml as part of
-the Helm chart that will create it using cloud-provider specific
-parameters from your `mycluster.yaml`. Generically, your
-`mycluster.yaml`'s ingress section will look something like:
-```yaml
-whisk:
-  ingress:
-    apiHostName: *<domain>*
-    apiHostPort: 443
-    apiHostProto: https
-    type: standard
-    domain: *<domain>*
-    tls:
-      enabled: *<true or false>*
-      secretenabled: *<true or false>*
-      createsecret: *<true or false>*
-      secretname: *<tlssecretname>*
-      *<additional cloud-provider-specific key/value pairs>*
-    annotations:
-      *<optional list of cloud-provider-specific key/value pairs>*
-```
-
-Note that if you can setup an ingress that does not use self-signed
-certificates for TLS termination you will be able to use `wsk` instead
-of `wsk -i` for cli operations.
-
-## IBM Cloud standard cluster
-
-This cluster type does not use self-signed certificates for TLS
-termination and can be configured with additional annotations to
-fine tune ingress performance.
-
-First, determine the values for <domain> and <ibmtlssecret> for
-your cluster by running the command:
-```
-bx cs cluster-get <mycluster>
-```
-The CLI output will look something like
-```
-bx cs cluster-get <mycluster>
-Retrieving cluster <mycluster>...
-OK
-Name:    <mycluster>
-ID:    b9c6b00dc0aa487f97123440b4895f2d
-Created:  2017-04-26T19:47:08+0000
-State:    normal
-Master URL:  https://169.57.40.165:1931
-Ingress subdomain:  <domain>
-Ingress secret:  <ibmtlssecret>
-Workers:  3
-```
-
-Now define `mycluster.yaml` as below (substituting the real values for
-`<domain>` and `<ibmtlssecret>`).
-```yaml
-whisk:
-  ingress:
-    apiHostName: <domain>
-    apiHostPort: 443
-    apiHostProto: https
-    type: standard
-    domain: <domain>
-    tls:
-      enabled: true
-      secretenabled: true
-      createsecret: false
-      secretname: <ibmtlssecret>
-    annotations:
-      # A blocking request is held open by the controller for slightly more than 60 seconds
-      # before it is responded to with HTTP status code 202 (accepted) and closed.
-      # Set to 75s to be on the safe side.
-      # See https://console.bluemix.net/docs/containers/cs_annotations.html#proxy-connect-timeout
-      # See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
-      ingress.bluemix.net/proxy-read-timeout: "75s"
-
-      # Allow up to 50 MiB body size to support creation of large actions and large
-      # parameter sizes.
-      # See https://console.bluemix.net/docs/containers/cs_annotations.html#client-max-body-size
-      # See http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
-      ingress.bluemix.net/client-max-body-size: "size=50m"
-
-      # Add the request_id, generated by nginx, to the request against the controllers. This id will be used as tid there.
-      # https://console.bluemix.net/docs/containers/cs_annotations.html#proxy-add-headers
-      ingress.bluemix.net/proxy-add-headers: |
-        serviceName=controller {
-          'X-Request-ID' $request_id;
-        }
-
-```
-
-## Google Cloud with nginx ingress
-
-This type of installation allows the same benefits as the IBM Cloud standard cluster.
-
-According to your nginx ingress settings you can define a <domain> value of your choice. Check the official Google Cloud documentation here: https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip. As stated you can create a domain of the type: `openwhisk.<your-chosen-dns-name>.com`
-
-You can choose to create a tls secret for that <domain> and provide values for <tlscrt> and <tlskey> in base64.
-
-To generate the values for <tlscrt> and <tlskey> you can use the openssl tool:
-
-```
-openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
-cat tls.key | base64
-cat tls.crt | base64
-```
-
-Now define `mycluster.yaml` as below:
-
-```yaml
-whisk:
-  ingress:
-    apiHostName: <domain>
-    apiHostPort: 443
-    apiHostProto: https
-    type: standard
-    domain: <domain>
-    tls:
-      enabled: true
-      secretenabled: true
-      createsecret: true
-      secretname: openwhisk-ingress-tls-secret
-      secrettype: kubernetes.io/tls
-      crt: <tlscrt>
-      key: <tlskey>
-    annotations:
-      kubernetes.io/ingress.class: nginx
-      kubernetes.io/tls-acme: true
-      nginx.ingress.kubernetes.io/proxy-body-size: 0
-```
-
-## Additional cloud providers
-
-Please submit Pull Requests with instructions for configuing the
-`standard` ingress for other cloud providers.
-
-# LoadBalancer
-
-AWS's Elastic Kubernetes Service (EKS) does not support the standard
-ingress type.  Instead, it relies on provisioning Elastic Load
-Balancers (ELBs) outside of the EKS cluster to direct traffic to
-exposed services running in the cluster.  Because the `wsk` cli
-expects be able to use TLS to communicate securely with the OpenWhisk
-server, you will first need to ensure that you have a certificate
-available for your ELB instance to use in AWS's IAM service. For
-development and testing purposes, you can use a self-signed
-certificate (for example the `openwhisk-server-cert.pem` and
-`openwhisk-server-key.pem` that are generated when you build OpenWhisk
-from source and can be found in the
-`$OPENWHISK_HOME/ansible/roles/nginx/files` directory. Upload these to
-IAM using the aws cli:
-```shell
-aws iam upload-server-certificate --server-certificate-name ow-self-signed --certificate-body file://openwhisk-server-cert.pem --private-key file://openwhisk-server-key.pem
-```
-Verify that the upload was successful by using the command:
-```shell
-aws iam list-server-certificates
-```
-A typical output would be as shown below
-```
-{
-    "ServerCertificateMetadataList": [
-        {
-            "ServerCertificateId": "ASCAJ4HPCCVA65ZHD5TFQ",
-            "ServerCertificateName": "ow-self-signed",
-            "Expiration": "2019-10-01T20:50:02Z",
-            "Path": "/",
-            "Arn": "arn:aws:iam::12345678901:server-certificate/ow-self-signed",
-            "UploadDate": "2018-10-01T21:27:47Z"
-        }
-    ]
-}
-```
-Add the following to your mycluster.yaml, using your certificate's Arn
-instead of the example one:
-```yaml
-whisk:
-  ingress:
-    type: LoadBalancer
-    annotations:
-      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
-      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:iam::12345678901:server-certificate/ow-self-signed
-```
-
-Shortly after you deploy your helm chart, an ELB should be
-automatically created. You will can determine its hostname by issuing
-the command `kubectl get services  -o wide`. Use the value in the
-the EXTERNAL-IP column for the nginx service and port 443 to define
-your wsk apihost.
-
-NOTE: It may take several minutes after the ELB is reported as being
-available before the hostname is actually properly registered in DNS.
-Be patient and keep trying until you stop getting `no such host`
-errors from `wsk` when attempting to access it.
diff --git a/docs/k8s-aws.md b/docs/k8s-aws.md
new file mode 100644
index 0000000..de4cd79
--- /dev/null
+++ b/docs/k8s-aws.md
@@ -0,0 +1,108 @@
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+# Amazon EKS for OpenWhisk
+
+## Overview
+
+## Initial setup
+
+### Creating the Kubernetes Cluster
+
+Follow Amazon's instructions to provision your cluster.
+
+### Configuring OpenWhisk
+
+AWS's Elastic Kubernetes Service (EKS) does not support standard Kubernetes
+ingress.  Instead, it relies on provisioning Elastic Load
+Balancers (ELBs) outside of the EKS cluster to direct traffic to
+exposed services running in the cluster.  Because the `wsk` cli
+expects to be able to use TLS to communicate securely with the OpenWhisk
+server, you will first need to ensure that you have a certificate
+available for your ELB instance to use in AWS's IAM service. For
+development and testing purposes, you can use a self-signed
+certificate (for example the `openwhisk-server-cert.pem` and
+`openwhisk-server-key.pem` that are generated when you build OpenWhisk
+from source and can be found in the
+`$OPENWHISK_HOME/ansible/roles/nginx/files` directory). Upload these to
+IAM using the aws cli:
+```shell
+aws iam upload-server-certificate --server-certificate-name ow-self-signed --certificate-body file://openwhisk-server-cert.pem --private-key file://openwhisk-server-key.pem
+```
+Verify that the upload was successful by using the command:
+```shell
+aws iam list-server-certificates
+```
+A typical output would be as shown below
+```
+{
+    "ServerCertificateMetadataList": [
+        {
+            "ServerCertificateId": "ASCAJ4HPCCVA65ZHD5TFQ",
+            "ServerCertificateName": "ow-self-signed",
+            "Expiration": "2019-10-01T20:50:02Z",
+            "Path": "/",
+            "Arn": "arn:aws:iam::12345678901:server-certificate/ow-self-signed",
+            "UploadDate": "2018-10-01T21:27:47Z"
+        }
+    ]
+}
+```
+Add the following to your mycluster.yaml, using your certificate's Arn
+instead of the example one:
+```yaml
+whisk:
+  ingress:
+    type: LoadBalancer
+    annotations:
+      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
+      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:iam::12345678901:server-certificate/ow-self-signed
+
+k8s:
+  persistence:
+    enabled: false
+```
+
+For ease of deployment, you should disable persistent volumes because
+EKS does not come with an automatically configured default
+StorageClass. Alternatively, you may choose to leave persistence
+enabled and manually create the necessary persistent volumes using
+AWS/EKS instructions to do so.
+
+Shortly after you deploy your helm chart, an ELB should be
+automatically created. You can determine its hostname by issuing
+the command `kubectl get services -o wide`. Use the value in the
+EXTERNAL-IP column for the nginx service and port 443 to define
+your wsk apihost.
+
+NOTE: It may take several minutes after the ELB is reported as being
+available before the hostname is actually properly registered in DNS.
+Be patient and keep trying until you stop getting `no such host`
+errors from `wsk` when attempting to access it.
+
+## Hints and Tips
+
+## Limitations
+
+Without additional configuration to enable persistent volumes, EKS is
+only appropriate for development and testing purposes.  It is not
+recommended for production deployments of OpenWhisk.
+
+If you used a self-signed certificate, you will need to invoke `wsk`
+with the `-i` command line argument to bypass certificate checking.
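The EXTERNAL-IP step above can be scripted. A sketch using a hard-coded sample line from `kubectl get services -o wide` (the ELB hostname is hypothetical) to show the parsing, followed by the CLI configuration it feeds:

```shell
# Sample output line for the nginx service (hypothetical ELB hostname):
line='nginx  LoadBalancer  10.100.0.1  a1b2c3.us-east-1.elb.amazonaws.com  443:30043/TCP'

# The EXTERNAL-IP is the fourth whitespace-separated field.
apihost="$(echo "$line" | awk '{print $4}'):443"
echo "$apihost"

# Then point the CLI at it; -i is needed with a self-signed certificate:
#   wsk property set --apihost "$apihost"
#   wsk -i list
```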
diff --git a/docs/k8s-dind-cluster.md b/docs/k8s-dind-cluster.md
index ec45911..270ab57 100644
--- a/docs/k8s-dind-cluster.md
+++ b/docs/k8s-dind-cluster.md
@@ -75,21 +75,11 @@ kubectl label node kube-worker-2 openwhisk-role=invoker
 
 ### Configuring OpenWhisk
 
-Because the container logs for docker containers running on the
-virtual worker nodes are in a non-standard location, you must
-configure the invoker to look for user action logs in a different
-path. You do that by adding the following required stanza to your
-mycluster.yaml.
-```yaml
-invoker:
-  containerFactory:
-    dind: true
-```
 
 You will be using a NodePort ingress to access OpenWhisk. Assuming
-w`kubectl describe node kube-node-1 | grep InternalIP` returns 10.192.0.3
-and port 31001 is available to be used on your host machine, you can
-add the following stanzas of to your mycluster.yaml:
+`kubectl describe node kube-node-1 | grep InternalIP` returns 10.192.0.3
+and port 31001 is available to be used on your host machine, a
+mycluster.yaml for a standard deployment of OpenWhisk would be:
 ```yaml
 whisk:
   ingress:
@@ -99,18 +89,41 @@ whisk:
 
 nginx:
   httpsNodePort: 31001
+
+invoker:
+  containerFactory:
+    dind: true
+
+k8s:
+  persistence:
+    enabled: false
 ```
 
+Note the stanza setting `invoker.containerFactory.dind` to true. This
+is needed because the logs for docker containers running on the
+virtual worker nodes are in a non-standard location, requiring special
+configuration of OpenWhisk's invoker pods. Failure to set this
+variable when running on kubeadm-dind-cluster will result in an
+OpenWhisk deployment that cannot execute user actions.
+
+For ease of deployment, you should also disable persistent volumes
+because kubeadm-dind-cluster does not configure a default
+StorageClass.
+
 ## Limitations
 
 Using kubeadm-dind-cluster is only appropriate for development and
 testing purposes.  It is not recommended for production deployments of
 OpenWhisk.
 
+Without enabling persistence, it is not possible to restart the
+Kubernetes cluster without also re-installing Helm and OpenWhisk.
+
+TLS termination will be handled by OpenWhisk's `nginx` service and
+will use self-signed certificates.  You will need to invoke `wsk` with
+the `-i` command line argument to bypass certificate checking.
+
 Unlike using Kubernetes with Docker for Mac 18.06 and later, only the
 virtual master/worker nodes are visible to Docker on the host system. The
 individual pods running the OpenWhisk system are only visible using
 `kubectl` and not directly via host Docker commands.
-
-There does not appear to be a reliable way to restart the Kubernetes
-cluster without also re-installing Helm and OpenWhisk.
diff --git a/docs/k8s-docker-for-mac.md b/docs/k8s-docker-for-mac.md
index 0b37480..6159c7d 100644
--- a/docs/k8s-docker-for-mac.md
+++ b/docs/k8s-docker-for-mac.md
@@ -51,8 +51,8 @@ might also have installed on your machine.  Finally, pick the
 
 You will be using a NodePort ingress to access OpenWhisk. Assuming
 `kubectl describe nodes | grep InternalIP` returns 192.168.65.3 and
-port 31001 is available to be used on your host machine, you can add
-the following stanzas of to your mycluster.yaml:
+port 31001 is available to be used on your host machine, a
+mycluster.yaml for a standard deployment of OpenWhisk would be:
 ```yaml
 whisk:
   ingress:
@@ -85,6 +85,10 @@ Using Kubernetes in Docker for Mac is only appropriate for development
 and testing purposes.  It is not recommended for production
 deployments of OpenWhisk.
 
+TLS termination will be handled by OpenWhisk's `nginx` service and
+will use self-signed certificates.  You will need to invoke `wsk` with
+the `-i` command line argument to bypass certificate checking.
+
 The docker network is not exposed to the host on MacOS. However, the
 exposed ports for NodePort services are forwarded from localhost.
 Therefore you must use different host names to connect to OpenWhisk
@@ -92,3 +96,4 @@ from outside the cluster (with the `wsk` cli) and from inside the
 cluster (in `mycluster.yaml`).  Continuing the example from above,
 when setting the `--apihost` for the `wsk` cli, you would use
 `localhost:31001`.
+
diff --git a/docs/k8s-google.md b/docs/k8s-google.md
new file mode 100644
index 0000000..c6970a3
--- /dev/null
+++ b/docs/k8s-google.md
@@ -0,0 +1,74 @@
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+# Google GKE for OpenWhisk
+
+## Overview
+
+## Initial setup
+
+### Creating the Kubernetes Cluster
+
+Follow Google's instructions to provision your cluster.
+
+### Configuring OpenWhisk
+
+We recommend using an nginx ingress when running OpenWhisk on GKE.
+
+Depending on your nginx ingress settings, you can define a <domain>
+value of your choice; see the official Google Cloud documentation on
+configuring a domain name with a static IP:
+https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip.
+For example, you could use a domain of the form
+`openwhisk.<your-chosen-dns-name>.com`.
+
+You can choose to create a TLS secret for that <domain> and provide
+values for <tlscrt> and <tlskey> in base64.
+
+To generate the values for <tlscrt> and <tlskey>, you can use the
+openssl tool:
+
+```
+openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
+cat tls.key | base64
+cat tls.crt | base64
+```
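The commands above prompt interactively for certificate fields and emit wrapped base64. A non-interactive sketch (the CN `openwhisk.example.com` is a placeholder for your <domain>) that also strips the base64 line wrapping so the values can be pasted directly into mycluster.yaml:

```shell
# Generate a self-signed certificate without interactive prompts.
# The subject CN is a placeholder; substitute your real <domain>.
openssl req -newkey rsa:2048 -nodes -subj "/CN=openwhisk.example.com" \
  -keyout tls.key -x509 -days 365 -out tls.crt

# Produce single-line base64 values for <tlscrt> and <tlskey>.
TLS_CRT=$(base64 < tls.crt | tr -d '\n')
TLS_KEY=$(base64 < tls.key | tr -d '\n')
printf 'crt: %s\nkey: %s\n' "$TLS_CRT" "$TLS_KEY"
```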
+
+Now define `mycluster.yaml` as below:
+
+```yaml
+whisk:
+  ingress:
+    apiHostName: <domain>
+    apiHostPort: 443
+    apiHostProto: https
+    type: standard
+    domain: <domain>
+    tls:
+      enabled: true
+      secretenabled: true
+      createsecret: true
+      secretname: openwhisk-ingress-tls-secret
+      secrettype: kubernetes.io/tls
+      crt: <tlscrt>
+      key: <tlskey>
+    annotations:
+      kubernetes.io/ingress.class: nginx
+      kubernetes.io/tls-acme: "true"
+      nginx.ingress.kubernetes.io/proxy-body-size: "0"
+```
+
+## Hints and Tips
+
+
+## Limitations
+
diff --git a/docs/k8s-ibm-public.md b/docs/k8s-ibm-public.md
new file mode 100644
index 0000000..f7f9820
--- /dev/null
+++ b/docs/k8s-ibm-public.md
@@ -0,0 +1,167 @@
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+# IBM IKS for OpenWhisk
+
+## Overview
+
+IBM provides both a "Lite" and a "Standard" Kubernetes offering in its
+public cloud Kubernetes service (IKS). These differ in capabilities,
+so they are described separately below.
+
+## Initial setup
+
+### Creating the Kubernetes Cluster
+
+Follow IBM's instructions to provision your cluster.
+
+### Configuring OpenWhisk
+
+####  IBM Cloud Standard cluster
+
+An IBM Cloud Standard cluster has full support for TLS
+and can be configured with additional annotations to
+fine-tune ingress performance.
+
+First, determine the values for <domain> and <ibmtlssecret> for
+your cluster by running the command:
+```
+bx cs cluster-get <mycluster>
+```
+The CLI output will look something like
+```
+bx cs cluster-get <mycluster>
+Retrieving cluster <mycluster>...
+OK
+Name:    <mycluster>
+ID:    b9c6b00dc0aa487f97123440b4895f2d
+Created:  2017-04-26T19:47:08+0000
+State:    normal
+Master URL:  https://169.57.40.165:1931
+Ingress subdomain:  <domain>
+Ingress secret:  <ibmtlssecret>
+Workers:  3
+```
+
+Now define `mycluster.yaml` as below (substituting the real values for
+`<domain>` and `<ibmtlssecret>`).
+```yaml
+whisk:
+  ingress:
+    apiHostName: <domain>
+    apiHostPort: 443
+    apiHostProto: https
+    type: standard
+    domain: <domain>
+    tls:
+      enabled: true
+      secretenabled: true
+      createsecret: false
+      secretname: <ibmtlssecret>
+    annotations:
+      # A blocking request is held open by the controller for slightly more than 60 seconds
+      # before it is responded to with HTTP status code 202 (accepted) and closed.
+      # Set to 75s to be on the safe side.
+      # See https://console.bluemix.net/docs/containers/cs_annotations.html#proxy-connect-timeout
+      # See http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout
+      ingress.bluemix.net/proxy-read-timeout: "75s"
+
+      # Allow up to 50 MiB body size to support creation of large actions and large
+      # parameter sizes.
+      # See https://console.bluemix.net/docs/containers/cs_annotations.html#client-max-body-size
+      # See http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
+      ingress.bluemix.net/client-max-body-size: "size=50m"
+
+      # Add the request_id, generated by nginx, to the request against the controllers. This id will be used as tid there.
+      # https://console.bluemix.net/docs/containers/cs_annotations.html#proxy-add-headers
+      ingress.bluemix.net/proxy-add-headers: |
+        serviceName=controller {
+          'X-Request-ID' $request_id;
+        }
+
+k8s:
+  persistence:
+    defaultStorageClass: default
+```
+
+IKS does not provide a properly configured DefaultStorageClass;
+instead, you need to tell the Helm chart to use the `default`
+StorageClassName as shown above.
+
+####  IBM Cloud Lite cluster
+
+The only available ingress method for an IBM Cloud Lite cluster is to
+use a NodePort. Obtain the public IP address of the sole worker node
+by using the command
+```shell
+bx cs workers <my-cluster>
+```
+Then define `mycluster.yaml` as
+```yaml
+whisk:
+  ingress:
+    type: NodePort
+    apiHostName: YOUR_WORKERS_PUBLIC_IP_ADDR
+    apiHostPort: 31001
+
+nginx:
+  httpsNodePort: 31001
+
+k8s:
+  persistence:
+    defaultStorageClass: default
+```
+
+IKS does not provide a properly configured DefaultStorageClass;
+instead, you need to tell the Helm chart to use the `default`
+StorageClassName as shown above.
+
+## Hints and Tips
+
+On IBM Standard clusters, you can configure OpenWhisk to integrate
+with platform logging and monitoring services following the general
+instructions for enabling these services for pods deployed on
+Kubernetes.
+
+## Limitations
+
+Using an IBM Cloud Lite cluster is only appropriate for development
+and testing purposes.  It is not recommended for production
+deployments of OpenWhisk.
+
+When using an IBM Cloud Lite cluster, TLS termination will be handled
+by OpenWhisk's `nginx` service and will use self-signed certificates.
+You will need to invoke `wsk` with the `-i` command line argument to
+bypass certificate checking.
+
+IBM's 1.11 and 1.12 Kubernetes clusters have switched to using
+`containerd` as the underlying container runtime system. This is not
+compatible with OpenWhisk's DockerContainerFactory. Therefore, you
+either need to provision a 1.10 cluster or use the
+KubernetesContainerFactory by adding the following to your
+`mycluster.yaml`:
+```yaml
+invoker:
+  containerFactory:
+    impl: kubernetes
+    kubernetes:
+      agent:
+        enabled: false
+```
+
diff --git a/docs/k8s-minikube.md b/docs/k8s-minikube.md
index 3895fff..85972c8 100644
--- a/docs/k8s-minikube.md
+++ b/docs/k8s-minikube.md
@@ -121,9 +121,8 @@ minikube ssh -- sudo ip link set docker0 promisc on
 
 You will be using a NodePort ingress to access OpenWhisk. Assuming
 `minikube ip` returns `192.168.99.100` and port 31001 is available to
-be used on your host machine,  you can add the following stanzas of to
-your mycluster.yaml:
-
+be used on your host machine, a
+mycluster.yaml for a standard deployment of OpenWhisk would be:
 ```yaml
 whisk:
   ingress:
@@ -141,6 +140,10 @@ Using Minikube is only appropriate for development and testing
 purposes.  It is not recommended for production deployments of
 OpenWhisk.
 
+TLS termination will be handled by OpenWhisk's `nginx` service and
+will use self-signed certificates.  You will need to invoke `wsk` with
+the `-i` command line argument to bypass certificate checking.
+
 You must remember to put the docker network in promiscuous mode via
 ```
 minikube ssh -- sudo ip link set docker0 promisc on
diff --git a/docs/k8s-technical-requirements.md b/docs/k8s-technical-requirements.md
index 04138db..448e60d 100644
--- a/docs/k8s-technical-requirements.md
+++ b/docs/k8s-technical-requirements.md
@@ -19,9 +19,22 @@
 
 # Technical Requirements for Kubernetes
 
-The Kubernetes cluster on which you are deploying OpenWhisk must meet the following requirements:
-* [Kubernetes](https://github.com/kubernetes/kubernetes) version 1.9+. However, version 1.9.4 will not work for OpenWhisk due to a bug with volume mount subpaths (see[[kubernetes-61076](https://github.com/kubernetes/kubernetes/issues/61076)]). This bug will surface as a failure when deploying the nginx container.
-* The ability to create Ingresses to make a Kubernetes service available outside of the cluster so you can actually use OpenWhisk.
-* If you enable persistence (see [docs/configurationChoices.md](./docs/configurationChoices.md)), either your cluster is configured to support [Dynamic Volume Provision](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) or you must manually create any necessary PersistentVolumes when deploying the Helm chart.
-* Endpoints of Kubernetes services must be able to loopback to themselves (the kubelet's `hairpin-mode` must not be `none`).
+The Kubernetes cluster on which you are deploying OpenWhisk must meet
+the following requirements:
+* [Kubernetes](https://github.com/kubernetes/kubernetes) version
+  1.9+. However, version 1.9.4 will not work for OpenWhisk due to a
+  bug with volume mount subpaths
+  (see [kubernetes-61076](https://github.com/kubernetes/kubernetes/issues/61076)). This
+  bug will surface as a failure when deploying the nginx container.
+* The ability to create Ingresses to make a Kubernetes service
+  available outside of the cluster so you can actually use OpenWhisk.
+* Unless you disable persistence (see
+  [docs/configurationChoices.md](./docs/configurationChoices.md)),
+  your cluster must either be configured to support [Dynamic Volume
+  Provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)
+  with a DefaultStorageClass admission controller enabled, or you
+  must manually create any necessary PersistentVolumes when deploying
+  the Helm chart.
+* Endpoints of Kubernetes services must be able to loopback to
+  themselves (the kubelet's `hairpin-mode` must not be `none`).
 
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 36d34e0..1756443 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -38,6 +38,14 @@ mounting`/sys/fs/cgroup`, `/run/runc`,`/var/lib/docker/containers`, or
 value in `helm/openwhisk/templates/_invoker-helpers.yaml` to match the host operating system
 running on your Kubernetes worker node.
 
+### Kafka, Redis, CouchDB, and Zookeeper pods stuck in Pending
+
+These pods all mount Volumes via PersistentVolumeClaims. If there is a
+misconfiguration related to the dynamic provisioning of
+PersistentVolumes, then these pods will not be scheduled.  See the
+Persistence section in the [configuration choices
+documentation](./configurationChoices.md) for more details.
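If your cluster does support dynamic provisioning, but only under a non-default StorageClass name, the chart's persistence settings can point at it explicitly; alternatively, persistence can be disabled to avoid creating PVCs at all. A sketch for mycluster.yaml (the class name `standard` is a placeholder):

```yaml
k8s:
  persistence:
    enabled: true                  # set to false to avoid PVCs entirely
    useDynamicProvisioning: true
    defaultStorageClass: standard  # placeholder; use a StorageClass that exists in your cluster
```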
+
 ### Controller and Invoker cannot connect to Kafka
 
 If services are having trouble connecting to Kafka, it may be that the
diff --git a/helm/openwhisk/Chart.yaml b/helm/openwhisk/Chart.yaml
index 16efa0b..71b6366 100644
--- a/helm/openwhisk/Chart.yaml
+++ b/helm/openwhisk/Chart.yaml
@@ -12,4 +12,4 @@ maintainers:
   - name: Apache OpenWhisk committers
     email: dev@openwhisk.apache.org
 tillerVersion: ">=2.10.0"
-kubeVersion: "1.10 - 1.11"
+kubeVersion: "1.10 - 1.11.*"
diff --git a/helm/openwhisk/templates/couchdb-pod.yaml b/helm/openwhisk/templates/couchdb-pod.yaml
index 7b68672..c86da48 100644
--- a/helm/openwhisk/templates/couchdb-pod.yaml
+++ b/helm/openwhisk/templates/couchdb-pod.yaml
@@ -49,12 +49,12 @@ spec:
               key: db_password
         - name: "NODENAME"
           value: "couchdb0"
-        {{- if .Values.db.persistence.enabled }}
+        {{- if .Values.k8s.persistence.enabled }}
         volumeMounts:
           - name: database-storage
             mountPath: /opt/couchdb/data
         {{- end }}
-      {{- if .Values.db.persistence.enabled }}
+      {{- if .Values.k8s.persistence.enabled }}
       volumes:
         - name: database-storage
           persistentVolumeClaim:
diff --git a/helm/openwhisk/templates/couchdb-pvc.yaml b/helm/openwhisk/templates/couchdb-pvc.yaml
index 9ae38d4..77c08b2 100644
--- a/helm/openwhisk/templates/couchdb-pvc.yaml
+++ b/helm/openwhisk/templates/couchdb-pvc.yaml
@@ -1,19 +1,19 @@
 # Licensed to the Apache Software Foundation (ASF) under one or more contributor
 # license agreements; and to You under the Apache License, Version 2.0.
 
-{{ if not .Values.db.external }}
-{{- if .Values.db.persistence.enabled }}
+{{- if and (not .Values.db.external) .Values.k8s.persistence.enabled }}
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: {{ .Values.db.persistence.pvcName | quote }}
   namespace: {{ .Release.Namespace | quote }}
 spec:
-  storageClassName: {{ .Values.db.persistence.storageClass }}
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+  storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
   accessModes:
-    - {{ .Values.db.persistence.accessMode }}
+    - ReadWriteOnce
   resources:
     requests:
       storage: {{ .Values.db.persistence.size }}
 {{- end }}
-{{ end }}
diff --git a/helm/openwhisk/templates/kafka-pod.yaml b/helm/openwhisk/templates/kafka-pod.yaml
index cabb74a..c2dacda 100644
--- a/helm/openwhisk/templates/kafka-pod.yaml
+++ b/helm/openwhisk/templates/kafka-pod.yaml
@@ -31,7 +31,7 @@ spec:
 {{ include "openwhisk.affinity.selfAntiAffinity" ( .Values.kafka.name | quote ) | indent 8 }}
       {{- end }}
 
-{{- if .Values.kafka.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
       volumes:
       - name: kafka-data
         persistentVolumeClaim:
@@ -45,7 +45,7 @@ spec:
       - name: {{ .Values.kafka.name | quote }}
         image: {{ .Values.kafka.image | quote }}
         imagePullPolicy: {{ .Values.kafka.imagePullPolicy | quote }}
-{{- if .Values.kafka.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
         volumeMounts:
         - mountPath: /kafka
           name: kafka-data
diff --git a/helm/openwhisk/templates/kafka-pvc.yaml b/helm/openwhisk/templates/kafka-pvc.yaml
index 3d18e00..f1a9a4a 100644
--- a/helm/openwhisk/templates/kafka-pvc.yaml
+++ b/helm/openwhisk/templates/kafka-pvc.yaml
@@ -1,19 +1,19 @@
 # Licensed to the Apache Software Foundation (ASF) under one or more contributor
 # license agreements; and to You under the Apache License, Version 2.0.
 
-{{- if not .Values.kafka.external }}
-{{- if .Values.kafka.persistence.enabled }}
+{{- if and (not .Values.kafka.external) .Values.k8s.persistence.enabled }}
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: {{ .Values.kafka.persistence.pvcName | quote }}
   namespace: {{ .Release.Namespace | quote }}
 spec:
-  storageClassName: {{ .Values.kafka.persistence.storageClass }}
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+  storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
   accessModes:
-    - {{ .Values.kafka.persistence.accessMode }}
+    - ReadWriteOnce
   resources:
     requests:
       storage: {{ .Values.kafka.persistence.size }}
 {{- end }}
-{{- end }}
diff --git a/helm/openwhisk/templates/redis-pod.yaml b/helm/openwhisk/templates/redis-pod.yaml
index 7a0100d..ea23cbe 100644
--- a/helm/openwhisk/templates/redis-pod.yaml
+++ b/helm/openwhisk/templates/redis-pod.yaml
@@ -28,14 +28,14 @@ spec:
 {{ include "openwhisk.affinity.selfAntiAffinity" ( .Values.redis.name | quote ) | indent 8 }}
       {{- end }}
 
-{{- if .Values.redis.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
       volumes:
       - name: redis-data
         persistentVolumeClaim:
           claimName: {{ .Values.redis.persistence.pvcName | quote }}
 {{- end }}
 
-{{- if .Values.redis.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
       initContainers:
       - name: redis-init
         image: busybox
@@ -55,7 +55,7 @@ spec:
         - name: redis
           imagePullPolicy: {{ .Values.redis.imagePullPolicy | quote }}
           image: {{ .Values.redis.image | quote }}
-{{- if .Values.redis.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
           volumeMounts:
           - mountPath: /data
             name: redis-data
diff --git a/helm/openwhisk/templates/redis-pvc.yaml b/helm/openwhisk/templates/redis-pvc.yaml
index 41e19af..52145a9 100644
--- a/helm/openwhisk/templates/redis-pvc.yaml
+++ b/helm/openwhisk/templates/redis-pvc.yaml
@@ -1,16 +1,18 @@
 # Licensed to the Apache Software Foundation (ASF) under one or more contributor
 # license agreements; and to You under the Apache License, Version 2.0.
 
-{{- if .Values.redis.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: {{ .Values.redis.persistence.pvcName | quote }}
   namespace: {{ .Release.Namespace | quote }}
 spec:
-  storageClassName: {{ .Values.redis.persistence.storageClass }}
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+  storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
   accessModes:
-    - {{ .Values.redis.persistence.accessMode }}
+    - ReadWriteOnce
   resources:
     requests:
       storage: {{ .Values.redis.persistence.size }}
diff --git a/helm/openwhisk/templates/zookeeper-pod.yaml b/helm/openwhisk/templates/zookeeper-pod.yaml
index 0e91c39..99431a6 100644
--- a/helm/openwhisk/templates/zookeeper-pod.yaml
+++ b/helm/openwhisk/templates/zookeeper-pod.yaml
@@ -35,6 +35,14 @@ spec:
         - name: zk-config
           configMap:
             name: {{ .Values.zookeeper.name | quote }}
+{{- if and .Values.k8s.persistence.enabled (eq (int .Values.zookeeper.replicaCount) 1) }}
+        - name: "{{- .Values.zookeeper.persistence.pvcName -}}-data"
+          persistentVolumeClaim:
+            claimName: "{{- .Values.zookeeper.persistence.pvcName -}}-data"
+        - name: "{{- .Values.zookeeper.persistence.pvcName -}}-datalog"
+          persistentVolumeClaim:
+            claimName: "{{- .Values.zookeeper.persistence.pvcName -}}-datalog"
+{{- end }}
 
       containers:
       - name: {{ .Values.zookeeper.name | quote }}
@@ -52,30 +60,35 @@ spec:
         volumeMounts:
         - mountPath: /conf
           name: zk-config
-{{- if .Values.zookeeper.persistence.enabled }}
+{{- if .Values.k8s.persistence.enabled }}
         - mountPath: {{ .Values.zookeeper.config.dataDir }}
           name: "{{- .Values.zookeeper.persistence.pvcName -}}-data"
         - mountPath: {{ .Values.zookeeper.config.dataLogDir }}
           name: "{{- .Values.zookeeper.persistence.pvcName -}}-datalog"
 {{- end }}
 
-{{- if .Values.zookeeper.persistence.enabled }}
+{{/* PVCs created by volumeClaimTemplates must be manually removed; only create them if we absolutely need them */}}
+{{- if and .Values.k8s.persistence.enabled (gt (int .Values.zookeeper.replicaCount) 1) }}
   volumeClaimTemplates:
   - metadata:
       name: "{{- .Values.zookeeper.persistence.pvcName -}}-data"
     spec:
-      storageClassName: {{ .Values.zookeeper.persistence.storageClass }}
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+      storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
       accessModes:
-        - {{ .Values.zookeeper.persistence.accessMode }}
+        - ReadWriteOnce
       resources:
         requests:
           storage: {{ .Values.zookeeper.persistence.size }}
   - metadata:
       name: "{{- .Values.zookeeper.persistence.pvcName -}}-datalog"
     spec:
-      storageClassName: {{ .Values.zookeeper.persistence.storageClass }}
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+      storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
       accessModes:
-        - {{ .Values.zookeeper.persistence.accessMode }}
+        - ReadWriteOnce
       resources:
         requests:
           storage: {{ .Values.zookeeper.persistence.size }}
diff --git a/helm/openwhisk/templates/zookeeper-pvc-data.yaml b/helm/openwhisk/templates/zookeeper-pvc-data.yaml
new file mode 100644
index 0000000..b928dd7
--- /dev/null
+++ b/helm/openwhisk/templates/zookeeper-pvc-data.yaml
@@ -0,0 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more contributor
+# license agreements; and to You under the Apache License, Version 2.0.
+
+{{ if and (not .Values.zookeeper.external) (and .Values.k8s.persistence.enabled (eq (int .Values.zookeeper.replicaCount) 1 )) }}
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: "{{- .Values.zookeeper.persistence.pvcName -}}-data"
+  namespace: {{ .Release.Namespace | quote }}
+spec:
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+  storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: {{ .Values.zookeeper.persistence.size }}
+{{- end }}
diff --git a/helm/openwhisk/templates/zookeeper-pvc-datalog.yaml b/helm/openwhisk/templates/zookeeper-pvc-datalog.yaml
new file mode 100644
index 0000000..a11a88c
--- /dev/null
+++ b/helm/openwhisk/templates/zookeeper-pvc-datalog.yaml
@@ -0,0 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more contributor
+# license agreements; and to You under the Apache License, Version 2.0.
+
+{{ if and (not .Values.zookeeper.external) (and .Values.k8s.persistence.enabled (eq (int .Values.zookeeper.replicaCount) 1 )) }}
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: "{{- .Values.zookeeper.persistence.pvcName -}}-datalog"
+  namespace: {{ .Release.Namespace | quote }}
+spec:
+{{- if .Values.k8s.persistence.useDynamicProvisioning }}
+  storageClassName: {{ .Values.k8s.persistence.defaultStorageClass }}
+{{- end }}
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: {{ .Values.zookeeper.persistence.size }}
+{{- end }}
diff --git a/helm/openwhisk/values.yaml b/helm/openwhisk/values.yaml
index db82479..08ac360 100644
--- a/helm/openwhisk/values.yaml
+++ b/helm/openwhisk/values.yaml
@@ -46,6 +46,10 @@ whisk:
 k8s:
   domain: cluster.local
   dns: kube-dns.kube-system
+  persistence:
+    enabled: true
+    useDynamicProvisioning: true
+    defaultStorageClass:               # intentionally left blank, which will result in using the default StorageClass
 
 # Images used to run auxillary tasks/jobs
 utility:
@@ -65,6 +69,7 @@ docker:
 
 # zookeeper configurations
 zookeeper:
+  external: false
   name: "zookeeper"
   image: "zookeeper:3.4"
   # Note: Zookeeper's quorum protocol is designed to have an odd number of replicas.
@@ -75,11 +80,8 @@ zookeeper:
   serverPort: 2888
   leaderElectionPort: 3888
   persistence:
-    enabled: false
     pvcName: zookeeper-pvc
-    size: 2Gi
-    storageClass: default
-    accessMode: ReadWriteOnce
+    size: 256Mi
   # Default values for entries in zoo.cfg (see Apache Zookeeper documentation for semantics)
   config:
     tickTime: 2000
@@ -90,6 +92,7 @@ zookeeper:
 
 # kafka configurations
 kafka:
+  external: false
   name: "kafka"
   image: "wurstmeister/kafka:0.11.0.1"
   # NOTE: setting replicaCount > 1 will not work...actively being worked on.
@@ -98,11 +101,8 @@ kafka:
   restartPolicy: "Always"
   port: 9092
   persistence:
-    enabled: false
     pvcName: kafka-pvc
-    size: 2Gi
-    storageClass: default
-    accessMode: ReadWriteOnce
+    size: 512Mi
 
 # Database configuration
 db:
@@ -129,11 +129,8 @@ db:
   actionsTable: "test_whisks"
   authsTable: "test_subjects"
   persistence:
-    enabled: false
     pvcName: couchdb-pvc
-    size: 8Gi
-    storageClass: default
-    accessMode: ReadWriteOnce
+    size: 2Gi
 
 # Nginx configurations
 nginx:
@@ -200,7 +197,7 @@ apigw:
   apiPort: 9000
   mgmtPort: 8080
 
-# Redis (used by apigatewy)
+# Redis (used by apigateway)
 redis:
   name: "redis"
   image: "redis:3.2"
@@ -210,11 +207,8 @@ redis:
   restartPolicy: "Always"
   port: 6379
   persistence:
-    enabled: false
     pvcName: redis-pvc
     size: 256Mi
-    storageClass: default
-    accessMode: ReadWriteOnce
 
 # Used to define pod affinity and anti-affinity for the Kubernetes scheduler.
 # If affinity.enabled is true, then all of the deployments for the OpenWhisk
diff --git a/tools/travis/build-helm.sh b/tools/travis/build-helm.sh
index 14c89c6..57f619e 100755
--- a/tools/travis/build-helm.sh
+++ b/tools/travis/build-helm.sh
@@ -213,6 +213,11 @@ whisk:
     apiHostPort: $WSK_PORT
   runtimes: "runtimes-minimal-travis.json"
 
+# TODO: instead document how to enable dynamic volume provisioning for dind
+k8s:
+  persistence:
+    enabled: false
+
 invoker:
   containerFactory:
     dind: true

