<!-- METADATA
charm_name: kubernetes-master
charm_revision: '0'
timestamp: 1619014557
discourse_url: 'https://discourse.charmhub.io/t/kubernetes-master-charm/4506'
config: true
actions: true
context:
description: Documentation for the kubernetes-master charm
title: kubernetes-master charm
keywords: kubernetes-master, charm, config
-->
This charm is an encapsulation of the Kubernetes master processes and the
operations to run them on any cloud for the entire lifecycle of the cluster.
This charm is built from other charm layers using the Juju reactive framework.
The other layers focus on specific subsets of operations, making this layer
specific to the operations of the Kubernetes master processes.
# Deployment
This charm is not fully functional when deployed by itself. It requires other
charms to model a complete Kubernetes cluster. A Kubernetes cluster needs a
distributed key value store such as [Etcd](https://coreos.com/etcd/) and the
kubernetes-worker charm which delivers the Kubernetes node services. A cluster
requires a Software Defined Network (SDN), a Container Runtime such as
[containerd](https://jaas.ai/u/containers/containerd), and Transport Layer
Security (TLS) so the components in a cluster communicate securely.
Please take a look at the [Charmed Kubernetes](https://jaas.ai/charmed-kubernetes)
or the [Kubernetes core](https://jaas.ai/kubernetes-core) bundles for
examples of complete models of Kubernetes clusters.
# Resources
The kubernetes-master charm takes advantage of the [Juju Resources](https://jaas.ai/docs/juju-resources)
feature to deliver the Kubernetes software.
In deployments on public clouds, the Charm Store provides the resource to the
charm automatically with no user intervention. Some environments with strict
firewall rules may not be able to contact the Charm Store. In these
network-restricted environments the resource can be uploaded to the model by
the Juju operator.
#### Snap Refresh
The kubernetes resources used by this charm are snap packages. When not
specified during deployment, these resources come from the public store. By
default, the `snapd` daemon will refresh all snaps installed from the store
four (4) times per day. A charm configuration option is provided for operators
to control this refresh frequency.
> **Note:** This is a global configuration option and will affect the refresh
> time for all snaps installed on a system.

Examples:
```sh
## refresh kubernetes-master snaps every tuesday
juju config kubernetes-master snapd_refresh="tue"
## refresh snaps at 11pm on the last (5th) friday of the month
juju config kubernetes-master snapd_refresh="fri5,23:00"
## delay the refresh as long as possible
juju config kubernetes-master snapd_refresh="max"
## use the system default refresh timer
juju config kubernetes-master snapd_refresh=""
```
For more information, see the [snap documentation](/kubernetes/docs/snap-refresh).
## Configuration
This charm supports some configuration options to set up a Kubernetes cluster
that works in your environment, detailed in the section below.
For some specific Kubernetes service configuration tasks, please also see the
section on [configuring K8s services](#k8s-services).
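All of the options below are read and written with `juju config`. As a quick illustration (assuming an application named `kubernetes-master` in the current model):

```bash
# Show the current value of a single option
juju config kubernetes-master authorization-mode

# Set an option; the charm reacts to the change automatically
juju config kubernetes-master authorization-mode="RBAC,Node"
```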
<!-- CONFIG STARTS -->
<!--AUTOGENERATED CONFIG TEXT - DO NOT EDIT -->
| name | type | Default | Description |
|------|--------|--------------|-------------------------------------------|
| <a id="table-allow-privileged"> </a> allow-privileged | string | auto | [See notes](#allow-privileged-description) |
| <a id="table-api-extra-args"> </a> api-extra-args | string | | [See notes](#api-extra-args-description) |
| <a id="table-audit-policy"> </a> audit-policy | string | [See notes](#audit-policy-default) | Audit policy passed to kube-apiserver via --audit-policy-file. For more info, please refer to the upstream documentation at https://kubernetes.io/docs/tasks/debug-application-cluster/audit/ |
| <a id="table-audit-webhook-config"> </a> audit-webhook-config | string | | Audit webhook config passed to kube-apiserver via --audit-webhook-config-file. For more info, please refer to the upstream documentation at https://kubernetes.io/docs/tasks/debug-application-cluster/audit/ |
| <a id="table-authorization-mode"> </a> authorization-mode | string | AlwaysAllow | Comma separated authorization modes. Allowed values are "RBAC", "Node", "Webhook", "ABAC", "AlwaysDeny" and "AlwaysAllow". |
| <a id="table-channel"> </a> channel | string | 1.17/stable | Snap channel to install Kubernetes master services from |
| <a id="table-client_password"> </a> client_password | string | | Password to be used for admin user (leave empty for random password). |
| <a id="table-controller-manager-extra-args"> </a> controller-manager-extra-args | string | | [See notes](#controller-manager-extra-args-description) |
| <a id="table-dashboard-auth"> </a> dashboard-auth | string | auto | [See notes](#dashboard-auth-description) |
| <a id="table-default-storage"> </a> default-storage | string | auto | The storage class to make the default storage class. Allowed values are "auto", "none", "ceph-xfs", "ceph-ext4". Note: Only works in Kubernetes >= 1.10 |
| <a id="table-dns-provider"> </a> dns-provider | string | auto | [See notes](#dns-provider-description) |
| <a id="table-dns_domain"> </a> dns_domain | string | cluster.local | The local domain for cluster dns |
| <a id="table-enable-dashboard-addons"> </a> enable-dashboard-addons | boolean | True | Deploy the Kubernetes Dashboard and Heapster addons |
| <a id="table-enable-keystone-authorization"> </a> enable-keystone-authorization | boolean | False | If true and the Keystone charm is related, users will authorize against the Keystone server. Note that if related, users will always authenticate against Keystone. |
| <a id="table-enable-metrics"> </a> enable-metrics | boolean | True | If true the metrics server for Kubernetes will be deployed onto the cluster. |
| <a id="table-enable-nvidia-plugin"> </a> enable-nvidia-plugin | string | auto | Load the nvidia device plugin daemonset. Supported values are "auto" and "false". When "auto", the daemonset will be loaded only if GPUs are detected. When "false" the nvidia device plugin will not be loaded. |
| <a id="table-extra_packages"> </a> extra_packages | string | | Space separated list of extra deb packages to install. |
| <a id="table-extra_sans"> </a> extra_sans | string | | Space-separated list of extra SAN entries to add to the x509 certificate created for the master nodes. |
| <a id="table-ha-cluster-dns"> </a> ha-cluster-dns | string | | DNS entry to use with the HA Cluster subordinate charm. Mutually exclusive with ha-cluster-vip. |
| <a id="table-ha-cluster-vip"> </a> ha-cluster-vip | string | | Virtual IP for the charm to use with the HA Cluster subordinate charm Mutually exclusive with ha-cluster-dns. Multiple virtual IPs are separated by spaces. |
| <a id="table-image-registry"> </a> image-registry | string | [See notes](#image-registry-default) | Container image registry to use for CDK. This includes addons like the Kubernetes dashboard, metrics server, ingress, and dns along with non-addon images including the pause container and default backend image. |
| <a id="table-install_keys"> </a> install_keys | string | | [See notes](#install_keys-description) |
| <a id="table-install_sources"> </a> install_sources | string | | [See notes](#install_sources-description) |
| <a id="table-keystone-policy"> </a> keystone-policy | string | [See notes](#keystone-policy-default) | Policy for Keystone authorization. This is used when a Keystone charm is related to kubernetes-master in order to provide authorization for Keystone users on the Kubernetes cluster. |
| <a id="table-keystone-ssl-ca"> </a> keystone-ssl-ca | string | | Keystone certificate authority encoded in base64 for securing communications to Keystone. For example: `juju config kubernetes-master keystone-ssl-ca=$(base64 /path/to/ca.crt)` |
| <a id="table-loadbalancer-ips"> </a> loadbalancer-ips | string | | [See notes](#loadbalancer-ips-description) |
| <a id="table-nagios_context"> </a> nagios_context | string | juju | [See notes](#nagios_context-description) |
| <a id="table-nagios_servicegroups"> </a> nagios_servicegroups | string | | A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup |
| <a id="table-package_status"> </a> package_status | string | install | The status of service-affecting packages will be set to this value in the dpkg database. Valid values are "install" and "hold". |
| <a id="table-proxy-extra-args"> </a> proxy-extra-args | string | | [See notes](#proxy-extra-args-description) |
| <a id="table-require-manual-upgrade"> </a> require-manual-upgrade | boolean | True | When true, master nodes will not be upgraded until the user triggers it manually by running the upgrade action. |
| <a id="table-scheduler-extra-args"> </a> scheduler-extra-args | string | | [See notes](#scheduler-extra-args-description) |
| <a id="table-service-cidr"> </a> service-cidr | string | 10.152.183.0/24 | CIDR to use for Kubernetes services. Cannot be changed after deployment. |
| <a id="table-snap_proxy"> </a> snap_proxy | string | | DEPRECATED. Use snap-http-proxy and snap-https-proxy model configuration settings. HTTP/HTTPS web proxy for Snappy to use when accessing the snap store. |
| <a id="table-snap_proxy_url"> </a> snap_proxy_url | string | | DEPRECATED. Use snap-store-proxy model configuration setting. The address of a Snap Store Proxy to use for snaps e.g. http://snap-proxy.example.com |
| <a id="table-snapd_refresh"> </a> snapd_refresh | string | max | [See notes](#snapd_refresh-description) |
| <a id="table-storage-backend"> </a> storage-backend | string | auto | The storage backend for kube-apiserver persistence. Can be "etcd2", "etcd3", or "auto". Auto mode will select etcd3 on new installations, or etcd2 on upgrades. |
| <a id="table-sysctl"> </a> sysctl | string | [See notes](#sysctl-default) | [See notes](#sysctl-description) |
---
### allow-privileged
<a id="allow-privileged-description"> </a>
**Description:**
Allow kube-apiserver to run in privileged mode. Supported values are
"true", "false", and "auto". If "true", kube-apiserver will run in
privileged mode by default. If "false", kube-apiserver will never run in
privileged mode. If "auto", kube-apiserver will not run in privileged
mode by default, but will switch to privileged mode if gpu hardware is
detected on a worker node.
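For example, to force privileged mode regardless of whether GPU hardware is detected:

```bash
juju config kubernetes-master allow-privileged=true
```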
[Back to table](#table-allow-privileged)
### api-extra-args
<a id="api-extra-args-description"> </a>
**Description:**
Space separated list of flags and key=value pairs that will be passed as arguments to
kube-apiserver. For example a value like this:
```
runtime-config=batch/v2alpha1=true profiling=true
```
will result in kube-apiserver being run with the following options:
--runtime-config=batch/v2alpha1=true --profiling=true
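As a sketch, the example value above would be applied like this (quoting keeps the space-separated list as a single value):

```bash
juju config kubernetes-master api-extra-args="runtime-config=batch/v2alpha1=true profiling=true"
```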
[Back to table](#table-api-extra-args)
### audit-policy
<a id="audit-policy-default"> </a>
**Default:**
```
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# Don't log read-only requests from the apiserver
- level: None
users: ["system:apiserver"]
verbs: ["get", "list", "watch"]
# Don't log kube-proxy watches
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- resources: ["endpoints", "services"]
# Don't log nodes getting their own status
- level: None
userGroups: ["system:nodes"]
verbs: ["get"]
resources:
- resources: ["nodes"]
# Don't log kube-controller-manager and kube-scheduler getting endpoints
- level: None
users: ["system:unsecured"]
namespaces: ["kube-system"]
verbs: ["get"]
resources:
- resources: ["endpoints"]
# Log everything else at the Request level.
- level: Request
omitStages:
- RequestReceived
```
[Back to table](#table-audit-policy)
### controller-manager-extra-args
<a id="controller-manager-extra-args-description"> </a>
**Description:**
Space separated list of flags and key=value pairs that will be passed as arguments to
kube-controller-manager. For example a value like this:
```
runtime-config=batch/v2alpha1=true profiling=true
```
will result in kube-controller-manager being run with the following options:
--runtime-config=batch/v2alpha1=true --profiling=true
[Back to table](#table-controller-manager-extra-args)
### dashboard-auth
<a id="dashboard-auth-description"> </a>
**Description:**
Method of authentication for the Kubernetes dashboard. Allowed values are "auto",
"basic", and "token". If set to "auto", basic auth is used unless Keystone is
related to kubernetes-master, in which case token auth is used.
[Back to table](#table-dashboard-auth)
### dns-provider
<a id="dns-provider-description"> </a>
**Description:**
DNS provider addon to use. Can be "auto", "core-dns", "kube-dns", or
"none".
CoreDNS is only supported on Kubernetes 1.14+.
When set to "auto", the behavior is as follows:
- New deployments of Kubernetes 1.14+ will use CoreDNS
- New deployments of Kubernetes 1.13 or older will use KubeDNS
- Upgraded deployments will continue to use whichever provider was
previously used.
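For example, to select CoreDNS explicitly rather than relying on "auto":

```bash
juju config kubernetes-master dns-provider=core-dns
```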
[Back to table](#table-dns-provider)
### image-registry
<a id="image-registry-default"> </a>
**Default:**
```
rocks.canonical.com:443/cdk
```
[Back to table](#table-image-registry)
### install_keys
<a id="install_keys-description"> </a>
**Description:**
List of signing keys for install_sources package sources, per charmhelpers standard format (a yaml list of strings encoded as a string). The keys should be the full ASCII armoured GPG public keys. While GPG key ids are also supported and looked up on a keyserver, operators should be aware that this mechanism is insecure. null can be used if a standard package signing key is used that will already be installed on the machine, and for PPA sources where the package signing key is securely retrieved from Launchpad.
[Back to table](#table-install_keys)
### install_sources
<a id="install_sources-description"> </a>
**Description:**
List of extra apt sources, per charm-helpers standard format (a yaml list of strings encoded as a string). Each source may be either a line that can be added directly to sources.list(5), or in the form ppa:<user>/<ppa-name> for adding Personal Package Archives, or a distribution component to enable.
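Because the value is a YAML list encoded as a string, multi-entry values are easiest to pass as a quoted YAML block. A sketch (the PPA and archive names are illustrative only):

```bash
juju config kubernetes-master install_sources="- ppa:example/kubernetes
- deb http://archive.example.com/ubuntu focal main"
```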
[Back to table](#table-install_sources)
### keystone-policy
<a id="keystone-policy-default"> </a>
**Default:**
```
apiVersion: v1
kind: ConfigMap
metadata:
name: k8s-auth-policy
namespace: kube-system
labels:
k8s-app: k8s-keystone-auth
data:
policies: |
[
{
"resource": {
"verbs": ["get", "list", "watch"],
"resources": ["*"],
"version": "*",
"namespace": "*"
},
"match": [
{
"type": "role",
"values": ["k8s-viewers"]
},
{
"type": "project",
"values": ["k8s"]
}
]
},
{
"resource": {
"verbs": ["*"],
"resources": ["*"],
"version": "*",
"namespace": "default"
},
"match": [
{
"type": "role",
"values": ["k8s-users"]
},
{
"type": "project",
"values": ["k8s"]
}
]
},
{
"resource": {
"verbs": ["*"],
"resources": ["*"],
"version": "*",
"namespace": "*"
},
"match": [
{
"type": "role",
"values": ["k8s-admins"]
},
{
"type": "project",
"values": ["k8s"]
}
]
}
]
```
[Back to table](#table-keystone-policy)
### loadbalancer-ips
<a id="loadbalancer-ips-description"> </a>
**Description:**
Space separated list of IP addresses of loadbalancers in front of the control plane.
These can be either virtual IP addresses that have been floated in front of the control
plane or the IP of a loadbalancer appliance such as an F5. Workers will alternate IP
addresses from this list to distribute load - for example, if you have 2 IPs and 4 workers,
each IP will be used by 2 workers. Note that this will only work if kubeapi-load-balancer
is not in use and there is a relation between kubernetes-master:kube-api-endpoint and
kubernetes-worker:kube-api-endpoint. If using the kubeapi-load-balancer, see the
loadbalancer-ips configuration variable on the kubeapi-load-balancer charm.
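For example, with two virtual IPs floated in front of the control plane (the addresses are illustrative):

```bash
juju config kubernetes-master loadbalancer-ips="10.0.0.100 10.0.0.101"
```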
[Back to table](#table-loadbalancer-ips)
### nagios_context
<a id="nagios_context-description"> </a>
**Description:**
Used by the nrpe subordinate charms.
A string that will be prepended to the instance name to set the host name
in Nagios. For instance, the hostname would be something like:
```
juju-myservice-0
```
If you're running multiple environments with the same services in them
this allows you to differentiate between them.
[Back to table](#table-nagios_context)
### proxy-extra-args
<a id="proxy-extra-args-description"> </a>
**Description:**
Space separated list of flags and key=value pairs that will be passed as arguments to
kube-proxy. For example a value like this:
```
runtime-config=batch/v2alpha1=true profiling=true
```
will result in kube-proxy being run with the following options:
--runtime-config=batch/v2alpha1=true --profiling=true
[Back to table](#table-proxy-extra-args)
### scheduler-extra-args
<a id="scheduler-extra-args-description"> </a>
**Description:**
Space separated list of flags and key=value pairs that will be passed as arguments to
kube-scheduler. For example a value like this:
```
runtime-config=batch/v2alpha1=true profiling=true
```
will result in kube-scheduler being run with the following options:
--runtime-config=batch/v2alpha1=true --profiling=true
[Back to table](#table-scheduler-extra-args)
### snapd_refresh
<a id="snapd_refresh-description"> </a>
**Description:**
How often snapd handles updates for installed snaps. An empty string uses
the system default of four refresh checks per day. Set to "max" to delay
the refresh as long as possible. You may also set a custom string as described in the
'refresh.timer' section here:
https://forum.snapcraft.io/t/system-options/87
[Back to table](#table-snapd_refresh)
### sysctl
<a id="sysctl-default"> </a>
**Default:**
```
{ net.ipv4.conf.all.forwarding : 1, net.ipv4.neigh.default.gc_thresh1 : 128, net.ipv4.neigh.default.gc_thresh2 : 28672, net.ipv4.neigh.default.gc_thresh3 : 32768, net.ipv6.neigh.default.gc_thresh1 : 128, net.ipv6.neigh.default.gc_thresh2 : 28672, net.ipv6.neigh.default.gc_thresh3 : 32768, fs.inotify.max_user_instances : 8192, fs.inotify.max_user_watches: 1048576 }
```
[Back to table](#table-sysctl)
<a id="sysctl-description"> </a>
**Description:**
YAML formatted associative array of sysctl values, e.g.:
'{kernel.pid_max : 4194303 }'. Note that kube-proxy handles
the conntrack settings. The proper way to alter them is to
use the proxy-extra-args config to set them, e.g.:
```
juju config kubernetes-master proxy-extra-args="conntrack-min=1000000 conntrack-max-per-core=250000"
juju config kubernetes-worker proxy-extra-args="conntrack-min=1000000 conntrack-max-per-core=250000"
```
The proxy-extra-args conntrack-min and conntrack-max-per-core can be set to 0 to ignore
kube-proxy's settings and use the sysctl settings instead. Note the fundamental difference between
the setting of conntrack-max-per-core vs nf_conntrack_max.
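For example, a custom sysctl value can be supplied as a YAML map (the `kernel.pid_max` value here is illustrative; depending on the charm version the supplied map may replace the defaults shown above, so consider including any defaults you still need):

```bash
juju config kubernetes-master sysctl="{ kernel.pid_max : 4194303 }"
```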
[Back to table](#table-sysctl)
<!-- CONFIG ENDS -->
<a id="k8s-services"> </a>
# Configuring K8s services
**Charmed Kubernetes** ships with sensible, tested default configurations to
ensure a reliable Kubernetes experience, but of course these can be changed to
reflect the purpose and resources of your cluster.
The configuration section above details all available configuration options;
this section deals with specific, commonly used settings.
You may wish to also read the [Addons page][] for information on the extra
services installed with **Charmed Kubernetes**.
<a id="config-ipvs"> </a>
## IPVS (IP Virtual Server)
IPVS implements transport-layer load balancing as part of the Linux kernel, and
can be used by the `kube-proxy` service to handle service routing. By default
`kube-proxy` uses a solution based on iptables, but this can cause a lot of
overhead in systems with large numbers of nodes. There is more information on
this in the upstream Kubernetes [IPVS deep dive][] documentation.
IPVS is an extra option for kube-proxy, and can be enabled by changing the
configuration:
```
juju config kubernetes-master proxy-extra-args="proxy-mode=ipvs"
```
It is also necessary to change this configuration option on the worker:
```
juju config kubernetes-worker proxy-extra-args="proxy-mode=ipvs"
```
## Admission controls
As with other aspects of the Kubernetes API, admission controls can be
enabled by adding extra values to the charm's
[api-extra-args](#api-extra-args-description) configuration.
For admission controls, it may be useful to refer to the
[Kubernetes blog][blog-admission] for more information on the options, but
for example, to add the `PodSecurityPolicy` admission controller:
1. Check any current config settings for `api-extra-args` (there are none by default):
```bash
juju config kubernetes-master api-extra-args
```
2. Append the desired config option to the previous output and apply:
```bash
juju config kubernetes-master api-extra-args="enable-admission-plugins=PodSecurityPolicy"
```
Note that prior to Kubernetes 1.16 (kubernetes-master revision 778), the config
setting was `admission-control`, rather than `enable-admission-plugins`.
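Note that `api-extra-args` holds a single space-separated list, so when appending the admission plugin you should repeat any settings already present. A hypothetical example where `audit-log-maxage=30` was previously set:

```bash
juju config kubernetes-master api-extra-args="audit-log-maxage=30 enable-admission-plugins=PodSecurityPolicy"
```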
<a id="extra_sans"> </a>
## Adding SANs and certificate regeneration
As explained in the [Certificates and trust overview][certs-and-trust], the
[`extra_sans`](#table-extra_sans) configuration settings can be used to add
SANs and regenerate x509 certificate(s) for the API server running on the
Kubernetes master node(s), and for the load balancer. When this configuration
is changed, the master node(s) will regenerate their certificates and restart
the API server to apply the update. Note that this is disruptive: the API
server will restart.
The process is the same for both the `kubernetes-master` and the
`kubeapi-load-balancer`. The configuration option takes a space-separated list
of extra entries:
```bash
juju config kubernetes-master extra_sans="master.mydomain.com lb.mydomain.com"
juju config kubeapi-load-balancer extra_sans="master.mydomain.com lb.mydomain.com"
```
To clear the entries out of the certificate, use an empty string:
```bash
juju config kubernetes-master extra_sans=""
juju config kubeapi-load-balancer extra_sans=""
```
## DNS for the cluster
The DNS add-on allows pods to have DNS names in addition to IP addresses.
The Kubernetes cluster DNS server (based on the SkyDNS library) supports
forward lookups (A records), service lookups (SRV records) and reverse IP
address lookups (PTR records). More information about the DNS can be obtained
from the [Kubernetes DNS admin guide](http://kubernetes.io/docs/admin/dns/).
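As a quick check that cluster DNS is working, you can resolve the API service name from a throwaway pod (this sketch assumes `kubectl` access to the cluster and the default `cluster.local` domain):

```bash
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup kubernetes.default.svc.cluster.local
```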
# Actions
<!-- ACTIONS STARTS -->
<!-- AUTOGENERATED TEXT - DO NOT EDIT -->
You can run an action against a unit with the following command:
```bash
juju run-action kubernetes-master/0 ACTION [parameters] [--wait]
```
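For example, to retrieve the cluster config from the first master unit and wait for the result (the unit number will vary in your model):

```bash
juju run-action kubernetes-master/0 get-kubeconfig --wait
```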
<div class="row">
<div class="col-2">
<h5>
apply-manifest
</h5>
</div>
<div class="col-5">
<p>
Apply JSON formatted Kubernetes manifest to cluster
</p>
</div>
</div>
<div class="row">
<div class="col-2"></div>
<div class="col-5">
<p>
This action has the following parameters:
</p>
<hr>
<pre>json</pre>
<p>
The content of the manifest to deploy in JSON format
</p>
<p>
<strong>Default:</strong>
</p><br>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
cis-benchmark
</h5>
</div>
<div class="col-5">
<p>
Run the CIS Kubernetes Benchmark against snap-based components.
</p>
</div>
</div>
<div class="row">
<div class="col-2"></div>
<div class="col-5">
<p>
This action has the following parameters:
</p>
<hr>
<pre>apply</pre>
<p>
Apply remediations to address benchmark failures. The default, 'none', will not attempt to fix any reported failures. Set to 'conservative' to resolve simple failures. Set to 'dangerous' to attempt to resolve all failures. Note: Applying any remediation may result in an unusable cluster.
</p>
<p>
<strong>Default:</strong> none
</p><br>
<pre>config</pre>
<p>
Archive containing configuration files to use when running kube-bench. The default value is known to be compatible with snap components. When using a custom URL, append '#&lt;hash_type&gt;=&lt;checksum&gt;' to verify the archive integrity when downloaded.
</p>
<p>
<strong>Default:</strong> https://github.com/charmed-kubernetes/kube-bench-config/archive/cis-1.5.zip#sha1=cb8e78712ee5bfeab87d0ed7c139a83e88915530
</p><br>
<pre>release</pre>
<p>
Set the kube-bench release to run. If set to 'upstream', the action will compile and use a local kube-bench binary built from the master branch of the upstream repository: https://github.com/aquasecurity/kube-bench. This value may also be set to an accessible archive containing a pre-built kube-bench binary, for example: https://github.com/aquasecurity/kube-bench/releases/download/v0.0.34/kube-bench_0.0.34_linux_amd64.tar.gz#sha256=f96d1fcfb84b18324f1299db074d41ef324a25be5b944e79619ad1a079fca077
</p>
<p>
<strong>Default:</strong> https://github.com/aquasecurity/kube-bench/releases/download/v0.2.3/kube-bench_0.2.3_linux_amd64.tar.gz#sha256=429a1db271689aafec009434ded1dea07a6685fee85a1deea638097c8512d548
</p><br>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
debug
</h5>
</div>
<div class="col-5">
<p>
Collect debug data
</p>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
get-kubeconfig
</h5>
</div>
<div class="col-5">
<p>
Retrieve Kubernetes cluster config, including credentials
</p>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
namespace-create
</h5>
</div>
<div class="col-5">
<p>
Create new namespace
</p>
</div>
</div>
<div class="row">
<div class="col-2"></div>
<div class="col-5">
<p>
This action has the following parameters:
</p>
<hr>
<pre>name</pre>
<p>
Namespace name eg. staging
</p>
<p>
<strong>Default:</strong>
</p><br>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
namespace-delete
</h5>
</div>
<div class="col-5">
<p>
Delete namespace
</p>
</div>
</div>
<div class="row">
<div class="col-2"></div>
<div class="col-5">
<p>
This action has the following parameters:
</p>
<hr>
<pre>name</pre>
<p>
Namespace name eg. staging
</p>
<p>
<strong>Default:</strong>
</p><br>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
namespace-list
</h5>
</div>
<div class="col-5">
<p>
List existing k8s namespaces
</p>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
restart
</h5>
</div>
<div class="col-5">
<p>
Restart the Kubernetes master services on demand.
</p>
</div>
</div>
<hr>
<div class="row">
<div class="col-2">
<h5>
upgrade
</h5>
</div>
<div class="col-5">
<p>
Upgrade the kubernetes snaps
</p>
</div>
</div>
<div class="row">
<div class="col-2"></div>
<div class="col-5">
<p>
This action has the following parameters:
</p>
<hr>
<pre>fix-cluster-name</pre>
<p>
If using the OpenStack cloud provider, whether to fix the cluster name sent to it to include the cluster tag. This fixes an issue with load balancers conflicting with other clusters in the same project but will cause new load balancers to be created which will require manual intervention to resolve.
</p>
<p>
<strong>Default:</strong> True
</p><br>
</div>
</div>
<hr>
<!-- ACTIONS ENDS -->
# More information
- [Kubernetes github project](https://github.com/kubernetes/kubernetes)
- [Kubernetes issue tracker](https://github.com/kubernetes/kubernetes/issues)
- [Kubernetes documentation](http://kubernetes.io/docs/)
- [Kubernetes releases](https://github.com/kubernetes/kubernetes/releases)
<!-- LINKS -->
[IPVS deep dive]: https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
[blog-admission]: https://kubernetes.io/blog/2019/03/21/a-guide-to-kubernetes-admission-controllers/
[Addons page]: /kubernetes/docs/cdk-addons
[certs-and-trust]: /kubernetes/docs/certs-and-trust