Documentation

This is a placeholder page that shows you how to use this template site.

This section is where the user documentation for your project lives - all the information your users need to understand and successfully use your project.

For large documentation sets we recommend adding content under the headings in this section, though if some or all of them don’t apply to your project feel free to remove them or add your own. You can see an example of a smaller Docsy documentation site in the Docsy User Guide, which lives in the Docsy theme repo if you’d like to copy its docs section.

Other content such as marketing material, case studies, and community updates should live in the About and Community pages.

Find out how to use the Docsy theme in the Docsy User Guide. You can learn more about how to organize your documentation (and how we organized this site) in Organizing Your Content.

1 - Overview

Here’s where your user finds out if your project is for them.

This is a placeholder page that shows you how to use this template site.

The Overview is where your users find out about your project. Depending on the size of your docset, you can have a separate overview page (like this one) or put your overview contents in the Documentation landing page (like in the Docsy User Guide).

Try answering these questions for your user in this page:

What is it?

Introduce your project, including what it does or lets you do, why you would use it, and its primary goal (and how it achieves it). This should be similar to your README description, though you can go into a little more detail here if you want.

Why do I want it?

Help your user know if your project will help them. Useful information can include:

  • What is it good for?: What types of problems does your project solve? What are the benefits of using it?

  • What is it not good for?: For example, point out situations that might intuitively seem suited for your project, but aren’t for some reason. Also mention known limitations, scaling issues, or anything else that might let your users know if the project is not for them.

  • What is it not yet good for?: Highlight any useful features that are coming soon.

Where should I go next?

Give your users next steps from the Overview. For example:

2 - Getting Started

What does your user need to know to try your project?

This is a placeholder page that shows you how to use this template site.

Information in this section helps your user try your project themselves.

  • What do your users need to do to start using your project? This could include downloading/installation instructions, including any prerequisites or system requirements.

  • Introductory “Hello World” example, if appropriate. More complex tutorials should live in the Tutorials section.

Consider using the headings below for your getting started page. You can delete any that are not applicable to your project.

Prerequisites

Are there any system requirements for using your project? What languages are supported (if any)? Do users need to already have any software or tools installed?

Installation

Where can your user find your project code? How can they install it (binaries, installable package, build from source)? Are there multiple options/versions they can install and how should they choose the right one for them?

Setup

Is there any initial setup users need to do after installation to try your project?

Try it out!

Can your users test their installation, for example by running a command or deploying a Hello World example?

2.1 - Deploying AIO

Deploying the AIO environment

For users who are new to KubeClipper and want to get started quickly, it is recommended to use the All-in-One installation mode, which can help you quickly deploy KubeClipper with zero configuration.

Deploy KubeClipper

Download kcctl

KubeClipper provides a command line tool 🔧 kcctl to simplify operation and maintenance. You can download the latest version of kcctl directly with the following command:

curl -sfL https://oss.kubeclipper.io/kcctl.sh | sh -
# If you are in China, you can set the KC_REGION=cn environment variable during installation; in this case registry.aliyuncs.com/google_containers will be used instead of k8s.gcr.io
# curl -sfL https://oss.kubeclipper.io/kcctl.sh | KC_REGION=cn sh -

You can also download the specified version from the GitHub Release Page (https://github.com/kubeclipper/kubeclipper/releases).

Check if the installation was successful with the following command:

kcctl version

Start installation

In this quickstart tutorial, you only need to execute one command to install KubeClipper with a template like this:

kcctl deploy [--user root] (--passwd SSH_PASSWD | --pk-file SSH_PRIVATE_KEY)

If you use the ssh passwd method, the command is as follows:

kcctl deploy --user root --passwd $SSH_PASSWD

The private key method is as follows:

kcctl deploy --user root --pk-file $SSH_PRIVATE_KEY

You only need to provide the ssh user and ssh passwd or ssh private key to deploy KubeClipper on the local host.

After executing this command, kcctl will check your installation environment and, if the conditions are met, enter the installation process. Once the KubeClipper banner below is printed, the installation is complete.

 _   __      _          _____ _ _
| | / /     | |        /  __ \ (_)
| |/ / _   _| |__   ___| /  \/ |_ _ __  _ __   ___ _ __
|    \| | | | '_ \ / _ \ |   | | | '_ \| '_ \ / _ \ '__|
| |\  \ |_| | |_) |  __/ \__/\ | | |_) | |_) |  __/ |
\_| \_/\__,_|_.__/ \___|\____/_|_| .__/| .__/ \___|_|
                                 | |   | |
                                 |_|   |_|

Login to console

After the installation is complete, open a browser and visit http://$IP to enter the KubeClipper console.

console

You can use the default account and password "admin/Thinkbig1" to log in.

You may need to configure port forwarding rules and open ports in security groups for external users to access the console.

Create k8s cluster

After successful deployment, you can create a k8s cluster using the kcctl tool or the console. This quickstart tutorial uses the kcctl tool.

First, log in with the default account and password to obtain a token, which is used for subsequent interaction between kcctl and kc-server.

kcctl login -H http://localhost -u admin -p Thinkbig1

Then create a k8s cluster with the following command:

NODE=$(kcctl get node -o yaml | grep ipv4DefaultIP: | sed 's/ipv4DefaultIP: //')

kcctl create cluster --master $NODE --name demo --untaint-master

Cluster creation takes about 3 minutes, and you can check the cluster status with the following command:

kcctl get cluster -o yaml | grep status -A5

You can also go to the console to view the real-time log.

When the cluster enters the Running state, the installation is complete. You can then use the kubectl get cs command to check cluster health.
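For example, a quick health check from the master node might look like the following (a minimal sketch; it assumes kubectl is configured on that node, and note that kubectl get cs is deprecated in newer Kubernetes releases):

# control-plane component status (deprecated in newer Kubernetes, but still informative)
kubectl get cs
# every node should report Ready
kubectl get nodes
# the same status check through kcctl, as shown above
kcctl get cluster -o yaml | grep status -A5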

2.2 - Create k8s clusters offline using the kubeclipper platform

How to create a k8s cluster offline using the KC platform

1. Go to the creation screen

Log in to the KubeClipper platform and click the button shown in the figure to enter the cluster creation page.

2. Configure cluster nodes

Follow the text prompts to complete the steps of entering the cluster name and selecting nodes

Note: The number of master nodes cannot be an even number.

3. Configure cluster

This step is used to configure the cluster network and components such as the database and container runtime.

Select offline installation and fill in the address of the image repository you built beforehand.

4. Configure cluster storage

Select nfs storage and follow the text prompts to fill in the appropriate fields

5. Installation completed

Complete all configurations to confirm installation

Installation is successful and the cluster is up and running

3 - Deployment docs

Low level reference docs for your project.

This is a placeholder page that shows you how to use this template site.

If your project has an API, configuration, or other reference - anything that users need to look up that’s at an even lower level than a single task - put (or link to it) here. You can serve and link to generated reference docs created using Doxygen, Javadoc, or other doc generation tools by putting them in your static/ directory. Find out more in Adding static content. For OpenAPI reference, Docsy also provides a Swagger UI layout and shortcode that renders Swagger UI using any OpenAPI YAML or JSON file as source.

3.1 - Deploy HA

Deploy a highly available KubeClipper with a few simple commands.

This document describes how to deploy an HA KubeClipper with a few simple operations.

If you just want a quick trial, please refer to the QuickStart to deploy an AIO environment.

Preparations

You only need to prepare hosts that meet the following requirements for machine hardware and operating system: Preparations

HA deployment recommendations:

  • KubeClipper uses etcd as its backend storage. To ensure high availability, it is recommended to deploy with 3 or more server nodes.
  • In production environments, it is also recommended to separate server nodes from agent nodes, so that a node does not act as both at the same time. Node planning:

Deploying KubeClipper

Download kcctl

KubeClipper provides the command line tool 🔧 kcctl to simplify operation and maintenance work. You can directly download the latest version of kcctl with the following command:

curl -sfL https://oss.kubeclipper.io/kcctl.sh | sh -

# In China, you can add cn env, we use registry.aliyuncs.com/google_containers instead of k8s.gcr.io
# curl -sfL https://oss.kubeclipper.io/kcctl.sh | KC_REGION=cn sh -

You can also download the specified version on the GitHub Release Page.

Check if the installation is successful with the following command:

kcctl version

Get Started with Installation

All you need to do is execute a command to install KubeClipper, whose template looks like this:

kcctl deploy  [--user root] (--passwd SSH_PASSWD | --pk-file SSH_PRIVATE_KEY) (--server SERVER_NODES) (--agent AGENT_NODES)

If you use the ssh passwd method, the command is as follows:

kcctl deploy --user root --passwd $SSH_PASSWD --server SERVER_NODES --agent AGENT_NODES

The private key method is as follows:

kcctl deploy --user root --pk-file $SSH_PRIVATE_KEY --server SERVER_NODES --agent AGENT_NODES

You only need to provide ssh user and ssh passwd or ssh private key to deploy KubeClipper on the corresponding node.

This tutorial uses the private key to deploy, the specific commands are as follows:

kcctl deploy --server 192.168.10.110,192.168.10.111,192.168.10.112 --agent 192.168.10.113,192.168.10.114,192.168.10.115 --pk-file ~/.ssh/id_rsa --pkg https://oss.kubeclipper.io/release/v1.1.0/kc-amd64.tar.gz

With this command, the KubeClipper platform will have 3 server nodes and 3 agent nodes.

You can visit the GitHub Release Page to view the current KubeClipper release version and modify the version number in the pkg parameter.

For example, after v1.2.0 is released, you can set --pkg to https://oss.kubeclipper.io/release/v1.2.0/kc-amd64.tar.gz to install v1.2.0.
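Putting it together, a version-pinned HA deployment could look like this (a sketch reusing the example node IPs and key path from above):

kcctl deploy --user root --pk-file ~/.ssh/id_rsa \
  --server 192.168.10.110,192.168.10.111,192.168.10.112 \
  --agent 192.168.10.113,192.168.10.114,192.168.10.115 \
  --pkg https://oss.kubeclipper.io/release/v1.2.0/kc-amd64.tar.gz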

After you run this command, kcctl will check your installation environment and, if the conditions are met, enter the installation process. Once the KubeClipper banner is printed, the installation is complete.

 _   __      _          _____ _ _
| | / /     | |        /  __ \ (_)
| |/ / _   _| |__   ___| /  \/ |_ _ __  _ __   ___ _ __
|    \| | | | '_ \ / _ \ |   | | | '_ \| '_ \ / _ \ '__|
| |\  \ |_| | |_) |  __/ \__/\ | | |_) | |_) |  __/ |
\_| \_/\__,_|_.__/ \___|\____/_|_| .__/| .__/ \___|_|
                                 | |   | |
                                 |_|   |_|

Login console

When deployed successfully, you can open a browser and visit http://$IP to enter the KubeClipper console.

console

You can log in with the default account password admin/Thinkbig1.

You may need to configure port forwarding rules and open ports in security groups for external users to access the console.

4 - Tutorials

Show your user how to work through some end to end examples.

This is a placeholder page that shows you how to use this template site.

Tutorials are complete worked examples made up of multiple tasks that guide the user through a relatively simple but realistic scenario: building an application that uses some of your project’s features, for example. If you have already created some Examples for your project you can base Tutorials on them. This section is optional. However, remember that although you may not need this section at first, having tutorials can be useful to help your users engage with your example code, especially if there are aspects that need more explanation than you can easily provide in code comments.

4.1 - Cluster management

Create and manage Kubernetes clusters with KubeClipper: cluster creation, configuration, plugins, nodes, upgrades, and backup and recovery.

Create a kubernetes cluster

You can create a kubernetes cluster through the wizard-style page, and install the required plugins such as CNI and CSI. You can also save the cluster template in advance, and create a cluster quickly after selecting the template.

Prepare to create a cluster

  1. You need to have enough available nodes. To add nodes, refer to "Add Nodes".

  2. Prepare the images or binary files of K8S, CRI, Calico, CSI, and other plugins that need to be installed. You can choose online/offline installation according to the network environment of the platform, and then choose a recommended K8S version on the page. You can also upload the images required for deployment to your own image repository in advance and specify that repository during deployment. For more installation configuration, refer to "Cluster Configuration Guide".

Create a single-node experimental cluster

  1. Click "Cluster Management" > "Cluster" to enter the cluster list page, and click the "Create Cluster" button in the upper left corner.

  2. Enter the "Node Configuration" page of the Create Cluster Wizard page. Fill in the "Cluster Name", such as "test", without selecting "Cluster Template". Select an available node, add it as a control node, and remove the taint from the master node in the taint management list. Click the "Next" button.

  1. Enter the "Cluster Configuration" page of the Create Cluster Wizard page. Select "Offline Installation", no need to specify "Mirror Repository", retain the default values for other configurations, click the "Quick Create" button, jump to the configuration confirmation page, and click the "OK" button.

  2. The experimental cluster of a single node is created. You can view the cluster details on the cluster details page, or click the "ViewLog" button to view the real-time log during the cluster creation process.

Create a cluster using a mirror repository

If the cluster you create contains large images, it is recommended to upload all images to a specific image repository in advance; the creation process will then be faster and smoother.

  1. Add a mirror repository. Click "Cluster Management" > "Mirror Repository" to enter the mirror repository list page, and click the "Add" button in the upper left corner. Enter the IP address of the repository where the mirror is stored in the pop-up window of adding a mirror repository, and click the "OK" button.

  2. Create a cluster. Click "Cluster Management" > "Cluster" to enter the cluster list page, and click the "Create Cluster" button in the upper left corner. Configure the cluster nodes as needed. In the "Mirror Repository" of the "Cluster Configuration" page, select the mirror repository added in the first step, and create the cluster after completing other configurations of the cluster as needed.

Create a cluster using the cluster template

You can use cluster templates to simplify the cluster creation process.

  1. Add a template. There are two ways to save a template. You can add a cluster template on the "Cluster Management" > "Template Management" page, and select the template when creating a new cluster. You can also save the existing cluster configuration as a template by clicking "More" > "Save as Template" in the cluster operation, so as to create a K8S cluster with the same configuration as the former cluster.

  2. Create a cluster. Click "Cluster Management" > "Cluster" to enter the cluster list page, click the "Create Cluster" button in the upper left corner, enter the cluster creation page, fill in the "cluster name", such as "demo", select the cluster template saved in the first step, Add the required nodes, click the "Quick Create" button in the lower right corner, jump to the "Configuration Confirmation" page, after checking the template information, click the "OK" button to create a cluster.

Cluster Configuration Guide

Node configuration steps

On the node configuration page, you can configure the node as follows:

  • Region: The region to which the cluster belongs. When adding a node, a physical or logical region can be specified for it; a K8S cluster created from nodes in that region also belongs to the region. Creating a cluster with nodes from multiple regions is not supported.

  • Control Nodes: Specify an odd number of control nodes for the cluster. Production environments generally use 3 control nodes to achieve high availability.

  • Worker nodes: Add worker nodes to the new cluster according to the business size.

  • Taint management: You can configure taints for the added nodes. KubeClipper automatically adds a NoSchedule taint to the control nodes; you can change this as needed.

  • Node Labels: You can configure labels for the added cluster nodes as needed.

You can configure the required nodes according to your business needs. If you need to create a non-highly-available experimental cluster, you can add only one control node and remove the taint automatically added to it, as in the kubectl sketch below. For details, see "Creating a Single-Node Experimental Cluster".
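If you prefer working from the command line after the cluster is up, the same taint can be removed with kubectl (a hedged sketch; the taint key depends on the Kubernetes version, and <node-name> is a placeholder for your control node):

# Kubernetes v1.23 and earlier taint key; the trailing "-" removes the taint
kubectl taint nodes <node-name> node-role.kubernetes.io/master:NoSchedule-
# Kubernetes v1.24 and later taint key
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane:NoSchedule-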

Cluster configuration steps

On the cluster configuration page, you can configure the cluster as follows:

Installation method and mirror repository:

  • Online (public network environment), mirror repository empty:
    Configuration package source: download from kubeclipper.io.
    Image pull method: images are pulled from the official image repositories by default, e.g. k8s images from k8s.gcr.io and calico from docker.io.

  • Online (public network environment), mirror repository specified:
    Configuration package source: download from kubeclipper.io.
    Image pull method: pull from the specified mirror repository. Components inherit the repository address by default; please ensure that the repository contains the related component images. You can also set an independent mirror repository for a specific component, and that component's images will be pulled from it.

  • Offline (intranet environment), mirror repository empty:
    Configuration package source: download from the local kubeclipper cluster server nodes; you can use the "kcctl resource list" command to check the available configuration packages, or the "kcctl resource push" command to upload the required configuration packages.
    Image pull method: download from the local kubeclipper cluster server nodes; CRI imports the images after downloading. You can use the "kcctl resource list" command to check the available image packages, or the "kcctl resource push" command to upload the required image packages.

  • Offline (intranet environment), mirror repository specified:
    Configuration package source: download from the local kubeclipper cluster server nodes, as above.
    Image pull method: pull from the specified mirror repository. Components inherit the repository address by default; please ensure that the repository contains the related component images. You can also set an independent mirror repository for a specific component, and that component's images will be pulled from it. KubeClipper provides a Docker Registry managed with the kcctl registry command; you can also use your own image repositories.
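For the offline cases above, it can help to check what is already present on the server nodes before creating the cluster, for example:

# list the configuration packages and image packages available on the kc server nodes
kcctl resource list

Missing packages can then be uploaded with the kcctl resource push command mentioned above.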
  • K8S version: Specify the cluster K8S version. When you choose to install offline, you can choose from the K8S version of the configuration package in the current environment; when you choose to install online, you can choose from the officially recommended version of kubeclipper.

  • ETCD Data Dir: You can specify the ETCD data directory, the default is /var/lib/etcd.

  • CertSANs: IP addresses or domain names to be signed into the k8s cluster CA certificate; more than one can be filled in.

  • Container Runtime: Determined by the specified K8S version. The default container runtime is Docker for K8S versions before v1.20.0 and Containerd for v1.20.0 and later; Docker is not supported from v1.24.0 onward.

  • Container Runtime version: Specify the containerd/docker version. As with K8S, when you choose to install offline, you can choose from the version of the configuration package in the current environment; when you choose to install online, you can choose from the officially recommended version of kubeclipper.

  • Containerd data Path: The "root dir" in the config.toml configuration can be filled in. The default is /var/lib/containerd.

  • Docker data Path: The "root dir" in the daemon.json configuration can be filled in. The default is /var/lib/docker.

  • Containerd image repository: The repository address where the containerd image is stored, the "registry.mirrors" in the config.toml configuration, more than one can be filled in.

  • Docker image repository: The repository address where the Docker image is stored, the insecure registry in the daemon.json configuration, more than one can be filled in.

  • DNS domain name: The domain name of the k8s cluster, the default is cluster.local.

  • Worker load IP: Used for load balancing from worker nodes to multiple masters; it does not need to be set for a single master.

  • External access IP: You can fill in a floating IP for user access, which can be empty.

CNI configuration

The current version kubeclipper only supports Calico as cluster CNI.

Calico divides the pod CIDR set by users into several blocks (network segments), dynamically allocates them to the required nodes according to business requirements, and maintains the routing table of the cluster nodes through the bgp peer in the nodes.

For example, with a container address pool of 172.25.0.0/16 and dynamically allocated blocks of 172.25.0.0/26 (172.25.0.0 - 172.25.255.192, i.e. 10 bits of block index), there are 1023 dynamically allocable network segments with 61 pods per segment (.193-.254), for a total of 1023 * 61 = 62403 pods and a relative maximum of 312 nodes (using 200 service pods per node as the reference value).
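Spelled out, the arithmetic behind that example is roughly:

# /26 blocks carved out of a /16 pool: 2^(26-16) = 1024 (the example counts 1023 allocatable)
# usable pod addresses per /26 block: about 61
# total pod addresses: 1023 * 61 = 62403
# approximate node ceiling: 62403 / 200 ≈ 312 (using 200 pods per node as the reference value)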

Clusters larger than 50 nodes are currently not recommended; for clusters of that size, it is recommended to manually configure route reflectors to improve the stability of routing table maintenance across the cluster nodes.

To use Calico as the cluster CNI, you need the following configuration:

  • Calico mode: 5 network modes are supported:

    • Overlay-IPIP-All: Uses IP-in-IP encapsulation to connect the pods of different nodes. This mode is usually used when the underlying platform is IaaS. It can also be used when the underlying network is physical equipment, but efficiency and flexibility are reduced. Note that you need to confirm that the underlying network environment (underlay) supports the IPIP protocol. (Overlay networking has some impact on network performance.)
    • Overlay-Vxlan-All: Uses VXLAN encapsulation to connect the pods of different nodes. In theory it can run in any network environment; it is usually used when the underlying environment does not support the IPIP protocol. (Overlay networking has some impact on network performance.)
    • BGP: Connects the pods of different nodes without encapsulation; pod IP communication is accomplished by exchanging routing tables among the nodes in the cluster. This mode is usually used in bare metal environments, and can also be used if the IaaS platform supports BGP. If you need to manually connect the pod networks of multiple clusters, make sure the addresses you assign do not conflict.
    • Overlay-IPIP-Cross-Subnet: Uses IP-in-IP encapsulation to connect the pods of different nodes, usually when the underlying platform is IaaS. Note that you need to confirm that the underlay network supports the IPIP protocol. The difference from Overlay-IPIP-All is that pods on different nodes in the same subnet communicate directly through the routing table, which improves efficiency for traffic within a subnet.
    • Overlay-Vxlan-Cross-Subnet: The logic is similar to Overlay-IPIP-Cross-Subnet, but with VXLAN encapsulation.
  • IP version: The IP version can be specified as IPv4, or IPv4/IPv6 dual stack.

  • Service subnet: Fill in the service subnet CIDR, v4 defaults to: 10.96.0.0/16, v6 defaults to fd03::/112, note that the Service network must not overlap with any host network.

  • Pod CIDR: Fill in the pod subnet CIDR, v4 default: 172.25.0.0/24, v6 default is fd05::/120, note that the Pod network must not overlap with any host network.

  • The bottom layer of the pod network (how the routing address between nodes is detected):

    • First-found (default): Traverses all valid IP addresses according to the IP family (v4 or v6), automatically excluding local, loopback, docker bridge, and similar addresses. On hosts with multiple NICs, the default gateway NIC is usually excluded, and the IP of a NIC other than the gateway is used as the routing address between nodes.
    • Can-reach: Sets the routing address between nodes by checking the reachability of the given domain names or IP addresses.
    • Interface: Collects all NIC device names that match the regular expression and uses the address of the first matching NIC as the routing address between nodes.
  • MTU: Configure the maximum transmission unit (MTU) for the Calico environment. It is recommended to be no larger than 1440. The default is 1440. See https://docs.projectcalico.org/networking/mtu for details.

Storage configuration

The current version of KubeClipper supports NFS as an external storage type.

  • Connect to NFS storage

For NFS type external storage, you need to set the following:

  • ServerAddr: the service address of NFS. Required.
  • SharedPath: the service mount path of NFS. Required.
  • StorageClassName: the name of the storage class. The default is nfs-sc; the name can be customized and cannot duplicate other storage classes in the cluster.
  • ReclaimPolicy: the PV reclaim policy. Delete or Retain.
  • ArchiveOnDelete: whether to archive PVCs after deletion. Yes or No.
  • MountOptions: the options parameter of NFS, such as nfsvers=4.1. Optional; more than one can be filled in.
  • Replicas: the number of NFS provisioners. The default is 1.

By default, multiple NFS storage types can be connected. Click the "Continue to add" button below to add another NFS storage. Note that the storage class name cannot be repeated.

After setting up the external storage, the card below will show the storages you have enabled. You can choose a storage class as the default storage. For PVCs that do not specify a specific StorageClass, the default storage class will be used.
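The console handles the default-class selection for you; if you ever need to do the same thing by hand, the standard Kubernetes annotation can be applied with kubectl (a sketch assuming the storage class kept the default name nfs-sc mentioned above):

kubectl patch storageclass nfs-sc -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'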

Cluster operation log view

On the cluster details page, click the "operation log" tab to view the cluster operation log list. Click the "View Log" button on the right side of an operation log to view the detailed logs of all steps and nodes in the pop-up window. Click the step name on the left to view the detailed log output of the execution steps.

While a cluster operation is executing, you can click "View Log" to see real-time log updates and trace the operation's progress. For tasks that fail, you can use the log to find the execution steps and nodes marked with red dots, quickly locate errors, and troubleshoot the cause of the failure.

Access cluster kubectl

You can access kubectl for a running cluster: click "More" > "Connect Terminal" in the cluster operations, and execute kubectl commands in the cluster kubectl pop-up window.
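Typical commands to run in that terminal, for example:

# node status, Kubernetes versions and internal IPs
kubectl get nodes -o wide
# workloads across all namespaces
kubectl get pods -A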

Cluster plugin management

In addition to installing plugins when creating a cluster, you can also install plugins for a running cluster. Taking the installation of storage plugins as an example, click the "More" > "Add Storage Item" button in the cluster operation to enter the Add Storage Item page. You can install NFS plugins for the cluster. The installation configuration is the same as the configuration in cluster creation.

For installed plugins, you can view the plugin information on the cluster details page. You can click the "Save as Template" button in the upper right corner of the plugin card to save the plugin information as a template. You can also uninstall the cluster plugin by clicking the "Remove" button in the upper right corner of the plugin card.

Cluster node management

On the "Nodes" list page of the cluster detail page, you can view the list of nodes in the current cluster, their specifications, status and role information.

Add cluster node

When the cluster load is high, you can add nodes to the cluster to expand capacity. Adding nodes does not affect the running services.

On the cluster details page, under the Node List tab, click the "Add Node" button on the left, select the available nodes in the pop-up window, set the node labels, and click the "OK" button. The current version only supports adding worker nodes.

Remove cluster node

On the cluster details page, under the Node List tab, you can remove a node by clicking the "Remove" button on the right of the node. The current version only supports removing worker nodes.

Note: To remove cluster nodes, you need to pay attention to security issues in production to avoid application interruptions.

Cluster version upgrade

If the cluster version does not meet the requirements, you can upgrade the K8S version for the cluster. Similar to creating a cluster, you need to prepare the configuration package required for the cluster version and the K8S image of the target version and upload it to the specified location. For details, see "Preparing to Create a Cluster".

Click the "More" > "Cluster Upgrade" button of the cluster operation. In the cluster upgrade pop-up window, select the installation mode and mirror repository, and select the target version of the upgrade. The installation method of the upgrade and the configuration of the K8S version are the same as those of creating a cluster. For details, please refer to "Cluster Configuration Guide".

Cluster upgrades can be performed within a minor version or to the next minor version, but skipping minor versions is not supported. For example, you can upgrade from v1.20.2 to v1.20.13, or from v1.20.x to v1.21.x, but not from v1.20.x to v1.22.x. Upgrading from v1.23.x to v1.24.x is not currently supported.

The cluster upgrade operation may take a long time. You can view the operation log on the cluster details page to track the cluster upgrade status.

Cluster Backup and Recovery

KubeClipper's backup of a K8S cluster mainly backs up the ETCD database data, that is, k8s resource objects such as namespaces, deployments, and configMaps. The files and data generated by the resources themselves are not backed up; for example, the data and files produced by a mysql pod are not backed up. Similarly, for file-type PVs only the PV object is backed up, not the files beneath it. The backup function provided by KubeClipper is a hot backup and does not affect cluster usage while it runs; even so, backing up during the cluster's busy period is strongly discouraged.

Create a backup point

Before performing a backup, you need to set a backup point for the cluster, that is, the storage location of the backup files. The storage type of a backup point can be FS storage or S3 storage. The following uses node local storage, NFS storage, and MINIO storage as examples:

  • Node local storage (only for single-node experimental clusters):

  1. Create a storage directory. Connect to the cluster master node terminal (see Connect Nodes Terminal) and use the mkdir command to create the "/root/backup" directory in the master node.

  2. Create a backup point. Click "Cluster Management" > "Backup Point" to enter the backup point list page, click the "Create" button in the upper left corner, in the Create Backup Point pop-up window, enter "Backup Point Name", such as "local", select "Storage Type" as "FS", fill in "Backup Path", such as "/root/backup".

  3. Set up a cluster backup point. When creating a cluster, select "Backup Point" as "local" on the "Cluster Configuration" page, or edit an existing cluster and select "local" in the "Backup Point" pop-up.

Note: Using a local node to store backup files does not require external storage. The disadvantage is that if the local node is damaged, the backup files are lost as well, so this approach is strongly discouraged in production environments.

  • NFS:
  1. Prepare NFS storage. Prepare an NFS service and create a directory on the NFS server to store backup files, such as "/data/kubeclipper/cluster-backups".

  2. Mount the storage directory. Connect the cluster master node terminal (see Connect node Terminal), use the mkdir command to create the "/opt/kubeclipper/cluster-backups" directory on each master node, and mount the NFS server's /data/kubeclipper/cluster-backups directory onto it. Command example: mount -t nfs {NFS_IP}:/data/kubeclipper/cluster-backups /opt/kubeclipper/cluster-backups -o proto=tcp -o nolock (a consolidated command sketch follows the MINIO example below).

  3. Create a backup point. Click "Cluster Management" > "Backup Point" to enter the backup point list page, click the "Create" button in the upper left corner, in the Create Backup Point pop-up window, enter "Backup Point Name", such as "nfs", select "Storage Type" as "FS", fill in "Backup Path" as "/opt/kubeclipper/cluster-backups".

  4. Set up a cluster backup point. When creating a cluster, select "Backup Point" as "nfs" on the "Cluster Configuration" page, or edit an existing cluster and select "nfs" in the "Backup Point" pop-up.

  • MINIO:
  1. Prepare MINIO storage. Build MINIO services, refer to the official website https://docs.min.io/docs/minio-quickstart-guide.html for the deployment process, or use existing MINIO services.

  2. Create a backup point. Click "Cluster Management" > "Backup Point" to enter the backup point list page, click the "Create" button in the upper left corner, in the Create Backup Point pop-up window, enter "Backup Point Name", such as "minio", select "Storage Type" as "S3", fill in "bucket name", such as "kubeclipper-backups", the bucket will be automatically created by kubeclipper, fill in the IP and port number of the MINIO storage service in the first step in "Endpoint", fill in the service username and password, click the "OK" button.

  3. Set up a cluster backup point. When creating a cluster, select "backup point" as "minio" on the "Cluster Configuration" page, or edit an existing cluster and select "minio" in the "Backup Point" pop-up.
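The NFS preparation from the steps above can be summarized in the following command sketch (placeholders: {NFS_IP} is your NFS server address; the exported and local paths are the example paths used above):

# on each master node: create the local mount point for backup files
mkdir -p /opt/kubeclipper/cluster-backups
# mount the directory exported by the NFS server onto it
mount -t nfs {NFS_IP}:/data/kubeclipper/cluster-backups /opt/kubeclipper/cluster-backups -o proto=tcp -o nolock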

You can view the list and details of all backup points on the "Backup Points" page of "Cluster Management" and do the following:

  • Edit: Edit the backup point description, and the username/password of the S3 type backup point.

  • Delete: Delete the backup point. If there are backup files under the backup point, deletion is not allowed.

Cluster backup

You can back up your cluster ETCD data by clicking the "More" > "Cluster Backup" button in the cluster operation.

You can view all backup files for the current cluster under the Backup tab on the cluster details page, and you can also perform the following operations for backups:

  • Edit: Edit the backup description.

  • Restore: Performs a cluster restore operation to restore the cluster to the specified backup state.

  • Delete: Deletes the backup file.

Scheduled backup

You can also create a scheduled backup for the cluster: click the "More" > "Scheduled Backup" button in the cluster operations; in the scheduled backup pop-up window, enter the scheduled backup name, the execution type (repeat / only once), and the execution time, set the number of valid backups for repeated scheduled backups, and click the "OK" button.

KubeClipper will perform backup tasks for the cluster at the execution time you set, and the backup file will be automatically named "Cluster Name - Scheduled Backup Name - Random Code". For repeated scheduled backups, when the number of backup files exceeds the number of valid backups, KubeClipper automatically deletes the oldest backup files.

After the scheduled backup is added, you can view the scheduled backup information on the "Scheduled Backup" tab of the cluster details page, and you can also view the backup files generated by the scheduled backup on the "Backup" tab.

For scheduled backup tasks, you can also perform the following operations:

  • Edit: Edit the execution time of the scheduled backup task and the number of valid backups for repeated scheduled backups.

  • Enable/Disable: Disabled scheduled backup tasks are temporarily stopped.

  • Delete: Deletes a scheduled backup task.

Cluster Backup Restore

If you perform a restore operation while the cluster is running, KubeClipper performs an overlay recovery on the cluster, that is, it restores the ETCD data from the backup file and overwrites the existing data.

You can click the "Restore" button on the right side of the backup under the Backup tab of the cluster details page; or click the "More" > "Restore Cluster" button in the cluster operation, and select the backup to be restored in the Restore Cluster pop-up window. The current cluster can be restored to the specified backup state.

Note: After the K8S version of the cluster is upgraded, it will no longer be possible to restore the backup to the pre-upgrade version.

4.2 - Node & Zone Management

Manage the nodes and regions of the KubeClipper platform.

On the "Node Information" page, you can view the list of all nodes managed in the platform, node specifications, status and other information. Click the node name to enter the node details page, you can view detailed node basic information and system information.

The node status in KubeClipper represents the management status of the node by kc-agent. Under normal circumstances, the node status is displayed as "Ready". When a node has been out of contact for 4 minutes (with an error margin of about 10s), the status is updated to "Unknown". Nodes in the Unknown state cannot perform any operations, nor can they be used to create clusters or be added to or removed from clusters.

Add node

When deploying KubeClipper, you can add the initial server nodes which are used to deploy KubeClipper's own services, and agent nodes which are used to deploy K8S clusters. In a KubeClipper environment for experimentation or development, you can add a server node as an agent node at the same time. However, if it is used in a production environment, it is recommended not to reuse the server node as an agent node.

You can also use the kcctl join command to add agent nodes to KubeClipper, and mark a region for each agent node. The region can be a physical or logical location. You can use nodes in the same region to create a K8S cluster, but you cannot use nodes across regions to create a cluster. Nodes in unmarked regions belong to the default region. For details, see "Kcctl Operation Guide".

Command line example:

kcctl join --agent beijing:1.2.3.4 --agent shanghai:2.3.4.5

Remove node

When you no longer need some nodes, you can use the kcctl drain command to remove nodes from the platform. See "Kcctl Operation Guide" for details.

Command line example:

kcctl drain --agent 192.168.10.19

Connect Terminal

On the node list page, you can click the "Connect Terminal" button on the right side of the target node, enter the node port, username, and password in the pop-up window, and then access the node's SSH console and execute commands.

4.3 - Access control

Manage platform users, roles, and external (OIDC) authentication.

Create user

After installing KubeClipper, you need to create users with the desired roles. Initially, the system has only one default user, admin, with the platform administrator role.

Click "Access Control" > "Users" to enter the user management page, click the "Create User" button in the upper left corner, fill in the user name, password, mobile phone number, email and other information in the pop-up window, specify the user role, and click the "OK" button. The four built-in roles in the system are as follows:

  • Platform administrator: has platform configuration, cluster management, user management, and other platform viewing and operation rights.

  • Cluster Administrator: has all cluster management rights.

  • User Administrator: has all user management rights.

  • Platform read-only user: has platform viewing rights.

After the user is created, you can view the user details and login logs on the user details page and do the following:

  • Edit: Edit user alias, role, mobile phone number, email information.

  • Edit Password: Edit the user login password.

  • Delete: Delete the user.

Create a custom role

In addition to system built-in roles, you can also create custom roles to meet business needs.

Click "Access Control" > "Roles" to enter the role management page. You can click the "Create Role" button in the upper left corner to create a custom role.

On the Create Role page, you need to fill in the role name and description, and check the permissions required to customize the role. Some permissions depend on other permissions. When these permissions are selected, the dependent permissions will be automatically selected.

After creating a custom role, you can view the basic role information, role permission list, authorized user list on the role details page, and perform the following operations for the custom role:

  • Edit: Edit the custom role alias.

  • Edit permissions: Edit permissions under the custom role.

  • Delete: To delete a custom role, make sure that no user is using the role to be deleted.

Access to external users

KubeClipper supports login by external users via the OIDC protocol.

First, the platform administrator needs to log in to the platform server node and insert the following information under authentication in the kubeclipper-server.yaml file:

oauthOptions:
    identityProviders:
    - name: keycloak
      type: OIDC
      mappingMethod: auto
      provider:
        clientID: kc
        clientSecret: EErn729BB1bKawdRtnZgrqj9Bx0]mzUs
        issuer: http://172.20.163.233:7777/auth/realms/kubeclipper
        scopes:
        - openid
        - email
        redirectURL: http://{kc-console-address}/oauth2/redirect/{IDP-Name}

Under "provider", you need to fill in the clientID , clientSecret , and issuer information of your OAuth2 service, taking keycloack as an example, as shown in the figure below.

RedirectURL example: http://172.0.0.90/oauth2/redirect/keycloak

OAuth2 users can access and use the KubeClipper platform by following these steps:

  1. Click the "OAuth2 Login" button on the login page, enter the OAuth2 login page, enter the username and password to log in, enter the KubeClipper platform. When logging in for the first time, you will not be able to access the platform because you have not been granted any permission.

  2. The platform administrator or other user with user management rights log in to KubeClipper, find the target OAuth2 user on the user management page, and set the user role by editing the user information.

  3. The OAuth2 user repeats the first step, logs in to KubeClipper, and can access the platform normally and perform operations within the role permissions.

5 - Contribution Guidelines

How to contribute to the docs

These basic sample guidelines assume that your Docsy site is deployed using Netlify and your files are stored in GitHub. You can use the guidelines “as is” or adapt them with your own instructions: for example, other deployment options, information about your doc project’s file structure, project-specific review guidelines, versioning guidelines, or any other information your users might find useful when updating your site. Kubeflow has a great example.

Don’t forget to link to your own doc repo rather than our example site! Also make sure users can find these guidelines from your doc repo README: either add them there and link to them from this page, add them here and link to them from the README, or include them in both locations.

We use Hugo to format and generate our website, the Docsy theme for styling and site structure, and Netlify to manage the deployment of the site. Hugo is an open-source static site generator that provides us with templates, content organisation in a standard directory structure, and a website generation engine. You write the pages in Markdown (or HTML if you want), and Hugo wraps them up into a website.

All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult GitHub Help for more information on using pull requests.

Quick start with Netlify

Here’s a quick guide to updating the docs. It assumes you’re familiar with the GitHub workflow and you’re happy to use the automated preview of your doc updates:

  1. Fork the Goldydocs repo on GitHub.
  2. Make your changes and send a pull request (PR).
  3. If you’re not yet ready for a review, add “WIP” to the PR name to indicate it’s a work in progress. (Don’t add the Hugo property “draft = true” to the page front matter, because that prevents the auto-deployment of the content preview described in the next point.)
  4. Wait for the automated PR workflow to do some checks. When it’s ready, you should see a comment like this: deploy/netlify — Deploy preview ready!
  5. Click Details to the right of “Deploy preview ready” to see a preview of your updates.
  6. Continue updating your doc and pushing your changes until you’re happy with the content.
  7. When you’re ready for a review, add a comment to the PR, and remove any “WIP” markers.

Updating a single page

If you’ve just spotted something you’d like to change while using the docs, Docsy has a shortcut for you:

  1. Click Edit this page in the top right hand corner of the page.
  2. If you don’t already have an up to date fork of the project repo, you are prompted to get one - click Fork this repository and propose changes or Update your Fork to get an up to date version of the project to edit. The appropriate page in your fork is displayed in edit mode.
  3. Follow the rest of the Quick start with Netlify process above to make, preview, and propose your changes.

Previewing your changes locally

If you want to run your own local Hugo server to preview your changes as you work:

  1. Follow the instructions in Getting started to install Hugo and any other tools you need. You’ll need at least Hugo version 0.45 (we recommend using the most recent available version), and it must be the extended version, which supports SCSS.

  2. Fork the Goldydocs repo into your own project, then create a local copy using git clone. Don’t forget to use --recurse-submodules or you won’t pull down some of the code you need to generate a working site.

    git clone --recurse-submodules --depth 1 https://github.com/google/docsy-example.git
    
  3. Run hugo server in the site root directory. By default your site will be available at http://localhost:1313/. Now that you’re serving your site locally, Hugo will watch for changes to the content and automatically refresh your site.

  4. Continue with the usual GitHub workflow to edit files, commit them, push the changes up to your fork, and create a pull request.

Creating an issue

If you’ve found a problem in the docs, but you’re not sure how to fix it yourself, please create an issue in the Goldydocs repo. You can also create an issue about a specific page by clicking the Create Issue button in the top right hand corner of the page.

Useful resources