Configure High Availability
This section explains how to configure multiple control plane nodes to achieve high availability for a production KubeSphere cluster, so that cluster services remain operational even if a single control plane node fails. If your KubeSphere cluster does not require high availability, you can skip this section.
Note

The high availability configuration for KubeSphere is only supported when installing Kubernetes and KubeSphere together. If you are installing KubeSphere on an existing Kubernetes cluster, KubeSphere will utilize the existing high availability configuration of the Kubernetes cluster.
This section explains the following methods for configuring high availability:
- Local Load Balancer Configuration: You can have the KubeKey tool install HAProxy on the worker nodes during the KubeSphere installation process. HAProxy acts as a reverse proxy for the control plane nodes, and the Kubernetes components on the worker nodes connect to the control plane nodes through it. This method requires additional health check mechanisms and may be less efficient than the other methods, but it can be used in scenarios without a dedicated load balancer and with a limited number of servers.
- Dedicated Load Balancer: You can use a load balancer provided by your cloud environment as a reverse proxy for the control plane nodes. This method requires deploying the KubeSphere cluster in a cloud environment that offers a dedicated load balancer.
- Generic Servers as Load Balancers: You can install Keepalived and HAProxy on Linux servers outside the cluster nodes to act as load balancers. This method requires at least two additional Linux servers.
Local Load Balancer Configuration
To use HAProxy for high availability, you need to configure the following parameters in the installation configuration file config-sample.yaml during the installation of KubeSphere:
spec:
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
KubeKey will automatically install HAProxy on the worker nodes and complete the high availability configuration, requiring no additional actions. For more information, please refer to Install Kubernetes and KubeSphere.
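After the installation completes, you can optionally verify that the internal load balancer is running. This is a minimal check, assuming the default KubeKey behavior of running HAProxy as static pods in the kube-system namespace on the worker nodes (exact pod names may vary between KubeKey versions):

kubectl -n kube-system get pods -o wide | grep haproxy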
Dedicated Load Balancer
To achieve high availability using a dedicated load balancer provided by your cloud environment, you need to perform the following steps within your cloud environment:
- Create a load balancer with a minimum of two replicas in your cloud environment.
- Configure the load balancer to listen on port 6443 of each control plane node in the KubeSphere cluster.
- Obtain the IP address of the load balancer for future use during the installation of KubeSphere.
For specific instructions, please refer to the user guide of your cloud environment or contact your cloud service provider.
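During the installation of KubeSphere, the load balancer address is then set in config-sample.yaml. The following is a minimal sketch based on the controlPlaneEndpoint block shown earlier; <load balancer IP address> stands for the address obtained in the previous step, and internalLoadbalancer is not set because an external load balancer is used:

spec:
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "<load balancer IP address>"
    port: 6443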
Generic Servers as Load Balancers
The following describes how to configure generic servers as load balancers using Keepalived and HAProxy.
Prerequisites
- Prepare two Linux servers on the same private network as the cluster nodes to serve as load balancers.
- Prepare a virtual IP address (VIP) to serve as the floating IP address for the two load balancer servers. This address must not be in use by any other device or component, to avoid address conflicts.
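Before continuing, you can check that the planned VIP is not already in use. The following is a minimal check, assuming that hosts on the network respond to ICMP (a host that silently drops ICMP will not be detected this way):

ping -c 3 <floating IP address>

If the address is free, the command should report 100% packet loss.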
Configure High Availability
- Log in to the server that will be used as the load balancer and execute the following command to install HAProxy, Keepalived, and psmisc (psmisc provides the killall command used by the Keepalived health check; the example assumes Ubuntu, so replace apt with the corresponding package manager for other operating systems):
apt install keepalived haproxy psmisc -y
- Execute the following command to edit the HAProxy configuration file:
vi /etc/haproxy/haproxy.cfg
- Add the following information to the HAProxy configuration file and save the file (replace each <IP address> with the private IP address of a control plane node in the KubeSphere cluster):
global
    log /dev/log local0 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server kube-apiserver-1 <IP address>:6443 check
    server kube-apiserver-2 <IP address>:6443 check
    server kube-apiserver-3 <IP address>:6443 check
- Execute the following command to restart HAProxy:
systemctl restart haproxy
- Execute the following command to set HAProxy to run automatically on startup:
systemctl enable haproxy
- Execute the following command to edit the Keepalived configuration file:
vi /etc/keepalived/keepalived.conf
- Add the following information to the Keepalived configuration file and save the file (the chk_haproxy health check is explained after this procedure):
global_defs {
    notification_email {
    }
    router_id LVS_DEVEL
    vrrp_skip_check_adv_addr
    vrrp_garp_interval 0
    vrrp_gna_interval 0
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 100
    interface <NIC>
    virtual_router_id 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_src_ip <source IP address>
    unicast_peer {
        <peer IP address>
    }
    virtual_ipaddress {
        <floating IP address>
    }
    track_script {
        chk_haproxy
    }
}
Replace the following parameters with actual values:
| Parameter | Description |
| --- | --- |
| <NIC> | The network interface card (NIC) of the current load balancer. |
| <source IP address> | The IP address of the current load balancer. |
| <peer IP address> | The IP address of the other load balancer. |
| <floating IP address> | The virtual IP address used as the floating IP address. |
- Execute the following command to restart Keepalived:
systemctl restart keepalived
- Execute the following command to set Keepalived to run automatically on startup:
systemctl enable keepalived
- Repeat the above steps to install and configure HAProxy and Keepalived on the other load balancer server.
- Record the floating IP address for future use during the installation of KubeSphere.
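As noted in the Keepalived configuration step, the chk_haproxy health check relies on killall -0 haproxy, where killall comes from the psmisc package installed in the first step. Signal 0 sends no actual signal; it only tests whether a process named haproxy exists. Keepalived runs this check every 2 seconds (interval 2) and moves the floating IP address to the other server when the check fails. You can run the same test manually to verify the mechanism:

killall -0 haproxy && echo "haproxy is running" || echo "haproxy is not running"

When you later install KubeSphere, the recorded floating IP address is used as controlPlaneEndpoint.address in config-sample.yaml, in the same way as the dedicated load balancer sketch shown earlier.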
Verify High Availability
- Log in to the first load balancer server and execute the following command to check the floating IP address:
ip a s
If the system’s high availability is functioning properly, the configured floating IP address will be displayed in the command output. For example, in the following command output, inet 172.16.0.10/24 scope global secondary eth0 indicates that the floating IP address is bound to the eth0 network interface:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:9e:27:38:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.2/24 brd 172.16.0.255 scope global noprefixroute dynamic eth0
       valid_lft 73334sec preferred_lft 73334sec
    inet 172.16.0.10/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
- Execute the following command to simulate a failure on the current load balancer server:
systemctl stop haproxy
- Execute the following command again to check the floating IP address:
ip a s
If the system’s high availability is functioning properly, the command output will no longer display the floating IP address, as shown in the following command output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:9e:27:38:c8 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.2/24 brd 172.16.0.255 scope global noprefixroute dynamic eth0
       valid_lft 72802sec preferred_lft 72802sec
    inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
- Log in to the other load balancer server and execute the following command to view the floating IP address:
ip a s
If the system’s high availability is functioning properly, the configured floating IP address will be displayed in the command output. For example, in the following command output, inet 172.16.0.10/24 scope global secondary eth0 indicates that the floating IP address is bound to the eth0 network interface:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 52:54:9e:3f:51:ba brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.3/24 brd 172.16.0.255 scope global noprefixroute dynamic eth0
       valid_lft 72690sec preferred_lft 72690sec
    inet 172.16.0.10/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f67c:bd4f:d6d5:1d9b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
- Execute the following command on the first load balancer server to start HAProxy again:
systemctl start haproxy
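Optionally, you can also confirm that HAProxy is accepting connections through the floating IP address. The following is a minimal sketch; the curl check only returns a result after the KubeSphere cluster has been installed, because it depends on kube-apiserver running on the control plane nodes:

# Confirm that HAProxy is listening on port 6443 on this server
ss -tlnp | grep 6443
# After the cluster is installed, the Kubernetes version information
# should be reachable through the floating IP address
curl -k https://<floating IP address>:6443/version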