Kubernetes 1.20.x Binary Deployment (manual-installation-v1.20.x)

Tags: Linux basics, operations, CentOS, cloud computing, Docker

Resource Planning

Hostname       IP Address      Spec        Role           OS Version
k8s-master01   192.168.0.201   2C2G        master/Work    CentOS 7.6
k8s-master02   192.168.0.202   2C2G        master/Work    CentOS 7.6
k8s-master03   192.168.0.203   2C2G        master/Work    CentOS 7.6
k8s-master-lb  192.168.0.200   none (VIP)  k8s-master-lb  CentOS 7.6
k8s-node01     192.168.0.204   2C2G        Work           CentOS 7.6
k8s-node02     192.168.0.205   2C2G        Work           CentOS 7.6

Version Information

Item          Value
Docker        19.03.x
Kubernetes    1.20.x
Pod CIDR      172.16.0.0/12
Service CIDR  10.96.0.0/12

1. Node Initialization

1.1 Configure yum repositories -- all nodes

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

1.2 Install base tools -- all nodes

yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y

1.3 Disable firewalld, SELinux, dnsmasq, and NetworkManager -- all nodes

systemctl disable --now firewalld 
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.4 Disable swap -- all nodes

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

1.5 Install ntpdate for time synchronization -- all nodes

rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y

1.6 Synchronize time -- all nodes

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

1.7 Add a crontab entry -- all nodes

crontab -e
*/5 * * * * ntpdate time2.aliyun.com

1.8 Configure resource limits -- all nodes

ulimit -SHn 65535

vim /etc/security/limits.conf              # append the following at the end
* soft nofile 65536
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

1.9 Configure local hosts entries or internal DNS (internal DNS recommended) -- all nodes

vim /etc/hosts
192.168.0.201 k8s-master01
192.168.0.202 k8s-master02
192.168.0.203 k8s-master03
192.168.0.200 k8s-master-lb                    # for a non-HA cluster, use Master01's IP here
192.168.0.204 k8s-node01
192.168.0.205 k8s-node02

1.10 Set up passwordless SSH -- master01 node

ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i $i;done

Master01 logs in to the other nodes without a password. All configuration files and certificates generated during installation are created on Master01, and cluster administration is also done from Master01. On Alibaba Cloud or AWS a separate kubectl host is needed, because a backend server behind an Alibaba Cloud SLB is not allowed to connect back to the SLB itself (the loopback problem); alternatively, put a layer of haproxy under the SLB to work around it.
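
Before moving on, a quick sanity check can confirm the key really works everywhere; a minimal sketch (hostnames as planned above), where -o BatchMode=yes makes ssh fail instead of falling back to a password prompt:

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  ssh -o BatchMode=yes $i hostname || echo "passwordless SSH to $i is NOT working"
done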

1.11 Download all required source files on Master01 -- master01 node

mkdir -p /opt/kubernetes/
cd /opt/kubernetes/ && git clone https://github.com/dotbalo/k8s-ha-install.git

1.12 Upgrade the system -- all nodes

yum update -y --exclude=kernel*    # required on CentOS 7; upgrade CentOS 8 as needed
reboot  

1.13 Upgrade the kernel -- master01 node

cd /opt/ && wget http://mirror.apad.pro/centos7/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm && wget http://mirror.apad.pro/centos7/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm

1.13.1 Copy the packages to the other nodes -- master01 node

for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/opt/; done

1.13.2 Install the kernel -- master01 node

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i "cd /opt/ && yum localinstall -y kernel-ml*"; done

1.13.3 Change the kernel boot order -- all nodes

grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg 
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"

1.13.4 Check the default kernel -- all nodes

[root@k8s-master02 opt]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

1.13.5 Reboot and verify -- all nodes

reboot
uname -a

1.14 Install ipvsadm -- all nodes

yum install ipvsadm ipset sysstat conntrack libseccomp -y

1.15 Configure IPVS modules -- all nodes

On kernel 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack; on kernels below 4.19, use nf_conntrack_ipv4:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
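
If the same script has to run on mixed kernel versions, the conntrack module can be chosen at run time; a hedged sketch (assumes uname -r reports a dotted version and coreutils sort -V is available):

# load nf_conntrack on 4.19+, nf_conntrack_ipv4 on older kernels
KVER=$(uname -r | cut -d. -f1-2)
if [ "$(printf '%s\n' 4.19 "$KVER" | sort -V | head -n1)" = "4.19" ]; then
    modprobe -- nf_conntrack
else
    modprobe -- nf_conntrack_ipv4
fi
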
vim /etc/modules-load.d/ipvs.conf # add the following (use nf_conntrack_ipv4 instead of nf_conntrack on kernels below 4.19)
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

1.15.1 Reboot and confirm the modules load -- all nodes

reboot
lsmod | grep -e ip_vs -e nf_conntrack

1.15.2 Tune kernel parameters -- all nodes

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

sysctl --system

2. Base Component Deployment

2.1 Install Docker -- all nodes

yum install docker-ce-19.03.* -y

Newer kubelet versions expect systemd, so change Docker's cgroup driver to systemd:

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
	"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

2.1.2 Enable Docker at boot -- all nodes

systemctl daemon-reload && systemctl enable --now docker
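
After the restart it is worth confirming that Docker picked up the systemd cgroup driver:

docker info --format '{{.CgroupDriver}}'    # should print: systemd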

3. Download Installation Packages

etcd releases:

https://github.com/etcd-io/etcd/blob/main/CHANGELOG-3.4.md

Kubernetes changelog:

https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG

3.1 Create a directory for the packages -- master01 node

mkdir -p  /opt/k8s/src && cd  /opt/k8s/src

3.2 Download the Kubernetes and etcd packages -- master01 node

wget https://dl.k8s.io/v1.20.7/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz

3.3 Extract the packages -- master01 node

tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
tar xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

3.4 Check the etcd version -- master01 node

[root@k8s-master01 ~]# etcd --version
etcd Version: 3.4.13
Git SHA: ae9734ed2
Go Version: go1.12.17
Go OS/Arch: linux/amd64

3.5 Check the Kubernetes version -- master01 node

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.7

3.6 Copy the components to the other nodes -- master01 node

MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/; done

3.7 Create the /opt/cni/bin directory -- all nodes

mkdir -p /opt/cni/bin

# on master01: switch the repo to the 1.20.x branch (other versions have their own branches)
cd /opt/kubernetes/k8s-ha-install/
git checkout manual-installation-v1.20.x

4. Generate Certificates

This is the most critical step of a binary installation. Download the certificate tooling on Master01.

4.1 Download the certificate tools -- master01 node

wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson 

4.2 Create the etcd certificate directory -- all nodes

mkdir -p /etc/etcd/ssl 

4.3 Create the Kubernetes directories -- all nodes

mkdir -p /etc/kubernetes/pki

4.4 Generate etcd certificates -- master01 node

cd /opt/kubernetes/k8s-ha-install/pki/

Generate the CA certificate and its key:
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

Issue the etcd client certificate; you can add extra IPs to -hostname= here to make future scale-out easier:
[root@k8s-master01 pki]# cfssl gencert -ca=/etc/etcd/ssl/etcd-ca.pem -ca-key=/etc/etcd/ssl/etcd-ca-key.pem -config=ca-config.json -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.0.201,192.168.0.202,192.168.0.203 -profile=kubernetes etcd-csr.json |cfssljson -bare /etc/etcd/ssl/etcd
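
To confirm the extra hostnames and IPs made it into the certificate, the SANs can be inspected with openssl:

openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'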

4.5 Copy the certificates to the other nodes -- master01 node

[root@k8s-master01 pki]# MasterNodes='k8s-master02 k8s-master03'
[root@k8s-master01 pki]# for NODE in $MasterNodes; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

4.6 Generate Kubernetes certificates -- master01 node

[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

10.96.0.0/12 is the Kubernetes Service CIDR; if you need a different Service CIDR, change the corresponding IPs here (10.96.0.1 is the first address of the range):
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,192.168.0.200,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.0.201,192.168.0.202,192.168.0.203 -profile=kubernetes apiserver-csr.json |cfssljson -bare /etc/kubernetes/pki/apiserver

Generate the apiserver aggregation (front-proxy) certificates:
[root@k8s-master01 pki]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json |cfssljson -bare /etc/kubernetes/pki/front-proxy-client


Generate the controller-manager certificate:
[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes manager-csr.json |cfssljson -bare /etc/kubernetes/pki/controller-manager

Set the cluster entry:
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.200:8443 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Set the user entry:
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/controller-manager.pem --client-key=/etc/kubernetes/pki/controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Set a context:
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Use that context as the default:
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes scheduler-csr.json |cfssljson -bare /etc/kubernetes/pki/scheduler

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.200:8443 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/etc/kubernetes/pki/scheduler.pem --client-key=/etc/kubernetes/pki/scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/etc/kubernetes/scheduler.kubeconfig

[root@k8s-master01 pki]# cfssl gencert \
-ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://192.168.0.200:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin \
--client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig


[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=/etc/kubernetes/admin.kubeconfig

4.7 Create the ServiceAccount key pair -- master01 node

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

4.8 Copy certificates to the other master nodes -- master01 node

[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do for FILE in $(ls /etc/kubernetes/pki|grep -v etcd); do scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done; for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE};done; done

4.9 Inspect the certificate files -- master01 node

[root@k8s-master01 pki]# ll /etc/kubernetes/pki/
total 92
-rw-r--r--. 1 root root 1025 Jun 28 19:12 admin.csr
-rw-------. 1 root root 1679 Jun 28 19:12 admin-key.pem
-rw-r--r--. 1 root root 1444 Jun 28 19:12 admin.pem
-rw-r--r--. 1 root root 1029 Jun 28 19:10 apiserver.csr
-rw-------. 1 root root 1675 Jun 28 19:10 apiserver-key.pem
-rw-r--r--. 1 root root 1692 Jun 28 19:10 apiserver.pem
-rw-r--r--. 1 root root 1025 Jun 28 19:10 ca.csr
-rw-------. 1 root root 1679 Jun 28 19:10 ca-key.pem
-rw-r--r--. 1 root root 1411 Jun 28 19:10 ca.pem
-rw-r--r--. 1 root root 1082 Jun 28 19:10 controller-manager.csr
-rw-------. 1 root root 1675 Jun 28 19:10 controller-manager-key.pem
-rw-r--r--. 1 root root 1501 Jun 28 19:10 controller-manager.pem
-rw-r--r--. 1 root root  891 Jun 28 19:10 front-proxy-ca.csr
-rw-------. 1 root root 1679 Jun 28 19:10 front-proxy-ca-key.pem
-rw-r--r--. 1 root root 1143 Jun 28 19:10 front-proxy-ca.pem
-rw-r--r--. 1 root root  903 Jun 28 19:10 front-proxy-client.csr
-rw-------. 1 root root 1679 Jun 28 19:10 front-proxy-client-key.pem
-rw-r--r--. 1 root root 1188 Jun 28 19:10 front-proxy-client.pem
-rw-r--r--. 1 root root 1679 Jun 28 19:17 sa.key
-rw-r--r--. 1 root root  451 Jun 28 19:17 sa.pub
-rw-r--r--. 1 root root 1058 Jun 28 19:12 scheduler.csr
-rw-------. 1 root root 1679 Jun 28 19:12 scheduler-key.pem
-rw-r--r--. 1 root root 1476 Jun 28 19:12 scheduler.pem

5. Kubernetes System Component Configuration

5.1 etcd configuration -- all three master nodes

The etcd configuration is almost identical on the three masters; change the member name and the node-local IP addresses in each file (a generation sketch follows the three files below).

5.1.1 master01

vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: https://192.168.0.201:2380
listen-client-urls: https://192.168.0.201:2379,https://127.0.0.1:2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: https://192.168.0.201:2380
advertise-client-urls: https://192.168.0.201:2379
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: k8s-master01=https://192.168.0.201:2380,k8s-master02=https://192.168.0.202:2380,k8s-master03=https://192.168.0.203:2380
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
peer-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"

5.1.2 master02

vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: https://192.168.0.202:2380
listen-client-urls: https://192.168.0.202:2379,https://127.0.0.1:2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: https://192.168.0.202:2380
advertise-client-urls: https://192.168.0.202:2379
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: k8s-master01=https://192.168.0.201:2380,k8s-master02=https://192.168.0.202:2380,k8s-master03=https://192.168.0.203:2380
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
peer-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"

5.1.3 master03

vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: https://192.168.0.203:2380
listen-client-urls: https://192.168.0.203:2379,https://127.0.0.1:2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: https://192.168.0.203:2380
advertise-client-urls: https://192.168.0.203:2379
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: k8s-master01=https://192.168.0.201:2380,k8s-master02=https://192.168.0.202:2380,k8s-master03=https://192.168.0.203:2380
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
peer-transport-security:
 cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
 key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
 client-cert-auth: true
 trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
 auto-tls: true
debug: false
log-outputs: [default]
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
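
Since only the member name and the four node-local URLs differ between the three files, master02's and master03's configs can also be rendered from master01's copy. One possible helper (an illustrative sketch, run on master01; it uses line-anchored sed so the shared initial-cluster line is left untouched):

for pair in k8s-master02:192.168.0.202 k8s-master03:192.168.0.203; do
  NODE=${pair%%:*}; IP=${pair##*:}
  sed -e "s/^name: .*/name: '${NODE}'/" \
      -e "s#^listen-peer-urls: .*#listen-peer-urls: https://${IP}:2380#" \
      -e "s#^listen-client-urls: .*#listen-client-urls: https://${IP}:2379,https://127.0.0.1:2379#" \
      -e "s#^initial-advertise-peer-urls: .*#initial-advertise-peer-urls: https://${IP}:2380#" \
      -e "s#^advertise-client-urls: .*#advertise-client-urls: https://${IP}:2379#" \
      /etc/etcd/etcd.config.yml > /tmp/etcd.config.${NODE}.yml
  scp /tmp/etcd.config.${NODE}.yml ${NODE}:/etc/etcd/etcd.config.yml
done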

5.2 Create the etcd service unit -- all master nodes

[root@k8s-master01 pki]# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd.service

5.3 Create the etcd certificate directory, symlink the certs, and start etcd -- all master nodes

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload && systemctl enable --now etcd

5.4 Check etcd status -- all master nodes

[root@k8s-master01 pki]# etcdctl --endpoints="192.168.0.201:2379,192.168.0.202:2379,192.168.0.203:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.0.201:2379 | 6aa96476e0be7503 |  3.4.13 |   16 kB |      true |      false |       942 |         17 |                 17 |        |
| 192.168.0.202:2379 | c27d70d097466c8c |  3.4.13 |   20 kB |     false |      false |       942 |         17 |                 17 |        |
| 192.168.0.203:2379 | ba41e7802bd3a7a0 |  3.4.13 |   20 kB |     false |      false |       942 |         17 |                 17 |        |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

6. Configure High Availability -- all master nodes

HA configuration. Note: haproxy and keepalived are not needed for a non-HA cluster.
Skip this chapter when installing on a public cloud as well; use the cloud load balancer instead, e.g. Alibaba Cloud SLB or Tencent Cloud ELB.

6.1 Install haproxy -- all master nodes

yum install haproxy -y

6.2 Configure HAProxy -- all master nodes

vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend k8s-master
  bind 0.0.0.0:8443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01	192.168.0.201:6443  check
  server k8s-master02	192.168.0.202:6443  check
  server k8s-master03	192.168.0.203:6443  check

6.2.1 Copy the config to the other nodes -- master01 node

MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do scp -rp /etc/haproxy/haproxy.cfg $NODE:/etc/haproxy/haproxy.cfg;done

6.3 Install keepalived -- all master nodes

yum install keepalived -y

6.4 Configure keepalived -- all master nodes

Configure keepalived on all master nodes. The configuration differs per node; pay attention to each node's IP and NIC (the interface parameter).

6.4.1 Configure the keepalived MASTER -- master01 node

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 192.168.0.201
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.200
    }
    track_script {
        chk_apiserver
    }
}

6.4.2 Configure the keepalived BACKUP -- master02 node

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.0.202
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.200
    }
    track_script {
        chk_apiserver
    }
}

6.4.3 Configure the keepalived BACKUP -- master03 node

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 192.168.0.203
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.0.200
    }
    track_script {
        chk_apiserver
    }
}

6.5 Create the keepalived health-check script -- all master nodes

vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

# probe the local haproxy process up to 3 times; if it is gone,
# stop keepalived so the VIP fails over to another master
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi

6.5.1 Make the script executable -- all master nodes

chmod +x /etc/keepalived/check_apiserver.sh

6.5.2 Push the script to the other master nodes -- master01 node

MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do scp -rp /etc/keepalived/check_apiserver.sh $NODE:/etc/keepalived/check_apiserver.sh;done

6.6 Start haproxy and keepalived -- all master nodes

systemctl daemon-reload
systemctl enable --now haproxy
systemctl enable --now keepalived

6.6.1 Test that keepalived works

[root@k8s-master02]# ping 192.168.0.200
PING 192.168.0.200 (192.168.0.200) 56(84) bytes of data.
64 bytes from 192.168.0.200: icmp_seq=1 ttl=64 time=12.7 ms
64 bytes from 192.168.0.200: icmp_seq=2 ttl=64 time=0.749 ms
64 bytes from 192.168.0.200: icmp_seq=3 ttl=64 time=0.699 ms
64 bytes from 192.168.0.200: icmp_seq=4 ttl=64 time=0.342 ms
# stop haproxy and check whether the VIP fails over to another node; remember to restart haproxy and keepalived after the test
systemctl stop haproxy

7. Kubernetes Component Configuration

7.1 Create the required directories -- all nodes

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

7.2 Install the apiserver -- all master nodes

Create the kube-apiserver service. Note: for a non-HA cluster, replace 192.168.0.200 with master01's IP.

7.2.1 Configure kube-apiserver -- all master nodes

This document uses 10.96.0.0/12 as the Kubernetes Service CIDR; if you keep that CIDR, nothing needs to change. It must not overlap the host network or the Pod CIDR; adjust as required. The kube-apiserver.service file is identical on the three masters except for --advertise-address, which must be each node's own IP.

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
	--v=2 \
	--logtostderr=true \
	--allow-privileged=true \
	--bind-address=0.0.0.0 \
	--secure-port=6443 \
	--insecure-port=0 \
	--advertise-address=192.168.0.201 \
	--service-cluster-ip-range=10.96.0.0/12 \
	--service-node-port-range=30000-32767 \
	--etcd-servers=https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379 \
	--etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
	--etcd-certfile=/etc/etcd/ssl/etcd.pem \
	--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
	--client-ca-file=/etc/kubernetes/pki/ca.pem \
	--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \
	--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \
	--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \
	--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \
	--service-account-key-file=/etc/kubernetes/pki/sa.pub \
	--service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
	--service-account-issuer=https://kubernetes.default.svc.cluster.local \
	--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
	--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
	--authorization-mode=Node,RBAC \
	--enable-bootstrap-token-auth=true \
	--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \
	--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \
	--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \
	--requestheader-allowed-names=aggregator \
	--requestheader-group-headers=X-Remote-Group \
	--requestheader-extra-headers-prefix=X-Remote-Extra- \
	--requestheader-username-headers=X-Remote-User
# --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

7.2.2 Copy the unit file to the other nodes -- master01 node

MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do scp -rp /usr/lib/systemd/system/kube-apiserver.service $NODE:/usr/lib/systemd/system/kube-apiserver.service ;done
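
The copied unit still advertises master01's IP; a hedged follow-up (IPs as planned above) fixes --advertise-address on the other two masters:

ssh k8s-master02 "sed -i 's/advertise-address=192.168.0.201/advertise-address=192.168.0.202/' /usr/lib/systemd/system/kube-apiserver.service"
ssh k8s-master03 "sed -i 's/advertise-address=192.168.0.201/advertise-address=192.168.0.203/' /usr/lib/systemd/system/kube-apiserver.service"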

7.2.3 Start kube-apiserver -- all master nodes

systemctl start kube-apiserver
systemctl enable --now kube-apiserver

8. Configure the Controller Manager

This document uses 172.16.0.0/12 as the Pod CIDR; it must not overlap the host network or the Service CIDR. Adjust as needed. The kube-controller-manager.service file is the same on all three masters.

8.1 Configure the kube-controller-manager service -- all master nodes

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--v=2 \
--logtostderr=true \
--address=127.0.0.1 \
--root-ca-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/pki/sa.key \
--kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
--leader-elect=true \
--use-service-account-credentials=true \
--node-monitor-grace-period=40s \
--node-monitor-period=5s \
--pod-eviction-timeout=2m0s \
--controllers=*,bootstrapsigner,tokencleaner \
--allocate-node-cidrs=true \
--cluster-cidr=172.16.0.0/12 \
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

8.1.1 Copy the unit file to the other nodes -- master01 node

MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do scp -rp /usr/lib/systemd/system/kube-controller-manager.service $NODE:/usr/lib/systemd/system/kube-controller-manager.service;done

8.1.2 Start kube-controller-manager -- all master nodes

systemctl start  kube-controller-manager
systemctl enable --now kube-controller-manager

9. Configure the Scheduler

9.1 Configure the kube-scheduler.service file

vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

9.1.1 Copy the unit file to the other nodes -- master01 node

MasterNodes='k8s-master02 k8s-master03'
for NODE in $MasterNodes; do scp -rp /usr/lib/systemd/system/kube-scheduler.service $NODE:/usr/lib/systemd/system/kube-scheduler.service;done

9.1.2 Start kube-scheduler -- all master nodes

systemctl start kube-scheduler 
systemctl enable --now kube-scheduler

10. Configure TLS Bootstrapping

Once TLS authentication is enabled for the cluster, each node's kubelet must talk to the apiserver with a valid certificate signed by the apiserver's CA. Signing a certificate by hand for every node becomes very tedious as the node count grows. TLS bootstrapping lets the kubelet first connect to the apiserver as a predefined low-privilege user and then request its own certificate, which the apiserver signs dynamically.
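
Once kubelets start joining through this flow (section 11), the signing activity shows up as CertificateSigningRequest objects; a quick way to observe it from master01 (output below is illustrative):

kubectl get csr
# NAME        AGE   SIGNERNAME                                    REQUESTOR                 CONDITION
# csr-xxxxx   10s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:c8ad9c   Approved,Issued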

10.1 Create the bootstrap kubeconfig -- master01 node

For a non-HA cluster, change 192.168.0.200:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install/bootstrap
kubectl config set-cluster kubernetes  --certificate-authority=/etc/kubernetes/pki/ca.pem  --embed-certs=true --server=https://192.168.0.200:8443  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user  --token=c8ad9c.2e4d610cf3e7426e --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes     --cluster=kubernetes   --user=tls-bootstrap-token-user --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes     --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

If you change the token-id and token-secret in bootstrap.secret.yaml, the two strings must remain consistent everywhere they appear in the file and keep the same lengths, and the token passed to the command above (c8ad9c.2e4d610cf3e7426e) must match the values you set.
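
A small sketch to check the two stay in sync (assumes token-id and token-secret are plain stringData fields in bootstrap.secret.yaml, as in this branch of the repo):

TOKEN_ID=$(awk '/token-id:/ {print $2}' bootstrap.secret.yaml)
TOKEN_SECRET=$(awk '/token-secret:/ {print $2}' bootstrap.secret.yaml)
echo "${TOKEN_ID}.${TOKEN_SECRET}"     # must match the --token used above: c8ad9c.2e4d610cf3e7426e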

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install/bootstrap
[root@k8s-master01 bootstrap]# mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml

11. Node Configuration

11.1 Copy certificates to the nodes -- master01 node

[root@k8s-master01 bootstrap]# cd /etc/kubernetes/
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
     ssh $NODE mkdir -p /etc/kubernetes/pki /etc/etcd/ssl
     for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
       scp /etc/etcd/ssl/$FILE $NODE:/etc/etcd/ssl/
     done
     for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig; do
       scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
 done
 done

11.2 Install the kubelet

11.2.1 Create the required directories -- all nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/

11.2.2 Configure the kubelet drop-in file -- all nodes

vim /etc/systemd/system/kubelet.service.d/10-kubelet.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

11.2.3 Configure the kubelet service unit -- all nodes

vim  /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet

Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target

11.2.4 The kubelet configuration file -- all nodes

If you change the Kubernetes Service CIDR, update clusterDNS in kubelet-conf.yml to the tenth address of the Service CIDR, e.g. 10.96.0.10 (the Service CIDR was set to 10.96.0.0/12 earlier).
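
If in doubt about what the tenth address of a CIDR is, it can be computed with python3's standard ipaddress module, for example:

SVC_CIDR=10.96.0.0/12
python3 -c "import ipaddress,sys; print(ipaddress.ip_network(sys.argv[1])[10])" "$SVC_CIDR"
# -> 10.96.0.10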

vim /etc/kubernetes/kubelet-conf.yml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

11.2.5 Start the kubelet -- all nodes

systemctl daemon-reload
systemctl start kubelet
systemctl enable --now kubelet

12. Configure kube-proxy

For a non-HA cluster, change 192.168.0.200:8443 to master01's address, and 8443 to the apiserver port (6443 by default).

12.1 Configure kube-proxy -- master01 node

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install

kubectl -n kube-system create serviceaccount kube-proxy
kubectl create clusterrolebinding system:kube-proxy   --clusterrole system:node-proxier   --serviceaccount kube-system:kube-proxy

SECRET=$(kubectl -n kube-system get sa/kube-proxy --output=jsonpath='{.secrets[0].name}')
JWT_TOKEN=$(kubectl -n kube-system get secret/$SECRET --output=jsonpath='{.data.token}' | base64 -d)
PKI_DIR=/etc/kubernetes/pki
K8S_DIR=/etc/kubernetes

kubectl config set-cluster kubernetes     --certificate-authority=/etc/kubernetes/pki/ca.pem     --embed-certs=true    --server=https://192.168.0.200:8443     --kubeconfig=${K8S_DIR}/kube-proxy.kubeconfig

kubectl config set-credentials kubernetes     --token=${JWT_TOKEN}     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kubernetes     --cluster=kubernetes     --user=kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kubernetes     --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

12.2 Set the Pod CIDR in kube-proxy.conf -- master01 node

If you changed the cluster's Pod CIDR, update the clusterCIDR: 172.16.0.0/12 parameter in kube-proxy/kube-proxy.conf to your Pod CIDR.

[root@k8s-master01 ~]# vim /opt/kubernetes/k8s-ha-install/kube-proxy/kube-proxy.conf
clusterCIDR: 172.16.0.0/12

12.3 Distribute the configuration files -- master01 node

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install
for NODE in k8s-master01 k8s-master02 k8s-master03; do
     scp ${K8S_DIR}/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
     scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
     scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
 done

for NODE in k8s-node01 k8s-node02; do
     scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
     scp kube-proxy/kube-proxy.conf $NODE:/etc/kubernetes/kube-proxy.conf
     scp kube-proxy/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
 done

12.4 Start kube-proxy -- all nodes

systemctl daemon-reload 
systemctl enable --now kube-proxy

13. Install Calico

13.1 Edit the manifest -- master01 node

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install/calico/

# modify the following places in calico-etcd.yaml
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.0.201:2379,https://192.168.0.202:2379,https://192.168.0.203:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

# change this to your own Pod CIDR
POD_SUBNET="172.16.0.0/12"

sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

# apply the manifest
kubectl apply -f calico-etcd.yaml

13.2 Check pod status -- master01 node

If a pod is unhealthy, inspect it with kubectl describe or kubectl logs.

[root@k8s-master01 calico]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5f6d4b864b-wc28v   1/1     Running   0          7m22s
kube-system   calico-node-2nstt                          1/1     Running   0          7m16s
kube-system   calico-node-jn8k9                          1/1     Running   1          7m17s
kube-system   calico-node-qmpdx                          1/1     Running   0          7m20s
kube-system   calico-node-s8ldq                          1/1     Running   0          7m14s
kube-system   calico-node-wp9pp                          1/1     Running   0          7m22s

14. Install CoreDNS

If you changed the Kubernetes Service CIDR, change the CoreDNS Service IP to the tenth IP of the Service CIDR.

14.1 Configure the manifest -- master01 node

[root@k8s-master01 calico]# cd /opt/kubernetes/k8s-ha-install/
[root@k8s-master01 k8s-ha-install]# sed -i "s#10.96.0.10#10.96.0.10#g" CoreDNS/coredns.yaml    # put your own clusterDNS IP on the right-hand side if you changed the Service CIDR (a no-op with the defaults)

14.2 Create the resources -- master01 node

[root@k8s-master01 k8s-ha-install]# kubectl  create -f CoreDNS/coredns.yaml

14.3 Verify the deployment -- master01 node

[root@k8s-master01 k8s-ha-install]# kubectl get pod -n kube-system |grep coredns
coredns-867d46bfc6-wr64l                   1/1     Running   0          67s

14.4 Latest-version CoreDNS method (not recommended) -- master01 node

git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -

Check the status:
[root@k8s-master01 kubernetes]# kubectl get po -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS   AGE
coredns-746fcb4bc5-fwx4r   1/1     Running   0          27s

15. Install Metrics Server -- master01 node

In newer Kubernetes versions, system resource metrics are collected by metrics-server, which reports memory, disk, CPU, and network usage for nodes and Pods.

15.1 Create the resources -- master01 node

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install/metrics-server-0.4.x/
[root@k8s-master01 metrics-server-0.4.x]# kubectl apply -f comp.yaml

15.2 Check that Metrics Server started -- master01 node

[root@k8s-master01 metrics-server-0.4.x]# kubectl get po -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5f6d4b864b-wc28v   1/1     Running   0          4h3m
kube-system   calico-node-2nstt                          1/1     Running   0          4h2m
kube-system   calico-node-jn8k9                          1/1     Running   1          4h2m
kube-system   calico-node-qmpdx                          1/1     Running   0          4h3m
kube-system   calico-node-s8ldq                          1/1     Running   0          4h2m
kube-system   calico-node-wp9pp                          1/1     Running   0          4h3m
kube-system   metrics-server-595f65d8d5-p5jzr            1/1     Running   0          85s

15.3 Verify the metrics-server service -- master01 node

[root@k8s-master01 ~]# kubectl top nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   323m         16%    1158Mi          61%       
k8s-master02   286m         14%    1190Mi          63%       
k8s-master03   298m         14%    1207Mi          64%       
k8s-node01     135m         6%     690Mi           36%       
k8s-node02     137m         6%     733Mi           39% 

16. Install the Dashboard

The Dashboard displays the cluster's resources; it can also tail Pod logs and run commands inside containers in real time.

16.1 Install the pinned Dashboard version -- master01 node

[root@k8s-master01 ~]# cd /opt/kubernetes/k8s-ha-install/dashboard/
[root@k8s-master01 dashboard]# kubectl apply -f .

16.2 Install the latest Dashboard -- master01 node

# Official GitHub: https://github.com/kubernetes/dashboard
# the latest version is listed on the official dashboard page

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# create an admin user: vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding 
metadata: 
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
  
# create it
kubectl apply -f admin.yaml -n kube-system

16.3 Log in to the Dashboard

16.3.1 Change the Dashboard service to NodePort -- master01 node

kubectl get svc -n kubernetes-dashboard
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

type: ClusterIP    change to ---->     type: NodePort

Using your instance's NodePort, the Dashboard is reachable at IP:port via any host running kube-proxy, or via the VIP.
Browse to: https://192.168.0.200:32401/    # must be HTTPS; the port comes from your own service

16.3.2 Get the login token -- master01 node

[root@k8s-master01 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

17. Cluster Verification

17.1 Deploy busybox -- master01 node

cat<<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

17.2 Verification checklist -- master01 node

    1. Pods must be able to resolve Services.
    2. Pods must be able to resolve Services in other namespaces.
    3. Every node must be able to reach the kubernetes Service on port 443 and the kube-dns Service on port 53.
    4. Pod-to-Pod traffic must work:
       4.1 within the same namespace
       4.2 across namespaces
       4.3 across machines

17.2.1 Checks -- master01 node

# 1. check that the pod started
[root@k8s-master01 ~]# kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          27s

# 2. check that the svc is healthy
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d

# 3. check that the Pod can resolve a Service
[root@k8s-master01 CoreDNS]# kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

# 4. check that the Pod can resolve a Service in another namespace
[root@k8s-master01 CoreDNS]# kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

17.2.2 Verify with telnet -- all nodes

yum install -y telnet

# all machines: 10.96.0.1 443  (kubernetes svc)
# all machines: 10.96.0.10 53  (kube-dns svc)

[root@k8s-master01 CoreDNS]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

[root@k8s-master01 CoreDNS]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

17.2.3 Verify with curl -- all nodes

[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server

17.2.4 Verify from inside a container -- master01 node

[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-wc28v   1/1     Running   0          5h53m
calico-node-2nstt                          1/1     Running   0          5h52m
calico-node-jn8k9                          1/1     Running   1          5h53m
calico-node-qmpdx                          1/1     Running   0          5h53m
calico-node-s8ldq                          1/1     Running   0          5h52m
calico-node-wp9pp                          1/1     Running   0          5h53m
coredns-867d46bfc6-pl425                   1/1     Running   0          17m
metrics-server-595f65d8d5-p5jzr            1/1     Running   0          111m

[root@k8s-master01 ~]# kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-5f6d4b864b-wc28v   1/1     Running   0          5h53m   192.168.0.202    k8s-master02   <none>           <none>
calico-node-2nstt                          1/1     Running   0          5h53m   192.168.0.204    k8s-node01     <none>           <none>
calico-node-jn8k9                          1/1     Running   1          5h53m   192.168.0.202    k8s-master02   <none>           <none>
calico-node-qmpdx                          1/1     Running   0          5h53m   192.168.0.203    k8s-master03   <none>           <none>
calico-node-s8ldq                          1/1     Running   0          5h53m   192.168.0.201    k8s-master01   <none>           <none>
calico-node-wp9pp                          1/1     Running   0          5h53m   192.168.0.205    k8s-node02     <none>           <none>
coredns-867d46bfc6-pl425                   1/1     Running   0          17m     172.162.195.2    k8s-master03   <none>           <none>
metrics-server-595f65d8d5-p5jzr            1/1     Running   0          111m    172.171.14.194   k8s-node02     <none>           <none>

# exec into the calico-node pod on node01 and ping node02 across hosts
[root@k8s-master01 ~]# kubectl exec -it calico-node-2nstt -n  kube-system  -- sh
sh-4.4# ping 192.168.0.205
PING 192.168.0.205 (192.168.0.205) 56(84) bytes of data.
64 bytes from 192.168.0.205: icmp_seq=1 ttl=64 time=1.07 ms
64 bytes from 192.168.0.205: icmp_seq=2 ttl=64 time=0.268 ms
64 bytes from 192.168.0.205: icmp_seq=3 ttl=64 time=0.422 ms
64 bytes from 192.168.0.205: icmp_seq=4 ttl=64 time=0.512 ms
--- 192.168.0.205 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 48ms
rtt min/avg/max/mdev = 0.268/0.568/1.073/0.305 ms

18. Key Production Settings

18.1 Docker configuration

18.1.1 Configure daemon.json -- all nodes

vim /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 5,
  "log-opts": {
    "max-size": "300m",
    "max-file": "2"
  },
  "live-restore": true
}
max-concurrent-downloads # concurrent downloads when pulling images
max-concurrent-uploads   # concurrent uploads when pushing images
max-size                 # rotate the log file at this size (300m here)
max-file                 # number of log files to keep (2 here); adjust to your company's needs
live-restore             # with this on, restarting docker does not restart running containers

# restart docker on all nodes after the change
systemctl daemon-reload && systemctl restart docker
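
A quick check that the new settings took effect (both fields are exposed by docker info):

docker info --format '{{.CgroupDriver}} {{.LiveRestoreEnabled}}'    # expect: systemd true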

18.2 kube-controller-manager configuration -- all master nodes

vim /usr/lib/systemd/system/kube-controller-manager.service  # enable automatic kubelet certificate renewal
# add the following flag to ExecStart on all three master nodes
--experimental-cluster-signing-duration=876000h0m0s \

# restart after the change
systemctl daemon-reload && systemctl restart kube-controller-manager

18.3 kubelet configuration

18.3.1 Update the drop-in file -- all nodes

[root@k8s-node02 ~]# cat /etc/systemd/system/kubelet.service.d/10-kubelet.conf 

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_CONFIG_ARGS=--config=/etc/kubernetes/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384    --image-pull-progress-deadline=30m "
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS

Compared with the original file, this only adds the TLS cipher suites:
--tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

and the image pull timeout:
--image-pull-progress-deadline=30m

18.4 kubelet configuration file (kubelet-conf.yml)

On Linux, the sysctl interface lets administrators modify kernel parameters at runtime. The parameters live under the /proc/sys/ virtual filesystem and span many subsystems, for example:

  • kernel (common prefix kernel.)

  • networking (common prefix net.)

  • virtual memory (common prefix vm.)

  • MDADM (common prefix dev.)

Enabling unsafe sysctls

Sysctls are divided into safe and unsafe ones. Besides being properly namespaced, a safe sysctl must be isolated between pods on the same node, which means that setting a safe sysctl for one pod:

  • must not influence other pods on the same node

  • must not harm the node's health

  • must not let the pod gain CPU or memory beyond its resource limits

So far, most namespaced sysctls are not considered safe. The following are supported as safe by Kubernetes:

  • kernel.shm_rmid_forced

  • net.ipv4.ip_local_port_range

  • net.ipv4.tcp_syncookies

Modify with care!

# add the following on all nodes -- note: tune the values for your production environment
vim /etc/kubernetes/kubelet-conf.yml

rotateServerCertificates: true
allowedUnsafeSysctls:     # allow pods to modify these kernel parameters; optional, not required!
 - "net.core*"
 - "net.ipv4.*"
kubeReserved:     # resources reserved for k8s components (the main ones)
  cpu: "10m"
  memory: 10Mi
  ephemeral-storage: 10Mi
systemReserved:   # resources reserved for system services
  cpu: "1"
  memory: 20Mi
  ephemeral-storage: 1Gi
  

# restart after the change
systemctl daemon-reload && systemctl restart kubelet

# tail the log; no errors means it is healthy
[root@k8s-master01 ~]# tail -f /var/log/messages

18.5 Set node role labels -- master01 node

[root@k8s-master01 ~]# kubectl label node k8s-master01 node-role.kubernetes.io/master=''
node/k8s-master01 labeled
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   20h   v1.20.7
k8s-master02   Ready    <none>   20h   v1.20.7
k8s-master03   Ready    <none>   20h   v1.20.7
k8s-node01     Ready    <none>   20h   v1.20.7
k8s-node02     Ready    <none>   20h   v1.20.7

19. Installation Summary

1. kubeadm
2. Binary installation
3. Automated installation
   a) Ansible
      i.  Master installation does not need to be automated.
      ii. Adding Node nodes: a playbook.
4. Details that deserve attention
   a) The detailed settings above
   b) In production, etcd must be on a disk separate from the system disk, and it must be an SSD.
   c) Docker's data disk should also be separate from the system disk; use an SSD if possible.
