2024-12-12
k9s
k9s 介绍

什么是 k9s?k9s 是一个基于终端的 UI,用于与你的 Kubernetes 集群互动。这个项目的目的是让你更容易地导航、观察和管理部署在 Kubernetes 集群中的应用程序。k9s 持续观察 Kubernetes 的变化,并提供后续命令来与你观察到的资源进行互动。

安装 k9s(需要配合梯子拉取)
curl -sS https://webinstall.dev/k9s | bash
# 拉取完以后执行
source ~/.config/envman/PATH.env

命令
# 列出所有可用的 CLI 选项
k9s help
# 获取有关 k9s 运行时的信息(日志、配置等)
k9s info
# 在一个现有的 kubeconfig 上下文中启动 k9s
k9s --context <your_context>
# 在指定的命名空间中运行 k9s
k9s -n <your_namespace>
# 以只读模式启动 k9s,禁用所有集群修改命令
k9s --readonly

体验 k9s(基于一个 Kubernetes 集群,每个集群运行的应用都不一样,仅供参考)
# 启动 k9s
k9s
# 退出 k9s
q 或者 quit

输入 0,显示所有 namespace 的 pod。
选择某个 pod,输入 l,显示这个 pod 的日志;按 Esc 返回。
选择某个 pod,输入 d,describe 这个 pod;按 Esc 返回。
输入 :svc 或 :service,跳转到 service 视图。
输入 :deploy 或 :deployment,跳转到 deployment 视图。
输入 :rb,跳转到角色绑定视图,用于基于角色的访问控制(RBAC)管理。
输入 :ns 或 :namespace,跳转到命名空间视图。
输入 :cj 或 :cronjob,跳转到 cronjob 视图。
输入 pu 或 pulses,显示集群资源概览。
输入 :xray RESOURCE [NAMESPACE],显示集群资源关联关系。RESOURCE 可以是 po、svc、dp、rs、sts、ds 中的一个,NAMESPACE 是可选的,例如 :xray deploy oracle-project。

k9s 整合了 Hey,可以进行性能测试。Hey 是一个 CLI 工具,用于对 HTTP 端点进行基准测试,类似于 Apache Bench(ab)。这个初步的功能目前支持对端口转发和服务进行基准测试。最初,这些基准将以下列默认值运行:并发级别 1,请求数 200,HTTP 动词 GET,路径 /。
要设置一个端口转发,你需要导航到 Pod 视图,选择一个 pod 和一个暴露了特定端口的容器。使用 SHIFT-F 会出现一个对话框,让你指定一个本地端口进行转发。确认后,你可以输入 f 导航到 PortForward 视图,列出你的活动端口转发。选择一个端口转发并使用 CTRL-L,将在该 HTTP 端点上运行一个基准测试。要查看基准运行的结果,请进入 Benchmarks 视图(别名 be)。现在你应该能够选择一个基准,并按 <ENTER> 查看运行统计的细节。注意:端口转发只在 k9s 会话期间有效,退出时将被终止。
更多特性和功能请访问官网或 GitHub 获取:
https://k9scli.io/
https://github.com/derailed/k9s
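如果 webinstall.dev 访问不畅,也可以直接从 GitHub release 下载二进制离线安装。下面是一个最小示意,其中版本号和压缩包文件名是假设,请以 release 页面的实际内容为准:

```bash
# 假设:版本号与压缩包名以 GitHub release 页面实际内容为准,这里仅作示意
VERSION=v0.32.5
wget https://github.com/derailed/k9s/releases/download/${VERSION}/k9s_Linux_amd64.tar.gz
tar zxvf k9s_Linux_amd64.tar.gz                # 解压出 k9s 二进制
install -m 0755 k9s /usr/local/bin/k9s         # 安装到 PATH
k9s version                                    # 验证安装是否成功
```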
2024年12月12日
32 阅读
0 评论
0 点赞
2024-09-02
k8s 高可用部署+升级
一、准备操作 (1)修改所有主机名和解析hostnamectl set-hostname master01 hostnamectl set-hostname master02 hostnamectl set-hostname master03 hostnamectl set-hostname node01(2)所有主机添加解析cat >> /etc/hosts <<EOF 192.168.110.101 master01 192.168.110.102 master02 192.168.110.103 master03 192.168.110.104 node01 192.168.110.200 api-server EOF(3)关闭防火墙和selinux等sed -i 's#enforcing#disabled#g' /etc/selinux/config setenforce 0 systemctl disable --now firewalld NetworkManager postfix swapoff -a(4)sshd服务优化# 1、加速访问(所有节点上) sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config grep ^UseDNS /etc/ssh/sshd_config grep ^GSSAPIAuthentication /etc/ssh/sshd_config systemctl restart sshd # 2、密钥登录(主机点做) # 目的:为了让后续一些远程拷贝操作更方便 ssh-keygen -t rsa -b 4096 ssh-copy-id -i ~/.ssh/id_rsa.pub root@master02 ssh-copy-id -i ~/.ssh/id_rsa.pub root@master03 ssh-copy-id -i ~/.ssh/id_rsa.pub root@node01(5)增大文件打开数量(退出当前会话立即生效)cat > /etc/security/limits.d/k8s.conf <<'EOF' * soft nofile 1048576 * hard nofile 1048576 EOF ulimit -Sn ulimit -Hn(6)所有节点配置模块自动加载,此步骤不做的话(kubeadm init时会直接失败!)modprobe br_netfilter modprobe ip_conntrack cat >/etc/rc.sysinit<<EOF #!/bin/bash for file in /etc/sysconfig/modules/*.modules ; do [ -x $file ] && $file done EOF echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules chmod 755 /etc/sysconfig/modules/br_netfilter.modules chmod 755 /etc/sysconfig/modules/ip_conntrack.modules lsmod | grep br_netfilter(7)同步集群时间采取的是master01做内网集群的ntp服务端,它与公网ntp服务同步时间,其他节点都跟master01同步时间# =====================》chrony服务端:服务端我们可以自己搭建,也可以直接用公网上的时间服务 器,所以是否部署服务端看你自己 # 1、安装 yum -y install chrony # 2、修改配置文件 mv /etc/chrony.conf /etc/chrony.conf.bak cat > /etc/chrony.conf << EOF server ntp1.aliyun.com iburst minpoll 4 maxpoll 10 server ntp2.aliyun.com iburst minpoll 4 maxpoll 10 server ntp3.aliyun.com iburst minpoll 4 maxpoll 10 server ntp4.aliyun.com iburst minpoll 4 maxpoll 10 server ntp5.aliyun.com iburst minpoll 4 maxpoll 10 server ntp6.aliyun.com iburst minpoll 4 maxpoll 10 server ntp7.aliyun.com iburst minpoll 4 maxpoll 10 driftfile /var/lib/chrony/drift makestep 10 3 rtcsync allow 0.0.0.0/0 local stratum 10 keyfile /etc/chrony.keys logdir /var/log/chrony stratumweight 0.05 noclientlog logchange 0.5 EOF # 4、启动chronyd服务 systemctl restart chronyd.service # 最好重启,这样无论原来是否启动都可以重新加载配置 systemctl enable chronyd.service systemctl status chronyd.service # =====================》chrony客户端:在需要与外部同步时间的机器上安装,启动后会自动与你指 定的服务端同步时间 # 下述步骤一次性粘贴到每个客户端执行即可 # 1、安装chrony yum -y install chrony # 2、需改客户端配置文件 /usr/bin/mv /etc/chrony.conf /etc/chrony.conf.bak cat > /etc/chrony.conf << EOF # server master01 iburst server master01 iburst driftfile /var/lib/chrony/drift makestep 10 3 rtcsync local stratum 10 keyfile /etc/chrony.key logdir /var/log/chrony stratumweight 0.05 noclientlog logchange 0.5 EOF # 3、启动chronyd systemctl restart chronyd.service systemctl enable chronyd.service systemctl status chronyd.service # 4、验证 chronyc sources -v(8)更新基础yum源(所有机器)# 1、清理 rm -rf /etc/yum.repos.d/* yum remove epel-release -y rm -rf /var/cache/yum/x86_64/6/epel/ # 2、安装阿里的base与epel源 curl -s -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo curl -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo yum clean all yum makecache # 或者用华为的也行 # curl -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo # yum install -y 
https://repo.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm(9)安装系统软件(排除内核)yum update -y --exclud=kernel*(10)安装基础常用软件yum -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync unzip git(11)更新内核(docker 对系统内核要求比较高,最好使用4.4+)主节点操作wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.274-1.el7.elrepo.x86_64.rpm wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.274-1.el7.elrepo.x86_64.rpm for i in n1 n2 m1 ; do scp kernel-lt-* $i:/opt; done 补充:如果下载的慢就从网盘里拿吧 链接:https://pan.baidu.com/s/1gVyeBQsJPZjc336E8zGjyQ 提取码:Egon 三个节点操作 #安装 yum localinstall -y /root/kernel-lt* yum localinstall -y /opt/kernel-lt* #调到默认启动 grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg #查看当前默认启动的内核 grubby --default-kernel #重启系统 reboot(12)所有节点安装IPVS# 1、安装ipvsadm等相关工具 yum -y install ipvsadm ipset sysstat conntrack libseccomp # 2、配置加载 cat > /etc/sysconfig/modules/ipvs.modules << "EOF" #!/bin/bash ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack" for kernel_module in ${ipvs_modules}; do /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1 if [ $? -eq 0 ]; then /sbin/modprobe ${kernel_module} fi done EOF chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs(13)所有机器修改内核参数cat > /etc/sysctl.d/k8s.conf << EOF net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 fs.may_detach_mounts = 1 vm.overcommit_memory=1 vm.panic_on_oom=0 fs.inotify.max_user_watches=89100 fs.file-max=52706963 fs.nr_open=52706963 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp.keepaliv.probes = 3 net.ipv4.tcp_keepalive_intvl = 15 net.ipv4.tcp.max_tw_buckets = 36000 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp.max_orphans = 327680 net.ipv4.tcp_orphan_retries = 3 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.ip_conntrack_max = 65536 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.top_timestamps = 0 net.core.somaxconn = 16384 EOF # 立即生效 sysctl --system(14)安装containerd(所有节点都做)自Kubernetes1.24以后,K8S就不再原生支持docker了 我们都知道containerd来自于docker,后被docker捐献给了云原生计算基金会(我们安装docker会一并安装上containerd) 1、centos7默认的libseccomp的版本为2.3.1,不满足containerd的需求,需要下载2.4以上的版本即可,我这里部署2.5.1版本。 # 1、如果你不升级libseccomp的话,启动容器会报错 **Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ed17cbdc31099314dc8fd609d52 b0dfbd6fdf772b78aa26fbc9149ab089c6807/log.json: no such file or directory): runc did not terminate successfully: exit status 127: unknown** # 2、升级 rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps # wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm wget https://mirrors.aliyun.com/centos/8/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm # 官网已经gg了,不更新了,请用阿里云 rpm -qa | grep libseccomp 安装方式一:( 基于阿里云的源)推荐用这种方式,安装的是 # 1、卸载之前的 yum remove docker docker-ce containerd docker-common docker-selinux docker-engine-y # 2、准备repo cd /etc/yum.repos.d/ wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo # 3、安装 yum install containerd* -y配置# 1、配置 mkdir -pv /etc/containerd containerd config default > /etc/containerd/config.toml # 为containerd生成配置文件 # 
2、替换默认pause镜像地址: 这一步非常非常非常非常重要 # 这一步非常非常非常非常重要,国内的镜像地址可能导致下载失败,最终kubeadm安装失败!!!!!!!!!!!!!! grep sandbox_image /etc/containerd/config.toml sed -i 's/registry.k8s.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/' /etc/containerd/config.toml grep sandbox_image /etc/containerd/config.toml # 请务必确认新地址是可用的:sandbox_image = "registry.cnhangzhou.aliyuncs.com/google_containers/pause:3.6" # 3、配置systemd作为容器的cgroup driver grep SystemdCgroup /etc/containerd/config.toml sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml grep SystemdCgroup /etc/containerd/config.toml # 4、配置加速器(必须配置,否则后续安装cni网络插件时无法从docker.io里下载镜像) #参考: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registryconfiguration #添加 config_path = "/etc/containerd/certs.d" sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml mkdir /etc/containerd/certs.d/docker.io -p # docker hub镜像加速 mkdir -p /etc/containerd/certs.d/docker.io cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF server = "https://docker.io" [host."https://dockerproxy.com"] capabilities = ["pull", "resolve"] [host."https://docker.m.daocloud.io"] capabilities = ["pull", "resolve"] [host."https://registry.docker-cn.com"] capabilities = ["pull", "resolve"] [host."http://hub-mirror.c.163.com"] capabilities = ["pull", "resolve"] EOF # 5、配置containerd开机自启动 # 5.1 启动containerd服务并配置开机自启动 systemctl daemon-reload && systemctl restart containerd systemctl enable --now containerd # 5.2 查看containerd状态 systemctl status containerd # 5.3 查看containerd的版本 ctr version二、部署负载均衡+keepalived 部署负载均衡+keepalived对外提供vip:192.168.110.200,三台master上部署配置nginx# 1、添加repo源 cat > /etc/yum.repos.d/nginx.repo << "EOF" [nginx-stable] name=nginx stable repo baseurl=http://nginx.org/packages/centos/$releasever/$basearch/ gpgcheck=1 enabled=1 gpgkey=https://nginx.org/keys/nginx_signing.key module_hotfixes=true [nginx-mainline] name=nginx mainline repo baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/ gpgcheck=1 enabled=0 gpgkey=https://nginx.org/keys/nginx_signing.key module_hotfixes=true EOF # 2、安装 yum install nginx -y # 3、配置 cat > /etc/nginx/nginx.conf <<'EOF' user nginx nginx; worker_processes auto; events { worker_connections 20240; use epoll; } error_log /var/log/nginx_error.log info; stream { upstream kube-servers { hash $remote_addr consistent; server master01:6443 weight=5 max_fails=1 fail_timeout=3s; server master02:6443 weight=5 max_fails=1 fail_timeout=3s; server master03:6443 weight=5 max_fails=1 fail_timeout=3s; } server { listen 8443 reuseport; # 监听8443端口 proxy_connect_timeout 3s; proxy_timeout 3000s; proxy_pass kube-servers; } } EOF # 4、启动 systemctl restart nginx systemctl enable nginx systemctl status nginx三台master部署keepalived1、安装 yum -y install keepalived 2、修改keepalive的配置文件(根据实际环境,interface eth0可能需要修改为interface ens33) # 编写配置文件,各个master节点需要修改router_id和mcast_src_ip的值即可。 # ==================================> master01 cat > /etc/keepalived/keepalived.conf <<EOF ! 
Configuration File for keepalived global_defs { router_id 192.168.110.101 } vrrp_script chk_nginx { script "/etc/keepalived/check_port.sh 8443" interval 2 weight -20 } vrrp_instance VI_1 { state BACKUP interface ens33 virtual_router_id 100 priority 100 advert_int 1 mcast_src_ip 192.168.110.101 # nopreempt # 这行注释掉,否则即使一个具有更高优先级的备份节点出现,当前的 MASTER 也不会 # 被抢占,直至 MASTER 失效。 authentication { auth_type PASS auth_pass 11111111 } track_script { chk_nginx } virtual_ipaddress { 192.168.110.200 } } EOF # ==================================> master02 cat > /etc/keepalived/keepalived.conf <<EOF ! Configuration File for keepalived global_defs { router_id 192.168.110.102 } vrrp_script chk_nginx { script "/etc/keepalived/check_port.sh 8443" interval 2 weight -20 } vrrp_instance VI_1 { state BACKUP interface ens33 virtual_router_id 100 priority 100 advert_int 1 mcast_src_ip 192.168.110.102 # nopreempt # 这行注释掉,否则即使一个具有更高优先级的备份节点出现,当前的 MASTER 也不会 # 被抢占,直至 MASTER 失效。 authentication { auth_type PASS auth_pass 11111111 } track_script { chk_nginx } virtual_ipaddress { 192.168.110.200 } } EOF # ==================================> master03 cat > /etc/keepalived/keepalived.conf <<EOF ! Configuration File for keepalived global_defs { router_id 192.168.110.103 } vrrp_script chk_nginx { script "/etc/keepalived/check_port.sh 8443" interval 2 weight -20 } vrrp_instance VI_1 { state BACKUP interface ens33 virtual_router_id 100 priority 100 advert_int 1 mcast_src_ip 192.168.110.103 # nopreempt # 这行注释掉,否则即使一个具有更高优先级的备份节点出现,当前的 MASTER 也不会 被抢占,直至 MASTER 失效。 authentication { auth_type PASS auth_pass 11111111 } track_script { chk_nginx } virtual_ipaddress { 192.168.110.200 } } EOF # ==================================> master03 cat > /etc/keepalived/keepalived.conf <<EOF ! Configuration File for keepalived global_defs { router_id 192.168.71.103 } vrrp_script chk_nginx { script "/etc/keepalived/check_port.sh 8443" interval 2 weight -20 } vrrp_instance VI_1 { state BACKUP interface ens36 virtual_router_id 100 priority 100 advert_int 1 mcast_src_ip 192.168.71.103 # nopreempt # 这行注释掉,否则即使一个具有更高优先级的备份节点出现,当前的 MASTER 也不会 # 被抢占,直至 MASTER 失效。 authentication { auth_type PASS auth_pass 11111111 } track_script { chk_nginx } virtual_ipaddress { 192.168.71.200 } } EOF cat > /etc/keepalived/check_port.sh << 'EOF' # 设置环境变量,确保所有必要的命令路径正确 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin CHK_PORT=$1 if [ -n "$CHK_PORT" ];then PORT_PROCESS=$(/usr/sbin/ss -lt | grep ":$CHK_PORT" | wc -l) if [ $PORT_PROCESS -eq 0 ];then echo "Port $CHK_PORT Is Not Used,End." exit 1 fi else echo "Check Port Cant Be Empty!" 
fi EOF chmod +x /etc/keepalived/check_port.sh 启动 systemctl restart keepalived systemctl enable keepalived systemctl status keepalived 6、去到master01上停掉nginx,8443端口就没了 [root@master01 ~]# systemctl stop nginx 会发现vip漂移走了,注意因为你的检测脚本/etc/keepalived/check_port.sh检测端口失效exit非0后,当前master的权重会-20,此时想其他节点能够抢走vip,你必须注释掉 nopreempt 7、动态查看keepalived日志 journalctl -u keepalived -f三、安装k8s# 1、所有机器准备k8s源 cat > /etc/yum.repos.d/kubernetes.repo <<"EOF" [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/ enabled=1 gpgcheck=1 gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key EOF #参考:https://developer.aliyun.com/mirror/kubernetes/setenforce yum install -y kubelet-1.30* kubeadm-1.30* kubectl-1.30* systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet # 2、master01上操作 初始化master节点(仅在master01节点上执行): # 可以kubeadm config images list查看 [root@maste01 ~]# kubeadm config images list registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 先生成配置文件,编辑修改后,再部署(推荐此方式,因为高级配置只能通过配置文件指定,例如配置使用ipvs模式直接用kubeadm init则无法指定) kubeadm config print init-defaults > kubeadm.yaml # 先生成配置文件,内容及修改如下 apiVersion: kubeadm.k8s.io/v1beta3 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: abcdef.0123456789abcdef ttl: 24h0m0s usages: - signing - authentication kind: InitConfiguration localAPIEndpoint: advertiseAddress: 0.0.0.0 # 统一监听在0.0.0.0即可 bindPort: 6443 nodeRegistration: criSocket: unix:///var/run/containerd/containerd.sock #指定containerd容器运行时 imagePullPolicy: IfNotPresent name: master01 # 你当前的主机名 taints: null --- apiServer: timeoutForControlPlane: 4m0s apiVersion: kubeadm.k8s.io/v1beta3 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controllerManager: {} dns: {} etcd: local: dataDir: /var/lib/etcd # 内部etcd服务就直接指定本地文件夹就行 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # 换成阿里云镜 像仓库地址 kind: ClusterConfiguration kubernetesVersion: 1.30.0 # 指定k8s版本 controlPlaneEndpoint: "api-server:8443" # 指定你的vip地址192.168.71.200与负载均可暴 漏的端口,建议用主机名 networking: dnsDomain: cluster.local serviceSubnet: 10.96.0.0/12 # 指定Service网段 podSubnet: 10.244.0.0/16 # 增加一行,指定pod网段 scheduler: {} #在文件最后,插入以下内容,(复制时,要带着---): --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: ipvs # 表示kube-proxy代理模式是ipvs,如果不指定ipvs,会默认使用iptables,但是 iptables效率低,所以我们生产环境建议开启ipvs,阿里云和华为云托管的K8s,也提供ipvs模式 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration cgroupDriver: systemd 执行成功显示如下结果: [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config Alternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.conf You should now deploy a pod network to the cluster. 
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/ You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root: kubeadm join 192.168.110.200:8443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:3ee0ed5be62b44ac86b9413494371508e190af319fcd782b75b5f998e05a5024 \ --control-plane Then you can join any number of worker nodes by running the following on each as root: kubeadm join 192.168.110.200:8443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:3ee0ed5be62b44ac86b9413494371508e190af319fcd782b75b5f998e05a5024 [root@master01 certs.d]# kubeadm init phase upload-certs --upload-certs I0902 19:21:21.049884 24825 version.go:256] remote version is much newer: v1.31.0; falling back to: stable-1.30 [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [upload-certs] Using certificate key: 3fd8d7a91a1d6172ca81e17194d340dca19c64fb245c95f62e3c0175eb6a7452 加上证书给其他两个master节点输入上 kubeadm join 192.168.110.200:8443 --token abcdef.0123456789abcdef \ --discovery-token-ca-cert-hash sha256:3ee0ed5be62b44ac86b9413494371508e190af319fcd782b75b5f998e05a5024 \ --control-plane \ --certificate-key 3fd8d7a91a1d6172ca81e17194d340dca19c64fb245c95f62e3c0175eb6a7452 成功后按照提示执行 mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config四、k8s版本升级 如果需要使用 1.28 及以上版本,请使用 新版配置方法 进行配置。新版下载地址:https://mirrors.aliyun.com/kubernetes-new/yum源配置如下[root@k8s-master-01 /etc/yum.repos.d]# cat kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/ enabled=1 gpgcheck=1 gpgkey=https://mirrors.aliyun.com/kubernetesnew/core/stable/v1.30/rpm/repodata/repomd.xml.key下面是老版yum源配置,最多支持到k8s1.28.2-0版本[root@k8s-master1 yum.repos.d]# pwd /etc/yum.repos.d [root@k8s-master1 yum.repos.d]# cat kubernetes.repo [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 enabled=1 gpgcheck=0 repo_gpgcheck=0 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg配置后更新yum源,执行命令yum clean all yum makecache [root@master01 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes 已加载插件:fastestmirror Loading mirror speeds from cached hostfile * base: mirrors.aliyun.com * extras: mirrors.aliyun.com * updates: mirrors.aliyun.com 已安装的软件包 kubeadm.x86_64 1.30.4-150500.1.1 @kubernetes 可安装的软件包 kubeadm.x86_64 1.31.0-150500.1.1 具体操作过程如下: # 1.标记节点不可调度 [root@master01 ~]# kubectl cordon master03 node/master01 cordoned [root@master01 ~]# [root@master01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION master01 Ready,SchedulingDisabled control-plane 57m v1.30.4 master02 Ready control-plane 43m v1.30.4 master03 Ready control-plane 43m v1.30.4 node01 Ready <none> 41m v1.30.4 # 2.驱逐pod [root@master01 ~]# kubectl drain master03 --delete-local-data --ignore-daemonsets --force Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data. 
node/master01 already cordoned Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-j9dt7, kube-system/kube-proxy-mmnk5 evicting pod kube-system/coredns-7c445c467-rzdzc evicting pod default/nginx-788f75444d-6lr87 evicting pod default/nginx-788f75444d-gvrr9 evicting pod default/nginx-788f75444d-rkh68 evicting pod default/nginx-788f75444d-txqvj evicting pod kube-system/coredns-7c445c467-4w2bc pod/nginx-788f75444d-txqvj evicted pod/nginx-788f75444d-6lr87 evicted pod/nginx-788f75444d-rkh68 evicted pod/nginx-788f75444d-gvrr9 evicted pod/coredns-7c445c467-4w2bc evicted pod/coredns-7c445c467-rzdzc evicted node/master01 drained yum install -y kubeadm-1.31.0-150500.1.1 --disableexcludes=kubernetes 升级 1 软件包 总下载量:11 M Downloading packages: Delta RPMs disabled because /usr/bin/applydeltarpm not installed. kubeadm-1.31.0-150500.1.1.x86_64.rpm | 11 MB 00:00:40 Running transaction check Running transaction test Transaction test succeeded Running transaction 正在更新 : kubeadm-1.31.0-150500.1.1.x86_64 1/2 清理 : kubeadm-1.30.4-150500.1.1.x86_64 2/2 验证中 : kubeadm-1.31.0-150500.1.1.x86_64 1/2 验证中 : kubeadm-1.30.4-150500.1.1.x86_64 2/2 更新完毕: kubeadm.x86_64 0:1.31.0-150500.1.1 完毕! Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply': COMPONENT NODE CURRENT TARGET kubelet master01 v1.30.4 v1.31.0 kubelet master02 v1.30.4 v1.31.0 kubelet master03 v1.30.4 v1.31.0 kubelet node01 v1.30.4 v1.31.0 Upgrade to the latest stable version: COMPONENT NODE CURRENT TARGET kube-apiserver master01 v1.30.0 v1.31.0 kube-apiserver master02 v1.30.0 v1.31.0 kube-apiserver master03 v1.30.0 v1.31.0 kube-controller-manager master01 v1.30.0 v1.31.0 kube-controller-manager master02 v1.30.0 v1.31.0 kube-controller-manager master03 v1.30.0 v1.31.0 kube-scheduler master01 v1.30.0 v1.31.0 kube-scheduler master02 v1.30.0 v1.31.0 kube-scheduler master03 v1.30.0 v1.31.0 kube-proxy 1.30.0 v1.31.0 CoreDNS v1.11.1 v1.11.1 etcd master01 3.5.12-0 3.5.15-0 etcd master02 3.5.12-0 3.5.15-0 etcd master03 3.5.12-0 3.5.15-0 You can now apply the upgrade by executing the following command: kubeadm upgrade apply v1.31.0 _____________________________________________________________________ The table below shows the current state of component configs as understood by this version of kubeadm. Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually upgrade to is denoted in the "PREFERRED VERSION" column. API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED kubeproxy.config.k8s.io v1alpha1 v1alpha1 no kubelet.config.k8s.io v1beta1 v1beta1 no _____________________________________________________________________ kubeadm upgrade apply v1.31.0 yum install -y kubelet-1.31.0 --disableexcludes=kubernetes yum install -y kubeadm-1.31.0 --disableexcludes=kubernetes [root@master01 ~]# systemctl daemon-reload && systemctl restart kubelet [root@master01 ~]# kubectl get node NAME STATUS ROLES AGE VERSION master01 Ready,SchedulingDisabled control-plane 84m v1.31.0 master02 Ready control-plane 70m v1.30.4 master03 Ready control-plane 70m v1.30.4 node01 Ready <none> 68m v1.30.4 #恢复节点 kubectl uncordon k8s-master03 升级node节点 前提:升级节点的软件,或者是硬件例如增大内存,都会涉及到的节点的暂时不可用,这与停机维护是一个问题,我们需要考虑的核心问题是节点不可用过程中,如何确保节点上的pod服务不中断!!! 
升级或更新 node 节点步骤:(其他 node 节点如果要升级,则采用一样的步骤)
1、先隔离 Node 节点的业务流量
2、cordon 禁止新 pod 调度到当前 node
kubectl cordon node01
3、对关键服务创建 PDB 保护策略,确保下一步排空时,关键服务的 pod 至少有 1 个副本可用(在当前节点以外的节点上有分布)
4、drain 排空 pod
kubectl drain node01 --delete-local-data --ignore-daemonsets --force
5、升级当前 node 上的软件
5.1 在所有的 node 节点上执行如下命令,升级 kubeadm
[root@k8s-node2 ~]# yum install -y kubeadm-1.31.0-150500.1.1 --disableexcludes=kubernetes
5.2 升级 kubelet 的配置,在所有 node 节点上执行
yum install -y kubelet-1.31.0 --disableexcludes=kubernetes
yum install -y kubeadm-1.31.0 --disableexcludes=kubernetes
5.3 升级 kubelet 和 kubectl,然后重启 kubelet
yum install -y kubelet-1.31.0 kubectl-1.31.0 --disableexcludes=kubernetes
在 node 节点上执行 kubectl 如果报如下错误,是因为该节点上没有 kubeconfig,把 master 上的 /etc/kubernetes/admin.conf 拷贝为 ~/.kube/config 即可:
[root@node01 yum.repos.d]# kubectl get pods -n kube-system
E0902 21:15:29.561945 60008 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
E0902 21:15:29.562332 60008 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
cat ~/.kube/config
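把上面 node 节点的升级步骤串成一个完整流程,大致如下(仅作示意,节点名与目标版本按实际环境替换;新版 kubectl 中 --delete-local-data 已由 --delete-emptydir-data 取代):

```bash
# 以单个工作节点 node01 升级到 1.31.0 为例的流程示意
kubectl cordon node01
kubectl drain node01 --ignore-daemonsets --delete-emptydir-data --force

# 在 node01 上执行:升级 kubeadm/kubelet/kubectl,并让 kubelet 配置随版本更新
yum install -y kubeadm-1.31.0 kubelet-1.31.0 kubectl-1.31.0 --disableexcludes=kubernetes
kubeadm upgrade node
systemctl daemon-reload && systemctl restart kubelet

# 回到任一 master 上恢复调度并确认版本
kubectl uncordon node01
kubectl get node node01
```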
2024年09月02日
44 阅读
0 评论
0 点赞
2024-08-07
Ubuntu安装 kubeadm 部署k8s 1.30
一、准备工作0、ubuntu 添加root用户sudo passwd root su - root # 输入你刚刚设置的密码即可,退出,下次就可以用root登录 #关闭防火墙 systemctl status ufw.service systemctl stop ufw.service #ssh禁用了root连接可以开启 设置vi /etc/ssh/sshd_config配置开启 PermitRootLogin yes 重启服务 systemctl restart sshd1、打开Netplan配置文件sudo nano /etc/netplan/00-installer-config.yaml # 根据实际文件名修改2、修改配置文件2.1动态IPnetwork: ethernets: ens33: # 网卡名(用 `ip a` 查看) dhcp4: true version: 22.2静态IPnetwork: ethernets: ens33: dhcp4: no addresses: [192.168.1.100/24] # IP/子网掩码 gateway4: 192.168.1.1 # 网关 nameservers: addresses: [8.8.8.8, 1.1.1.1] # DNS服务器 version: 23、应用配置sudo netplan apply4、SSH远程登录#修改/etc/ssh/sshd_config PermitRootLogin yessudo systemctl restart sshd三台主机ubuntu20.04.4使用阿里云的apt源先备份一份 sudo cp /etc/apt/sources.list /etc/apt/sources.list.bakvi /etc/apt/sources.list deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse机器配置#修改主机名 sudo hostnamectl set-hostname 主机名 #刷新主机名无需重启 sudo hostname -F /etc/hostnamecat >> /etc/hosts << "EOF" 192.168.110.88 k8s-master-01 m1 192.168.110.70 k8s-node-01 n1 192.168.110.176 k8s-node-02 n2 EOF 集群通信ssh-keygen ssh-copy-id m1 ssh-copy-id n1 ssh-copy-id n2关闭系统的交换分区swap集群内主机都需要执行sed -ri 's/^([^#].*swap.*)$/#\1/' /etc/fstab && grep swap /etc/fstab && swapoff -a && free -h同步时间主节点做sudo apt install chrony -y mv /etc/chrony/conf.d /etc/chrony/conf.d.bak cat > /etc/chrony/conf.d << EOF server ntp1.aliyun.com iburst minpoll 4 maxpoll 10 server ntp2.aliyun.com iburst minpoll 4 maxpoll 10 server ntp3.aliyun.com iburst minpoll 4 maxpoll 10 server ntp4.aliyun.com iburst minpoll 4 maxpoll 10 server ntp5.aliyun.com iburst minpoll 4 maxpoll 10 server ntp6.aliyun.com iburst minpoll 4 maxpoll 10 server ntp7.aliyun.com iburst minpoll 4 maxpoll 10 driftfile /var/lib/chrony/drift makestep 10 3 rtcsync allow 0.0.0.0/0 local stratum 10 keyfile /etc/chrony.keys logdir /var/log/chrony stratumweight 0.05 noclientlog logchange 0.5 EOF systemctl restart chronyd.service # 最好重启,这样无论原来是否启动都可以重新加载配置 systemctl enable chronyd.service systemctl status chronyd.service从节点做sudo apt install chrony -y mv /etc/chrony/conf.d /etc/chrony/conf.d.bak cat > /etc/chrony/conf.d << EOF server 192.168.110.88 iburst driftfile /var/lib/chrony/drift makestep 10 3 rtcsync local stratum 10 keyfile /etc/chrony.key logdir /var/log/chrony stratumweight 0.05 noclientlog logchange 0.5 EOF设置内核参数集群内主机都需要执行cat > /etc/sysctl.d/k8s.conf << EOF net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 fs.may_detach_mounts = 1 vm.overcommit_memory=1 vm.panic_on_oom=0 fs.inotify.max_user_watches=89100 fs.file-max=52706963 fs.nr_open=52706963 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_probes = 3 net.ipv4.tcp_keepalive_intvl = 15 net.ipv4.tcp_max_tw_buckets = 36000 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_max_orphans = 327680 net.ipv4.tcp_orphan_retries = 3 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 16384 
net.ipv4.ip_conntrack_max = 65536 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.tcp_timestamps = 0 net.core.somaxconn = 16384 EOF # 立即生效 sysctl --system# 1. 加载必要的内核模块 sudo modprobe br_netfilter # 2. 确保模块开机自动加载 echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf # 3. 配置网络参数 cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 net.ipv4.ip_forward = 1 EOF # 4. 应用配置 sudo sysctl --system # 5. 验证配置 ls /proc/sys/net/bridge/ # 应该显示 bridge-nf-call-iptables cat /proc/sys/net/bridge/bridge-nf-call-iptables # 应该输出 1安装常用工具sudo apt update sudo apt install -y expect wget jq psmisc vim net-tools telnet lvm2 git ntpdate chrony bind9-utils rsync unzip git安装ipvsadm安装ipvsadmsudo apt install -y ipvsadm ipset sysstat conntrack #libseccomp 是预装好的 dpkg -l | grep libseccomp在 Ubuntu 22.04.4 中,/etc/sysconfig/modules/ 目录通常不是默认存在的,因为 Ubuntu 使用的是 systemd 作为初始化系统,而不是传统的 SysVinit 或者其他初始化系统。因此,Ubuntu 不使用 /etc/sysconfig/modules/ 来管理模块加载。如果你想确保 IPVS 模块在系统启动时自动加载,你可以按照以下步骤操作:创建一个 /etc/modules-load.d/ipvs.conf 文件: 在这个文件中,你可以列出所有需要在启动时加载的模块。这样做可以确保在启动时自动加载这些模块。echo "ip_vs" > /etc/modules-load.d/ipvs.conf echo "ip_vs_lc" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_wlc" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_rr" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_wrr" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_lblc" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_lblcr" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_dh" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_sh" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_fo" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_nq" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_sed" >> /etc/modules-load.d/ipvs.conf echo "ip_vs_ftp" >> /etc/modules-load.d/ipvs.conf echo "nf_conntrack" >> /etc/modules-load.d/ipvs.conf加载模块: 你可以使用 modprobe 命令来手动加载这些模块,或者让系统在下次重启时自动加载。sudo modprobe ip_vs sudo modprobe ip_vs_lc sudo modprobe ip_vs_wlc sudo modprobe ip_vs_rr sudo modprobe ip_vs_wrr sudo modprobe ip_vs_lblc sudo modprobe ip_vs_lblcr sudo modprobe ip_vs_dh sudo modprobe ip_vs_sh sudo modprobe ip_vs_fo sudo modprobe ip_vs_nq sudo modprobe ip_vs_sed sudo modprobe ip_vs_ftp sudo modprobe nf_conntrack验证模块是否加载: 你可以使用 lsmod 命令来验证这些模块是否已经被成功加载。lsmod | grep ip_vs二、安装containerd(三台节点都要做)#只要超过2.4就不用再安装了 root@k8s-master-01:/etc/modules-load.d# dpkg -l | grep libseccomp ii libseccomp2:amd64 2.5.3-2ubuntu2 amd64 high level interface to Linux seccomp filter开始安装apt install containerd* -y containerd --version #查看版本配置mkdir -pv /etc/containerd containerd config default > /etc/containerd/config.toml #为containerd生成配置文件 vi /etc/containerd/config.toml 把下面改为自己构建的仓库 sandbox_image = sandbox_image = "registry.cn-guangzhou.aliyuncs.com/xingcangku/eeeee:3.8"#配置systemd作为容器的cgroup driver grep SystemdCgroup /etc/containerd/config.toml sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml grep SystemdCgroup /etc/containerd/config.toml 配置加速器(必须配置,否则后续安装cni网络插件时无法从docker.io里下载镜像) #参考:https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration #添加 config_path="/etc/containerd/certs.d" sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.tomlmkdir -p /etc/containerd/certs.d/docker.io cat>/etc/containerd/certs.d/docker.io/hosts.toml << EOF server ="https://docker.io" [host."https ://dockerproxy.com"] capabilities = ["pull","resolve"] [host."https://docker.m.daocloud.io"] capabilities = ["pull","resolve"] [host."https://docker.chenby.cn"] capabilities = 
["pull","resolve"] [host."https://registry.docker-cn.com"] capabilities = ["pull","resolve" ] [host."http://hub-mirror.c.163.com"] capabilities = ["pull","resolve" ] EOF#配置containerd开机自启动 #启动containerd服务并配置开机自启动 systemctl daemon-reload && systemctl restart containerd systemctl enable --now containerd #查看containerd状态 systemctl status containerd #查看containerd的版本 ctr version三、安装最新版本的kubeadm、kubelet 和 kubectl1、三台机器准备k8s配置安装源apt-get update && apt-get install -y apt-transport-https sudo mkdir -p /etc/apt/keyrings curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo tee /etc/apt/keyrings/kubernetes-apt-keyring.asc > /dev/null echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list # 2. 添加阿里云镜像源(无需密钥验证) sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF deb [trusted=yes] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main EOF # 3. 更新并安装 sudo apt-get update sudo apt-get install -y --allow-unauthenticated kubelet=1.27* kubeadm=1.27* kubectl=1.27* sudo apt-mark hold kubelet kubeadm kubectlapt-get update apt-get install -y kubelet kubeadm kubectl2、主节点操作(node节点不执行)初始化master节点(仅在master节点上执行) #可以kubeadm config images list查看 [root@k8s-master-01 ~]# kubeadm config images list registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0kubeadm config print init-defaults > kubeadm.yamlroot@k8s-master-01:~# cat kubeadm.yaml apiVersion: kubeadm.k8s.io/v1beta3 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: abcdef.0123456789abcdef ttl: 24h0m0s usages: - signing - authentication kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.110.88 bindPort: 6443 nodeRegistration: criSocket: unix:///var/run/containerd/containerd.sock imagePullPolicy: IfNotPresent name: k8s-master-01 taints: null --- apiServer: timeoutForControlPlane: 4m0s apiVersion: kubeadm.k8s.io/v1beta3 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controllerManager: {} dns: {} etcd: local: dataDir: /var/lib/etcd imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers kind: ClusterConfiguration kubernetesVersion: 1.30.3 networking: dnsDomain: cluster.local serviceSubnet: 10.96.0.0/12 podSubnet: 10.244.0.0/16 scheduler: {} --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: ipvs --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration cgroupDriver: systemd 部署K8Skubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=Swap部署网络插件下载网络插件wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml[root@k8s-master-01 ~]# grep -i image kube-flannel.yml image: docker.io/flannel/flannel:v0.25.5 image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1 image: docker.io/flannel/flannel:v0.25.5 改为下面 要去阿里云上面构建自己的镜像root@k8s-master-01:~# grep -i image kube-flannel.yml image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5 image: registry.cn-guangzhou.aliyuncs.com/xingcangku/ddd:1.5.1 image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5 部署在master上即可kubectl apply -f kube-flannel.yml kubectl delete -f kube-flannel.yml #这个是删除网络插件的查看状态kubectl -n kube-flannel get pods kubectl -n kube-flannel get pods -w 
[root@k8s-master-01 ~]# kubectl get nodes # 全部ready [root@k8s-master-01 ~]# kubectl -n kube-system get pods # 两个coredns的pod也都ready部署kubectl命令提示(在所有节点上执行)yum install bash-completion* -y kubectl completion bash > ~/.kube/completion.bash.inc echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile source $HOME/.bash_profile出现root@k8s-node-01:~# kubectl get node E0720 07:32:10.289542 18062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused E0720 07:32:10.290237 18062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused E0720 07:32:10.292469 18062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused E0720 07:32:10.292759 18062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused E0720 07:32:10.294655 18062 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused The connection to the server localhost:8080 was refused - did you specify the right host or port? #在node节点执行下面命令修改ip地址 mkdir -p $HOME/.kube scp root@192.168.30.135:/etc/kubernetes/admin.conf $HOME/.kube/config chown $(id -u):$(id -g) $HOME/.kube/config重新触发证书上传(核心操作)在首次成功初始化控制平面(kubeadm init)后,需再次执行以下命令(秘钥有效期是两小时):root@k8s-01:~# sudo kubeadm init phase upload-certs --upload-certs I0807 05:49:38.988834 143146 version.go:256] remote version is much newer: v1.33.3; falling back to: stable-1.27 W0807 05:49:48.990339 143146 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.27.txt": Get "https://cdn.dl.k8s.io/release/stable-1.27.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers) W0807 05:49:48.990372 143146 version.go:105] falling back to the local client version: v1.27.6 [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace [upload-certs] Using certificate key: 52cb628f88aefbb45cccb94f09bb4e27f9dc77aff464e7bc60af0a9843f41a3fkubeadm join <MASTER_IP>:6443 --token <TOKEN> \ --discovery-token-ca-cert-hash sha256:<HASH> \ --control-plane --certificate-key <KEY>
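如果后面还要追加 worker 节点,而初始化时生成的 token 已过期(默认 24 小时),可以在 master 上重新生成一条 join 命令,示意如下:

```bash
# 在 master 上重新生成 worker 节点的 join 命令(token 默认 24h 过期)
kubeadm token create --print-join-command

# 输出形如(具体 token 与 hash 以实际输出为准):
# kubeadm join 192.168.110.88:6443 --token xxxxxx.xxxxxxxxxxxxxxxx \
#     --discovery-token-ca-cert-hash sha256:<HASH>
```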
2024年08月07日
218 阅读
184 评论
0 点赞
2024-08-06
kubeadm 部署k8s 1.30
一、k8s包yum源介绍二、准备工作准备3台机器修改好网络改为固定IPcd /etc/NetworkManager/system-connections/ cp /etc/NetworkManager/system-connections/ens160.nmconnection /etc/NetworkManager/system-connections/ens160.nmconnection.backup vi ens160.nmconnection TYPE=Ethernet PROXY_METHOD=none BROWSER_ONLY=no BOOTPROTO=static DEFROUTE=yes NAME=ens33 DEVICE=ens33 ONBOOT=yes #这个可以让开机不用nmcli IPADDR=192.168.110.97 GATEWAY=192.168.110.1 NETSTAT=255.255.255.0 DNS1=8.8.8.8 DNS2=192.168.110.1 sudo systemctl restart NetworkManager nmcli conn up ens33修改主机名及解析(三台节点)# 1、修改主机名 hostnamectl set-hostname k8s-master-01 hostnamectl set-hostname k8s-node-01 hostnamectl set-hostname k8s-node-02 # 2、三台机器添加host解析 cat >> /etc/hosts << "EOF" 192.168.110.97 k8s-master-01 m1 192.168.110.213 k8s-node-01 n1 192.168.110.2 k8s-node-02 n2 EOF关闭一些服务(三台节点)# 1、关闭selinux sed -i 's#enforcing#disabled#g' /etc/selinux/config setenforce 0 # 2、禁用防火墙,网络管理,邮箱 systemctl disable --now firewalld NetworkManager postfix # 3、关闭swap分区 swapoff -a # 注释swap分区 cp /etc/fstab /etc/fstab_bak sed -i '/swap/d' /etc/fstabsshd服务优化# 1、加速访问 sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config sudo sed -ri 's@^#?\s*GSSAPIAuthentication\s+yes@GSSAPIAuthentication no@gi' /etc/ssh/sshd_config grep ^UseDNS /etc/ssh/sshd_config grep ^GSSAPIAuthentication /etc/ssh/sshd_config systemctl restart sshd # 2、密钥登录(主机点做):为了让后续一些远程拷贝操作更方便 ssh-keygen ssh-copy-id -i root@k8s-master-01 ssh-copy-id -i root@k8s-node-01 ssh-copy-id -i root@k8s-node-02 #连接测试 [root@m01 ~]# ssh 172.16.1.7 Last login: Tue Nov 24 09:02:26 2020 from 10.0.0.1 [root@web01 ~]#6.增大文件标识符数量(退出当前会话立即生效)cat > /etc/security/limits.d/k8s.conf <<EOF * soft nofile 65535 * hard nofile 131070 EOF ulimit -Sn ulimit -Hn所有节点配置模块自动加载,此步骤不做的话(kubeadm init时会直接失败)modprobe br_netfilter modprobe ip_conntrack cat >>/etc/rc.sysinit<<EOF #!/bin/bash for file in /etc/sysconfig/modules/*.modules ; do [ -x $file ] && $file done EOF echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules chmod 755 /etc/sysconfig/modules/br_netfilter.modules chmod 755 /etc/sysconfig/modules/ip_conntrack.modules lsmod | grep br_netfilter 同步集群时间# =====================》chrony服务端:服务端我们可以自己搭建,也可以直接用公网上的时间服务器,所以是否部署服务端看你自己 # 1、安装 yum -y install chrony # 2、修改配置文件 mv /etc/chrony.conf /etc/chrony.conf.bak cat > /etc/chrony.conf << EOF server ntp1.aliyun.com iburst minpoll 4 maxpoll 10 server ntp2.aliyun.com iburst minpoll 4 maxpoll 10 server ntp3.aliyun.com iburst minpoll 4 maxpoll 10 server ntp4.aliyun.com iburst minpoll 4 maxpoll 10 server ntp5.aliyun.com iburst minpoll 4 maxpoll 10 server ntp6.aliyun.com iburst minpoll 4 maxpoll 10 server ntp7.aliyun.com iburst minpoll 4 maxpoll 10 driftfile /var/lib/chrony/drift makestep 10 3 rtcsync allow 0.0.0.0/0 local stratum 10 keyfile /etc/chrony.keys logdir /var/log/chrony stratumweight 0.05 noclientlog logchange 0.5 EOF # 4、启动chronyd服务 systemctl restart chronyd.service # 最好重启,这样无论原来是否启动都可以重新加载配置 systemctl enable chronyd.service systemctl status chronyd.service # =====================》chrony客户端:在需要与外部同步时间的机器上安装,启动后会自动与你指定的服务端同步时间 # 下述步骤一次性粘贴到每个客户端执行即可 # 1、安装chrony yum -y install chrony # 2、需改客户端配置文件 mv /etc/chrony.conf /etc/chrony.conf.bak cat > /etc/chrony.conf << EOF server 192.168.110.97 iburst driftfile /var/lib/chrony/drift makestep 10 3 rtcsync local stratum 10 keyfile /etc/chrony.key logdir /var/log/chrony stratumweight 0.05 noclientlog logchange 0.5 EOF # 3、启动chronyd systemctl restart chronyd.service systemctl enable chronyd.service 
systemctl status chronyd.service # 4、验证 chronyc sources -v更新基础yum源(三台机器)# 1、清理 rm -rf /etc/yum.repos.d/* yum remove epel-release -y rm -rf /var/cache/yum/x86_64/6/epel/ # 2、安装阿里的base与epel源 curl -s -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo curl -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo yum clean all yum makecache # 或者用华为的也行 # curl -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo # yum install -y https://repo.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm更新基础yum源(三台机器)# 1、清理 rm -rf /etc/yum.repos.d/* yum remove epel-release -y rm -rf /var/cache/yum/x86_64/6/epel/ # 2、安装阿里的base与epel源 curl -s -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo curl -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo yum clean all yum makecache # 或者用华为的也行 # curl -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo # yum install -y https://repo.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm更新系统软件(排除内核) yum update -y --exclud=kernel*安装基础常用软件yum -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync unzip git更新内核(docker对系统内核要求比较高,最好使用4.4+)主节点操作wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.274-1.el7.elrepo.x86_64.rpm wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.274-1.el7.elrepo.x86_64.rpm for i in n1 n2 ; do scp kernel-lt-* $i:/root; done 补充:如果下载的慢就从网盘里拿吧 链接:https://pan.baidu.com/s/1gVyeBQsJPZjc336E8zGjyQ 提取码:Egon 三个节点操作 #安装 yum localinstall -y /root/kernel-lt* #调到默认启动 grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg #查看当前默认启动的内核 grubby --default-kernel #重启系统 reboot三个节点安装IPVS# 1、安装ipvsadm等相关工具 yum -y install ipvsadm ipset sysstat conntrack libseccomp # 2、配置加载 cat > /etc/sysconfig/modules/ipvs.modules <<"EOF" #!/bin/bash ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack" for kernel_module in ${ipvs_modules}; do /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1 if [ $? 
-eq 0 ]; then /sbin/modprobe ${kernel_module} fi done EOF chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs三台机器修改内核参数cat > /etc/sysctl.d/k8s.conf << EOF net.ipv4.ip_forward = 1 net.bridge.bridge-nf-call-iptables = 1 net.bridge.bridge-nf-call-ip6tables = 1 fs.may_detach_mounts = 1 vm.overcommit_memory=1 vm.panic_on_oom=0 fs.inotify.max_user_watches=89100 fs.file-max=52706963 fs.nr_open=52706963 net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp.keepaliv.probes = 3 net.ipv4.tcp_keepalive_intvl = 15 net.ipv4.tcp.max_tw_buckets = 36000 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp.max_orphans = 327680 net.ipv4.tcp_orphan_retries = 3 net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.ip_conntrack_max = 65536 net.ipv4.tcp_max_syn_backlog = 16384 net.ipv4.top_timestamps = 0 net.core.somaxconn = 16384 EOF # 立即生效 sysctl --system三、 安装containerd(三台节点都要做)自Kubernetes1.24以后,K8S就不再原生支持docker了我们都知道containerd来自于docker,后被docker捐献给了云原生计算基金会(我们安装docker当然会一并安装上containerd)安装方法:centos的libseccomp的版本为2.3.1,不满足containerd的需求,需要下载2.4以上的版本即可,我这里部署2.5.1版本。 rpm -e libseccomp-2.5.1-1.el8.x86_64 --nodeps rpm -ivh libseccomp-2.5.1-1.e18.x8664.rpm #官网已经gg了,不更新了,请用阿里云 # wget http://rpmfind.net/linux/centos/8-stream/Base0s/x86 64/0s/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm wget https://mirrors.aliyun.com/centos/8/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm cd /root/rpms sudo yum localinstall libseccomp-2.5.1-1.el8.x86_64.rpm -y #yum libseccomp-2.5.1-1.el8.x86_64.rpm -y rpm -qa | grep libseccomp 安装方式一:(基于阿里云的源)推荐用这种方式,安装的是#1、卸载之前的 yum remove docker docker-ce containerd docker-common docker-selinux docker-engine -y #2、准备repo cd /etc/yum.repos.d/ wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo # 3、安装 yum install containerd* -y配置# 1、配置 mkdir -pv /etc/containerd containerd config default > /etc/containerd/config.toml #为containerd生成配置文件 #2、替换默认pause镜像地址:这一步非常非常非常非常重要 grep sandbox_image /etc/containerd/config.toml sed -i 's/registry.k8s.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/' /etc/containerd/config.toml grep sandbox_image /etc/containerd/config.toml #请务必确认新地址是可用的: sandbox_image="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" #3、配置systemd作为容器的cgroup driver grep SystemdCgroup /etc/containerd/config.toml sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml grep SystemdCgroup /etc/containerd/config.toml # 4、配置加速器(必须配置,否则后续安装cni网络插件时无法从docker.io里下载镜像) #参考:https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration #添加 config_path="/etc/containerd/certs.d" sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.tomlmkdir -p /etc/containerd/certs.d/docker.io cat>/etc/containerd/certs.d/docker.io/hosts.toml << EOF server ="https://docker.io" [host."https ://dockerproxy.com"] capabilities = ["pull","resolve"] [host."https://docker.m.daocloud.io"] capabilities = ["pull","resolve"] [host."https://docker.chenby.cn"] capabilities = ["pull","resolve"] [host."https://registry.docker-cn.com"] capabilities = ["pull","resolve" ] [host."http://hub-mirror.c.163.com"] capabilities = ["pull","resolve" ] EOF#5、配置containerd开机自启动 #5.1 启动containerd服务并配置开机自启动 systemctl daemon-reload && systemctl restart containerd systemctl enable --now containerd #5.2 查看containerd状态 systemctl status containerd #5.3查看containerd的版本 ctr version-------------------------配置docker(下述内容不用操作,因为k8s1.30直接对接containerd) # 1、配置docker # 
修改配置:驱动与kubelet保持一致,否则会后期无法启动kubelet cat > /etc/docker/daemon.json << "EOF" { "exec-opts": ["native.cgroupdriver=systemd"], "registry-mirrors":["https://reg-mirror.qiniu.com/"] } EOF # 2、重启docker systemctl restart docker.service systemctl enable docker.service # 3、查看验证 [root@k8s-master-01 ~]# docker info |grep -i cgroup Cgroup Driver: systemd Cgroup Version: 1四、 安装k8s官网:https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/1、三台机器准备k8s源cat > /etc/yum.repos.d/kubernetes.repo <<"EOF" [kubernetes] name=Kubernetes baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/ enabled=1 gpgcheck=1 gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key EOF #参考:https://developer.aliyun.com/mirror/kubernetes/setenforce yum install -y kubelet-1.30* kubeadm-1.30* kubectl-1.30* systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet2、主节点操作(node节点不执行)初始化master节点(仅在master节点上执行) #可以kubeadm config images list查看 [root@k8s-master-01 ~]# kubeadm config images list registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0kubeadm config print init-defaults > kubeadm.yamlvi kubeadm.yaml apiVersion: kubeadm.k8s.io/v1beta3 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: abcdef.0123456789abcdef ttl: 24h0m0s apiVersion: kubeadm.k8s.io/v1beta3 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token token: abcdef.0123456789abcdef ttl: 24h0m0s usages: - signing - authentication kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.110.97 #这里要改为控制节点 bindPort: 6443 nodeRegistration: criSocket: unix:///var/run/containerd/containerd.sock imagePullPolicy: IfNotPresent name: k8s-master-01 #这里要修改 taints: null --- apiServer: timeoutForControlPlane: 4m0s apiVersion: kubeadm.k8s.io/v1beta3 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controllerManager: {} dns: {} etcd: local: dataDir: /var/lib/etcd imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers #要去阿里云创建仓库 kind: ClusterConfiguration kubernetesVersion: 1.30.3 networking: dnsDomain: cluster.local serviceSubnet: 10.96.0.0/12 podSubnet: 10.244.0.0/16 #添加这行 scheduler: {} #在最后插入以下内容 --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: ipvs --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration cgroupDriver: systemd部署K8Skubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=Swap部署网络插件下载网络插件wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml[root@k8s-master-01 ~]# grep -i image kube-flannel.yml image: docker.io/flannel/flannel:v0.25.5 image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel1 image: docker.io/flannel/flannel:v0.25.5 改为下面 要去阿里云上面构建自己的镜像[root@k8s-master-01 ~]# grep -i image kube-flannel.yml image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5 image: registry.cn-guangzhou.aliyuncs.com/xingcangku/ddd:1.5.1 image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5 部署在master上即可kubectl apply -f kube-flannel.yml kubectl delete -f kube-flannel.yml #这个是删除网络插件的查看状态kubectl -n kube-flannel get pods kubectl -n kube-flannel get pods -w [root@k8s-master-01 ~]# kubectl get nodes # 全部ready [root@k8s-master-01 ~]# kubectl -n kube-system get 
pods # 两个coredns的pod也都ready部署kubectl命令提示(在所有节点上执行)yum install bash-completion* -y kubectl completion bash > ~/.kube/completion.bash.inc echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile source $HOME/.bash_profile排错解决方法:===========================================部署遇到问题之后,铲掉环境重新部署 # 在master节点上 kubeadm reset -f # 在所有节点包括master节点在内上执行如下命令 cd /tmp # 有时候在当前目录下可能与要卸载的包重名的而导致卸载报错,可以切个目录 rm -rf ~/.kube/ rm -rf /etc/kubernetes/ rm -rf /etc/cni rm -rf /opt/cni rm -rf /var/lib/etcd rm -rf /var/etcd rm -rf /run/flannel rm -rf /opt/cni rm -rf /etc/cni/net.d rm -rf /run/xtables.lock systemctl stop kubelet yum remove kube* -y for i in `df |grep kubelet |awk '{print $NF}'`;do umount -l $i ;done # 先卸载所有kubelet挂载否则下条命令无法删除 rm -rf /var/lib/kubelet rm -rf /etc/systemd/system/kubelet.service.d rm -rf /etc/systemd/system/kubelet.service rm -rf /usr/bin/kube* iptables -F reboot # 重新启动,从头再来 # 第一步:在所有节点执行 yum install -y kubelet-1.30* kubeadm-1.30* kubectl-1.30* systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet # 第二步:只在master节点上执行 [root@k8s-master-01 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=Swap # 第三步:部署网络插件 kubectl apply -f kube-flannel.yml kubectl delete -f kube-flannel.yml mkdir -p /etc/containerd/certs.d/registry.aliyuncs.com tee /etc/containerd/certs.d/registry.aliyuncs.com/hosts.toml <<EOF server = "https://registry.aliyuncs.com" [host."https://registry.aliyuncs.com"] capabilities = ["pull", "resolve"] EOF
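集群节点和 flannel 都 Ready 之后,可以用一个最小的 Deployment + Service 做冒烟测试,确认调度、跨节点网络和 DNS 正常(下面的镜像和名称只是示意):

```bash
# 最小冒烟测试示意:部署 nginx 并在集群内验证 Service 域名解析
kubectl create deployment nginx-test --image=nginx --replicas=2
kubectl expose deployment nginx-test --port=80
kubectl get pods -o wide                      # 确认 pod 分布在不同节点且 Running
kubectl run curl-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://nginx-test.default.svc.cluster.local   # 能返回 nginx 页面说明 DNS 与跨节点网络正常
# 清理
kubectl delete deployment nginx-test && kubectl delete svc nginx-test
```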
2024年08月06日
133 阅读
192 评论
0 点赞
2023-09-26
operator开发 mysql一主多从
CRD | | | 定义出/创建出 | | ↓ CR,即resource type ----------------》受自定义的控制器watch监听并控制 | | | 定义出/创建出 | | ↓ 一条具体的resource 实现的功能: 1. 支持一主多从 采用GID的自动备份 2. 支持主从的自动选举切换 3. 支持在线扩容 副本不足时会自动拉起 4. 支持就绪探针的检测 5. .........一、go环境准备wget https://golang.google.cn/dl/go1.22.5.linux-amd64.tar.gz tar zxvf go1.22.5.linux-amd64.tar.gz mv go /usr/local/ cat >> /etc/profile << 'EOF' export GOROOT=/usr/local/go export PATH=$PATH:$GOROOT/bin EOF source /etc/profile go version #查看是否生效 # 设置go代理 # 1、也可以用全球cdn加速 export GOPROXY=https://goproxy.cn,direct go env -w GOPROXY=https://goproxy.cn,direct二、安装kubebuilder框架# 1、下载最新版本的kubebuilder(下载慢的话你就手动下载然后上传) wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v4.1.1/kubebuilder_linux_amd64 chmod +x kubebuilder && mv kubebuilder /usr/local/bin/ $ kubebuilder version #安装必要工具 sudo apt update && sudo apt install -y make sudo apt install -y build-essential git curl 三、初始化项目# 创建项目 mkdir -p /src/application-operator cd /src/application-operator go mod init application-operator kubebuilder init --domain=egonlin.com --owner egonlin # 创建api $ kubebuilder create api --group apps --version v1 --kind Application # 设定的kind的首字母必须大写 Create Resource [y/n] y Create Controller [y/n] y # --kind Application,指定你要创建的resource type的名字,注意首字母必须大写#项目地址直接拉取 https://gitee.com/axzys/mysqlcluster-operator/tree/slave/四、可以先在本地测试执行# 一、修改文件:文件utils.go #1、文件开头增加导入:"k8s.io/client-go/tools/clientcmd" 删除导入:"k8s.io/client-go/rest" #2、方法execCommandOnPod修改 config, err := clientcmd.BuildConfigFromFlags("", KubeConfigPath) // 打开注释 // config, err := rest.InClusterConfig() // 加上注释 # 二、mysqlcluster_controller.go修改 const ( ...... KubeConfigPath = "/root/.kube/config" // 打开注释 ...... ) # 并且确保宿主机上存在/root/.kube/config # 测试yaml apiVersion: apps.egonlin.com/v1 kind: MysqlCluster metadata: name: mysqlcluster-sample labels: app.kubernetes.io/name: mysql-operator app.kubernetes.io/managed-by: kustomize spec: image: registry.cn-shanghai.aliyuncs.com/egon-k8s-test/mysql:5.7 replicas: 4 masterService: master-service slaveService: slave-service storage: storageClassName: "local-path" size: 1Gi resources: requests: cpu: "500m" memory: "1Gi" limits: cpu: "1" memory: "2Gi" livenessProbe: initialDelaySeconds: 30 timeoutSeconds: 5 tcpSocket: port: 3306 先执行make install 然后执行 make run 然后创建测试pod创建测试功能正常以后。可以把控制器放进k8s里面。五、以容器形式部署controller如果想要部署在k8s里面需要把上面修改的配置还原回去。# dockerfile文件中的FROM镜像无法拉取,要换成自己的 $ vi Dockerfile # FROM golang:1.22 AS builder FROM registry.cn-hangzhou.aliyuncs.com/egon-k8s-test/golang:1.22 AS builder #FROM gcr.io/distroless/static:nonroot FROM registry.cn-shanghai.aliyuncs.com/egon-k8s-test/static:nonroot #并且构建过程中需要执行go mod download,默认从国外源下载非常慢需要再该命令前设置好环境变量 # 在go mod download前设置好环境变量 ENV GOPROXY=https://mirrors.aliyun.com/goproxy/,direct RUN go mod download 然后构建 docker 镜像make docker-build IMG=mysql-operator-master:v0.01 #然后启动推上阿里云仓库# 使用 docker 镜像, 部署 controller 到 k8s 集群,会部署成一个deployment make deploy IMG=registry.cn-guangzhou.aliyuncs.com/xingcangku/bendi:v0.8#查询: 默认在system名称空间下 [root@master01 mysql-operator-master]# kubectl get namespace NAME STATUS AGE application-operator-system Active 3d default Active 23d kube-flannel Active 23d kube-node-lease Active 23d kube-public Active 23d kube-system Active 23d monitor Active 22d system Active 36s [root@master01 mysql-operator-master]# kubectl -n system get api/ cmd/ Dockerfile .git/ .golangci.yml go.sum internal/ PROJECT test/ bin/ config/ .dockerignore .gitignore go.mod hack/ Makefile README.md test.yaml [root@master01 mysql-operator-master]# kubectl -n system get 
deployments.apps NAME READY UP-TO-DATE AVAILABLE AGE controller-manager 1/1 1 1 52s [root@master01 mysql-operator-master]# kubectl -n controller-manager get pods No resources found in controller-manager namespace. [root@master01 mysql-operator-master]# kubectl delete -f ./config/samples/apps_v1_mysqlcluster.yaml Error from server (NotFound): error when deleting "./config/samples/apps_v1_mysqlcluster.yaml": mysqlclusters.apps.egonlin.com "mysqlcluster-sample" not found [root@master01 mysql-operator-master]# kubectl apply -f ./config/samples/apps_v1_mysqlcluster.yaml mysqlcluster.apps.egonlin.com/mysqlcluster-sample created [root@master01 mysql-operator-master]# kubectl -n controller-manager get pods No resources found in controller-manager namespace. [root@master01 mysql-operator-master]# kubectl get pods -n system NAME READY STATUS RESTARTS AGE controller-manager-5699b5b476-4ngwd 1/1 Running 0 3m3s# 如果发现pod没有起来可能是存储的问题。项目来面有个文件local-path-provisioner-0.0.29 进入然后再进入deploy这个文件 [root@master01 deploy]# kubectl apply -f local-path-storage.yaml namespace/local-path-storage created serviceaccount/local-path-provisioner-service-account created role.rbac.authorization.k8s.io/local-path-provisioner-role created clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created rolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created deployment.apps/local-path-provisioner created storageclass.storage.k8s.io/local-path created configmap/local-path-config created [root@master01 deploy]# kubectl get pods NAME READY STATUS RESTARTS AGE axing-zzz-7d5cb7df74-4lbqn 1/1 Running 6 (31m ago) 16d mysql-01 1/1 Running 0 7m50s mysql-02 1/1 Running 0 40s mysql-03 0/1 ContainerCreating 0 30s [root@master01 deploy]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE mysql-01 Bound pvc-c4ffa04d-78bc-44e5-9948-8dd23e8197d4 1Gi RWO local-path <unset> 8m4s mysql-02 Bound pvc-9870b7dc-274f-48d9-ab9c-12fdad4ab267 1Gi RWO local-path <unset> 8m4s mysql-03 Bound pvc-517035dc-ec28-4733-8d8d-244cce025604 1Gi RWO local-path <unset> 8m4s [root@master01 mysql-operator-master]# kubectl get pod -n system 'NAME READY STATUS RESTARTS AGE controller-manager-5699b5b476-4ngwd 1/1 Running 0 103m [root@master01 mysql-operator-master]# [root@master01 mysql-operator-master]# kubectl -n system get deployments.apps NAME READY UP-TO-DATE AVAILABLE AGE controller-manager 1/1 1 1 103m # 可以看日志的情况 [root@master01 mysql-operator-master]# kubectl -n system logs -f controller-manager-5699b5b476-4ngwd正常最后是会一直更新日志{lamp/}最后问题总结# 启动operator的时候第三个pod无法拉起,一直pending,查看 [root@k8s-node-01 ~]# kubectl describe pod mysql-03 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 11m (x3 over 17m) default-scheduler 0/3 nodes are available: 1 Insufficient cpu, 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod. Warning FailedScheduling 89s (x2 over 6m30s) default-scheduler 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. 
# 报错 disk 磁盘资源不足,因为我们用的存储卷是 local-path-storage,所以会有卷亲和,mysql-03 固定调度到卷所在的节点,卷所在的节点为 k8s-node-01 节点,通过查看也能分析出来
[root@k8s-node-01 ~]# kubectl get pods -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
mysql-01   1/1     Running   0          18m   10.244.0.103   k8s-master-01   <none>           <none>
mysql-02   1/1     Running   0          18m   10.244.2.184   k8s-node-02     <none>           <none>
mysql-03   0/1     Pending   0          18m   <none>         <none>          <none>           <none>

# 于是去 k8s-node-01 节点上查看,发现磁盘空间确实占满了,先尝试把该节点的一些安装包、/tmp 目录、yum 缓存、/var/log 都清理掉,kubelet 的日志轮转也设置了
[root@k8s-node-01 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 --container-log-max-files=2 --container-log-max-size='1Ki'"
# 注意:--container-log-max-files=2 必须大于 1,不能小于或等于 1,否则 kubelet 无法启动
# go build 缓存(/root/.cache)还是别清了,否则 make run 会花很久时间

# 并且把一些没有用的镜像也清理掉
docker system prune -a
nerdctl system prune -a
# 作用解释:
# system prune:这个命令用于清理 Docker 系统,删除不再使用的容器、镜像、网络等资源。
# -a(--all):此选项会使命令删除所有未使用的镜像,而不仅仅是无标签的镜像。
# 运行 docker/nerdctl system prune -a 后,系统会询问是否确认删除这些资源。确认后,Docker 会清理掉停止的容器、未使用的镜像和网络,从而释放磁盘空间。发现空间得到了一定程度的释放。

查看已删除但仍被占用的文件
当一个文件被删除后,如果有进程仍然在使用它,那么这个文件所占用的空间并不会立即被释放。文件系统的空间使用会显示为已用,但 du 无法检测到这些被删除的文件。
检测被删除但仍然占用的文件,可以使用 lsof 来列出所有仍然被进程占用但已删除的文件:
lsof | grep deleted
如果发现某些文件已经被删除,但仍然被进程占用,可以通过重启相应的进程来释放这些文件占用的空间。
发现一堆这种文件,查找该进程,发现就是一个裸启动的 mysql 进程,无用,可以 kill 杀掉:
kill -9 1100
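控制器跑起来之后,可以用下面几条命令快速确认 CRD、CR 以及受管的 MySQL pod 状态(资源名取自上文示例,其余仅作示意):

```bash
# 确认 CRD 已注册
kubectl get crd | grep mysqlclusters.apps.egonlin.com
# 查看自定义资源及其细节(Events 里可以看到控制器的调谐动作)
kubectl get mysqlclusters.apps.egonlin.com
kubectl describe mysqlclusters.apps.egonlin.com mysqlcluster-sample
# 查看控制器日志,以及受管的 mysql pod / pvc(上文实例会产生 mysql-01/02/03 这样的 pod 与同名 pvc)
kubectl -n system logs -f deploy/controller-manager
kubectl get pods,pvc
```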
2023年09月26日
27 阅读
0 评论
0 点赞