2025-09-01
rocky-linux-9离线安装k8s 1.27
1. Stage A: build the offline bundle on an Internet-connected build machine

The build machine should ideally also run Rocky 9, but any x86_64 Linux works. The steps below use dnf, plus either ctr or docker (pick one) to fetch images.

1.1 Directories and variables

```
export K8S_VER="1.27.16"
export K8S_MINOR="v1.27"
export WORK="/opt/k8s-offline-${K8S_VER}"
sudo mkdir -p $WORK/{rpms,images,cni,calico,tools}
```

1.2 Configure the Kubernetes 1.27 RPM repository (used temporarily on the build machine only)

```
# /etc/yum.repos.d/kubernetes-1.27.repo
[kubernetes-1.27]
name=Kubernetes 1.27
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
```

Since 2023, Kubernetes publishes per-minor-version repositories on pkgs.k8s.io; the repo above is specific to 1.27.

1.3 Download the RPMs (with dependencies, for installing on the offline nodes)

```
sudo dnf -y install dnf-plugins-core
# containerd / runc / common dependencies
sudo dnf -y download --resolve --destdir=$WORK/rpms \
  containerd runc conntrack-tools iptables iproute-tc ethtool socat \
  tar openssl curl bash-completion
```

Rocky's default repositories have no package named containerd, so `dnf download` in strict mode exits immediately. Add the Docker CE repository first:

```
# Install the dnf plugins and add the Docker CE repo (works on RHEL/EL9)
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo
# Refresh metadata
sudo dnf clean all && sudo dnf makecache

# kube components (pinned to 1.27.16)
# Do NOT use this --resolve form:
#sudo dnf -y download --resolve --destdir=$WORK/rpms \
#  kubelet-${K8S_VER} kubeadm-${K8S_VER} kubectl-${K8S_VER} \
#  kubernetes-cni cri-tools

# Download only, without resolving dependencies
sudo dnf -y download --destdir="$WORK/rpms" \
  kubelet-${K8S_VER} kubeadm-${K8S_VER} kubectl-${K8S_VER} \
  kubernetes-cni cri-tools
```

1.4 Download the CNI plugins and crictl

```
# CNI plugins (official binary release; installed later into /opt/cni/bin)
curl -L -o $WORK/cni/cni-plugins-linux-amd64-v1.3.0.tgz \
  https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz

# crictl (from cri-tools)
CRICTL_VER="v1.27.0"   # any version compatible with the cluster is fine
curl -L -o $WORK/tools/crictl-${CRICTL_VER}-linux-amd64.tar.gz \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VER}/crictl-${CRICTL_VER}-linux-amd64.tar.gz
```

1.5 Download the Calico manifest and images

```
curl -L -o $WORK/calico/calico-v3.26.4.yaml \
  https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/calico.yaml
# Extract the image names (you can also list them by hand)
grep -E "image: .*calico" $WORK/calico/calico-v3.26.4.yaml | awk '{print $2}' | sort -u > $WORK/images/calico-images.txt

[root@localhost ~]# cat $WORK/images/calico-images.txt
docker.io/calico/cni:v3.26.4
docker.io/calico/kube-controllers:v3.26.4
docker.io/calico/node:v3.26.4
```

1.6 Generate the image list kubeadm needs (pinned to v1.27.16)

```
# Temporarily install kubeadm on the build machine (or use a container) to print the image list
sudo dnf -y install kubeadm-${K8S_VER}
kubeadm config images list --kubernetes-version v${K8S_VER} > $WORK/images/k8s-images.txt
# `kubeadm config images list` is the officially recommended way to obtain the
# offline image list; it also accepts --config for a custom registry.
```

1.7 Pull and pack the images (pick one: Docker or containerd)

```
# Option A: Docker
while read -r img; do docker pull "$img"; done < $WORK/images/k8s-images.txt
while read -r img; do docker pull "$img"; done < $WORK/images/calico-images.txt
docker save $(cat $WORK/images/k8s-images.txt $WORK/images/calico-images.txt) \
  -o $WORK/images/k8s-${K8S_VER}-and-calico-v3.26.4.tar

# Option B: containerd (ctr)
sudo systemctl enable --now containerd || true
while read -r img; do sudo ctr -n k8s.io i pull "$img"; done < $WORK/images/k8s-images.txt
while read -r img; do sudo ctr -n k8s.io i pull "$img"; done < $WORK/images/calico-images.txt
sudo ctr -n k8s.io i export $WORK/images/k8s-${K8S_VER}-and-calico-v3.26.4.tar \
  $(cat $WORK/images/k8s-images.txt $WORK/images/calico-images.txt)
```

1.8 Pack the final bundle

```
cd $(dirname $WORK)
sudo tar czf k8s-offline-${K8S_VER}-rocky9.tar.gz $(basename $WORK)
# Copy this tar.gz to every offline node (control plane and workers)
```
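Since the offline nodes cannot re-download anything, it can be worth verifying the bundle before shipping it. A minimal sketch (my addition; it reuses the $WORK layout from 1.1, and the SHA256SUMS manifest name is my own convention):

```
#!/usr/bin/env bash
# Verify the bundle is complete and write a checksum manifest for the offline side.
set -euo pipefail
WORK="/opt/k8s-offline-1.27.16"   # same path as in section 1.1

# Every directory we populated must be non-empty.
for d in rpms images cni calico tools; do
  [ -n "$(ls -A "$WORK/$d" 2>/dev/null)" ] || { echo "MISSING: $WORK/$d"; exit 1; }
done

# Checksum manifest; verify on the offline node with: sha256sum -c SHA256SUMS
( cd "$WORK" && find . -type f ! -name SHA256SUMS -print0 \
    | xargs -0 sha256sum > SHA256SUMS )
echo "OK: manifest written to $WORK/SHA256SUMS"
```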
"$WORK"/{rpms,images,scripts} ARCH=$(uname -m) # 一般是 x86_64;如是 ARM64 则为 aarch64 # 1) 加 Docker 官方仓库(RHEL/EL 系列通用,Rocky 9 适用) sudo dnf -y install dnf-plugins-core sudo dnf config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo sudo dnf clean all && sudo dnf makecache # 2) 下载“完整功能”所需 RPM(含依赖) PKGS="docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras" # 用 --resolve 拉全依赖;若个别包临时不可用,strict=0 可跳过不中断 sudo dnf -y download --resolve --setopt=strict=0 \ --destdir="$WORK/rpms" --arch="$ARCH" $PKGS # 同时把 Rootless 相关常见依赖也一并打包(如尚未被上面带下) sudo dnf -y download --resolve --setopt=strict=0 \ --destdir="$WORK/rpms" --arch="$ARCH" \ slirp4netns fuse-overlayfs container-selinux # 3)(可选)打基础测试镜像离线包 docker pull hello-world:latest docker pull alpine:latest docker pull busybox:stable docker save hello-world:latest alpine:latest busybox:stable -o "$WORK/images/docker-base-images.tar" # 4) 生成本地仓库元数据 + 安装脚本 sudo dnf -y install createrepo_c createrepo_c "$WORK/rpms" cat > "$WORK/scripts/install-offline.sh" <<"EOF" #!/usr/bin/env bash set -euo pipefail DIR="$(cd "$(dirname "$0")"/.. && pwd)" # 临时本地仓库安装方法(更稳妥) sudo dnf -y install createrepo_c || true sudo createrepo_c "$DIR/rpms" sudo tee /etc/yum.repos.d/docker-offline.repo >/dev/null <<REPO [docker-offline] name=Docker Offline baseurl=file://$DIR/rpms enabled=1 gpgcheck=0 REPO # 安装 sudo dnf -y install docker-ce docker-ce-cli containerd.io \ docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras # 启动并开机自启 sudo systemctl enable --now docker # 可选:把当前用户加入 docker 组(需要重新登录生效) if id -u "$SUDO_USER" &>/dev/null; then sudo usermod -aG docker "$SUDO_USER" || true fi # 导入基础镜像(如存在) if [ -f "$DIR/images/docker-base-images.tar" ]; then sudo docker load -i "$DIR/images/docker-base-images.tar" fi echo "Done. 
2.2 Install on the offline machine

```
# Copy ${WORK}.tar.gz to the offline host, unpack, and run the script:
sudo tar -C /opt -xzf /path/to/docker-offline-*.tar.gz
cd /opt/docker-offline-*/scripts
#sudo ./install-offline.sh
sudo dnf -y --disablerepo='*' --nogpgcheck install \
  /opt/docker-offline-2025-09-01/rpms/*.rpm

# Verify after re-login
docker version
```

```
[root@localhost opt]# docker version
Client: Docker Engine - Community
 Version:           28.3.3
 API version:       1.51
 Go version:        go1.24.5
 Git commit:        980b856
 Built:             Fri Jul 25 11:36:28 2025
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          28.3.3
  API version:      1.51 (minimum version 1.24)
  Go version:       go1.24.5
  Git commit:       bea959c
  Built:            Fri Jul 25 11:33:28 2025
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.27
  GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
 runc:
  Version:          1.2.5
  GitCommit:        v1.2.5-0-g59923ef
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```

```
docker compose version   # note: `docker compose` (the v2 plugin), not the old docker-compose
docker run --rm hello-world
```

3. Stage B: install and initialize on the offline nodes

3.1 System preparation (all nodes)

```
sudo tar xzf k8s-offline-1.27.16-rocky9.tar.gz -C /
OFF="/opt/k8s-offline-1.27.16"

hostnamectl set-hostname k8s-01
echo "192.168.30.150 k8s-01" >> /etc/hosts
ping -c1 k8s-01

swapoff -a
sed -ri 's/^\s*([^#].*\sswap\s)/#\1/' /etc/fstab

cat >/etc/sysctl.d/k8s.conf <<'EOF'
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
EOF
sysctl --system

# Load the IPVS kernel modules first
cat >/etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
```

3.1.1 Disable swap (including zram)

```
# Rocky 9 enables zram by default; kubelet requires swap to be off:
sudo swapoff -a
# Permanently: remove the zram generator or disable its unit
sudo dnf -y remove zram-generator-defaults || true
# Comment out any swap entries in /etc/fstab, then confirm:
lsblk | grep -E 'SWAP|zram' || true
# On RHEL 9 / systemd-based distributions swap is usually provided via
# zram-generator; disabling or removing it is one of the officially suggested options.
```

3.1.2 Kernel modules and sysctl (bridge/overlay/IP forwarding)

```
# /etc/modules-load.d/k8s.conf
echo -e "overlay\nbr_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe overlay && sudo modprobe br_netfilter

# /etc/sysctl.d/k8s.conf
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
# (These settings are explicitly required by the Kubernetes docs and the Fedora/Rocky guides.)
```

3.1.3 SELinux and the firewall

1. Prefer keeping SELinux Enforcing (switch to Permissive first only if you hit container labeling problems and need to debug).
2. Either open the required firewall ports or stop the firewall temporarily; see the official "Ports and Protocols" page. At minimum:
   control plane: 6443/TCP (API), 2379-2380/TCP (etcd), 10250/10257/10259/TCP;
   all nodes: 10250/TCP, plus the CNI ports per your CNI's documentation (e.g. Calico VXLAN defaults to 4789/UDP).
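Before moving on to the RPM install, a quick pre-flight check of the settings from 3.1-3.1.2 saves a failed kubeadm run later. A minimal sketch (my addition):

```
# Pre-flight: swap off, required modules loaded, the three sysctls set to 1.
set -u
fail=0
[ "$(swapon --show | wc -l)" -eq 0 ] || { echo "FAIL: swap is still on"; fail=1; }
for m in overlay br_netfilter; do
  lsmod | grep -q "^$m" || { echo "FAIL: module $m not loaded"; fail=1; }
done
for k in net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  [ "$(sysctl -n $k)" = "1" ] || { echo "FAIL: $k != 1"; fail=1; }
done
[ "$fail" -eq 0 ] && echo "pre-flight OK"
```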
3.2 Install the RPMs (directly from the offline directory)

```
cd $OFF/rpms
sudo dnf -y --disablerepo='*' install ./*.rpm
sudo systemctl enable --now containerd
# (--disablerepo='*' stops dnf from querying online metadata; very useful offline)
```

3.2.1 Install the CNI plugins and crictl

```
sudo mkdir -p /opt/cni/bin
sudo tar -xzf $OFF/cni/cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin
sudo tar -xzf $OFF/tools/crictl-v1.27.0-linux-amd64.tar.gz -C /usr/local/bin
```

3.3 Configure containerd (systemd cgroup & pause image)

```
# Generate the default config, then edit it
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# Key points: set SystemdCgroup=true and make sandbox_image use the pause:3.9 we imported
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo sed -i 's@sandbox_image = .*@sandbox_image = "registry.k8s.io/pause:3.9"@' /etc/containerd/config.toml
```

Open /etc/containerd/config.toml and confirm or adjust the following (all in the same file):

```
# Top of the file: do not disable CRI
disabled_plugins = []    # if io.containerd.grpc.v1.cri ("cri") appears here, remove it
version = 2              # add this line if the template lacks it

# The CRI plugin section must exist and be enabled (it usually is by default):
[plugins."io.containerd.grpc.v1.cri"]
  # pin the pause image to 3.9 (the match for 1.27.x, and the one imported offline)
  sandbox_image = "registry.k8s.io/pause:3.9"
  # for an offline/private registry, point it there instead, e.g.:
  # sandbox_image = "192.168.30.150:5000/pause:3.9"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true   # kubelet requires the systemd cgroup driver
```

Restart and self-check:

```
systemctl daemon-reload
systemctl enable --now containerd   # don't start kubelet yet; kubeadm will start it
systemctl status containerd --no-pager -l

# Confirm the CRI plugin is loaded (either check passing is enough):
ctr plugins ls | grep cri
# expect: io.containerd.grpc.v1.cri ... OK
# or:
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
# printing runtimeName etc. means it's fine; skip this if crictl isn't installed

sudo systemctl restart containerd
# (On RHEL9/cgroup v2, K8s recommends the systemd cgroup driver; containerd must enable it explicitly.)
```

3.4 Preload the images (offline import)

```
sudo ctr -n k8s.io images import $OFF/images/k8s-1.27.16-and-calico-v3.26.4.tar
sudo ctr -n k8s.io images ls | grep -E 'kube-|coredns|etcd|pause|calico'
```
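Before running kubeadm init, it can help to confirm that every image kubeadm expects actually landed in containerd's k8s.io namespace. A small check (my addition):

```
# Compare kubeadm's expected image list against what was imported.
kubeadm config images list --kubernetes-version v1.27.16 | while read -r img; do
  ctr -n k8s.io images ls -q | grep -qF "$img" \
    && echo "OK   $img" || echo "MISS $img"
done
```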
3.5 kubeadm initialization (control-plane node)

Create kubeadm-config.yaml (adjust advertiseAddress and the Pod/Service subnets as needed; Calico conventionally uses 192.168.0.0/16):

```
# kubeadm-config.yaml
[root@k8s-01 ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.30.150
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
kubernetesVersion: v1.27.16
imageRepository: registry.k8s.io
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.20.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

Below is the annotated variant that enables IPVS, with the options spelled out:

```
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.30.151   # <- change to this control plane's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: kubernetes
kubernetesVersion: v1.27.16
imageRepository: registry.k8s.io    # point at your private registry when offline
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 172.20.0.0/16          # must match the subnet Calico uses (it does here)
dns:
  type: CoreDNS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
ipvs:
  scheduler: rr      # optional: rr / wrr / wlc / sh / mh, etc.
  # strictARP: true  # enable later if you adopt MetalLB L2
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```

Pre-init host checks (the common kubelet blockers):

```
# 0) Hostname resolution (avoids the earlier hostname warning)
hostnamectl set-hostname k8s-01
grep -q '192.168.30.150 k8s-01' /etc/hosts || echo '192.168.30.150 k8s-01' >> /etc/hosts

# 1) Disable swap (if not already done)
swapoff -a
sed -ri 's/^\s*([^#].*\sswap\s)/#\1/' /etc/fstab

# 2) Required kernel module & sysctls
modprobe br_netfilter || true
cat >/etc/modules-load.d/k8s.conf <<'EOF'
br_netfilter
EOF
cat >/etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
sysctl --system

# 3) (Optional) rule out policy blockers: SELinux/firewall (relax first on offline/intranet hosts)
setenforce 0 2>/dev/null || true
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config 2>/dev/null || true
systemctl disable --now firewalld 2>/dev/null || true

# 4) Restart the key services
systemctl restart containerd
systemctl restart kubelet

# 5) Observe again
crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | egrep 'kube-(apiserver|controller-manager|scheduler)|etcd'
journalctl -u kubelet -e --no-pager | tail -n 200
```

Run the initialization:

```
sudo kubeadm init --config kubeadm-config.yaml
# Offline initialization, no Internet access:
kubeadm init --config kubeadm.yaml --upload-certs -v=5

# On success, configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo systemctl disable --now firewalld || true

# Load the modules now
sudo modprobe overlay && sudo modprobe br_netfilter
# And persist them
echo -e "overlay\nbr_netfilter" | sudo tee /etc/modules-load.d/k8s.conf

# Required sysctls
sudo tee /etc/sysctl.d/k8s.conf >/dev/null <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Quick check that all three report 1
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```

Install Calico (from the offline file):

```
kubectl apply -f $OFF/calico/calico-v3.26.4.yaml
kubectl -n kube-system get pods -w

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods -n kube-system -o wide
kubectl get nodes -o wide

# Enable kubelet at boot (kubeadm started it ad hoc; enabling it is cleaner)
systemctl enable --now kubelet

# Configure kubectl and verify the control plane
# (if kubeadm has already written admin.conf)
[ -f /etc/kubernetes/admin.conf ] && {
  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  chown $(id -u):$(id -g) $HOME/.kube/config
}
kubectl cluster-info
kubectl get pods -n kube-system -o wide
kubectl get nodes -o wide
# The control plane is up now, but the node may stay NotReady until the CNI is installed

# If /etc/kubernetes/admin.conf is unexpectedly missing (rare), regenerate it:
kubeadm init phase kubeconfig admin

# Load the IPVS kernel modules (kube-proxy is set to ipvs)
modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
cat >/etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
```

Install your CNI (Calico, offline). Make sure CALICO_IPV4POOL_CIDR in the manifest matches kubeadm's podSubnet: 172.20.0.0/16. The calico/node|cni|kube-controllers:v3.26.4 images are already imported locally, so the offline calico.yaml applies directly:

```
kubectl apply -f /path/to/calico.yaml
kubectl -n kube-system get pods -w
# Wait until calico-*, coredns and kube-proxy are all Running
kubectl get nodes   # the node should change to Ready
```
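A convenience loop (my addition) that blocks until the node reports Ready and all kube-system pods pass their readiness checks, before you start joining workers:

```
# Wait for the node and all kube-system pods to become Ready.
until kubectl get nodes | grep -q ' Ready'; do
  echo "waiting for node Ready..."; sleep 5
done
kubectl -n kube-system wait --for=condition=Ready pods --all --timeout=300s
kubectl get nodes -o wide
```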
3.6 Join the worker nodes

```
# On every worker, repeat the system prep / RPM install / image import steps,
# then generate the join command on the control plane:
[root@k8s-01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.30.150:6443 --token fnturx.ph8jg99zgdmze81w --discovery-token-ca-cert-hash sha256:1ef5e1f3558c8f9336dd4785c0207cb837cceb37c253179e9988f03dc0c00146
# Run the printed `kubeadm join ...` command on each worker to join the cluster.

# To add extra control-plane nodes later, run:
kubeadm init phase upload-certs --skip-certificate-key-print
kubeadm token create --print-join-command --certificate-key <key printed by the previous command>

# Persist the services
systemctl enable --now kubelet
systemctl enable --now containerd
```
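If several workers were prepared identically, the join can be scripted over SSH. A hypothetical sketch (my addition; the hostnames are placeholders, not from this article, and it assumes root SSH access):

```
# Run the freshly generated join command on each prepared worker.
JOIN_CMD="$(kubeadm token create --print-join-command)"
for host in k8s-02 k8s-03; do          # adjust to your worker names
  ssh root@"$host" "$JOIN_CMD && systemctl enable --now kubelet"
done
kubectl get nodes -o wide
```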
2025-08-27
Using a public cloud host to access an internal k8s cluster
1. Prerequisites and installing WireGuard

1. A machine with a public IP (a public-cloud host is fine).
2. The k8s cluster runs traefik or nginx ingress.
3. WireGuard installed on both ends.
4. A domain name resolving to the public cloud IP.

```
root@k8s-01:~# dnf install -y wireguard
#dnf install -y wireguard-tools
```

2. Create the key pair

```
# Two keys: privatekey (private) and publickey (public)
wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey
```

3. Configuration

```
root@VM-12-5-ubuntu:~# wg show wg0
interface: wg0
  public key: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg=
  private key: (hidden)
  listening port: 51820

peer: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=
  endpoint: 113.108.37.18:2103
  allowed ips: 10.88.0.2/32
  latest handshake: 44 seconds ago
  transfer: 279.00 KiB received, 137.54 KiB sent
  persistent keepalive: every 25 seconds

[root@k8s-01 ~]# wg show wg0
interface: wg0
  public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=
  private key: (hidden)
  listening port: 46509

peer: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg=
  endpoint: 43.138.186.171:51820
  allowed ips: 10.88.0.1/32
  latest handshake: 37 seconds ago
  transfer: 130.80 KiB received, 259.68 KiB sent
  persistent keepalive: every 25 seconds
```

The cloud host's wg0 interface public key is 4GSWTJ..., and its peer key is dF92nK...; the k8s side's wg0 interface public key is dF92nK..., and its peer key is 4GSWTJ... In other words, "local interface public key == remote side's peer public key": the two form a matched pair, so this is correct.

```
# wg0.conf on the public cloud host (43.138.186.171)
[Interface]
Address = 10.88.0.1/24
ListenPort = 51820
PrivateKey = (the cloud host's private key)
PostUp = sysctl -w net.ipv4.ip_forward=1 ; iptables -A FORWARD -i wg0 -j ACCEPT ; iptables -A FORWARD -o wg0 -j ACCEPT
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT ; iptables -D FORWARD -o wg0 -j ACCEPT
```

[Interface]: the local configuration of this host's (the cloud host's) WireGuard interface wg0.
Address = 10.88.0.1/24: the "tunnel-internal IP" assigned to wg0. 10.88.0.1 is the cloud host's address inside the tunnel; /24 means the tunnel subnet is 10.88.0.0/24, leaving room to add more peers later (10.88.0.x).
ListenPort = 51820: the UDP port the cloud host listens on for incoming peers (51820/udp is the usual default).
PrivateKey: the cloud host's private key (keep it secret). The public key is derived from it and handed to the other side.
PostUp/PostDown: hooks run at wg-quick up/down. `sysctl -w net.ipv4.ip_forward=1` enables L3 forwarding (lets this box forward IP packets between interfaces); the two iptables FORWARD ACCEPT rules allow packets to be forwarded through wg0.
Note there is no SNAT/MASQUERADE here: traffic currently stays inside the 10.88 subnet and needs no NAT. If the cloud host should later reach the other networks behind k8s (e.g. 192.168.173.0/24), extra routes or SNAT may be needed; see the extension section at the end.

```
[Peer]
PublicKey = (k8s-01's public key)
AllowedIPs = 10.88.0.2/32
PersistentKeepalive = 25
```

[Peer]: describes one remote peer (here, the k8s node).
PublicKey: the k8s node's public key (used for authentication/encryption; never put a private key here).
AllowedIPs = 10.88.0.2/32 has two meanings (this is WireGuard's neat trick):
routing: "send packets destined for 10.88.0.2 through this peer";
ACL: only accept traffic from this peer whose source address is 10.88.0.2 (safer).
/32 means exactly this one IP. To route more networks through this peer later (say 192.168.173.0/24), add them to AllowedIPs.
PersistentKeepalive = 25: send an empty packet every 25 seconds for NAT hole-punching/keepalive. This matters for peers behind NAT/firewalls, since it keeps the UDP mapping from expiring; on the cloud side it isn't strictly required, but it does no harm.

```
# wg0.conf on the k8s node (192.168.173.101)
[Interface]
ListenPort = 46509
Address = 10.88.0.2/24
PrivateKey = (k8s-01's private key)
```

ListenPort = 46509: the k8s node also opens a local UDP listen port. In practice a client need not listen (this line can be omitted) as long as it actively dials the cloud host, but keeping it is harmless.
Address = 10.88.0.2/24: the k8s node's address inside the tunnel.
PrivateKey: the k8s node's private key (secret).

```
[Peer]
PublicKey = (the cloud host's public key)
Endpoint = 43.138.186.171:51820
AllowedIPs = 10.88.0.1/32
PersistentKeepalive = 25
```

PublicKey: the cloud host's public key.
Endpoint = 43.138.186.171:51820: the remote public address + port to dial (the cloud host's public IP + ListenPort). The client must set Endpoint so it knows where to dial; the server side usually omits it and learns the real source address from the first packet it receives.
AllowedIPs = 10.88.0.1/32: routing (packets to 10.88.0.1 go via this peer) and ACL (only accept traffic sourced from 10.88.0.1).
PersistentKeepalive = 25: critical on the client side, since most clients sit behind NAT.

What do these two configs accomplish overall? A point-to-point L3 tunnel: cloud host (10.88.0.1) <-> k8s node (10.88.0.2). Each side's AllowedIPs points at the other's /32, so packets for 10.88.0.1 go k8s -> tunnel and packets for 10.88.0.2 go cloud -> tunnel. Because both Interface Addresses are /24, more peers (10.88.0.3, 10.88.0.4, ...) can be added naturally later, forming a small hub-and-spoke or mesh VPN.
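Since "local interface public key == remote peer public key" is exactly where mistakes happen, a small helper can print the [Peer] stanza the other host needs. A sketch (my addition; the tunnel address is an example argument):

```
# Print the [Peer] stanza the *other* host should paste into its wg0.conf,
# derived from this host's private key and tunnel address.
PUB=$(wg pubkey < /etc/wireguard/privatekey)
TUN_IP="10.88.0.2"          # this host's tunnel address (example)
cat <<EOF
[Peer]
PublicKey = $PUB
AllowedIPs = ${TUN_IP}/32
PersistentKeepalive = 25
EOF
```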
Common questions / best practices

1. What's the difference between /24 and /32?
The /24 on Address is only the local interface's netmask (it shapes what the host considers its directly connected subnet); it does not by itself decide who is reachable. What actually determines which networks are allowed and routed inside the tunnel is each peer's AllowedIPs. Both AllowedIPs here are /32, so only those two IPs traverse the tunnel: clean and safe.

2. Why does the server side not need an Endpoint?
Because it is the one being dialed: it learns the peer's external address automatically from the first packet (this works even with the peer behind NAT).

3. What must the firewall allow?
Cloud host: inbound UDP 51820 from the Internet. Both ends: FORWARD on wg0 (already added above). If you also need to reach k8s NodePorts, allow the corresponding ports from wg0 on the k8s node, e.g. `iptables -I INPUT -i wg0 -p tcp --dport 32150 -j ACCEPT`.

4. How are the keys generated?

```
umask 077
wg genkey | tee privatekey | wg pubkey > publickey
```

The local PrivateKey goes into [Interface]; the other side's PublicKey goes into the local [Peer].

5. Start at boot

```
systemctl enable --now wg-quick@wg0
```

If you hit the error below, wg0 most likely already exists: you probably ran `wg-quick up wg0` by hand earlier, and now `systemctl enable --now` starts the unit again during enablement, finds wg0 already up, and fails.

```
root@k8s-01:~# systemctl enable --now wg-quick@wg0
Created symlink /etc/systemd/system/multi-user.target.wants/wg-quick@wg0.service → /lib/systemd/system/wg-quick@.service.
Job for wg-quick@wg0.service failed because the control process exited with error code.
See "systemctl status wg-quick@wg0.service" and "journalctl -xeu wg-quick@wg0.service" for details.
root@k8s-01:~# systemctl status wg-quick@wg0.service
× wg-quick@wg0.service - WireGuard via wg-quick(8) for wg0
     Loaded: loaded (/lib/systemd/system/wg-quick@.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Thu 2025-08-28 03:01:09 UTC; 58s ago
       Docs: man:wg-quick(8), man:wg(8), https://www.wireguard.com/
    Process: 66728 ExecStart=/usr/bin/wg-quick up wg0 (code=exited, status=1/FAILURE)
   Main PID: 66728 (code=exited, status=1/FAILURE)
        CPU: 11ms

Aug 28 03:01:09 k8s-01 systemd[1]: Starting WireGuard via wg-quick(8) for wg0...
Aug 28 03:01:09 k8s-01 wg-quick[66728]: wg-quick: `wg0' already exists
Aug 28 03:01:09 k8s-01 systemd[1]: wg-quick@wg0.service: Main process exited, code=exited, status=1/FAILURE
Aug 28 03:01:09 k8s-01 systemd[1]: wg-quick@wg0.service: Failed with result 'exit-code'.
Aug 28 03:01:09 k8s-01 systemd[1]: Failed to start WireGuard via wg-quick(8) for wg0.
```

```
# 1) Check whether it's already up (optional)
wg show
ip a show dev wg0

# 2) Bring it down gracefully first (runs PostDown and cleans up the iptables rules)
wg-quick down wg0 || true
# Reload the key
wg set wg0 private-key /etc/wireguard/privatekey

# 3) Start it again via systemd (boot autostart + systemd ownership)
systemctl start wg-quick@wg0
systemctl enable wg-quick@wg0

# 4) Confirm status and connectivity
systemctl status wg-quick@wg0 --no-pager
wg show
ping -c 3 10.88.0.1   # from k8s-01, test the path to the cloud host
```

6. Troubleshooting checklist

```
wg show
ip a show wg0
ip r | grep 10.88.
ping 10.88.0.1   # from the k8s side
ping 10.88.0.2   # from the cloud side
# Command to (re)load the key:
root@VM-12-5-ubuntu:~# wg set wg0 private-key /etc/wireguard/privatekey
```
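The checklist above can be automated with the machine-readable output of `wg show`. A monitoring sketch (my addition; the 3-minute threshold is my own choice):

```
# Flag peers whose last handshake is stale; with a 25s keepalive, anything much
# older than a couple of minutes usually means keys, endpoint, or firewall broke.
now=$(date +%s)
wg show wg0 latest-handshakes | while read -r peer ts; do
  age=$(( now - ts ))
  if [ "$ts" -eq 0 ]; then echo "peer $peer: never handshaked"
  elif [ "$age" -gt 180 ]; then echo "peer $peer: stale handshake (${age}s)"
  else echo "peer $peer: OK (${age}s ago)"
  fi
done
```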
4. Connectivity test

Running `wg show wg0` on both ends again (the same outputs shown in section 3) confirms:

Handshake healthy: both sides report a latest handshake a few dozen seconds old, with the transfer byte counters increasing, so the tunnel is up.

Endpoint readings explained: on the public cloud, the peer endpoint 113.108.37.18:2103 is the k8s side's public/NAT address and port, learned passively; the k8s side's configured Endpoint = 43.138.186.171:51820 is the fixed address it actively dials. This one-dials, one-learns pattern (NAT traversal) is normal.

Mutual ping test:

```
root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2
PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data.
64 bytes from 10.88.0.2: icmp_seq=1 ttl=64 time=913 ms
64 bytes from 10.88.0.2: icmp_seq=2 ttl=64 time=1112 ms
64 bytes from 10.88.0.2: icmp_seq=3 ttl=64 time=1083 ms
--- 10.88.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 913.232/1036.088/1112.054/87.679 ms, pipe 2

root@k8s-01:~# ping -c 3 10.88.0.1
PING 10.88.0.1 (10.88.0.1) 56(84) bytes of data.
64 bytes from 10.88.0.1: icmp_seq=1 ttl=64 time=1090 ms
64 bytes from 10.88.0.1: icmp_seq=2 ttl=64 time=1110 ms
64 bytes from 10.88.0.1: icmp_seq=3 ttl=64 time=1094 ms
--- 10.88.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2016ms
rtt min/avg/max/mdev = 1089.897/1098.075/1110.163/8.722 ms, pipe 2
```

If you want the public cloud to reach the inside of the k8s cluster, widen the cloud side's [Peer] AllowedIPs (the one pointing at the k8s side) to the required networks, for example:

```
AllowedIPs = 10.88.0.2/32, 10.244.0.0/16, 10.96.0.0/12, 192.168.30.0/24
```

and on the k8s egress gateway (the box running WireGuard) enable forwarding and, if needed, SNAT:

```
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 -o <WAN NIC> -j MASQUERADE
# MASQUERADE 10.96.0.0/12 and 192.168.30.0/24 too, as needed
```

Note: if your CNI already does its own MASQ, or you have a dedicated egress gateway, adjust to your actual network layout.

5. Domain test

5.1 Option A (simple starter): terminate TLS at the cloud Nginx, proxy to Traefik over plain HTTP (NodePort 32150)

5.1.1 k8s side: deploy a test app + Ingress

Use Traefik as the IngressClass, and use the same domain as externally, zhuanfa.axzys.cn, which makes the "same domain" verification straightforward.

```
# demo-whoami.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels: { app: whoami }
  template:
    metadata:
      labels: { app: whoami }
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:v1.10
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: demo
spec:
  selector: { app: whoami }
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: zhuanfa.axzys.cn
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```

```
kubectl create ns demo
kubectl apply -f demo-whoami.yaml
kubectl -n demo get ingress
```

```
root@k8s-01:~# kubectl get svc -n traefik
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
traefik   NodePort   10.100.109.7   <none>        80:30080/TCP,443:30443/TCP   27d
```
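Before the quick check that follows, it can help to wait until the demo Deployment is actually available, so a connection failure points at the network rather than a still-starting pod. A usage sketch (my addition):

```
# Block until the whoami Deployment reports Available (up to 2 minutes).
kubectl -n demo wait --for=condition=Available deployment/whoami --timeout=120s
```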
Quick check: from the cloud host (10.88.0.1), hit Traefik's NodePort directly and see whether anything comes back:

```
root@VM-12-5-ubuntu:~# curl -H 'Host: zhuanfa.axzys.cn' http://10.88.0.2:30080
Hostname: whoami-678b958ccd-mqx5f
IP: 127.0.0.1
IP: ::1
IP: 10.244.2.74
IP: fe80::f097:a9ff:fe0e:b981
RemoteAddr: 10.244.2.25:35632
GET / HTTP/1.1
Host: zhuanfa.axzys.cn
User-Agent: curl/7.81.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.244.0.0
X-Forwarded-Host: zhuanfa.axzys.cn
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-release-589c7ff647-2668z
X-Real-Ip: 10.244.0.0
```

Seeing whoami's JSON/text output means it works. If the server runs a firewall, allow the NodePort from wg0 on the k8s node:

```
iptables -I INPUT -i wg0 -p tcp --dport 30080 -j ACCEPT
# (if you later proxy HTTPS to the backend too, open that port as well)
```

5.1.2 Cloud-side Nginx: terminate TLS and proxy over HTTP through the WireGuard tunnel

Get plain HTTP working first (for testing). /etc/nginx/sites-available/zhuanfa.axzys.cn:

```
upstream traefik_via_wg_http {
    server 10.88.0.2:30080;   # Traefik web (NodePort)
    keepalive 32;
}

server {
    listen 80;
    server_name zhuanfa.axzys.cn;

    # To redirect all port-80 traffic to 443, add this once the cert is issued:
    # return 301 https://$host$request_uri;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_read_timeout 120s;
        proxy_send_timeout 120s;
        proxy_pass http://traefik_via_wg_http;
    }
}
```

```
# Enable the site and reload
ln -s /etc/nginx/sites-available/zhuanfa.axzys.cn /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```

http://zhuanfa.axzys.cn/ should now work.

Next, add HTTPS (the certificate lives on the cloud host). The most convenient way is to let certbot sign it automatically:

```
apt-get update && apt-get install -y certbot python3-certbot-nginx
certbot --nginx -d zhuanfa.axzys.cn
```

The apt run pulls in certbot, python3-certbot-nginx and their Python dependencies (python3-acme, python3-josepy, python3-requests, and friends) and completes without requiring any service restarts.
Running certbot interactively asks for a contact email, agreement to the Let's Encrypt Terms of Service, and an optional EFF mailing-list opt-in, then issues and deploys the certificate:

```
root@VM-12-5-ubuntu:~# certbot --nginx -d zhuanfa.axzys.cn
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Enter email address (used for urgent renewal and security notices): 7902731@qq.com
(Y)es/(N)o: Y
Account registered.
Requesting a certificate for zhuanfa.axzys.cn

Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/zhuanfa.axzys.cn/fullchain.pem
Key is saved at:         /etc/letsencrypt/live/zhuanfa.axzys.cn/privkey.pem
This certificate expires on 2025-11-25. These files will be updated when the certificate renews.
Certbot has set up a scheduled task to automatically renew this certificate in the background.

Deploying certificate
Successfully deployed certificate for zhuanfa.axzys.cn to /etc/nginx/sites-enabled/zhuanfa.axzys.cn
Congratulations! You have successfully enabled HTTPS on https://zhuanfa.axzys.cn
```

certbot automatically rewrites the Nginx config into an 80→443 redirect plus SSL certificate mount. After that, visit: https://zhuanfa.axzys.cn/

Path summary: TLS terminates at the cloud Nginx; Nginx → Traefik runs plain HTTP inside the wg tunnel, so security is not a concern.

5.2 Cloud-side Nginx with stream (raw TCP passthrough)

Add a stream {} block to /etc/nginx/nginx.conf (at the same level as http {}) and pass both 80 and 443 through to Traefik's NodePorts:

```
stream {
    upstream traefik_http  { server 10.88.0.2:30080; }   # Traefik web (HTTP)
    upstream traefik_https { server 10.88.0.2:30443; }   # Traefik websecure (HTTPS)

    server {
        listen 80;
        proxy_pass traefik_http;
        proxy_timeout 120s;
        proxy_connect_timeout 5s;
    }
    server {
        listen 443;
        proxy_pass traefik_https;
        proxy_timeout 120s;
        proxy_connect_timeout 5s;
    }
}
```

```
# Reload
nginx -t && systemctl reload nginx
```

Now http://zhuanfa.axzys.cn is handled by Traefik (you can do the 80→443 redirect inside Traefik), and the certificate for https://zhuanfa.axzys.cn is issued and renewed by Traefik itself. Note: Nginx must be built with the stream module (the default Ubuntu/Debian packages ship it).

6. Common pitfalls & troubleshooting checklist

NodePort exposed beyond the intranet: if you worry about the NodePort leaking onto other interfaces, configure kube-proxy to listen only on the WireGuard subnet: set nodePortAddresses: ["10.88.0.0/24"] in the kube-proxy ConfigMap and rolling-restart kube-proxy, or temporarily allow 32150/31948 only from `-i wg0` with iptables.

Request headers: with option A's HTTP reverse proxy, you must preserve Host and the X-Forwarded-* headers (the Nginx config above already does), otherwise Host-based Ingress matching fails.

Traefik dashboard 405: a `curl -I` against /dashboard/ returning 405 is normal (the dashboard doesn't answer HEAD). Use `curl -v http://.../dashboard/` or a browser GET.

Tunnel routing: mutual ping already works (10.88.0.1 ↔ 10.88.0.2), so AllowedIPs and the firewall are fine; if the Nginx upstream is unreachable, first verify the link with `curl 10.88.0.2:32150/31948`.

Quick verification paths:
Option A: 1) apply the demo YAML above (web entrypoint); 2) configure the cloud Nginx HTTP proxy; 3) open http://zhuanfa.axzys.cn/ and expect whoami; 4) run certbot, then use https://zhuanfa.axzys.cn/.
Option B: 1) switch the Ingress to websecure (or keep web for the 80→443 redirect); 2) add the Nginx stream passthrough of 80/443 to 32150/31948; 3) browse https://zhuanfa.axzys.cn/ directly; Traefik manages the certificate.

7. Extensions

7.1 What if the cloud host should directly reach the networks behind k8s (e.g. 192.168.173.0/24)?

With the current point-to-point configuration, only 10.88.0.1 ↔ 10.88.0.2 are connected. To also reach the LAN behind the k8s node (or the Pod/Service subnets) from the cloud host, there are two common approaches, as sketched below.
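One of the common approaches is the route-plus-SNAT pattern already outlined in section 4's extension notes; a consolidated sketch (my arrangement, adjust the CIDRs and the NIC name to your cluster):

```
# Cloud side: route the extra networks through the k8s peer
# (edit the [Peer] block in the cloud host's wg0.conf):
#   AllowedIPs = 10.88.0.2/32, 10.244.0.0/16, 10.96.0.0/12, 192.168.173.0/24

# k8s side (the WireGuard gateway): forward and SNAT traffic from the tunnel.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.88.0.0/24 -o eth0 -j MASQUERADE   # eth0 is an example NIC
```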
endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}{dotted startColor="#ff6c6c" endColor="#1989fa"/}Last login: Wed Aug 27 13:32:47 2025 from 183.14.30.81 root@VM-12-5-ubuntu:~# apt-get update Hit:1 http://mirrors.tencentyun.com/ubuntu jammy InRelease Get:2 http://mirrors.tencentyun.com/ubuntu jammy-updates InRelease [128 kB] Get:3 http://mirrors.tencentyun.com/ubuntu jammy-security InRelease [129 kB] Get:4 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 Packages [2,843 kB] Get:5 http://mirrors.tencentyun.com/ubuntu jammy-updates/main Translation-en [447 kB] Get:6 http://mirrors.tencentyun.com/ubuntu jammy-updates/restricted amd64 Packages [4,269 kB] Get:7 http://mirrors.tencentyun.com/ubuntu jammy-updates/restricted Translation-en [778 kB] Get:8 http://mirrors.tencentyun.com/ubuntu jammy-updates/universe amd64 Packages [1,227 kB] Get:9 http://mirrors.tencentyun.com/ubuntu jammy-updates/universe Translation-en [304 kB] Get:10 http://mirrors.tencentyun.com/ubuntu jammy-updates/multiverse amd64 Packages [59.5 kB] Get:11 http://mirrors.tencentyun.com/ubuntu jammy-updates/multiverse Translation-en [14.2 kB] Get:12 http://mirrors.tencentyun.com/ubuntu jammy-security/main amd64 Packages [2,595 kB] Get:13 http://mirrors.tencentyun.com/ubuntu jammy-security/main Translation-en [383 kB] Get:14 http://mirrors.tencentyun.com/ubuntu jammy-security/restricted amd64 Packages [4,118 kB] Get:15 http://mirrors.tencentyun.com/ubuntu jammy-security/restricted Translation-en [751 kB] Get:16 http://mirrors.tencentyun.com/ubuntu jammy-security/universe amd64 Packages [994 kB] Get:17 http://mirrors.tencentyun.com/ubuntu jammy-security/universe Translation-en [217 kB] Get:18 http://mirrors.tencentyun.com/ubuntu jammy-security/multiverse amd64 Packages [40.3 kB] Get:19 http://mirrors.tencentyun.com/ubuntu jammy-security/multiverse Translation-en [8,908 B] Fetched 19.3 MB in 4s (4,695 kB/s) Reading package lists... Done root@VM-12-5-ubuntu:~# apt-get install -y wireguard Reading package lists... Done Building dependency tree... Done Reading state information... Done The following additional packages will be installed: wireguard-tools Suggested packages: openresolv | resolvconf The following NEW packages will be installed: wireguard wireguard-tools 0 upgraded, 2 newly installed, 0 to remove and 202 not upgraded. Need to get 90.0 kB of archives. After this operation, 345 kB of additional disk space will be used. Get:1 http://mirrors.tencentyun.com/ubuntu jammy/main amd64 wireguard-tools amd64 1.0.20210914-1ubuntu2 [86.9 kB] Get:2 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 wireguard all 1.0.20210914-1ubuntu2 [3,114 B] Fetched 90.0 kB in 0s (505 kB/s) Selecting previously unselected package wireguard-tools. (Reading database ... 88675 files and directories currently installed.) Preparing to unpack .../wireguard-tools_1.0.20210914-1ubuntu2_amd64.deb ... Unpacking wireguard-tools (1.0.20210914-1ubuntu2) ... Selecting previously unselected package wireguard. Preparing to unpack .../wireguard_1.0.20210914-1ubuntu2_all.deb ... Unpacking wireguard (1.0.20210914-1ubuntu2) ... Setting up wireguard-tools (1.0.20210914-1ubuntu2) ... 
wg-quick.target is a disabled or a static unit not running, not starting it. Setting up wireguard (1.0.20210914-1ubuntu2) ... Processing triggers for man-db (2.10.2-1) ... Scanning processes... Scanning linux images... Running kernel seems to be up-to-date. No services need to be restarted. No containers need to be restarted. No user sessions are running outdated binaries. No VM guests are running outdated hypervisor (qemu) binaries on this host. root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# ls snap root@VM-12-5-ubuntu:~# vi /etc/wireguard/wg0.conf root@VM-12-5-ubuntu:~# wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey root@VM-12-5-ubuntu:~# sudo ufw status Status: inactive root@VM-12-5-ubuntu:~# vi /etc/wireguard/wg0.conf root@VM-12-5-ubuntu:~# systemctl enable --now wg-quick@wg0 Created symlink /etc/systemd/system/multi-user.target.wants/wg-quick@wg0.service → /lib/systemd/system/wg-quick@.service. root@VM-12-5-ubuntu:~# wg show interface: wg0 public key: Vl13ICrsWW4tODYv94bNV2Es9FPY4/6MoJ0hO1YXG3I= private key: (hidden) listening port: 51820 peer: n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= allowed ips: 10.88.0.2/32 persistent keepalive: every 25 seconds root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2 PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data. From 10.88.0.1 icmp_seq=1 Destination Host Unreachable ping: sendmsg: Destination address required From 10.88.0.1 icmp_seq=2 Destination Host Unreachable ping: sendmsg: Destination address required From 10.88.0.1 icmp_seq=3 Destination Host Unreachable ping: sendmsg: Destination address required --- 10.88.0.2 ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2028ms root@VM-12-5-ubuntu:~# curl -I -H 'Host: zhuanfa.axzys.cn' http://10.88.0.2:30080 curl: (7) Failed to connect to 10.88.0.2 port 30080 after 0 ms: No route to host root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# # 检查是否有任何阻止规则 sudo iptables -L -n -v Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 56282 49M YJ-FIREWALL-INPUT all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 ACCEPT all -- wg0 * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- * wg0 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain YJ-FIREWALL-INPUT (1 references) pkts bytes target prot opt in out source destination 0 0 REJECT all -- * * 94.181.229.254 0.0.0.0/0 reject-with icmp-port-unreachable root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo tcpdump -i any -n port 51820 tcpdump: data link type LINUX_SLL2 tcpdump: verbose output suppressed, use -v[v]... 
for full protocol decode listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes 14:06:35.051827 eth0 In IP 113.108.37.18.2102 > 10.1.12.5.51820: UDP, length 148 14:06:40.683575 eth0 In IP 113.108.37.18.2102 > 10.1.12.5.51820: UDP, length 148 14:06:45.803948 eth0 In IP 113.108.37.18.2102 > 10.1.12.5.51820: UDP, length 148 14:06:51.435594 eth0 In IP 113.108.37.18.2102 > 10.1.12.5.51820: UDP, length 148 14:06:57.067465 eth0 In IP 113.108.37.18.2102 > 10.1.12.5.51820: UDP, length 148 ^C 5 packets captured 6 packets received by filter 0 packets dropped by kernel root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo iptables -t nat -A POSTROUTING -o eth33 -j MASQUERADE root@VM-12-5-ubuntu:~# sudo sysctl -w net.ipv4.ip_forward=1 net.ipv4.ip_forward = 1 root@VM-12-5-ubuntu:~# vi /etc/wireguard/wg0.conf root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo ss -lunp | grep 51820 UNCONN 0 0 0.0.0.0:51820 0.0.0.0:* UNCONN 0 0 [::]:51820 [::]:* root@VM-12-5-ubuntu:~# sudo wg show interface: wg0 public key: Vl13ICrsWW4tODYv94bNV2Es9FPY4/6MoJ0hO1YXG3I= private key: (hidden) listening port: 51820 peer: n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= allowed ips: 10.88.0.2/32 persistent keepalive: every 25 seconds root@VM-12-5-ubuntu:~# cat /etc/wireguard/wg0.conf [Interface] Address = 10.88.0.1/24 ListenPort = 51820 PrivateKey = BgxjDizUdEATpdh0iZ7Y+zQo2iVyqRBgp70CemeZ30A= # 允许转发 PostUp = sysctl -w net.ipv4.ip_forward=1 ; iptables -A FORWARD -i wg0 -j ACCEPT ; iptables -A FORWARD -o wg0 -j ACCEPT PostDown = iptables -D FORWARD -i wg0 -j ACCEPT ; iptables -D FORWARD -o wg0 -j ACCEPT [Peer] PublicKey = n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= AllowedIPs = 10.88.0.2/32 PersistentKeepalive = 25 root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# umask 077 wg genkey | tee /etc/wireguard/server.priv | wg pubkey > /etc/wireguard/server.pub cat /etc/wireguard/server.pub 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg= root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# cat /etc/wireguard/server.priv qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8= root@VM-12-5-ubuntu:~# vi /etc/wireguard/wg0.conf root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# systemctl restart wg-quick@wg0 root@VM-12-5-ubuntu:~# wg show interface: wg0 public key: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg= private key: (hidden) listening port: 51820 peer: n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= allowed ips: 10.88.0.2/32 persistent keepalive: every 25 seconds root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2 PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data. 
From 10.88.0.1 icmp_seq=1 Destination Host Unreachable ping: sendmsg: Destination address required From 10.88.0.1 icmp_seq=2 Destination Host Unreachable ping: sendmsg: Destination address required From 10.88.0.1 icmp_seq=3 Destination Host Unreachable ping: sendmsg: Destination address required --- 10.88.0.2 ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2042ms root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2 PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data. From 10.88.0.1 icmp_seq=1 Destination Host Unreachable ping: sendmsg: Destination address required From 10.88.0.1 icmp_seq=2 Destination Host Unreachable ping: sendmsg: Destination address required From 10.88.0.1 icmp_seq=3 Destination Host Unreachable ping: sendmsg: Destination address required --- 10.88.0.2 ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2038ms root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# ip -c a show dev wg0 4: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 10.88.0.1/24 scope global wg0 valid_lft forever preferred_lft forever root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo ufw allow 51820/udp Rules updated Rules updated (v6) root@VM-12-5-ubuntu:~# sudo ufw status Status: inactive root@VM-12-5-ubuntu:~# sudo tcpdump -ni any udp port 51820 tcpdump: data link type LINUX_SLL2 tcpdump: verbose output suppressed, use -v[v]... for full protocol decode listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes 14:41:19.914736 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:25.547105 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:31.178873 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:36.811278 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:41.931850 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:47.563886 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:52.682762 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:41:58.314897 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: 14:42:03.947282 eth0 In IP 113.108.37.18.2103 > 10.1.12.5.51820: ^C 9 packets captured 10 packets received by filter 0 packets dropped by kernel root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# watch -n1 wg show root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo wg showconf wg0 [Interface] ListenPort = 51820 PrivateKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8= [Peer] PublicKey = n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= AllowedIPs = 10.88.0.2/32 PersistentKeepalive = 25 root@VM-12-5-ubuntu:~# sudo cat /etc/wireguard/wg0.conf [Interface] Address = 10.88.0.1/24 ListenPort = 51820 PrivateKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8= PostUp = sysctl -w net.ipv4.ip_forward=1 ; iptables -A FORWARD -i wg0 -j ACCEPT ; iptables -A FORWARD -o wg0 -j ACCEPT PostDown = iptables -D FORWARD -i wg0 -j ACCEPT ; iptables -D FORWARD -o wg0 -j ACCEPT [Peer] PublicKey = 
n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= AllowedIPs = 10.88.0.2/32 PersistentKeepalive = 25 root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo sed -i 's|^PublicKey = .*|PublicKey = dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=|' /etc/wireguard/wg0.conf root@VM-12-5-ubuntu:~# sudo systemctl restart wg-quick@wg0 root@VM-12-5-ubuntu:~# wg show interface: wg0 public key: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg= private key: (hidden) listening port: 51820 peer: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= allowed ips: 10.88.0.2/32 persistent keepalive: every 25 seconds root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# wg show interface: wg0 public key: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg= private key: (hidden) listening port: 51820 peer: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= endpoint: 113.108.37.18:2103 allowed ips: 10.88.0.2/32 latest handshake: 19 seconds ago transfer: 180 B received, 124 B sent persistent keepalive: every 25 seconds root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2 PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data. 64 bytes from 10.88.0.2: icmp_seq=1 ttl=64 time=6.01 ms 64 bytes from 10.88.0.2: icmp_seq=2 ttl=64 time=5.91 ms 64 bytes from 10.88.0.2: icmp_seq=3 ttl=64 time=5.88 ms --- 10.88.0.2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev = 5.879/5.932/6.006/0.053 ms root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# S_PRIV=$(sudo wg showconf wg0 | awk '/^PrivateKey/ {print $3; exit}') sudo sed -i "s|^PrivateKey = .*|PrivateKey = $S_PRIV|" /etc/wireguard/wg0.conf root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2 PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data. 64 bytes from 10.88.0.2: icmp_seq=1 ttl=64 time=6.23 ms 64 bytes from 10.88.0.2: icmp_seq=2 ttl=64 time=6.17 ms 64 bytes from 10.88.0.2: icmp_seq=3 ttl=64 time=5.91 ms --- 10.88.0.2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev = 5.906/6.101/6.229/0.140 ms root@VM-12-5-ubuntu:~# sudo tcpdump -ni any udp port 51820 tcpdump: data link type LINUX_SLL2 tcpdump: verbose output suppressed, use -v[v]... for full protocol decode listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes ^C 0 packets captured 1 packet received by filter 0 packets dropped by kernel root@VM-12-5-ubuntu:~# ^C root@VM-12-5-ubuntu:~# ping -c 3 10.88.0.2 PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data. 
64 bytes from 10.88.0.2: icmp_seq=1 ttl=64 time=6.36 ms 64 bytes from 10.88.0.2: icmp_seq=2 ttl=64 time=5.91 ms 64 bytes from 10.88.0.2: icmp_seq=3 ttl=64 time=5.89 ms --- 10.88.0.2 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2003ms rtt min/avg/max/mdev = 5.886/6.053/6.362/0.218 ms root@VM-12-5-ubuntu:~# sudo cat /etc/wireguard/wg0.conf [Interface] Address = 10.88.0.1/24 ListenPort = 51820 PrivateKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8= PostUp = sysctl -w net.ipv4.ip_forward=1 ; iptables -A FORWARD -i wg0 -j ACCEPT ; iptables -A FORWARD -o wg0 -j ACCEPT PostDown = iptables -D FORWARD -i wg0 -j ACCEPT ; iptables -D FORWARD -o wg0 -j ACCEPT [Peer] PublicKey = dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= AllowedIPs = 10.88.0.2/32 PersistentKeepalive = 25 root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# sudo cat /etc/wireguard/wg0.conf [Interface] Address = 10.88.0.1/24 ListenPort = 51820 PrivateKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8= PostUp = sysctl -w net.ipv4.ip_forward=1 ; iptables -A FORWARD -i wg0 -j ACCEPT ; iptables -A FORWARD -o wg0 -j ACCEPT PostDown = iptables -D FORWARD -i wg0 -j ACCEPT ; iptables -D FORWARD -o wg0 -j ACCEPT [Peer] PublicKey = dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= AllowedIPs = 10.88.0.2/32 PersistentKeepalive = 25 root@VM-12-5-ubuntu:~# curl -H 'Host: zhuanfa.axzys.cn' http://10.88.0.2:32150/ Hostname: whoami-678b958ccd-5x2gj IP: 127.0.0.1 IP: ::1 IP: 10.244.2.60 IP: fe80::6498:49ff:fe98:6d1 RemoteAddr: 10.244.2.47:48502 GET / HTTP/1.1 Host: zhuanfa.axzys.cn User-Agent: curl/7.81.0 Accept: */* Accept-Encoding: gzip X-Forwarded-For: 10.244.0.0 X-Forwarded-Host: zhuanfa.axzys.cn X-Forwarded-Port: 80 X-Forwarded-Proto: http X-Forwarded-Server: traefik-release-589c7ff647-r2txc X-Real-Ip: 10.244.0.0 root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# curl -H 'Host: zhuanfa.axzys.cn' http://10.88.0.2:32150/ ^C root@VM-12-5-ubuntu:~# yum install nginx -y Command 'yum' not found, did you mean: command 'gum' from snap gum (0.13.0) command 'uum' from deb freewnn-jserver (1.1.1~a021+cvs20130302-7build1) command 'sum' from deb coreutils (8.32-4.1ubuntu1.2) command 'zum' from deb perforate (1.2-5.1) command 'yum4' from deb nextgen-yum4 (4.5.2-6) command 'num' from deb quickcal (2.4-1) See 'snap info <snapname>' for additional versions. root@VM-12-5-ubuntu:~# atp install nginx -y Command 'atp' not found, but there are 18 similar ones. root@VM-12-5-ubuntu:~# apt install nginx Reading package lists... Done Building dependency tree... Done Reading state information... 
Done The following additional packages will be installed: libnginx-mod-http-geoip2 libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libnginx-mod-stream-geoip2 nginx-common nginx-core Suggested packages: fcgiwrap nginx-doc ssl-cert The following NEW packages will be installed: libnginx-mod-http-geoip2 libnginx-mod-http-image-filter libnginx-mod-http-xslt-filter libnginx-mod-mail libnginx-mod-stream libnginx-mod-stream-geoip2 nginx nginx-common nginx-core 0 upgraded, 9 newly installed, 0 to remove and 202 not upgraded. Need to get 698 kB of archives. After this operation, 2,391 kB of additional disk space will be used. Do you want to continue? [Y/n] Y Get:1 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 nginx-common all 1.18.0-6ubuntu14.7 [40.1 kB] Get:2 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 libnginx-mod-http-geoip2 amd64 1.18.0-6ubuntu14.7 [12.0 kB] Get:3 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 libnginx-mod-http-image-filter amd64 1.18.0-6ubuntu14.7 [15.5 kB] Get:4 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 libnginx-mod-http-xslt-filter amd64 1.18.0-6ubuntu14.7 [13.8 kB] Get:5 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 libnginx-mod-mail amd64 1.18.0-6ubuntu14.7 [45.8 kB] Get:6 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 libnginx-mod-stream amd64 1.18.0-6ubuntu14.7 [73.0 kB] Get:7 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 libnginx-mod-stream-geoip2 amd64 1.18.0-6ubuntu14.7 [10.1 kB] Get:8 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 nginx-core amd64 1.18.0-6ubuntu14.7 [483 kB] Get:9 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 nginx amd64 1.18.0-6ubuntu14.7 [3,878 B] Fetched 698 kB in 1s (745 kB/s) Preconfiguring packages ... Selecting previously unselected package nginx-common. (Reading database ... 88755 files and directories currently installed.) Preparing to unpack .../0-nginx-common_1.18.0-6ubuntu14.7_all.deb ... Unpacking nginx-common (1.18.0-6ubuntu14.7) ... Selecting previously unselected package libnginx-mod-http-geoip2. Preparing to unpack .../1-libnginx-mod-http-geoip2_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking libnginx-mod-http-geoip2 (1.18.0-6ubuntu14.7) ... Selecting previously unselected package libnginx-mod-http-image-filter. Preparing to unpack .../2-libnginx-mod-http-image-filter_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking libnginx-mod-http-image-filter (1.18.0-6ubuntu14.7) ... Selecting previously unselected package libnginx-mod-http-xslt-filter. Preparing to unpack .../3-libnginx-mod-http-xslt-filter_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking libnginx-mod-http-xslt-filter (1.18.0-6ubuntu14.7) ... Selecting previously unselected package libnginx-mod-mail. Preparing to unpack .../4-libnginx-mod-mail_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking libnginx-mod-mail (1.18.0-6ubuntu14.7) ... Selecting previously unselected package libnginx-mod-stream. Preparing to unpack .../5-libnginx-mod-stream_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking libnginx-mod-stream (1.18.0-6ubuntu14.7) ... Selecting previously unselected package libnginx-mod-stream-geoip2. Preparing to unpack .../6-libnginx-mod-stream-geoip2_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking libnginx-mod-stream-geoip2 (1.18.0-6ubuntu14.7) ... Selecting previously unselected package nginx-core. Preparing to unpack .../7-nginx-core_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking nginx-core (1.18.0-6ubuntu14.7) ... 
Selecting previously unselected package nginx. Preparing to unpack .../8-nginx_1.18.0-6ubuntu14.7_amd64.deb ... Unpacking nginx (1.18.0-6ubuntu14.7) ... Setting up nginx-common (1.18.0-6ubuntu14.7) ... Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /lib/systemd/system/nginx.service. Setting up libnginx-mod-http-xslt-filter (1.18.0-6ubuntu14.7) ... Setting up libnginx-mod-http-geoip2 (1.18.0-6ubuntu14.7) ... Setting up libnginx-mod-mail (1.18.0-6ubuntu14.7) ... Setting up libnginx-mod-http-image-filter (1.18.0-6ubuntu14.7) ... Setting up libnginx-mod-stream (1.18.0-6ubuntu14.7) ... Setting up libnginx-mod-stream-geoip2 (1.18.0-6ubuntu14.7) ... Setting up nginx-core (1.18.0-6ubuntu14.7) ... * Upgrading binary nginx [ OK ] Setting up nginx (1.18.0-6ubuntu14.7) ... Processing triggers for man-db (2.10.2-1) ... Processing triggers for ufw (0.36.1-4build1) ... Scanning processes... Scanning linux images... Running kernel seems to be up-to-date. No services need to be restarted. No containers need to be restarted. No user sessions are running outdated binaries. No VM guests are running outdated hypervisor (qemu) binaries on this host. root@VM-12-5-ubuntu:~# vi /etc/nginx/sites-available/zhuanfa.axzys.cn root@VM-12-5-ubuntu:~# ln -s /etc/nginx/sites-available/zhuanfa.axzys.cn /etc/nginx/sites-enabled/ root@VM-12-5-ubuntu:~# nginx -t && systemctl reload nginx nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# apt-get install -y certbot python3-certbot-nginx Reading package lists... Done Building dependency tree... Done Reading state information... Done The following additional packages will be installed: python3-acme python3-certbot python3-certifi python3-configargparse python3-icu python3-josepy python3-parsedatetime python3-requests python3-requests-toolbelt python3-rfc3339 python3-tz python3-urllib3 python3-zope.component python3-zope.event python3-zope.hookable Suggested packages: python-certbot-doc python3-certbot-apache python-acme-doc python-certbot-nginx-doc python3-socks python-requests-doc The following NEW packages will be installed: certbot python3-acme python3-certbot python3-certbot-nginx python3-certifi python3-configargparse python3-icu python3-josepy python3-parsedatetime python3-requests python3-requests-toolbelt python3-rfc3339 python3-tz python3-urllib3 python3-zope.component python3-zope.event python3-zope.hookable 0 upgraded, 17 newly installed, 0 to remove and 202 not upgraded. Need to get 1,322 kB of archives. After this operation, 6,211 kB of additional disk space will be used. 
Get:1 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-josepy all 1.10.0-1 [22.0 kB] Get:2 http://mirrors.tencentyun.com/ubuntu jammy/main amd64 python3-certifi all 2020.6.20-1 [150 kB] Get:3 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 python3-urllib3 all 1.26.5-1~exp1ubuntu0.3 [98.6 kB] Get:4 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 python3-requests all 2.25.1+dfsg-2ubuntu0.3 [48.8 kB] Get:5 http://mirrors.tencentyun.com/ubuntu jammy/main amd64 python3-requests-toolbelt all 0.9.1-1 [38.0 kB] Get:6 http://mirrors.tencentyun.com/ubuntu jammy-updates/main amd64 python3-tz all 2022.1-1ubuntu0.22.04.1 [30.7 kB] Get:7 http://mirrors.tencentyun.com/ubuntu jammy/main amd64 python3-rfc3339 all 1.1-3 [7,110 B] Get:8 http://mirrors.tencentyun.com/ubuntu jammy-updates/universe amd64 python3-acme all 1.21.0-1ubuntu0.1 [36.4 kB] Get:9 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-configargparse all 1.5.3-1 [26.9 kB] Get:10 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-parsedatetime all 2.6-2 [32.9 kB] Get:11 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-zope.hookable amd64 5.1.0-1build1 [11.6 kB] Get:12 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-zope.event all 4.4-3 [8,180 B] Get:13 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-zope.component all 4.3.0-3 [38.3 kB] Get:14 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-certbot all 1.21.0-1build1 [175 kB] Get:15 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 certbot all 1.21.0-1build1 [21.3 kB] Get:16 http://mirrors.tencentyun.com/ubuntu jammy/universe amd64 python3-certbot-nginx all 1.21.0-1 [35.4 kB] Get:17 http://mirrors.tencentyun.com/ubuntu jammy/main amd64 python3-icu amd64 2.8.1-0ubuntu2 [540 kB] Fetched 1,322 kB in 2s (759 kB/s) Preconfiguring packages ... Selecting previously unselected package python3-josepy. (Reading database ... 88845 files and directories currently installed.) Preparing to unpack .../00-python3-josepy_1.10.0-1_all.deb ... Unpacking python3-josepy (1.10.0-1) ... Selecting previously unselected package python3-certifi. Preparing to unpack .../01-python3-certifi_2020.6.20-1_all.deb ... Unpacking python3-certifi (2020.6.20-1) ... Selecting previously unselected package python3-urllib3. Preparing to unpack .../02-python3-urllib3_1.26.5-1~exp1ubuntu0.3_all.deb ... Unpacking python3-urllib3 (1.26.5-1~exp1ubuntu0.3) ... Selecting previously unselected package python3-requests. Preparing to unpack .../03-python3-requests_2.25.1+dfsg-2ubuntu0.3_all.deb ... Unpacking python3-requests (2.25.1+dfsg-2ubuntu0.3) ... Selecting previously unselected package python3-requests-toolbelt. Preparing to unpack .../04-python3-requests-toolbelt_0.9.1-1_all.deb ... Unpacking python3-requests-toolbelt (0.9.1-1) ... Selecting previously unselected package python3-tz. Preparing to unpack .../05-python3-tz_2022.1-1ubuntu0.22.04.1_all.deb ... Unpacking python3-tz (2022.1-1ubuntu0.22.04.1) ... Selecting previously unselected package python3-rfc3339. Preparing to unpack .../06-python3-rfc3339_1.1-3_all.deb ... Unpacking python3-rfc3339 (1.1-3) ... Selecting previously unselected package python3-acme. Preparing to unpack .../07-python3-acme_1.21.0-1ubuntu0.1_all.deb ... Unpacking python3-acme (1.21.0-1ubuntu0.1) ... Selecting previously unselected package python3-configargparse. Preparing to unpack .../08-python3-configargparse_1.5.3-1_all.deb ... 
Unpacking python3-configargparse (1.5.3-1) ... Selecting previously unselected package python3-parsedatetime. Preparing to unpack .../09-python3-parsedatetime_2.6-2_all.deb ... Unpacking python3-parsedatetime (2.6-2) ... Selecting previously unselected package python3-zope.hookable. Preparing to unpack .../10-python3-zope.hookable_5.1.0-1build1_amd64.deb ... Unpacking python3-zope.hookable (5.1.0-1build1) ... Selecting previously unselected package python3-zope.event. Preparing to unpack .../11-python3-zope.event_4.4-3_all.deb ... Unpacking python3-zope.event (4.4-3) ... Selecting previously unselected package python3-zope.component. Preparing to unpack .../12-python3-zope.component_4.3.0-3_all.deb ... Unpacking python3-zope.component (4.3.0-3) ... Selecting previously unselected package python3-certbot. Preparing to unpack .../13-python3-certbot_1.21.0-1build1_all.deb ... Unpacking python3-certbot (1.21.0-1build1) ... Selecting previously unselected package certbot. Preparing to unpack .../14-certbot_1.21.0-1build1_all.deb ... Unpacking certbot (1.21.0-1build1) ... Selecting previously unselected package python3-certbot-nginx. Preparing to unpack .../15-python3-certbot-nginx_1.21.0-1_all.deb ... Unpacking python3-certbot-nginx (1.21.0-1) ... Selecting previously unselected package python3-icu. Preparing to unpack .../16-python3-icu_2.8.1-0ubuntu2_amd64.deb ... Unpacking python3-icu (2.8.1-0ubuntu2) ... Setting up python3-configargparse (1.5.3-1) ... Setting up python3-parsedatetime (2.6-2) ... Setting up python3-icu (2.8.1-0ubuntu2) ... Setting up python3-zope.event (4.4-3) ... Setting up python3-tz (2022.1-1ubuntu0.22.04.1) ... Setting up python3-zope.hookable (5.1.0-1build1) ... Setting up python3-certifi (2020.6.20-1) ... Setting up python3-urllib3 (1.26.5-1~exp1ubuntu0.3) ... Setting up python3-josepy (1.10.0-1) ... Setting up python3-rfc3339 (1.1-3) ... Setting up python3-zope.component (4.3.0-3) ... Setting up python3-requests (2.25.1+dfsg-2ubuntu0.3) ... Setting up python3-requests-toolbelt (0.9.1-1) ... Setting up python3-acme (1.21.0-1ubuntu0.1) ... Setting up python3-certbot (1.21.0-1build1) ... Setting up certbot (1.21.0-1build1) ... Created symlink /etc/systemd/system/timers.target.wants/certbot.timer → /lib/systemd/system/certbot.timer. Setting up python3-certbot-nginx (1.21.0-1) ... Processing triggers for man-db (2.10.2-1) ... Scanning processes... Scanning linux images... Running kernel seems to be up-to-date. No services need to be restarted. No containers need to be restarted. No user sessions are running outdated binaries. No VM guests are running outdated hypervisor (qemu) binaries on this host. root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# certbot --nginx -d zhuanfa.axzys.cn Saving debug log to /var/log/letsencrypt/letsencrypt.log Enter email address (used for urgent renewal and security notices) (Enter 'c' to cancel): 7902731@qq.com - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Please read the Terms of Service at https://letsencrypt.org/documents/LE-SA-v1.5-February-24-2025.pdf. You must agree in order to register with the ACME server. Do you agree? 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - (Y)es/(N)o: Y - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Would you be willing, once your first certificate is successfully issued, to share your email address with the Electronic Frontier Foundation, a founding partner of the Let's Encrypt project and the non-profit organization that develops Certbot? We'd like to send you email about our work encrypting the web, EFF news, campaigns, and ways to support digital freedom. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - (Y)es/(N)o: Y Account registered. Requesting a certificate for zhuanfa.axzys.cn Successfully received certificate. Certificate is saved at: /etc/letsencrypt/live/zhuanfa.axzys.cn/fullchain.pem Key is saved at: /etc/letsencrypt/live/zhuanfa.axzys.cn/privkey.pem This certificate expires on 2025-11-25. These files will be updated when the certificate renews. Certbot has set up a scheduled task to automatically renew this certificate in the background. Deploying certificate Successfully deployed certificate for zhuanfa.axzys.cn to /etc/nginx/sites-enabled/zhuanfa.axzys.cn Congratulations! You have successfully enabled HTTPS on https://zhuanfa.axzys.cn - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - If you like Certbot, please consider supporting our work by: * Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate * Donating to EFF: https://eff.org/donate-le - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - root@VM-12-5-ubuntu:~# root@VM-12-5-ubuntu:~# [root@k8s-01 ~]# sudo dnf install -y wireguard-tools Last metadata expiration check: 0:00:43 ago on Wed Aug 27 13:52:47 2025. Dependencies resolved. ===================================================================================================================================================================================================================================== Package Architecture Version Repository Size ===================================================================================================================================================================================================================================== Installing: wireguard-tools x86_64 1.0.20210914-3.el9 appstream 114 k Installing dependencies: systemd-resolved x86_64 252-51.el9_6.1 baseos 380 k Transaction Summary ===================================================================================================================================================================================================================================== Install 2 Packages Total download size: 494 k Installed size: 1.0 M Downloading Packages: (1/2): wireguard-tools-1.0.20210914-3.el9.x86_64.rpm 598 kB/s | 114 kB 00:00 (2/2): systemd-resolved-252-51.el9_6.1.x86_64.rpm 1.5 MB/s | 380 kB 00:00 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 293 kB/s | 494 kB 00:01 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Running scriptlet: systemd-resolved-252-51.el9_6.1.x86_64 1/2 Installing : systemd-resolved-252-51.el9_6.1.x86_64 1/2 Running scriptlet: systemd-resolved-252-51.el9_6.1.x86_64 1/2 Installing : wireguard-tools-1.0.20210914-3.el9.x86_64 2/2 Running scriptlet: wireguard-tools-1.0.20210914-3.el9.x86_64 2/2 Verifying : systemd-resolved-252-51.el9_6.1.x86_64 1/2 Verifying : wireguard-tools-1.0.20210914-3.el9.x86_64 2/2 Installed: systemd-resolved-252-51.el9_6.1.x86_64 wireguard-tools-1.0.20210914-3.el9.x86_64 Complete! [root@k8s-01 ~]# wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey [root@k8s-01 ~]# vi cat /etc/wireguard/publickey 2 files to edit [root@k8s-01 ~]# cat /etc/wireguard/publickey n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= [root@k8s-01 ~]# sudo systemctl stop firewalld [root@k8s-01 ~]# vi /etc/wireguard/wg0.conf [root@k8s-01 ~]# systemctl enable --now wg-quick@wg0 Created symlink /etc/systemd/system/multi-user.target.wants/wg-quick@wg0.service → /usr/lib/systemd/system/wg-quick@.service. [root@k8s-01 ~]# wg show interface: wg0 public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= private key: (hidden) listening port: 42179 peer: BgxjDizUdEATpdh0iZ7Y+zQo2iVyqRBgp70CemeZ30A= endpoint: 43.138.186.171:51820 allowed ips: 10.88.0.1/32 transfer: 0 B received, 148 B sent persistent keepalive: every 25 seconds [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:0c:29:d4:4f:e7 brd ff:ff:ff:ff:ff:ff altname enp2s1 inet 192.168.173.101/24 brd 192.168.173.255 scope global noprefixroute ens33 valid_lft forever preferred_lft forever inet6 fe80::20c:29ff:fed4:4fe7/64 scope link noprefixroute valid_lft forever preferred_lft forever 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether b6:ee:bd:b4:cf:87 brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 4: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default link/ether c2:cd:7c:86:14:bd brd ff:ff:ff:ff:ff:ff inet 10.106.48.170/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.100.101.161/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.96.0.10/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.98.232.237/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.100.223.32/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.96.0.1/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.100.147.23/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.101.189.236/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever inet 10.97.132.101/32 scope global kube-ipvs0 valid_lft forever preferred_lft forever 5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default link/ether 5e:9d:fa:96:af:2b brd ff:ff:ff:ff:ff:ff inet 10.244.0.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::5c9d:faff:fe96:af2b/64 
scope link valid_lft forever preferred_lft forever 6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000 link/ether 72:66:f6:85:8e:aa brd ff:ff:ff:ff:ff:ff inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::7066:f6ff:fe85:8eaa/64 scope link valid_lft forever preferred_lft forever 160: veth26c6ffcc@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default link/ether de:52:a9:34:79:f6 brd ff:ff:ff:ff:ff:ff link-netns cni-b0039ff2-418f-6ff2-d3dd-b65dd3d8bee4 inet6 fe80::dc52:a9ff:fe34:79f6/64 scope link valid_lft forever preferred_lft forever 161: vetha4607aaa@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default link/ether de:5a:9b:4e:6f:04 brd ff:ff:ff:ff:ff:ff link-netns cni-1f503360-9a27-073a-b8c7-5ee8286a56d2 inet6 fe80::dc5a:9bff:fe4e:6f04/64 scope link valid_lft forever preferred_lft forever 162: veth4615fa64@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default link/ether 1a:cc:41:1c:f2:9d brd ff:ff:ff:ff:ff:ff link-netns cni-532a4e09-6e09-1113-3044-1d864ac3acf5 inet6 fe80::18cc:41ff:fe1c:f29d/64 scope link valid_lft forever preferred_lft forever 163: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 10.88.0.2/24 scope global wg0 valid_lft forever preferred_lft forever [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# sudo wg show interface: wg0 public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= private key: (hidden) listening port: 42179 peer: BgxjDizUdEATpdh0iZ7Y+zQo2iVyqRBgp70CemeZ30A= endpoint: 43.138.186.171:51820 allowed ips: 10.88.0.1/32 transfer: 0 B received, 27.46 KiB sent persistent keepalive: every 25 seconds [root@k8s-01 ~]# cat /etc/wireguard/wg0.conf [Interface] Address = 10.88.0.2/24 PrivateKey = n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= [Peer] PublicKey = BgxjDizUdEATpdh0iZ7Y+zQo2iVyqRBgp70CemeZ30A= Endpoint = 43.138.186.171:51820 AllowedIPs = 10.88.0.1/32 PersistentKeepalive = 25 [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# cat /etc/wireguard/k8s.pub cat: /etc/wireguard/k8s.pub: No such file or directory [root@k8s-01 ~]# cat cat /etc/wireguard/publickey cat: cat: No such file or directory n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU= [root@k8s-01 ~]# vi /etc/wireguard/wg0.conf [root@k8s-01 ~]# systemctl restart wg-quick@wg0 [root@k8s-01 ~]# wg show interface: wg0 public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik= private key: (hidden) listening port: 46509 peer: qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8= endpoint: 43.138.186.171:51820 allowed ips: 10.88.0.1/32 transfer: 0 B received, 592 B sent persistent keepalive: every 25 seconds [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# sysctl -w net.ipv4.conf.all.rp_filter=2 sysctl -w net.ipv4.conf.wg0.rp_filter=2 net.ipv4.conf.all.rp_filter = 2 net.ipv4.conf.wg0.rp_filter = 2 [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# [root@k8s-01 ~]# ip -c a show dev wg0 167: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000 link/none inet 10.88.0.2/24 scope global wg0 valid_lft forever preferred_lft forever [root@k8s-01 ~]# watch -n1 wg show [root@k8s-01 
~]#
[root@k8s-01 ~]# sudo wg showconf wg0
[Interface]
ListenPort = 46509
PrivateKey = mL/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuFU=

[Peer]
PublicKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8=
AllowedIPs = 10.88.0.1/32
Endpoint = 43.138.186.171:51820
PersistentKeepalive = 25
[root@k8s-01 ~]# sudo cat /etc/wireguard/wg0.conf
[Interface]
Address = 10.88.0.2/24
PrivateKey = n7/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuBU=

[Peer]
PublicKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8=
Endpoint = 43.138.186.171:51820
AllowedIPs = 10.88.0.1/32
PersistentKeepalive = 25
[root@k8s-01 ~]# sudo sed -i 's|^PrivateKey = .*|PrivateKey = mL/...FU=|' /etc/wireguard/wg0.conf
[root@k8s-01 ~]# sudo sed -i '/^\[Interface\]/a ListenPort = 46509' /etc/wireguard/wg0.conf
[root@k8s-01 ~]# echo 'mL/...FU=' | sudo wg pubkey
wg: Key is not the correct length or format
[root@k8s-01 ~]# CLIENT_PRIV=$(sudo wg showconf wg0 | awk '/^PrivateKey/ {print $3; exit}')
[root@k8s-01 ~]# CLIENT_PUB=$(echo "$CLIENT_PRIV" | wg pubkey)
[root@k8s-01 ~]# echo "CLIENT_PUB = $CLIENT_PUB"
CLIENT_PUB = dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=
[root@k8s-01 ~]# sudo sed -i "s|^PrivateKey = .*|PrivateKey = $CLIENT_PRIV|" /etc/wireguard/wg0.conf
[root@k8s-01 ~]# grep -n '^ListenPort' /etc/wireguard/wg0.conf
2:ListenPort = 46509
[root@k8s-01 ~]# vi /etc/wireguard/wg0.conf
[root@k8s-01 ~]# cat /etc/wireguard/wg0.conf
[Interface]
ListenPort = 46509
Address = 10.88.0.2/24
PrivateKey = mL/nzuiBYLFm+ijhBR8d0G/JcNPu+eKg1V//vX5yuFU=

[Peer]
PublicKey = qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8=
Endpoint = 43.138.186.171:51820
AllowedIPs = 10.88.0.1/32
PersistentKeepalive = 25
[root@k8s-01 ~]# sudo systemctl restart wg-quick@wg0
[root@k8s-01 ~]# wg show
interface: wg0
  public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=
  private key: (hidden)
  listening port: 46509

peer: qFvMNYv27vwcIfJuu6fXLcxYNscOTvlDxmd9JzN8fV8=
  endpoint: 43.138.186.171:51820
  allowed ips: 10.88.0.1/32
  transfer: 0 B received, 592 B sent
  persistent keepalive: every 25 seconds
[root@k8s-01 ~]# sudo sed -i 's|^PublicKey = .*|PublicKey = 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg=|' /etc/wireguard/wg0.conf
[root@k8s-01 ~]# sudo systemctl restart wg-quick@wg0
[root@k8s-01 ~]# wg show
interface: wg0
  public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=
  private key: (hidden)
  listening port: 46509

peer: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg=
  endpoint: 43.138.186.171:51820
  allowed ips: 10.88.0.1/32
  latest handshake: Now
  transfer: 124 B received, 180 B sent
  persistent keepalive: every 25 seconds
[root@k8s-01 ~]# wg show
interface: wg0
  public key: dF92nKBqKgDRGxNuDvm3gCKgaBwfyuBXqBecLbLs7ik=
  private key: (hidden)
  listening port: 46509

peer: 4GSWTJJq5zv6yd0pa4apypDSxxE+J7HckZ0OJOdfNlg=
  endpoint: 43.138.186.171:51820
  allowed ips: 10.88.0.1/32
  latest handshake: 16 seconds ago
  transfer: 124 B received, 180 B sent
  persistent keepalive: every 25 seconds
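One loose end before the ping tests that follow: the rp_filter tweaks earlier were applied with `sysctl -w`, which does not survive a reboot. A minimal persistence sketch (the file name is an assumption):

# /etc/sysctl.d/99-wireguard.conf (hypothetical file name)
cat <<'EOF' | sudo tee /etc/sysctl.d/99-wireguard.conf
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.wg0.rp_filter = 2
EOF
sudo sysctl --system   # reload all sysctl configuration files

Note that the per-interface wg0 entry only takes effect if wg0 exists when systemd-sysctl runs at boot; an alternative is to set it from a PostUp line in wg0.conf, the same way the server config above sets ip_forward.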
[root@k8s-01 ~]# ping -c 3 10.88.0.1
PING 10.88.0.1 (10.88.0.1) 56(84) bytes of data.
64 bytes from 10.88.0.1: icmp_seq=1 ttl=64 time=6.22 ms
64 bytes from 10.88.0.1: icmp_seq=2 ttl=64 time=5.92 ms
64 bytes from 10.88.0.1: icmp_seq=3 ttl=64 time=63.9 ms

--- 10.88.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 5.921/25.348/63.904/27.263 ms
[root@k8s-01 ~]# ping -c 3 10.88.0.1
PING 10.88.0.1 (10.88.0.1) 56(84) bytes of data.
64 bytes from 10.88.0.1: icmp_seq=1 ttl=64 time=6.08 ms
64 bytes from 10.88.0.1: icmp_seq=2 ttl=64 time=6.24 ms
64 bytes from 10.88.0.1: icmp_seq=3 ttl=64 time=6.00 ms

--- 10.88.0.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 5.998/6.105/6.242/0.101 ms
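The site file created with `vi /etc/nginx/sites-available/zhuanfa.axzys.cn` earlier in the transcript is never shown. A minimal sketch consistent with the curl test above (the upstream is the WireGuard peer 10.88.0.2 and the traefik NodePort 32150; the exact header set is an assumption):

# /etc/nginx/sites-available/zhuanfa.axzys.cn  (hypothetical reconstruction)
server {
    listen 80;
    server_name zhuanfa.axzys.cn;

    location / {
        # forward over the WireGuard tunnel to the cluster's traefik NodePort
        proxy_pass http://10.88.0.2:32150;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

`certbot --nginx -d zhuanfa.axzys.cn`, run above, then rewrites this file in place, adding the 443 listener and the Let's Encrypt certificate paths.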
2025-08-27
2025-08-21
Accelerating k8s image pulls
一、安装配置nginx1.需要准备一个可以访问外网的服务器 2.安装nginx 3.准备域名解析到服务器,然后把证书配置到nginx里面 # /etc/nginx/sites-available/docker-mirror # DNS for variable proxy_pass resolver 1.1.1.1 8.8.8.8 valid=300s ipv6=off; # Cache (only used under /v2/) proxy_cache_path /var/cache/nginx/docker levels=1:2 keys_zone=docker_cache:50m max_size=300g inactive=7d use_temp_path=off; # Registry v2 header map $http_docker_distribution_api_version $docker_api_version { default "registry/2.0"; } # expose cache status map $upstream_cache_status $cache_status { default $upstream_cache_status; "" "BYPASS"; } server { listen 443 ssl http2; # listen 443 ssl http2 default_server; server_name xing.axzys.cn; ssl_certificate /etc/nginx/ssl/xing.axzys.cn.pem; ssl_certificate_key /etc/nginx/ssl/xing.axzys.cn.key; client_max_body_size 0; proxy_http_version 1.1; proxy_connect_timeout 60s; proxy_read_timeout 600s; proxy_send_timeout 600s; # 默认流式 proxy_buffering off; proxy_request_buffering off; proxy_set_header Connection ""; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Docker-Distribution-Api-Version $docker_api_version; # 全局打开缓存(/_proxy、/token 会单独关闭) proxy_cache docker_cache; proxy_cache_lock on; proxy_cache_revalidate on; proxy_cache_min_uses 1; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_valid 200 206 302 10m; add_header X-Cache-Status $cache_status always; # 把上游 3xx Location 改写到 /_proxy/<host>/<path?query> proxy_redirect ~^https://(?<h>[^/]+)(?<p>/.*)$ https://$server_name/_proxy/$h$p; # ---------- token endpoint(Docker Hub 专用) ---------- location = /token { proxy_pass https://auth.docker.io/token$is_args$args; proxy_set_header Host auth.docker.io; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Authorization ""; proxy_cache off; proxy_buffering off; proxy_http_version 1.1; proxy_connect_timeout 30s; proxy_read_timeout 30s; proxy_send_timeout 30s; } # ---------- GHCR token 代领 ---------- location = /ghcr-token { proxy_pass https://ghcr.io/token$is_args$args; proxy_set_header Host ghcr.io; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Authorization ""; proxy_cache off; proxy_buffering off; proxy_http_version 1.1; proxy_connect_timeout 30s; proxy_read_timeout 30s; proxy_send_timeout 30s; } # ---------- /v2/ -> Docker Hub ---------- location ^~ /v2/ { set $upstream_host "registry-1.docker.io"; proxy_set_header Host $upstream_host; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; # 引导客户端去我们的 /token proxy_hide_header WWW-Authenticate; add_header WWW-Authenticate 'Bearer realm="https://xing.axzys.cn/token",service="registry.docker.io"' always; proxy_buffering off; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # ================= 其余注册中心(带前缀)================= # 先 set 再 rewrite;必要时仅对 GHCR 改写 WWW-Authenticate 到本地 /ghcr-token # ghcr.io location ^~ /ghcr/ { set $upstream_host "ghcr.io"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; # 去掉前缀 rewrite ^/ghcr(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key 
$scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; # 关键:把令牌下发到你自己的 /ghcr-token,避免客户端直连 ghcr.io/token 403/网络问题 proxy_hide_header WWW-Authenticate; add_header WWW-Authenticate 'Bearer realm="https://xing.axzys.cn/ghcr-token",service="ghcr.io"' always; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # gcr.io location ^~ /gcr/ { set $upstream_host "gcr.io"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/gcr(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # registry.k8s.io location ^~ /rk8s/ { set $upstream_host "registry.k8s.io"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/rk8s(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # 兼容 k8s.gcr.io -> registry.k8s.io location ^~ /kgcr/ { set $upstream_host "registry.k8s.io"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/kgcr(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # mcr.microsoft.com location ^~ /mcr/ { set $upstream_host "mcr.microsoft.com"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/mcr(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # nvcr.io location ^~ /nvcr/ { set $upstream_host "nvcr.io"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/nvcr(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # quay.io location ^~ /quay/ { set $upstream_host "quay.io"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/quay(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # docker.elastic.co location ^~ /elastic/ { set $upstream_host "docker.elastic.co"; proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; rewrite ^/elastic(?<rest>/.*)$ $rest break; slice 1m; proxy_cache_key 
$scheme$upstream_host$request_uri$is_args$args$slice_range; proxy_set_header Range $slice_range; proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } # ---------- /_proxy/<host>/<path?query> -> 对象存储/CDN ---------- location ~ ^/_proxy/(?<h>[^/]+)(?<p>/.*)$ { if ($h !~* ^(registry-1\.docker\.io|auth\.docker\.io|production\.cloudflare\.docker\.com|.*\.cloudflarestorage\.com|.*\.r2\.cloudflarestorage\.com|.*\.amazonaws\.com|storage\.googleapis\.com|.*\.googleapis\.com|.*\.pkg\.dev|ghcr\.io|github\.com|pkg-containers\.[^/]*githubusercontent\.com|objects\.githubusercontent\.com|.*\.blob\.core\.windows\.net|.*\.azureedge\.net|mcr\.microsoft\.com|.*\.microsoft\.com|quay\.io|cdn\.quay\.io|.*quay-cdn[^/]*\.redhat\.com|k8s\.gcr\.io|registry\.k8s\.io|gcr\.io|docker\.elastic\.co|.*\.elastic\.co|.*\.cloudfront\.net|.*\.fastly\.net)$) { return 403; } set $upstream_host $h; # 去掉 '/_proxy/<host>' 前缀 rewrite ^/_proxy/[^/]+(?<rest>/.*)$ $rest break; # 正确 Host 与 SNI proxy_set_header Host $upstream_host; proxy_ssl_server_name on; proxy_ssl_name $upstream_host; proxy_ssl_protocols TLSv1.2 TLSv1.3; proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; # 只透传客户端 Range proxy_set_header Range $http_range; # 不缓存预签名 URL;不缓冲 proxy_redirect off; proxy_cache off; proxy_buffering off; proxy_request_buffering off; # 避免把任何 Authorization 透传 proxy_set_header Authorization ""; # 不带 URI 的 proxy_pass proxy_pass https://$upstream_host; access_log /var/log/nginx/docker_mirror_access.log; error_log /var/log/nginx/docker_mirror_error.log warn; } location = /healthz { return 200 'ok'; add_header Content-Type text/plain; } } # HTTP -> HTTPS server { listen 80; server_name xing.axzys.cn; return 301 https://$host$request_uri; } 二、配置k8s客户端vi /etc/containerd/config.toml[plugins."io.containerd.grpc.v1.cri".registry.mirrors] [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"] endpoint = ["https://k8s-registry.local"]#完整配置 disabled_plugins = [] imports = [] oom_score = 0 plugin_dir = "" required_plugins = [] root = "/var/lib/containerd" state = "/run/containerd" temp = "" version = 2 [cgroup] path = "" [debug] address = "" format = "" gid = 0 level = "" uid = 0 [grpc] address = "/run/containerd/containerd.sock" gid = 0 max_recv_message_size = 16777216 max_send_message_size = 16777216 tcp_address = "" tcp_tls_ca = "" tcp_tls_cert = "" tcp_tls_key = "" uid = 0 [metrics] address = "" grpc_histogram = false [plugins] [plugins."io.containerd.gc.v1.scheduler"] deletion_threshold = 0 mutation_threshold = 100 pause_threshold = 0.02 schedule_delay = "0s" startup_delay = "100ms" [plugins."io.containerd.grpc.v1.cri"] cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"] device_ownership_from_security_context = false disable_apparmor = false disable_cgroup = false disable_hugetlb_controller = true disable_proc_mount = false disable_tcp_service = true drain_exec_sync_io_timeout = "0s" enable_cdi = false enable_selinux = false enable_tls_streaming = false enable_unprivileged_icmp = false enable_unprivileged_ports = false ignore_deprecation_warnings = [] ignore_image_defined_volumes = false image_pull_progress_timeout = "5m0s" image_pull_with_sync_fs = false max_concurrent_downloads = 3 max_container_log_line_size = 16384 netns_mounts_under_state_dir = false restrict_oom_score_adj = false sandbox_image = "registry.cn-guangzhou.aliyuncs.com/xingcangku/eeeee:3.8" selinux_category_range = 1024 
stats_collect_period = 10 stream_idle_timeout = "4h0m0s" stream_server_address = "127.0.0.1" stream_server_port = "0" systemd_cgroup = false tolerate_missing_hugetlb_controller = true unset_seccomp_profile = "" [plugins."io.containerd.grpc.v1.cri".cni] bin_dir = "/opt/cni/bin" conf_dir = "/etc/cni/net.d" conf_template = "" ip_pref = "" max_conf_num = 1 setup_serially = false [plugins."io.containerd.grpc.v1.cri".containerd] default_runtime_name = "runc" disable_snapshot_annotations = true discard_unpacked_layers = false ignore_blockio_not_enabled_errors = false ignore_rdt_not_enabled_errors = false no_pivot = false snapshotter = "overlayfs" [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime] base_runtime_spec = "" cni_conf_dir = "" cni_max_conf_num = 0 container_annotations = [] pod_annotations = [] privileged_without_host_devices = false privileged_without_host_devices_all_devices_allowed = false runtime_engine = "" runtime_path = "" runtime_root = "" runtime_type = "" sandbox_mode = "" snapshotter = "" [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] base_runtime_spec = "" cni_conf_dir = "" cni_max_conf_num = 0 container_annotations = [] pod_annotations = [] privileged_without_host_devices = false privileged_without_host_devices_all_devices_allowed = false runtime_engine = "" runtime_path = "" runtime_root = "" runtime_type = "io.containerd.runc.v2" sandbox_mode = "podsandbox" snapshotter = "" [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] BinaryName = "" CriuImagePath = "" CriuPath = "" CriuWorkPath = "" IoGid = 0 IoUid = 0 NoNewKeyring = false NoPivotRoot = false Root = "" ShimCgroup = "" SystemdCgroup = true [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime] base_runtime_spec = "" cni_conf_dir = "" cni_max_conf_num = 0 container_annotations = [] pod_annotations = [] privileged_without_host_devices = false privileged_without_host_devices_all_devices_allowed = false runtime_engine = "" runtime_path = "" runtime_root = "" runtime_type = "" sandbox_mode = "" snapshotter = "" [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options] [plugins."io.containerd.grpc.v1.cri".image_decryption] key_model = "node" [plugins."io.containerd.grpc.v1.cri".registry] config_path = "/etc/containerd/certs.d" [plugins."io.containerd.grpc.v1.cri".registry.auths] [plugins."io.containerd.grpc.v1.cri".registry.configs] [plugins."io.containerd.grpc.v1.cri".registry.headers] #[plugins."io.containerd.grpc.v1.cri".registry.mirrors] [plugins."io.containerd.grpc.v1.cri".registry.mirrors] [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"] endpoint = ["https://15.164.211.114"] [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming] tls_cert_file = "" tls_key_file = "" [plugins."io.containerd.internal.v1.opt"] path = "/opt/containerd" [plugins."io.containerd.internal.v1.restart"] interval = "10s" [plugins."io.containerd.internal.v1.tracing"] [plugins."io.containerd.metadata.v1.bolt"] content_sharing_policy = "shared" [plugins."io.containerd.monitor.v1.cgroups"] no_prometheus = false [plugins."io.containerd.nri.v1.nri"] disable = true disable_connections = false plugin_config_path = "/etc/containerd/certs.d" plugin_path = "/opt/nri/plugins" plugin_registration_timeout = "5s" plugin_request_timeout = "2s" socket_path = "/var/run/nri/nri.sock" 
[plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false
  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]
    sched_core = false
  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]
    sync_fs = false
  [plugins."io.containerd.service.v1.tasks-service"]
    blockio_config_file = ""
    rdt_config_file = ""
  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""
  [plugins."io.containerd.snapshotter.v1.blockfile"]
    fs_type = ""
    mount_options = []
    root_path = ""
    scratch_file = ""
  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""
  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    discard_blocks = false
    fs_options = ""
    fs_type = ""
    pool_name = ""
    root_path = ""
  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""
  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    mount_options = []
    root_path = ""
    sync_remove = false
    upperdir_label = false
  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""
  [plugins."io.containerd.tracing.processor.v1.otlp"]
  [plugins."io.containerd.transfer.v1.local"]
    config_path = "/etc/containerd/certs.d"
    max_concurrent_downloads = 3
    max_concurrent_uploaded_layers = 3
    [[plugins."io.containerd.transfer.v1.local".unpack_config]]
      differ = ""
      platform = "linux/amd64"
      snapshotter = "overlayfs"

[proxy_plugins]

[stream_processors]
  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"
  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.bolt.open" = "0s"
  "io.containerd.timeout.metrics.shimstats" = "2s"
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

sudo mkdir -p /etc/containerd/certs.d/{docker.io,ghcr.io,gcr.io,registry.k8s.io,k8s.gcr.io,mcr.microsoft.com,nvcr.io,quay.io,docker.elastic.co}

root@k8s-03:/etc/containerd/certs.d# cat /etc/containerd/certs.d/ghcr.io/hosts.toml
server = "https://ghcr.io"

[host."https://xing.axzys.cn/ghcr/v2"]
  capabilities = ["pull", "resolve"]
  override_path = true
  skip_verify = false

root@k8s-03:~# cat /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://xing.axzys.cn"]
  capabilities = ["pull", "resolve"]
  skip_verify = false

root@k8s-03:~# cat /etc/containerd/certs.d/k8s.gcr.io/hosts.toml
server = "https://k8s.gcr.io"

[host."https://xing.axzys.cn/kgcr/v2"]
  capabilities = ["pull", "resolve"]
  override_path = true
  skip_verify = false

root@k8s-03:~# cat /etc/containerd/certs.d/registry.k8s.io/hosts.toml
server = "https://registry.k8s.io"

[host."https://xing.axzys.cn/rk8s/v2"]
  capabilities = ["pull", "resolve"]
  override_path = true
  skip_verify = false

root@k8s-03:~# cat /etc/containerd/certs.d/registry-1.docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://xing.axzys.cn"]
  capabilities = ["pull", "resolve"]
  skip_verify = false

root@k8s-03:~# cat /etc/containerd/certs.d/quay.io/hosts.toml
server = "https://quay.io"

[host."https://xing.axzys.cn/quay/v2"]
  capabilities = ["pull", "resolve"]
  override_path = true
  skip_verify = false

root@k8s-03:~# cat /etc/containerd/certs.d/docker.elastic.co/hosts.toml
server = "https://docker.elastic.co"

[host."https://xing.axzys.cn/elastic/v2"]
  capabilities = ["pull", "resolve"]
  override_path = true
  skip_verify = false

# restart containerd to apply the changes
sudo systemctl restart containerd
三、Testing image pulls
# test pulling an image through the mirror
root@k8s-03:/etc/containerd/certs.d# sudo nerdctl -n k8s.io --debug pull docker.io/library/alpine:3.15
DEBU[0000] verifying process skipped
DEBU[0000] The image will be unpacked for platform {"amd64" "linux" "" [] ""}, snapshotter "overlayfs".
DEBU[0000] fetching image="docker.io/library/alpine:3.15"
DEBU[0000] loading host directory dir=/etc/containerd/certs.d/docker.io
DEBU[0000] resolving host=xing.axzys.cn
DEBU[0000] do request host=xing.axzys.cn request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json, */*" request.header.user-agent=containerd/2.1.1+unknown request.method=HEAD url="https://xing.axzys.cn/v2/library/alpine/manifests/3.15?ns=docker.io"
docker.io/library/alpine:3.15: resolving |--------------------------------------| elapsed: 0.9 s total: 0.0 B (0.0 B/s)
DEBU[0001] fetch response received host=xing.axzys.cn response.header.connection=keep-alive response.header.content-length=157 response.header.content-type=application/json response.header.date="Sat, 23 Aug 2025 16:41:57 GMT" response.header.docker-distribution-api-version=registry/2.0 response.header.docker-ratelimit-source=15.164.211.114 response.header.server=nginx response.header.strict-transport-security="max-age=31536000" response.header.www-authenticate="Bearer realm=\"https://xing.axzys.cn/token\",service=\"registry.docker.io\"" response.status="401 Unauthorized" url="https://xing.axzys.cn/v2/library/alpine/manifests/3.15?ns=docker.io"
DEBU[0001] Unauthorized header="Bearer realm=\"https://xing.axzys.cn/token\",service=\"registry.docker.io\"" host=xing.axzys.cn
DEBU[0001] no scope specified for token auth challenge host=xing.axzys.cn
DEBU[0001] do request host=xing.axzys.cn request.header.accept="application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, applicatio
docker.io/library/alpine:3.15: resolving |--------------------------------------| elapsed: 3.3 s total: 0.0 B (0.0 B/s)
DEBU[0003] fetch response received host=xing.axzys.cn response.header.connection=keep-alive response.header.content-length=1638 response.header.content-type=application/vnd.docker.distribution.manifest.list.v2+json response.header.date="Sat, 23 Aug 2025 16:42:00 GMT" response.header.docker-content-digest="sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864" response.header.docker-distribution-api-version=registry/2.0 response.header.docker-ratelimit-source=15.164.211.114 response.header.etag="\"sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864\"" response.header.ratelimit-limit="100;w=21600" response.header.ratelimit-remaining="92;w=21600" response.header.server=nginx response.header.strict-transport-security="max-age=31536000" response.header.www-authenticate="Bearer realm=\"https://xing.axzys.cn/token\",service=\"registry.docker.io\"" response.status="200 OK" url="https://xing.axzys.cn/v2/library/alpine/manifests/3.15?ns=docker.io"
DEBU[0003] resolved desc.digest="sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864" host=xing.axzys.cn
DEBU[0003] loading host directory dir=/etc/containerd/certs.d/docker.io
docker.io/library/alpine:3.15: resolving |--------------------------------------| elapsed: 3.4 s total: 0.0 B (0.0 B/s)
DEBU[0003] fetch digest="sha256:6a0657acfef760bd9e293361c9b558e98e7d740ed0dffca823d17098a4ffddf5" mediatype=application/vnd.docker.distribution.manifest.v2+json size=528
DEBU[0003] fetch digest="sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d" mediatype=application/vnd.docker.container.image.v1+json size=1472
DEBU[0003] fetching layer chunk_size=0 digest="sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d" initial_parallelism=0 mediatype=application/vnd.docker.container.image.v1+json offset=0 parallelism=1 size=1472
DEBU[0003] do request digest="sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d" mediatype=application/vnd.docker.container.image.v1+json request.header.accept="application/vnd.docker.container.image.v1+json, */*" request.header.accept-encoding="zstd;q=1.0, gzip;q=0.8, deflate;q=0.5" request.header.range="bytes=0-" request.header.user-agent
docker.io/library/alpine:3.15: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864: exists |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:6a0657acfef760bd9e293361c9b558e98e7d740ed0dffca823d17098a4ffddf5: exists |++++++++++++++++++++++++++++++++++++++|
config-sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d: downloading |--------------------------------------| 0.0 B/1.4 KiB elapsed: 4.4 s total: 0.0 B (0.0 B/s)
DEBU[0004] fetch response received digest="sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d" mediatype=application/vnd.docker.container.image.v1+json response.header.connection=keep-alive response.header.content-length=157 response.header.content-type=application/json response.header.date="Sat, 23 Aug 2025 16:42:01 GMT" response.header.docker-distribution-api-version=registry/2.0 response.header.server=nginx response.header.strict-transport-security="max-age=31536000" response.header.www-authenticate="Bearer realm=\"https://xing.axzys.cn/token\",service=\"registry.docker.io\"" response.status="401 Unauthorized" size=1472 url="https://xing.axzys.cn/v2/library/alpine/blobs/sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d?ns=docker.io"
docker.io/library/alpine:3.15: resolved |++++++++++++++++++++++++++++++++++++++|
index-sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864: exists |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:6a0657acfef760bd9e293361c9b558e98e7d740ed0dffca823d17098a4ffddf5: exists |++++++++++++++++++++++++++++++++++++++|
config-sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d: downloading |--------------------------------------| 0.0 B/1.4 KiB elapsed: 8.2 s
total: 0.0 B (0.0 B/s) DEBU[0008] fetch response received digest="sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d" mediatype=application/vnd.docker.container.image.v1+json response.header.accept-ranges=bytes response.header.cf-ray=973c0fa05a2930d3-ICN response.header.connection=keep-alive response.header.content-length=1472 response.header.content-range="bytes 0-1471/1472" response.header.content-type=application/octet-stream response.header.date="Sat, 23 Aug 2025 16:42:04 GMT" response.header.etag="\"aa36606459d6778a94123c7d6a33396b\"" response.header.last-modified="Fri, 13 Dec 2024 15:03:06 GMT" response.header.server=nginx response.header.vary=Accept-Encoding response.header.x-cache-status=BYPASS response.status="206 Partial Content" size=1472 url="https://xing.axzys.cn/v2/library/alpine/blobs/sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d?ns=docker.io" docker.io/library/alpine:3.15: resolved |++++++++++++++++++++++++++++++++++++++| index-sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864: exists |++++++++++++++++++++++++++++++++++++++| manifest-sha256:6a0657acfef760bd9e293361c9b558e98e7d740ed0dffca823d17098a4ffddf5: exists |++++++++++++++++++++++++++++++++++++++| config-sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d: done |+++++++++++++++++++++++++++++++++++ docker.io/library/alpine:3.15: resolved |++++++++++++++++++++++++++++++++++++++| index-sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864: exists |++++++++++++++++++++++++++++++++++++++| docker.io/library/alpine:3.15: resolved |++++++++++++++++++++++++++++++++++++++| index-sha256:19b4bcc4f60e99dd5ebdca0cbce22c503bbcff197549d7e19dab4f22254dc864: exists |++++++++++++++++++++++++++++++++++++++| manifest-sha256:6a0657acfef760bd9e293361c9b558e98e7d740ed0dffca823d17098a4ffddf5: exists |++++++++++++++++++++++++++++++++++++++| config-sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:d078792c4f9122259f14b539315bd92cbd9490ed73e08255a08689122b143108: done |++++++++++++++++++++++++++++++++++++++| elapsed: 86.3s total: 2.7 Mi (32.0 KiB/s) #使用k8syaml文件拉取 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 7m2s (x6 over 5h7m) kubelet Container image "docker.io/library/alpine:3.9" already present on machine Normal Created 7m2s (x6 over 5h7m) kubelet Created container test-container Normal Started 7m1s (x6 over 5h7m) kubelet Started container test-container #nginx日志 223.74.152.108 - - [23/Aug/2025:16:41:57 +0000] "HEAD /v2/library/alpine/manifests/3.15?ns=docker.io HTTP/1.1" 401 0 "-" "containerd/2.1.1+unknown" 223.74.152.108 - - [23/Aug/2025:16:42:00 +0000] "HEAD /v2/library/alpine/manifests/3.15?ns=docker.io HTTP/1.1" 200 0 "-" "containerd/2.1.1+unknown" 223.74.152.108 - - [23/Aug/2025:16:42:01 +0000] "GET /v2/library/alpine/blobs/sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d?ns=docker.io HTTP/1.1" 401 157 "-" "containerd/2.1.1+unknown" 223.74.152.108 - - [23/Aug/2025:16:42:04 +0000] "GET /v2/library/alpine/blobs/sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d?ns=docker.io HTTP/1.1" 307 0 "-" "containerd/2.1.1+unknown" 223.74.152.108 - - [23/Aug/2025:16:42:04 +0000] "GET 
/_proxy/docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/32/32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250823%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250823T164203Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=860ec74942b8c48e9922b561b9ef4cfd409dc4acf22daa9a31a45754aff6d32a HTTP/1.1" 206 1472 "https://xing.axzys.cn/v2/library/alpine/blobs/sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d?ns=docker.io" "containerd/2.1.1+unknown"
223.74.152.108 - - [23/Aug/2025:16:42:05 +0000] "GET /v2/library/alpine/blobs/sha256:d078792c4f9122259f14b539315bd92cbd9490ed73e08255a08689122b143108?ns=docker.io HTTP/1.1" 307 0 "-" "containerd/2.1.1+unknown"
223.74.152.108 - - [23/Aug/2025:16:43:21 +0000] "GET /_proxy/docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/d0/d078792c4f9122259f14b539315bd92cbd9490ed73e08255a08689122b143108/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250823%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250823T164205Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=118da1e073a4589f6a14cb751acfbfdb0c7431fa55703f24d5278e7ec26246a3 HTTP/1.1" 206 2826431 "https://xing.axzys.cn/v2/library/alpine/blobs/sha256:d078792c4f9122259f14b539315bd92cbd9490ed73e08255a08689122b143108?ns=docker.io" "containerd/2.1.1+unknown"

四、Dissecting the flow and the nginx config

4.1 Actors and goal
- Client: nerdctl/containerd
- Mirror: NGINX @ xing.axzys.cn (the config above)
- Docker Hub: registry-1.docker.io (registry API) + auth.docker.io (token issuer)
- Object storage/CDN behind Docker Hub: Cloudflare R2 and friends (this pull hit *.r2.cloudflarestorage.com)

Goal: every client request lands on your domain, and NGINX centrally handles authentication, rewrites 3xx redirects, and caches what is cacheable under /v2/. When the upstream redirects large files to object storage, the client stays on the same domain (via /_proxy/... on your host) instead of having to reach a restricted or unreachable external network directly.

4.2 Reconstructing the whole chain along the timeline
All timestamps come from the two logs pasted above (the nginx access log and nerdctl --debug), and they corroborate each other.

4.2.1 Preliminaries (why resolver/SNI are needed)
In the http block:
resolver 1.1.1.1 8.8.8.8 ...;
Because many proxy_pass targets use a variable host name ($upstream_host), nginx must resolve DNS at request time.
In both /v2/ and /_proxy/ the config enables:
proxy_ssl_server_name on; proxy_ssl_name $upstream_host;
so the TLS handshake to the upstream carries the real target domain as SNI and certificate verification succeeds.

4.2.2 HEAD of the manifest triggers auth (16:41:57 → 401)
HEAD /v2/library/alpine/manifests/3.15?ns=docker.io → 401
WWW-Authenticate: Bearer realm="https://xing.axzys.cn/token", service="registry.docker.io"
Who returned the 401? Your NGINX, not Docker Hub. Why? Because /v2/ contains:
proxy_hide_header WWW-Authenticate;
add_header WWW-Authenticate 'Bearer realm="https://xing.axzys.cn/token",service="registry.docker.io"' always;
This forcibly steers the auth challenge to your own /token endpoint, pinning the token traffic to your domain (easier egress and auditing).
Client behaviour: on receiving 401 + WWW-Authenticate, containerd issues GET https://xing.axzys.cn/token?... to obtain a Bearer token (JWT). The access-log excerpt doesn't include the /token line, but the rest of the flow shows it succeeded.
The nerdctl debug message "Unauthorized ... no scope specified for token auth challenge" is just containerd starting the token flow: the first 401 only advertises realm/service; when the client then requests a concrete resource, it exchanges for a usable token with parameters such as scope=repository:library/alpine:pull.
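This challenge/response can be replayed by hand (a sketch: realm and service come from the log above, the scope syntax is the standard registry token scope, and Docker Hub returns the JWT in the "token" field):

# 1) Anonymous request is refused and advertises the token endpoint
curl -sI https://xing.axzys.cn/v2/library/alpine/manifests/3.15 | grep -i www-authenticate
# WWW-Authenticate: Bearer realm="https://xing.axzys.cn/token",service="registry.docker.io"

# 2) Fetch an anonymous pull token for the repository
TOKEN=$(curl -s "https://xing.axzys.cn/token?service=registry.docker.io&scope=repository:library/alpine:pull" \
  | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')

# 3) Retry with the Bearer token; this time the manifest resolves
curl -sI -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.list.v2+json" \
  https://xing.axzys.cn/v2/library/alpine/manifests/3.15 | head -n 5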
4.2.3 Fetching the config blob (16:42:01 → 401 → 307)

GET /v2/.../blobs/sha256:32b91e... → 401
...then the same URL → 307
Location: https://...r2.cloudflarestorage.com/...

- The first 401 is typical of a token refresh or scope switch; containerd transparently exchanges a new token (or simply retries), and immediately afterwards you see the 307.
- The 307 from Hub/CDN: Docker Hub does not serve the actual binary layers (including the config blob) from the registry origin; it hands out a pre-signed URL (Cloudflare R2/S3/GCS, etc.). Your /v2/ block has:

proxy_redirect ~^https://(?<h>[^/]+)(?<p>/.*)$ https://$server_name/_proxy/$h$p;
# This rewrites the upstream 30x Location into /_proxy/<original-host>/<original-path?query> under your domain, so the client keeps talking to your domain instead of contacting R2 directly.

4.2.4 Through /_proxy to object storage (16:42:04 → 206)

GET /_proxy/docker-images-prod....r2.cloudflarestorage.com/...
→ 206 Partial Content
Content-Range: bytes 0-1471/1472
X-Cache-Status: BYPASS

This matches your location ~ ^/_proxy/... block:
- Strict host allowlisting: any host not on the list gets 403 (you already list R2/S3/GCS/Quay CDN/Azure/Microsoft/Elastic/CloudFront/Fastly, etc.).
- SNI and certificate verification are aligned with the real upstream host (proxy_ssl_name $upstream_host; proxy_ssl_verify on;).
- No caching (proxy_cache off;), no buffering (proxy_buffering off;), and, as a safety measure, no Authorization pass-through (proxy_set_header Authorization "";).
- Only Range is forwarded: proxy_set_header Range $http_range; clients most often send Range: bytes=0-, so the upstream replies 206 Partial Content. This object is the config blob (1472 bytes), fetched in one go (Content-Range: 0-1471/1472).

The nerdctl debug output also shows cf-ray=...-ICN, Cloudflare's PoP identifier; ICN usually means Incheon/Seoul, i.e. you are close to an R2 edge node, though throughput still depends on upstream throttling and the cross-network path.

4.2.5 Fetching the large layer blob (16:42:05 → 307; 16:43:21 → 206)

GET /v2/.../blobs/sha256:d07879... → 307
Location: https://...r2.cloudflarestorage.com/...
GET /_proxy/...r2.cloudflarestorage.com/... → 206, 2,826,431 bytes

Same as steps 3 and 4, except this blob is the real, large layer. The access log shows 206 2826431, about 2.70 MiB; the whole pull ends at total 2.7 MiB in 86.3 s (~32 KiB/s), which is exactly the final line of the debug output.
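The Location rewrite can also be observed in isolation. The following sketch reuses $TOKEN from the earlier example and requests the config blob from the logs without following redirects:

curl -s -o /dev/null -D - -H "Authorization: Bearer $TOKEN" \
  "https://xing.axzys.cn/v2/library/alpine/blobs/sha256:32b91e3161c8fc2e3baf2732a594305ca5093c82ff4e0c9f6ebbd2a879468e1d" \
  | grep -i '^location'
# expected: a 307 whose Location starts with https://xing.axzys.cn/_proxy/...
# rather than the upstream r2.cloudflarestorage.com host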
4.3 Why these NGINX directives matter

4.3.1 Auth bootstrapping (pulling the token flow onto your domain)

In /v2/:

proxy_hide_header WWW-Authenticate;
add_header WWW-Authenticate 'Bearer realm="https://xing.axzys.cn/token",service="registry.docker.io"' always;

In /token:

proxy_pass https://auth.docker.io/token$is_args$args;
proxy_set_header Host auth.docker.io; ...;
proxy_cache off; proxy_buffering off;
# This guarantees clients always ask you for tokens, which you then forward to auth.docker.io. Even when direct egress is unreliable, tokens can still be obtained.

4.3.2 Rewriting redirects to the same-domain /_proxy

In /v2/, proxy_redirect uses a regex to rewrite any https://<host>/<path> into https://xing.axzys.cn/_proxy/<host>/<path>.
# Clients only ever talk to your domain (layer downloads included) and never contact R2/S3/GCS directly; this is the key to the accelerator and to a unified egress point.

4.3.3 Security and pass-through policy of /_proxy

- Allowlist: only object-storage and official domains are permitted; every other host gets 403 (guards against SSRF and look-alike domains).
- Strict TLS/SNI: SNI and certificate verification must match the upstream domain exactly.
- No caching, no buffering, credentials stripped: pre-signed URLs are time-limited and permission-scoped, so they must not be cached, and no sensitive headers should be forwarded.
- Only Range is passed through: the upstream answers Range requests with 206 directly, which maximizes compatibility with clients' resume and parallel-download strategies.

4.3.4 Caching and slicing (for /v2/ only)

slice 1m;
proxy_cache docker_cache;
proxy_cache_key ... $slice_range;
proxy_cache_valid 200 206 302 10m;
proxy_cache_use_stale error timeout updating http_5xx;
proxy_cache_lock on;

This tuning pays off whenever the upstream answers 200/206 directly under /v2/ (many private registries do). With Docker Hub, however, all large layers get 30x-redirected to object storage, so the real data never flows through /v2/ but through /_proxy, where you disabled caching. Consequently:
- The /v2/ slice cache mainly benefits manifests (200) and whatever 302s or small objects the upstream may return directly.
- Layer data never enters the cache, since /_proxy disables caching and the URLs are expiring pre-signed ones. This is a deliberate, correct choice: caching an expired signature would produce 403s.

4.4 Key headers seen in the logs

- Docker-RateLimit-Source: 15.164.211.114: Hub attributes pull counts to your server's egress IP, so all intranet clients share this anonymous quota.
- RateLimit-Limit: 100;w=21600 / RateLimit-Remaining: 92;w=21600: the typical anonymous limit (100 pulls per 6-hour window).
- Content-Range: bytes 0-1471/1472 and 206 Partial Content: ranged, resumable downloading, which is the default behavior of containerd/nerdctl.
- cf-ray=...-ICN: the Cloudflare PoP (ICN = Incheon/Seoul); the object is actually served by an edge node.
- X-Cache-Status: BYPASS: emitted by /_proxy; since you explicitly set proxy_cache off there, BYPASS is expected.

4.5 Why the pull took 86.3 s (32 KiB/s)

- Upstream throttling of anonymous large-layer downloads (per-connection/per-IP throttling on the CDN side) plus the quality of the public egress path are usually the dominant factors.
- /_proxy correctly disables caching, so you are never bitten by expired pre-signed URLs or permission drift, but it also means a local cache cannot speed up first pulls.
- A second pull of the same layers will not hit a cache in /_proxy either (the URLs carry fresh signatures each time), but manifests and 302s are cached for 10 minutes under /v2/, which saves the bootstrap round-trips.

4.6 Minimal timeline (both logs merged)

16:41:57 HEAD manifests → 401 (your /v2/ deliberately steers the client to /token)
16:42:00 HEAD manifests (with token) → 200 (manifest-list digest obtained)
16:42:01 GET config blob → 401 (token/scope check)
16:42:04 GET same config blob → 307 (Hub hands the data off to R2)
16:42:04 GET /_proxy to R2 → 206 (1472 B, config done)
16:42:05 GET layer blob → 307 (redirect to R2)
16:43:21 GET /_proxy to R2 → 206 (2.7 MiB, large layer done)
Total: 2.7 MiB / 86.3 s ≈ 32 KiB/s (bottleneck mostly upstream throttling and the public path)
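Before reading the full configuration, the caching behavior from 4.3.4 is easy to probe from any client: manifests are cacheable under /v2/, so repeating the same request within the 10-minute window should flip X-Cache-Status from MISS to HIT, while anything under /_proxy stays BYPASS. A sketch, reusing $TOKEN from above (actual status may also depend on the upstream's Cache-Control headers):

curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://xing.axzys.cn/v2/library/alpine/manifests/3.15" | grep -i x-cache-status
# first call:  X-Cache-Status: MISS
# second call: X-Cache-Status: HIT   (within proxy_cache_valid 10m)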
# /etc/nginx/sites-available/docker-mirror

# DNS for variable proxy_pass
resolver 1.1.1.1 8.8.8.8 valid=300s ipv6=off;

# Cache (only used under /v2/)
proxy_cache_path /var/cache/nginx/docker levels=1:2 keys_zone=docker_cache:50m max_size=300g inactive=7d use_temp_path=off;

# Registry v2 header
map $http_docker_distribution_api_version $docker_api_version {
    default "registry/2.0";
}

# expose cache status
map $upstream_cache_status $cache_status {
    default $upstream_cache_status;
    ""      "BYPASS";
}

server {
    listen 443 ssl http2;
    server_name xing.axzys.cn;

    ssl_certificate     /etc/nginx/ssl/xing.axzys.cn.pem;
    ssl_certificate_key /etc/nginx/ssl/xing.axzys.cn.key;

    client_max_body_size 0;
    proxy_http_version 1.1;
    proxy_connect_timeout 60s;
    proxy_read_timeout 600s;
    proxy_send_timeout 600s;

    # stream by default
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_set_header Connection "";
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Docker-Distribution-Api-Version $docker_api_version;

    # caching enabled globally (/_proxy and /token opt out below)
    proxy_cache docker_cache;
    proxy_cache_lock on;
    proxy_cache_revalidate on;
    proxy_cache_min_uses 1;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_valid 200 206 302 10m;
    add_header X-Cache-Status $cache_status always;

    # rewrite upstream 3xx Location headers to /_proxy/<host>/<path?query>
    proxy_redirect ~^https://(?<h>[^/]+)(?<p>/.*)$ https://$server_name/_proxy/$h$p;

    # ---------- token endpoint (Docker Hub) ----------
    location = /token {
        proxy_pass https://auth.docker.io/token$is_args$args;
        proxy_set_header Host auth.docker.io;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Authorization "";
        proxy_cache off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_connect_timeout 30s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
    }

    # ---------- GHCR token proxying ----------
    location = /ghcr-token {
        proxy_pass https://ghcr.io/token$is_args$args;
        proxy_set_header Host ghcr.io;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Authorization "";
        proxy_cache off;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_connect_timeout 30s;
        proxy_read_timeout 30s;
        proxy_send_timeout 30s;
    }

    # ---------- /v2/ -> local crproxy (Docker Hub) ----------
    # Key fix: rewrite /v2/... to /v2/docker.io/... (the /v2 prefix was missing
    # before, which caused 301 -> /_proxy/hub.docker.com -> 403)
    location ^~ /v2/ {
        set $upstream_host "127.0.0.1:6440";
        proxy_set_header Host $upstream_host;

        # correct: keep the /v2 prefix
        rewrite ^/v2(?<rest>/.*)$ /v2/docker.io$rest break;

        # slicing + cache key
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;

        # steer clients to our /token
        proxy_hide_header WWW-Authenticate;
        add_header WWW-Authenticate 'Bearer realm="https://xing.axzys.cn/token",service="registry.docker.io"' always;

        proxy_buffering off;
        proxy_pass http://$upstream_host;

        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # ================= other registries (path-prefixed) =================
    # ghcr.io
    location ^~ /ghcr/ {
        set $upstream_host "ghcr.io";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/ghcr(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_hide_header WWW-Authenticate;
        add_header WWW-Authenticate 'Bearer realm="https://xing.axzys.cn/ghcr-token",service="ghcr.io"' always;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # gcr.io
    location ^~ /gcr/ {
        set $upstream_host "gcr.io";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/gcr(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # registry.k8s.io
    location ^~ /rk8s/ {
        set $upstream_host "registry.k8s.io";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/rk8s(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # compatibility: k8s.gcr.io -> registry.k8s.io
    location ^~ /kgcr/ {
        set $upstream_host "registry.k8s.io";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/kgcr(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # mcr.microsoft.com
    location ^~ /mcr/ {
        set $upstream_host "mcr.microsoft.com";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/mcr(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # nvcr.io
    location ^~ /nvcr/ {
        set $upstream_host "nvcr.io";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/nvcr(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # quay.io
    location ^~ /quay/ {
        set $upstream_host "quay.io";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/quay(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # docker.elastic.co
    location ^~ /elastic/ {
        set $upstream_host "docker.elastic.co";
        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        rewrite ^/elastic(?<rest>/.*)$ $rest break;
        slice 1m;
        proxy_cache_key $scheme$upstream_host$request_uri$is_args$args$slice_range;
        proxy_set_header Range $slice_range;
        proxy_pass https://$upstream_host;
        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    # ---------- /_proxy/<host>/<path?query> -> object storage / CDN ----------
    location ~ ^/_proxy/(?<h>[^/]+)(?<p>/.*)$ {
        if ($h !~* ^(registry-1\.docker\.io|auth\.docker\.io|production\.cloudflare\.docker\.com|.*\.cloudflarestorage\.com|.*\.r2\.cloudflarestorage\.com|.*\.amazonaws\.com|storage\.googleapis\.com|.*\.googleapis\.com|.*\.pkg\.dev|ghcr\.io|github\.com|pkg-containers\.[^/]*githubusercontent\.com|objects\.githubusercontent\.com|.*\.blob\.core\.windows\.net|.*\.azureedge\.net|mcr\.microsoft\.com|.*\.microsoft\.com|quay\.io|cdn\.quay\.io|.*quay-cdn[^/]*\.redhat\.com|k8s\.gcr\.io|registry\.k8s\.io|gcr\.io|docker\.elastic\.co|.*\.elastic\.co|.*\.cloudfront\.net|.*\.fastly\.net)$) {
            return 403;
        }

        set $upstream_host $h;
        rewrite ^/_proxy/[^/]+(?<rest>/.*)$ $rest break;

        proxy_set_header Host $upstream_host;
        proxy_ssl_server_name on;
        proxy_ssl_name $upstream_host;
        proxy_ssl_protocols TLSv1.2 TLSv1.3;
        proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;

        proxy_set_header Range $http_range;
        proxy_redirect off;
        proxy_cache off;
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_set_header Authorization "";

        proxy_pass https://$upstream_host;

        access_log /var/log/nginx/docker_mirror_access.log;
        error_log /var/log/nginx/docker_mirror_error.log warn;
    }

    location = /healthz {
        return 200 'ok';
        add_header Content-Type text/plain;
    }
}

# HTTP -> HTTPS
server {
    listen 80;
    server_name xing.axzys.cn;
    return 301 https://$host$request_uri;
}
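For completeness: the ?ns=docker.io query strings in the access log are what containerd appends when a mirror is configured through hosts.toml. The client side (not shown in the original) typically looks like this sketch:

# /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://xing.axzys.cn"]
  capabilities = ["pull", "resolve"]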
2025-08-19
Helm App Creation
1. GitLab repository setup

1.1 Clone the code

root@k8s-01:~/argocd# cd /opt/
root@k8s-01:/opt# ls
cni  containerd
root@k8s-01:/opt# git clone http://192.168.30.181/develop/argo-demo.git
Cloning into 'argo-demo'...
Username for 'http://192.168.30.181': root
Password for 'http://root@192.168.30.181':
remote: Enumerating objects: 19, done.
remote: Counting objects: 100% (19/19), done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 19 (delta 3), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (19/19), 4.49 KiB | 1.12 MiB/s, done.
Resolving deltas: 100% (3/3), done.
root@k8s-01:/opt# cd argo-demo/
root@k8s-01:/opt/argo-demo# ls
manifests  README.md

1.2 Create the Helm application

Create a chart (app) named helm:

root@k8s-01:/opt/argo-demo# helm create helm
Creating helm
root@k8s-01:/opt/argo-demo# ls
helm  manifests  README.md
root@k8s-01:/opt/argo-demo# tree helm
helm
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files

Adjust the chart configuration:

[root@tiaoban argo-demo]# cd helm/
[root@tiaoban helm]# vim Chart.yaml
appVersion: "v1"                 # set the default image version to v1
[root@tiaoban helm]# vim values.yaml
image:
  repository: ikubernetes/myapp  # point the image at the myapp repository

Validate the chart:

root@k8s-01:/opt/argo-demo# helm lint helm
==> Linting helm
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
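Before committing, the rendered output can also be checked locally to confirm both edits took effect; a quick sanity check (not part of the original transcript):

root@k8s-01:/opt/argo-demo# helm template helm | grep -E 'image:|app.kubernetes.io/version'
# expect the rendered Deployment to reference ikubernetes/myapp:v1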
1.3 Commit and push

root@k8s-01:/opt/argo-demo# git add .
root@k8s-01:/opt/argo-demo# git commit -m "add helm"
Author identity unknown

*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: unable to auto-detect email address (got 'root@k8s-01.(none)')
root@k8s-01:/opt/argo-demo# git config --global user.email "790731@qq.com"
root@k8s-01:/opt/argo-demo# git config --global user.name "axing"
root@k8s-01:/opt/argo-demo# git commit -m "add helm"
[main ea70765] add helm
 11 files changed, 450 insertions(+)
 create mode 100644 helm/.helmignore
 create mode 100644 helm/Chart.yaml
 create mode 100644 helm/templates/NOTES.txt
 create mode 100644 helm/templates/_helpers.tpl
 create mode 100644 helm/templates/deployment.yaml
 create mode 100644 helm/templates/hpa.yaml
 create mode 100644 helm/templates/ingress.yaml
 create mode 100644 helm/templates/service.yaml
 create mode 100644 helm/templates/serviceaccount.yaml
 create mode 100644 helm/templates/tests/test-connection.yaml
 create mode 100644 helm/values.yaml
root@k8s-01:/opt/argo-demo# git push
Username for 'http://192.168.30.181': root
Password for 'http://root@192.168.30.181':
Enumerating objects: 17, done.
Counting objects: 100% (17/17), done.
Delta compression using up to 8 threads
Compressing objects: 100% (15/15), done.
Writing objects: 100% (16/16), 6.00 KiB | 6.00 MiB/s, done.
Total 16 (delta 0), reused 0 (delta 0), pack-reused 0
To http://192.168.30.181/develop/argo-demo.git
   293d75f..ea70765  main -> main

1.4 Verify

Check in GitLab that the chart files arrived.

2. Argo CD setup

2.1 Create a helm-type app

Create the app through the Argo CD web UI, filling in the repository URL, the helm path, and the destination cluster and namespace; the Application manifest sketched at the end of this post shows the equivalent settings.

2.2 Verify

The Argo CD application view shows the deployment has completed. Check the resources on the cluster:

[root@tiaoban helm]# kubectl get pod -o wide
NAME                        READY  STATUS   RESTARTS        AGE    IP           NODE   NOMINATED NODE  READINESS GATES
demo-helm-585b5ddb66-bdbcr  1/1    Running  0               2m38s  10.244.3.31  work3  <none>          <none>
rockylinux                  1/1    Running  13 (140m ago)   13d    10.244.1.7   work1  <none>          <none>
[root@tiaoban helm]# kubectl get svc
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
demo-helm   ClusterIP  10.105.202.171  <none>       80/TCP   2m41s
kubernetes  ClusterIP  10.96.0.1       <none>       443/TCP  279d
[root@tiaoban helm]# kubectl exec -it rockylinux -- bash
[root@rockylinux /]# curl demo-helm
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Version-update test

# edit the repo files to simulate a version bump
root@k8s-01:/opt/argo-demo# cd helm/
root@k8s-01:/opt/argo-demo/helm# ls
charts  Chart.yaml  templates  values.yaml
root@k8s-01:/opt/argo-demo/helm# vi Chart.yaml
root@k8s-01:/opt/argo-demo/helm# vi values.yaml

# commit and push to the git repo
root@k8s-01:/opt/argo-demo/helm# git add .
root@k8s-01:/opt/argo-demo/helm# git commit -m "update helm v2"
[main 59dcb2d] update helm v2
 2 files changed, 3 insertions(+), 3 deletions(-)
root@k8s-01:/opt/argo-demo/helm# git push
Username for 'http://192.168.30.181': root
Password for 'http://root@192.168.30.181':
Enumerating objects: 9, done.
Counting objects: 100% (9/9), done.
Delta compression using up to 8 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 475 bytes | 475.00 KiB/s, done.
Total 5 (delta 3), reused 0 (delta 0), pack-reused 0
To http://192.168.30.181/develop/argo-demo.git
   ea70765..59dcb2d  main -> main

Check the update history in Argo CD, then verify the new version is being served:

[root@tiaoban helm]# kubectl exec -it rockylinux -- bash
[root@rockylinux /]# curl demo-helm
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
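For reference, the UI-created app above corresponds roughly to this Application manifest (a sketch; the target branch and the absence of a sync policy are assumptions based on the transcript):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-helm
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'http://192.168.30.181/develop/argo-demo.git'
    targetRevision: main     # the branch pushed to above
    path: helm               # chart directory inside the repo
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default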
2025-08-19
Directory App Creation and Configuration
1. App creation

1.1 Via the web UI

Applications can be created interactively from the Argo CD web UI.

1.2 Via the CLI

Besides the web UI, applications can also be created with the Argo CD CLI:

# create the application
root@k8s-01:~/argocd# argocd app create demo1 \
  --repo http://192.168.30.181/develop/argo-demo.git \
  --path manifests/ --sync-policy automatic --dest-namespace default \
  --dest-server https://kubernetes.default.svc --directory-recurse
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
application 'demo1' created

# list applications
root@k8s-01:~/argocd# argocd app list
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
NAME              CLUSTER                         NAMESPACE  PROJECT  STATUS     HEALTH       SYNCPOLICY  CONDITIONS                 REPO                                         PATH        TARGET
argocd/demo       https://kubernetes.default.svc             default  OutOfSync  Progressing  Manual      SharedResourceWarning(3)   http://192.168.30.181/develop/argo-demo.git  manifests   HEAD
argocd/demo-test  https://kubernetes.default.svc             default  OutOfSync  Healthy      Manual      SharedResourceWarning(3)   http://192.168.30.181/develop/argo-demo.git  manifests/  HEAD
argocd/demo1      https://kubernetes.default.svc  default    default  Synced     Healthy      Auto        <none>                     http://192.168.30.181/develop/argo-demo.git  manifests/

# check application status
root@k8s-01:~/argocd# kubectl get application -n argocd
NAME        SYNC STATUS   HEALTH STATUS
demo        OutOfSync     Progressing
demo-test   OutOfSync     Healthy
demo1       Synced        Healthy

# trigger an immediate sync
root@k8s-01:~/argocd# argocd app sync argocd/demo
WARN[0000] Failed to invoke grpc call. Use flag --grpc-web in grpc calls. To avoid this warning message, use flag --grpc-web.
TIMESTAMP                  GROUP       KIND          NAMESPACE  NAME   STATUS     HEALTH   HOOK  MESSAGE
2025-08-19T07:00:05+00:00              Service       default    myapp  OutOfSync  Healthy
2025-08-19T07:00:05+00:00  apps        Deployment    default    myapp  OutOfSync  Healthy
2025-08-19T07:00:05+00:00  traefik.io  IngressRoute  default    myapp  OutOfSync
2025-08-19T07:00:05+00:00              Service       default    myapp  Synced     Healthy
2025-08-19T07:00:05+00:00              Service       default    myapp  Synced     Healthy        service/myapp configured
2025-08-19T07:00:05+00:00  apps        Deployment    default    myapp  OutOfSync  Healthy        deployment.apps/myapp configured
2025-08-19T07:00:05+00:00  traefik.io  IngressRoute  default    myapp  OutOfSync                 ingressroute.traefik.io/myapp configured
2025-08-19T07:00:05+00:00  apps        Deployment    default    myapp  Synced     Healthy        deployment.apps/myapp configured
2025-08-19T07:00:05+00:00  traefik.io  IngressRoute  default    myapp  Synced                    ingressroute.traefik.io/myapp configured

Name:               argocd/demo
Project:            default
Server:             https://kubernetes.default.svc
Namespace:
URL:                https://argocd.local.com:30443/applications/argocd/demo
Source:
- Repo:             http://192.168.30.181/develop/argo-demo.git
  Target:           HEAD
  Path:             manifests
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        Synced to HEAD (293d75f)
Health Status:      Healthy

Operation:          Sync
Sync Revision:      293d75f441403c3f19c888df50939ec3a9e6f1fa
Phase:              Succeeded
Start:              2025-08-19 07:00:05 +0000 UTC
Finished:           2025-08-19 07:00:05 +0000 UTC
Duration:           0s
Message:            successfully synced (all tasks run)

GROUP       KIND          NAMESPACE  NAME   STATUS  HEALTH   HOOK  MESSAGE
            Service       default    myapp  Synced  Healthy        service/myapp configured
apps        Deployment    default    myapp  Synced  Healthy        deployment.apps/myapp configured
traefik.io  IngressRoute  default    myapp  Synced                 ingressroute.traefik.io/myapp configured

1.3 Via a YAML manifest

[root@tiaoban ~]# cat demo.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo
  namespace: argocd
spec:
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  source:
    path: manifests                                          # path to the resource manifests
    repoURL: 'http://gitlab.local.com/devops/argo-demo.git'  # repository to sync from
    targetRevision: 'master'                                 # branch name
  sources: []
  project: default
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
[root@tiaoban ~]# kubectl apply -f demo.yaml
application.argoproj.io/demo created

2. Application sync options

2.1 Sync policy

SYNC POLICY: Argo CD can automatically sync an application whenever it detects a difference between the desired manifests in Git and the live state in the cluster. Automated sync is the core of the GitOps pull model: the CI/CD pipeline no longer needs direct access to the Argo CD API server to deploy. Enable it under Application → SYNC POLICY (AUTOMATED) in the web UI, or via the CLI: argocd app set <APPNAME> --sync-policy automated.

PRUNE RESOURCES: with pruning enabled, resources deleted from the Git repo are automatically deleted from the environment as well.

SELF HEAL: makes the Git repo state authoritative; manual changes made directly in the environment are reverted.

2.2 AutoSync

The default sync interval is 180 s; it can be changed by adding the timeout.reconciliation key to the argocd-cm ConfigMap (see the sketch after this list). The sync flow:
1. Collect all apps configured for auto-sync.
2. Fetch the latest state from each app's Git repository.
3. Compare the Git state with the live application state in the cluster.
4. If identical, do nothing and mark the app as synced.
5. If different, mark it as out-of-sync.
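A minimal sketch of that ConfigMap change (the 60 s value is illustrative; the application controller must be restarted for it to take effect):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  timeout.reconciliation: 60s

# apply, then restart the controller:
#   kubectl -n argocd rollout restart statefulset argocd-application-controller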
2.3 Sync options

- Validate=false: disable kubectl validation.
- Replace=true: use kubectl replace instead of apply.
- PrunePropagationPolicy=background: cascading-deletion policy (background, foreground, or orphan).
- ApplyOutOfSyncOnly=true: only sync resources that are out of sync, avoiding heavy API traffic when there are many objects.
- CreateNamespace=true: create the target namespace.
- PruneLast=true: prune only after the sync completes.
- RespectIgnoreDifferences=true: honor the ignoreDifferences: configuration.
- ServerSideApply=true: perform the apply server-side (avoids problems with very large manifests).

3. Application status

Sync status:
- Synced: in sync
- OutOfSync: not in sync

Health status:
- Progressing: operation in progress
- Suspended: resource suspended
- Healthy: resource healthy
- Degraded: resource failed
- Missing: resource absent from the cluster
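In scripts or pipelines these states can be polled with the CLI; a small sketch (the app name is taken from the example above):

# block until the app is both Synced and Healthy, or fail after 300 s
argocd app wait demo --sync --health --timeout 300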