2025-08-11
GitLab Runner registration
1. GitLab Runner types

- shared: runs jobs for every project on the whole instance (GitLab level)
- group: runs jobs for all projects under a specific group
- specific: runs jobs only for a designated project

2. Creating runners of each type

2.1 Shared runner

In the UI, go to Home → Admin Area → CI/CD → Runners → New instance runner, then follow the prompts on the command line, choosing the executor (shell or docker) that matches how you installed the runner:

root@k8s-02:~# gitlab-runner register --url http://192.168.30.181 --token glrt-KXvcjZNVMVtCustGF-O3Z286MQp0OjEKdToxCw.01.121vlf8dr
Runtime platform    arch=amd64 os=linux pid=23285 revision=cc489270 version=18.2.1
Running in system-mode.

Enter the GitLab instance URL (for example, https://gitlab.com/):
[http://192.168.30.181]:
Verifying runner... is valid    correlation_id=01K2CM5N1H5TAGE79WMJ9CMDQN runner=KXvcjZNVM
Enter a name for the runner. This is stored only in the local config.toml file:
[k8s-02]:
Enter an executor: docker-windows, docker-autoscaler, instance, shell, ssh, parallels, docker, docker+machine, kubernetes, custom, virtualbox:
ERROR: Invalid executor specified
(the prompt repeats until a valid executor name is entered)
Enter an executor: parallels, docker, docker+machine, kubernetes, custom, virtualbox, docker-windows, docker-autoscaler, instance, shell, ssh:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Configuration (with the authentication token) was saved in "/etc/gitlab-runner/config.toml"

2.2 Group runner

Go to Home → Groups → (the target group) → Settings → Build → Runners → New group runner.
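Once registration succeeds, the runner is recorded in /etc/gitlab-runner/config.toml (the path shown in the output above). A rough sketch of what a shell-executor entry looks like; the token and concurrency values here are illustrative placeholders, not taken from the session above:

```toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "k8s-02"
  url = "http://192.168.30.181"
  # authentication token written by `gitlab-runner register` (placeholder value)
  token = "glrt-REDACTED"
  executor = "shell"
```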
2025-08-08
Rocky Linux 9.3: installing k8s and Docker
1. Set a static IP address

# Configure
sudo nmcli connection modify ens160 \
  ipv4.method manual \
  ipv4.addresses 192.168.30.20/24 \
  ipv4.gateway 192.168.30.2 \
  ipv4.dns "8.8.8.8,8.8.4.4"
# Apply the new settings
sudo nmcli connection down ens160 && sudo nmcli connection up ens160

2. Configure the yum repositories

2.1 Back up the existing repo files (optional; you can go straight to 2.2)
#sudo mkdir /etc/yum.repos.d/backup
#sudo mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup/

2.2 Point the repos at the Aliyun mirror
# Configuration method recommended by Aliyun
sudo sed -e 's!^mirrorlist=!#mirrorlist=!g' \
  -e 's!^#baseurl=http://dl.rockylinux.org/$contentdir!baseurl=https://mirrors.aliyun.com/rockylinux!g' \
  -i /etc/yum.repos.d/Rocky-*.repo

2.3 Clean and rebuild the cache
sudo dnf clean all
sudo dnf makecache

2.4 Test with an update
sudo dnf update -y

3. Preparation

3.1 Set the hostnames
hostnamectl set-hostname k8s-01
hostnamectl set-hostname k8s-02
hostnamectl set-hostname k8s-03

3.2 Disable some services
# 1. Disable SELinux
sed -i 's#enforcing#disabled#g' /etc/selinux/config
setenforce 0
# 2. Disable the firewall, NetworkManager, and postfix
systemctl disable --now firewalld NetworkManager postfix
# 3. Turn off swap
swapoff -a
# Remove the swap entry from fstab
cp /etc/fstab /etc/fstab_bak
sed -i '/swap/d' /etc/fstab

3.3 sshd tuning (optional)
# 1. Speed up logins
sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config
sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config
grep ^UseDNS /etc/ssh/sshd_config
grep ^GSSAPIAuthentication /etc/ssh/sshd_config
systemctl restart sshd
# 2. Key-based login (run on the control host) to make later remote copies easier
ssh-keygen
ssh-copy-id -i root@k8s-01
ssh-copy-id -i root@k8s-02
ssh-copy-id -i root@k8s-03
# Connection test
[root@m01 ~]# ssh 172.16.1.7
Last login: Tue Nov 24 09:02:26 2020 from 10.0.0.1
[root@web01 ~]#

3.4 Increase the file-descriptor limit (takes effect after you log out and back in)
cat > /etc/security/limits.d/k8s.conf <<EOF
* soft nofile 65535
* hard nofile 131070
EOF
ulimit -Sn
ulimit -Hn

3.5 Configure automatic module loading on all nodes (skipping this makes kubeadm init fail outright)
modprobe br_netfilter
modprobe ip_conntrack
cat >>/etc/rc.sysinit<<EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done
EOF
echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
lsmod | grep br_netfilter

3.6 Synchronize cluster time
# =====> chrony server: you can run your own server or use a public NTP server, so deploying a server is up to you
# 1. Install
dnf -y install chrony
# 2. Edit the config file
mv /etc/chrony.conf /etc/chrony.conf.bak
cat > /etc/chrony.conf << EOF
server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
server ntp4.aliyun.com iburst minpoll 4 maxpoll 10
server ntp5.aliyun.com iburst minpoll 4 maxpoll 10
server ntp6.aliyun.com iburst minpoll 4 maxpoll 10
server ntp7.aliyun.com iburst minpoll 4 maxpoll 10
driftfile /var/lib/chrony/drift
makestep 10 3
rtcsync
allow 0.0.0.0/0
local stratum 10
keyfile /etc/chrony.keys
logdir /var/log/chrony
stratumweight 0.05
noclientlog
logchange 0.5
EOF
# 3. Start chronyd (restart is safest: the config is reloaded whether or not the service was already running)
systemctl restart chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service
# =====> chrony client: install on every machine that must sync time; once started it syncs against the server you specify
# Paste the following into each client in one go
# 1. Install chrony
dnf -y install chrony
# 2. Edit the client config file
mv /etc/chrony.conf /etc/chrony.conf.bak
cat > /etc/chrony.conf << EOF
server 192.168.30.20 iburst
driftfile /var/lib/chrony/drift
makestep 10 3
rtcsync
local stratum 10
keyfile /etc/chrony.key
logdir /var/log/chrony
stratumweight 0.05
noclientlog
logchange 0.5
EOF
# 3. Start chronyd
systemctl restart chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service
# 4. Verify
chronyc sources -v

3.7 Install common tools
dnf -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync unzip

3.8 Check the kernel version (must be 4.4+)
[root@localhost ~]# grubby --default-kernel
/boot/vmlinuz-5.14.0-570.30.1.el9_6.x86_64

3.9 Install IPVS on every node
# 1. Install ipvsadm and related tools
dnf -y install ipvsadm ipset sysstat conntrack libseccomp
# 2. Configure module loading
cat > /etc/sysconfig/modules/ipvs.modules <<"EOF"
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc
ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

3.10 Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply immediately
sysctl --system

4. Install containerd (on every k8s node)

Since Kubernetes 1.24, K8s no longer supports docker natively. containerd came out of docker and was later donated by docker to the Cloud Native Computing Foundation (installing docker of course installs containerd along with it).

Note: on CentOS the stock libseccomp is 2.3.1, which does not satisfy containerd; you need 2.4 or later, and 2.5.1 is deployed here.
rpm -e libseccomp-2.5.1-1.el8.x86_64 --nodeps
# The original rpmfind URL is dead and no longer updated; use Aliyun instead
# wget http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
wget https://mirrors.aliyun.com/centos/8/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
cd /root/rpms
sudo yum localinstall libseccomp-2.5.1-1.el8.x86_64.rpm -y
# Rocky ships 2.5.2 by default, so skip the commands above and just check the version:
[root@k8s-01 ~]# rpm -qa | grep libseccomp
libseccomp-2.5.2-2.el9.x86_64

Installation method 1 (Aliyun mirror; recommended):
sudo dnf config-manager --set-enabled powertools  # Rocky Linux 8/9 needs the PowerTools repo enabled
sudo dnf install -y yum-utils device-mapper-persistent-data lvm2
# 1. Remove any previous installation
dnf remove docker docker-ce containerd docker-common docker-selinux docker-engine -y
# 2. Prepare the repo
sudo tee /etc/yum.repos.d/docker-ce.repo <<-'EOF'
[docker-ce-stable]
name=Docker CE Stable - AliOS
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
EOF
# 3. Install
sudo dnf install -y containerd.io

Configuration:
# 1. Generate a configuration file for containerd
mkdir -pv /etc/containerd
containerd config default > /etc/containerd/config.toml
# 2. Replace the default pause image address: this step is extremely important
grep sandbox_image /etc/containerd/config.toml
sed -i 's/registry.k8s.io/registry.cn-hangzhou.aliyuncs.com\/google_containers/' /etc/containerd/config.toml
grep sandbox_image /etc/containerd/config.toml
# Make sure the new address is reachable:
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
# 3. Use systemd as the container cgroup driver
grep SystemdCgroup /etc/containerd/config.toml
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml
# 4. Configure registry mirrors (required; otherwise the CNI plugin images cannot be pulled from docker.io later)
# Reference: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
# Set config_path = "/etc/containerd/certs.d"
sed -i 's/config_path\ =.*/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://dockerproxy.com"]
  capabilities = ["pull", "resolve"]
[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
[host."https://docker.chenby.cn"]
  capabilities = ["pull", "resolve"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull", "resolve"]
[host."http://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF
# 5. Enable containerd at boot
# 5.1 Start the service and enable it
systemctl daemon-reload && systemctl restart containerd
systemctl enable --now containerd
# 5.2 Check containerd status
systemctl status containerd
# 5.3 Check the containerd version
ctr version

5. Install k8s

5.1 Prepare the k8s repo
# Create the repo file
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sudo dnf makecache
# Reference: https://developer.aliyun.com/mirror/kubernetes/
dnf install -y kubelet-1.27* kubeadm-1.27* kubectl-1.27*
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
# Install the version-lock plugin
sudo dnf install -y dnf-plugin-versionlock
# Lock the versions so later updates do not touch them
sudo dnf versionlock add kubelet-1.27* kubeadm-1.27* kubectl-1.27* containerd.io
[root@k8s-01 ~]# sudo dnf versionlock list
Last metadata expiration check: 0:35:21 ago on Fri Aug 8 10:40:25 2025.
kubelet-0:1.27.6-0.*
kubeadm-0:1.27.6-0.*
kubectl-0:1.27.6-0.*
containerd.io-0:1.7.27-3.1.el9.*
# sudo dnf update will now skip the locked packages

5.2 Load kernel modules
# Load the br_netfilter module
sudo modprobe br_netfilter
# Enable the kernel parameters
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply
sudo sysctl --system
# Stop the firewall now and disable it permanently
sudo systemctl stop firewalld
sudo systemctl disable firewalld

5.3 Master-node steps (do not run these on worker nodes)
Initialize the master node (master only).
# List the required images with kubeadm config images list
[root@k8s-master-01 ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0

Generate and edit the init configuration:
kubeadm config print init-defaults > kubeadm.yaml
vi kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.110.97  # change this to the control-plane node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master-01  # change this
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers  # create the repository on Aliyun first
kind: ClusterConfiguration
kubernetesVersion: 1.30.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16  # add this line
scheduler: {}
# Append the following at the end
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd

Deploy K8s:
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification --ignore-preflight-errors=Swap

Deploy the network plugin. Download it first:
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

[root@k8s-01 ~]# cat kube-flannel.yml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/ddd:1.5.1
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock

[root@k8s-01 ~]# grep -i image kube-flannel.yml
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/ddd:1.5.1
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/cccc:0.25.5

# Run the following on worker nodes (adjust the IP address) to copy the kubeconfig:
mkdir -p $HOME/.kube
scp root@192.168.30.135:/etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Installing docker

1. Remove old versions (if any)
sudo dnf remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine
2. Install dependencies
sudo dnf install -y dnf-plugins-core
3. Add the official Docker repo
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# or the Aliyun one:
sudo dnf config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4. Install the Docker engine
sudo dnf install -y docker-ce docker-ce-cli containerd.io
5. Start Docker and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker

Installing docker-compose

1. Give the binary execute permission with chmod +x.
2. Copy your docker-compose binary to /usr/local/bin/docker-compose.

[root@k8s-03 harbor]# sudo ./install.sh
[Step 0]: checking if docker is
installed ...
Note: docker version: 20.10.24
[Step 1]: checking docker-compose is installed ...
Note: docker-compose version: 2.24.5
[Step 2]: preparing environment ...
[Step 3]: preparing harbor configs ...
prepare base dir is set to /root/harbor
Clearing the configuration file: /config/portal/nginx.conf
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/log/rsyslog_docker.conf
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/env
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/registry/passwd
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[Step 4]: starting Harbor ...
[+] Running 9/10
 ⠸ Network harbor_harbor        Created  2.3s
 ✔ Container harbor-log         Started  0.4s
 ✔ Container harbor-db          Started  1.3s
 ✔ Container harbor-portal      Started  1.3s
 ✔ Container redis              Started  1.2s
 ✔ Container registry           Started  1.2s
 ✔ Container registryctl       Started  1.3s
 ✔ Container harbor-core        Started  1.6s
 ✔ Container nginx              Started  2.1s
 ✔ Container harbor-jobservice  Started  2.2s
✔ ----Harbor has been installed and started successfully.----
[root@k8s-03 harbor]# docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED          STATUS                            PORTS                                                                            NAMES
49d3c2bd157f   goharbor/nginx-photon:v2.5.0         "nginx -g 'daemon of…"   11 seconds ago   Up 8 seconds (health: starting)   0.0.0.0:80->8080/tcp, :::80->8080/tcp, 0.0.0.0:443->8443/tcp, :::443->8443/tcp   nginx
60a868e50223   goharbor/harbor-jobservice:v2.5.0    "/harbor/entrypoint.…"   11 seconds ago   Up 8 seconds (health: starting)                                                                                    harbor-jobservice
abf5e1d382b1   goharbor/harbor-core:v2.5.0          "/harbor/entrypoint.…"   11 seconds ago   Up 8 seconds (health: starting)                                                                                    harbor-core
9f5415aa4086   goharbor/harbor-portal:v2.5.0        "nginx -g 'daemon of…"   11 seconds ago   Up 9 seconds (health: starting)                                                                                    harbor-portal
f4c2c38abe04   goharbor/harbor-db:v2.5.0            "/docker-entrypoint.…"   11 seconds ago   Up 9 seconds (health: starting)                                                                                    harbor-db
74b6a076b5b2   goharbor/harbor-registryctl:v2.5.0   "/home/harbor/start.…"   11 seconds ago   Up 8 seconds (health: starting)                                                                                    registryctl
8c3bead9c56e   goharbor/redis-photon:v2.5.0         "redis-server /etc/r…"   11 seconds ago   Up 9 seconds (health: starting)                                                                                    redis
d09c4161d411   goharbor/registry-photon:v2.5.0      "/home/harbor/entryp…"   11 seconds ago   Up 9 seconds (health: starting)                                                                                    registry
90f8c13f0490   goharbor/harbor-log:v2.5.0           "/bin/sh -c /usr/loc…"   11 seconds ago   Up 9 seconds (health: starting)   127.0.0.1:1514->10514/tcp                                                        harbor-log

Downloading docker-compose (the signed redirect URLs are truncated here):
[root@k8s-03 harbor]# sudo wget "https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-$(uname -s)-$(uname -m)" -O /usr/local/bin/docker-compose
--2025-08-11 16:12:21--  https://github.com/docker/compose/releases/download/v2.24.5/docker-compose-Linux-x86_64
Resolving github.com (github.com)... 20.200.245.247
Connecting to github.com (github.com)|20.200.245.247|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://release-assets.githubusercontent.com/github-production-release-asset/15045751/... [following]
--2025-08-11 16:12:22--  https://release-assets.githubusercontent.com/github-production-release-asset/15045751/...
Resolving release-assets.githubusercontent.com (release-assets.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...
Connecting to release-assets.githubusercontent.com (release-assets.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 61389086 (59M) [application/octet-stream]
Saving to: '/usr/local/bin/docker-compose'
/usr/local/bin/docker-compose 100%[===========================>]  58.54M   164KB/s    in 2m 49s
2025-08-11 16:15:11 (355 KB/s) - '/usr/local/bin/docker-compose' saved [61389086/61389086]
[root@k8s-03 harbor]# sudo chmod +x /usr/local/bin/docker-compose
[root@k8s-03 harbor]# sudo rm -f /usr/bin/docker-compose
[root@k8s-03 harbor]# sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
[root@k8s-03 harbor]# echo $PATH
/root/.local/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/.local/bin
[root@k8s-03 harbor]# docker-compose version
-bash: /root/.local/bin/docker-compose: No such file or directory
[root@k8s-03 harbor]# export PATH=/usr/local/bin:/usr/bin:/root/.local/bin:$PATH
[root@k8s-03 harbor]# echo 'export PATH=/usr/local/bin:$PATH' | sudo tee -a /root/.bashrc
export PATH=/usr/local/bin:$PATH
[root@k8s-03 harbor]# source /root/.bashrc
[root@k8s-03 harbor]# docker-compose version
Docker Compose version v2.24.5

Obtain the certificates:
[root@k8s-03 harbor]# sudo ./t.sh
Certificate request self-signature ok
subject=C=CN, ST=Beijing, L=Beijing, O=example, OU=Personal, CN=harbor.telewave.tech
[root@k8s-03 harbor]# ls
LICENSE  common  common.sh  data  docker-compose.yml  harbor.v2.5.0.tar  harbor.yml  harbor.yml.bak  harbor.yml.tmpl  install.sh  prepare  t.sh
[root@k8s-03 harbor]# pwd
/root/harbor
[root@k8s-03 harbor]# ls /work/harbor/cert/
ca.crt  ca.key  ca.srl  harbor.telewave.tech.cert  harbor.telewave.tech.crt  harbor.telewave.tech.csr  harbor.telewave.tech.key  v3.ext
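The docker-compose download URL above is assembled by the shell from `uname`; the same expansion can be previewed locally before fetching. A minimal sketch (the version tag v2.24.5 matches the transcript above):

```shell
# Rebuild the asset name that wget expanded from $(uname -s)-$(uname -m)
asset="docker-compose-$(uname -s)-$(uname -m)"
url="https://github.com/docker/compose/releases/download/v2.24.5/${asset}"
echo "$url"
```

On a Linux x86_64 host this prints the `docker-compose-Linux-x86_64` URL seen in the wget log; on other platforms it expands to the matching release asset.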
2025-08-07
GitLab Runner installation
Install a GitLab Runner version that matches your GitLab version.

1. Reference documentation
https://docs.gitlab.com/runner/install/index.html

2. Check the GitLab version
root@k8s-02:~# gitlab-rake gitlab:env:info
System information
System:          Ubuntu 22.04
Current User:    git
Using RVM:       no
Ruby Version:    3.2.5
Gem Version:     3.6.9
Bundler Version: 2.6.5
Rake Version:    13.0.6
Redis Version:   7.2.9
Sidekiq Version: 7.3.9
Go Version:      unknown

GitLab information
Version:        18.2.1
Revision:       baccadafcda
Directory:      /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:     PostgreSQL
DB Version:     16.8
URL:            http://192.168.30.181
HTTP Clone URL: http://192.168.30.181/some-group/some-project.git
SSH Clone URL:  git@192.168.30.181:some-group/some-project.git
Using LDAP:     no
Using Omniauth: yes
Omniauth Providers:
GitLab Shell Version: 14.43.0
Repository storages:
- default: unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell

Gitaly
- default Address: unix:/var/opt/gitlab/gitaly/gitaly.socket
- default Version: 18.2.1
- default Git Version: 2.50.1.gl1

3. Install from the package repository
https://docs.gitlab.com/runner/install/linux-repository.html
# CentOS
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh" | sudo bash
# Ubuntu
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
# To install a specific GitLab Runner version:
# CentOS
yum list gitlab-runner --showduplicates | sort -r
sudo yum install gitlab-runner-17.2.0-1
# Ubuntu
apt-cache madison gitlab-runner
sudo apt install gitlab-runner=17.7.1-1 gitlab-runner-helper-images=17.7.1-1

Check the service:
root@k8s-02:~# systemctl status gitlab-runner.service
● gitlab-runner.service - GitLab Runner
     Loaded: loaded (/etc/systemd/system/gitlab-runner.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2025-08-11 12:31:20 UTC; 32min ago
   Main PID: 1044 (gitlab-runner)
      Tasks: 10 (limit: 23452)
     Memory: 82.6M
        CPU: 3.032s
     CGroup: /system.slice/gitlab-runner.service
             └─1044 /usr/bin/gitlab-runner run --config /etc/gitlab-runner/config.toml --working-directory /home/gitlab-runner --ser>
Aug 11 12:31:20 k8s-02 systemd[1]: Started GitLab Runner.
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: Runtime platform  arch=amd64 os=linux pid=1044 revisio>
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: Starting multi-runner from /etc/gitlab-runner/config.toml...  builds=0 max_builds=0
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: Running in system-mode.
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: Usage logger disabled  builds=0 max_builds=1
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: Configuration loaded  builds=0 max_builds=1
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: listen_address not defined, metrics & debug endpoints disabled  builds=0 max_builds=1
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: [session_server].listen_address not defined, session endpoints disabled  builds=0 max_bu>
Aug 11 12:31:24 k8s-02 gitlab-runner[1044]: Initializing executor providers  builds=0 max_builds=1
root@k8s-02:~# gitlab-runner -v
Version:      18.2.1
Git revision: cc489270
Git branch:   18-2-stable
GO version:   go1.24.4 X:cacheprog
Built:        2025-07-28T12:43:39Z
OS/Arch:      linux/amd64

If you try to install a specific gitlab-runner version without installing the same version of gitlab-runner-helper-images, you may hit this error:
sudo apt install gitlab-runner=17.7.1-1
...
The following packages have unmet dependencies:
 gitlab-runner : Depends: gitlab-runner-helper-images (= 17.7.1-1) but 17.8.3-1 is to be installed
E: Unable to correct problems, you have held broken packages.

4. Install from an RPM package
Find and download a suitable package from https://mirrors.tuna.tsinghua.edu.cn/gitlab-runner/yum/el7-x86_64/
[root@tiaoban gitlab-runner]# wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-runner/yum/el7-x86_64/gitlab-runner-16.10.0-1.x86_64.rpm
[root@tiaoban gitlab-runner]# rpm -ivh gitlab-runner-16.10.0-1.x86_64.rpm

5. Install with Docker
[root@client2 docker]# mkdir gitlab-runner
[root@client2 docker]# ls
gitlab-runner
[root@client2 docker]# docker run --name gitlab-runner -itd -v /opt/docker/gitlab-runner:/etc/gitlab-runner --restart always gitlab/gitlab-runner:v16.10.0
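Keeping the runner and GitLab versions aligned (18.2.1 on both sides in the outputs above) can also be checked mechanically. A small sketch with the two version strings hard-coded from those outputs; in practice you would capture them from the commands shown:

```shell
gitlab_ver="18.2.1"   # from: gitlab-rake gitlab:env:info
runner_ver="18.2.1"   # from: gitlab-runner -v
# Compare major.minor by stripping the patch component with ${var%.*}
if [ "${gitlab_ver%.*}" = "${runner_ver%.*}" ]; then
  echo "versions aligned: ${gitlab_ver%.*}"
else
  echo "mismatch: gitlab=${gitlab_ver} runner=${runner_ver}"
fi
```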
2025年08月07日
2025-08-07
Gitlab CI/CD简介和jenkins对比
一、Gitlab CI/CD优势- 开源: CI/CD是开源GitLab社区版和专有GitLab企业版的一部分。 - 易于学习: 具有详细的入门文档。 - 无缝集成: GitLab CI / CD是GitLab的一部分,支持从计划到部署,具有出色的用户体验。 - 可扩展: 测试可以在单独的计算机上分布式运行,可以根据需要添加任意数量的计算机。 - 更快的结果: 每个构建可以拆分为多个作业,这些作业可以在多台计算机上并行运行。 - 针对交付进行了优化: 多个阶段,手动部署, 环境 和 变量。二、Gitlab CI/CD特点- 多平台: Unix,Windows,macOS和任何其他支持Go的平台上执行构建。 - 多语言: 构建脚本是命令行驱动的,并且可以与Java,PHP,Ruby,C和任何其他语言一起使用。 - 稳定构建: 构建在与GitLab不同的机器上运行。 - 并行构建: GitLab CI / CD在多台机器上拆分构建,以实现快速执行。 - 实时日志记录: 合并请求中的链接将您带到动态更新的当前构建日志。 - 灵活的管道: 您可以在每个阶段定义多个并行作业,并且可以 触发其他构建。 - 版本管道: 一个 .gitlab-ci.yml文件 包含您的测试,整个过程的步骤,使每个人都能贡献更改,并确保每个分支获得所需的管道。 - 自动缩放: 您可以 自动缩放构建机器,以确保立即处理您的构建并将成本降至最低。 - 构建工件: 您可以将二进制文件和其他构建工件上载到 GitLab并浏览和下载它们。 - Docker支持: 可以使用自定义Docker映像, 作为测试的一部分启动 服务, 构建新的Docker映像,甚至可以在Kubernetes上运行。 - 容器注册表: 内置的容器注册表, 用于存储,共享和使用容器映像。 - 受保护的变量: 在部署期间使用受每个环境保护的变量安全地存储和使用机密。 - 环境: 定义多个环境。三、Gitlab CI/CD架构 3.1Gitlab CI / CDGitLab的一部分,GitLab是一个Web应用程序,具有将其状态存储在数据库中的API。 除了GitLab的所有功能之外,它还管理项目/构建并提供一个不错的用户界面。3.2Gitlab Runner是一个处理构建的应用程序。 它可以单独部署,并通过API与GitLab CI / CD一起使用。3.3.gitlab-ci.yml定义流水线作业运行,位于应用项目根目录下 。为了运行测试,至少需要一个 GitLab 实例、一个 GitLab Runner、一个gitlab-ci文件四、Gitlab CI/CD工作原理- 将代码托管到Git存储库。 - 在项目根目录创建ci文件 .gitlab-ci.yml ,在文件中指定构建,测试和部署脚本。 - GitLab将检测到它并使用名为GitLab Runner的工具运行脚本。 - 脚本被分组为作业,它们共同组成了一个管道。管道状态也会由GitLab显示:最后,如果出现任何问题,可以轻松地 回滚所有更改:五、gitlab CI简介gitlab ci是在gitlab8.0之后自带的一个持续集成系统,中心思想是当每一次push到gitlab的时候,都会触发一次脚本执行,然后脚本的内容包括了测试、编译、部署等一系列自定义的内容。 gitlab ci的脚本执行,需要自定义安装对应的gitlab runner来执行,代码push之后,webhook检测到代码变化,就会触发gitlab ci,分配到各个runner来运行相应的脚本script。这些脚本有些是测试项目用的,有些是部署用的。六、Gitlab ci与Jenkins对比 6.1分支可配置性使用gitlab ci,新创建的分支无需任何进一步的配置即可立即使用CI管道中的已定义作业。 Jenkins基于gitlab的多分支流水线插件可以实现。相对配置来说,gitlab ci更加方便。6.2拉取请求支持如果很好的集成了存储库管理器的CI/CD平台,可以看到请求的当前构建状态。使用这个功能,可以避免将代码合并到不起作用或者无法正确构建的主分支中。 -Jenkins没有与源代码管理系统进一步集成,需要管理员自行写代码或者插件实现。 -gitlab与其CI平台紧密集成,可以方便查看每个打开和关闭拉动请求的运行和完成管道。6.3权限管理- gitlab ci是git存储库管理器gitlab的固定组件,因此在ci/cd流程和存储库直接提供了良好的交互。 - 
Jenkins与存储库管理器都是松散耦合的,因此在选择版本控制系统时它非常灵活。此外,就像其前身一样,Jenkins强调了对插件的支持,以进一步扩展或改善软件的现有功能。6.4插件管理扩展Jenkins的本机功能是通过插件完成的,插件的维护,保护和成本很高。 gitlab是开放式的,任何人都可以直接向代码库贡献更改,一旦合并,它将自动测试并维护每个更改七、Jenkins vs GitLab CI/CD 优缺点 7.1Jenkins 的优点- 大量插件库 - 自托管,例如对工作空间的完全控制 - 容易调试运行,由于对工作空间的绝对控制 - 容易搭建节点 - 容易部署代码 - 非常好的凭证管理 - 非常灵活多样的功能 - 支持不同的语言 - 非常直观7.2Jenkins 的缺点- 插件集成复杂 - 对于比较小的项目开销比较大,因为你需要自己搭建 - 缺少对整个 pipeline 跟踪的分析7.3GitLab CI/CD 的优点- 更好的 Docker 集成 - 运行程序扩展或收缩比较简单 - 阶段内的作业并行执行 - 有向无环图 pipeline 的机会 - 由于并发运行程序而非常易于扩展收缩 - 合并请求集成 - 容易添加作业 - 容易处理冲突问题 - 良好的安全和隐私政策7.4GitLab CI/CD 的缺点- 需要为每个作业定义构建并上传 / 下载 - 在实际合并发生之前测试合并状态是不可能的 - 还不支持细分阶段八、对比总结 8.1gitlab ci- 轻量级,不需要复杂的安装手段 - 配置简单,与gitlab可直接适配 - 实时构建日志十分清晰,UI交互体验很好 - 使用yaml进行配置,任何人都可以很方便的使用 - 没有统一的管理界面,无法统一管理所有的项目 - 配置依赖于代码仓库,耦合度没有Jenkins低8.2Jenkins- 编译服务和代码仓库分离,耦合度低 - 插件丰富,支持语言众多 - 有统一的web管理页面 - 插件以及自身安装较为复杂 - 体量较大,不适合小型团队开发。九、适用场景- gitlab ci有助于devops人员,例如敏捷开发中,开发人员与运维是同一个人,最便捷的开发方式 - Jenkins适合在多角色团队中,职责分明,配置与代码分离,插件丰富。
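上文多次提到 .gitlab-ci.yml 位于项目根目录、作业按阶段分组并可并行执行。下面是一个假设的最小示例(stage 名称与 script 内容均为演示用途),展示阶段与作业的基本写法:

```yaml
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "编译打包"

test-job:
  stage: test
  script:
    - echo "运行测试"

deploy-job:
  stage: deploy
  script:
    - echo "部署到环境"
  only:
    - main
```

同一 stage 内的多个作业会在不同 runner 上并行执行,前一 stage 全部成功后才进入下一 stage。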
2025年08月07日
2025-08-04
Jenkins+k8s项目实战
一、Jenkins动态slave介绍
1.1 为什么需要动态slave
1. 配置管理困难:不同项目可能使用不同的编程语言、框架或库,这导致了每个Slave的配置环境各不相同。因此,需要动态Slave能够根据不同的项目需求,灵活配置不同的运行环境,从而简化配置管理和维护工作。
2. 资源分配不均衡:在使用静态Slave时,可能会出现某些Slave处于空闲状态,而其他Slave却处于繁忙状态,导致资源分配不均衡。动态Slave可以根据当前任务的需求自动调配资源,使得任务能够在空闲的Slave上尽快执行,从而提高资源利用率和任务执行效率。
3. 资源浪费:静态Slave在没有任务执行时仍然占用着资源,这导致了资源的浪费。而动态Slave能够根据实际需要自动扩容或缩减,当没有任务执行时会释放资源,从而避免了资源的浪费。
1.2 动态slave工作流程
正因为上面的这些种种痛点,我们渴望一种更高效更可靠的方式来完成这个 CI/CD 流程,而 Docker 虚拟化容器技术能很好地解决这个痛点,特别是在 Kubernetes 集群环境下能够更好地解决上面的问题,下图是基于 Kubernetes 搭建 Jenkins 集群的简单示意图:
从图上可以看到 Jenkins Master 和 Jenkins Slave 以 Pod 形式运行在 Kubernetes 集群的 Node 上,Master 运行在其中一个节点,并且将其配置数据存储到一个 Volume 上去,Slave 运行在各个节点上,并且它不是一直处于运行状态,它会按照需求动态地创建并自动删除。
这种方式的工作流程大致为:当 Jenkins Master 接收到 Build 请求时,会根据配置的 Label 动态创建一个运行在 Pod 中的 Jenkins Slave 并注册到 Master 上,当运行完 Job 后,这个 Slave 会被注销并且这个 Pod 也会自动删除,恢复到最初状态。
二、服务部署
本项目所有服务均运行在k8s集群上,使用nfs共享存储:nfs共享存储部署、containerd部署、harbor部署、gitlab部署、jenkins部署、SonarQube部署。
三、项目与权限配置
3.1 Harbor配置
Harbor的项目分为公开和私有两类:
公开项目:所有用户都可以访问,通常存放公共的镜像,默认有一个library公开项目。
私有项目:只有授权用户才可以访问,通常存放项目本身的镜像。
我们可以为微服务项目创建一个新的项目。
创建用户:创建一个普通用户xing。
配置项目用户权限:在spring_boot_demo项目中添加普通用户xing,并设置角色为开发者。
权限说明:
角色 | 权限
访客 | 对项目有只读权限
开发人员 | 对项目有读写权限
维护人员 | 对项目有读写权限、创建webhook权限
项目管理员 | 除上述外,还有用户管理等权限
3.2 gitlab项目权限配置
创建组:管理员用户登录,创建群组,组名称为develop,组权限为私有。
创建项目:创建sprint boot demo项目,并指定所属群组develop,项目类型为私有。
创建用户:创建一个普通用户xing。
用户添加到组中:将xing添加到群组develop中,角色为Developer。
配置分支权限。
用户权限验证:使用任意一台机器模拟开发人员拉取代码,完成开发后推送至代码仓库。
拉取仓库代码
#拉取代码
root@k8s-03:~/work# git clone https://gitee.com/cuiliang0302/sprint_boot_demo.git
Cloning into 'sprint_boot_demo'...
remote: Enumerating objects: 261, done.
remote: Total 261 (delta 0), reused 0 (delta 0), pack-reused 261 (from 1)
Receiving objects: 100% (261/261), 105.79 KiB | 162.00 KiB/s, done.
Resolving deltas: 100% (116/116), done.
root@k8s-03:~/work# git remote set-url origin http://192.168.30.181/develop/sprint-boot-demo.git fatal: not a git repository (or any of the parent directories): .git推送至gitlab仓库root@k8s-03:~/work/sprint_boot_demo# git remote set-url origin http://192.168.30.181/develop/sprint-boot-demo.git root@k8s-03:~/work/sprint_boot_demo# git remote -v origin http://192.168.30.181/develop/sprint-boot-demo.git (fetch) origin http://192.168.30.181/develop/sprint-boot-demo.git (push) root@k8s-03:~/work/sprint_boot_demo# git push --set-upstream origin --all Username for 'http://192.168.30.181': xing Password for 'http://xing@192.168.30.181': Enumerating objects: 254, done. Counting objects: 100% (254/254), done. Delta compression using up to 8 threads Compressing objects: 100% (119/119), done. Writing objects: 100% (254/254), 105.20 KiB | 105.20 MiB/s, done. Total 254 (delta 111), reused 253 (delta 110), pack-reused 0 remote: Resolving deltas: 100% (111/111), done. remote: remote: To create a merge request for master, visit: remote: http://192.168.30.181/develop/sprint-boot-demo/-/merge_requests/new?merge_request%5Bsource_branch%5D=master remote: To http://192.168.30.181/develop/sprint-boot-demo.git * [new branch] master -> master Branch 'master' set up to track remote branch 'master' from 'origin'. 
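上面第一次执行 git remote set-url 报 "not a git repository",原因是未先进入仓库目录。下面用一个本地临时仓库演示正确的操作顺序(远程地址沿用上文,仅作演示,不会实际联网推送):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# 必须先进入 git 仓库目录,remote 相关命令才能生效
git init -q demo
cd demo
# 先添加原始远程,再切换到自建 gitlab 地址
git remote add origin https://gitee.com/cuiliang0302/sprint_boot_demo.git
git remote set-url origin http://192.168.30.181/develop/sprint-boot-demo.git
git remote -v
```

切换完成后,git remote -v 应显示 fetch 与 push 均指向自建 gitlab 地址。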
查看验证
四、jenkins配置
4.1 插件安装与配置
GitLab插件安装与配置:https://www.cuiliangblog.cn/detail/section/127410630
SonarQube Scanner插件安装与配置:https://www.cuiliangblog.cn/detail/section/165534414
Kubernetes插件安装与配置:https://www.cuiliangblog.cn/detail/section/127230452
Email Extension邮件推送插件安装与配置:https://www.cuiliangblog.cn/detail/section/133029974
Version Number版本号插件安装与配置:https://plugins.jenkins.io/versionnumber/
Content Replace文件内容替换插件安装与配置:https://plugins.jenkins.io/content-replace/
4.2 jenkins slave镜像制作
安装完Kubernetes插件后,默认的slave镜像仅包含一些基础功能和软件包,如果需要构建镜像、执行kubectl命令,则需要引入其他container或者自定义slave镜像。
关于镜像构建问题,如果k8s容器运行时为docker,可以直接使用docker in docker方案,启动一个docker:dind容器,通过Docker pipeline插件执行镜像构建与推送操作,具体内容可参考https://www.cuiliangblog.cn/detail/section/166573065。
如果k8s容器运行时为containerd,则使用nerdctl+buildkitd方案,启动一个buildkit容器,通过nerdctl命令执行镜像构建与推送操作,具体内容可参考:https://axzys.cn/index.php/archives/521/
本次实验以containerd环境为例,通过nerdctl+buildkitd方案演示如何构建并推送镜像。
构建jenkins-slave镜像
root@k8s-01:~/jenkins/work# cp /usr/bin/kubectl .
root@k8s-01:~/jenkins/work# cp /usr/bin/nerdctl .
root@k8s-01:~/jenkins/work# cp /usr/local/bin/buildctl .
root@k8s-01:~/jenkins/work# ls buildctl Dockerfile kubectl nerdctl测试jenkins-slave镜像构建容器与操作k8s 以下操作在k8s集群master机器,容器运行时为container节点执行测试# 启动buildkit镜像构建服务 # 挂载/run/containerd/containerd.sock方便container调用buildkitd # 挂载/var/lib/buildkit,以便于将构建过程中下载的镜像持久化存储,方便下次构建时使用缓存 # 挂载/run/buildkit/目录方便nerctl调用buildkitd root@k8s-03:~/bin# nerdctl run --name buildkit -d --privileged=true \ -v /run/buildkit/:/run/buildkit/ \ -v /var/lib/buildkit:/var/lib/buildkit \ -v /run/containerd/containerd.sock:/run/containerd/containerd.sock \ registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2 registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2: resolved |++++++++++++++++++++++++++++++++++++++| manifest-sha256:ff1ed58245d6871cc4bbe07a838603720a90ca33f124484fff2af61b344c3b2f: done |++++++++++++++++++++++++++++++++++++++| config-sha256:b67e88949be2f8ee8844bbe205c8b0054533a2c8cf35a6bc39ebc1d7cd7ce8f1: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:1479f6e1da81ab38384f704f742a35030364df4c9f9ed65e812a9c921bc20d25: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:77e3817cafb06118d96cfbd8af2fb7834a03e14d83acbcd9b7ff2a298c292d4d: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:fabeb51fadd96525499e8ce7171956d81bf70c836d4943ff8be9714528da736a: done |++++++++++++++++++++++++++++++++++++++| elapsed: 19.4s total: 87.2 M (4.5 MiB/s) 661d91a4ade1379768948ea962541ce5876e4dc51c14d3673fedfbdb7d142af5 root@k8s-03:~/bin# nerdctl ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 4778573cadf3 registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2 "buildkitd" About a minute ago Up buildkit root@k8s-03:~/bin# # 启动jenkins-slave容器 # 挂载/run/containerd/containerd.sock方便netdctl操作container # 挂载/run/buildkit/目录方便nerctl调用buildkitd构建镜像 # 挂载/root/.kube/目录方便kubectl工具操作k8s root@k8s-03:~/bin# nerdctl run --name 
jenkins-slave -it --privileged=true -v /run/buildkit/:/run/buildkit/ -v /root/.kube/:/root/.kube/ -v /run/containerd/containerd.sock:/run/containerd/containerd.sock registry.cn-guangzhou.aliyuncs.com/xingcangku/jenkins-cangku:v1 bash registry.cn-guangzhou.aliyuncs.com/xingcangku/jenkins-cangku:v1: resolved |++++++++++++++++++++++++++++++++++++++| manifest-sha256:b3e519ae85d0f05ff170778c8ffae494879397d9881ca8bc905bc889da82fc07: done |++++++++++++++++++++++++++++++++++++++| config-sha256:ad852c7e884a5f9f6e87fcb6112fbe0c616b601a69ae5cf74ba09f2456d4e578: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:9ac051fdbd99f7d8c9e496724860b8ae3373f24d6f8a54f1d9096526df425d3c: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:85a4e35755fb1aa44b91602297dc8d9f10eb8ad3f32baab32094cebc0eda41a4: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:0bdf2c4d1714b5962675435789b4e83edb5aa4d94ec0d7643737940b0e73c4ed: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:6b189407e830325d112140133ef8a0adcae1a94a9231cfa56872c78f21886b66: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:28f95146d6851ca39a2ce18612cb5e5b19845ef85e4bb95bfa3095193fdf5777: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:68a03bb16ee6a2c356d2e354bab5ec566dd7ef1e4e6daee52c1f87aa9d0cd139: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:1c55318e78a1d3f438c6ca3cf6532d365a86fabbf7e3ecd14140e455f1489991: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:65375761a96d26587431452e982c657d479c41c41027a7f0acf37a6a21fd1112: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:f26c0daf8d2d5ff437f0eb85dad34fe122657585c3e8e5589fdf4cd007705fbe: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:cef14de45bb7cc343e593f80531764848fe724db9084ae4a3cabacc7a7e24083: done |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:7db9f9afd5f7e18ef5e410566fe6342de266960271c5ba712b8027e593617498: done |++++++++++++++++++++++++++++++++++++++| layer-sha256:392ad068a4827c711e9af3956bd5d3bcac7e8d66b3dd35bb0a9fae60de47f80d: done |++++++++++++++++++++++++++++++++++++++| elapsed: 4.9 s total: 170.8 (34.9 MiB/s) root@4da1c7c8f6f4:/home/jenkins# # 测试container管理 root@4da1c7c8f6f4:/home/jenkins# nerdctl ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 4da1c7c8f6f4 registry.cn-guangzhou.aliyuncs.com/xingcangku/jenkins-cangku:v1 "/usr/local/bin/jenk…" 47 seconds ago Up jenkins-slave 4778573cadf3 registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2 "buildkitd" 6 minutes ago Up buildkit root@4da1c7c8f6f4:/home/jenkins# #测试k8s管理 root@4da1c7c8f6f4:/home/jenkins# kubectl get node NAME STATUS ROLES AGE VERSION k8s-01 Ready control-plane 5d v1.27.6 k8s-02 Ready <none> 5d v1.27.6 k8s-03 Ready <none> 5d v1.27.6 # 测试镜像构建解释一下下面的操作,如果是宿主机已经开启了buildkitd服务必须先关闭,不然/var/lib/buildkit/buildkitd.lock会生成,这个生成后Docker启动不了buildkitd容器。#停止buildkitd服务 root@k8s-03:/var/lib/buildkit# sudo systemctl stop buildkitd sudo systemctl disable buildkitd # 2. 清理残留文件 sudo rm -f /run/buildkit/buildkitd.sock sudo rm -f /run/buildkit/otel-grpc.sock sudo rm -f /var/lib/buildkit/buildkitd.lock # 3. 创建专用数据目录(避免冲突) sudo mkdir -p /var/lib/buildkit-container sudo chmod 700 /var/lib/buildkit-container Removed /etc/systemd/system/multi-user.target.wants/buildkitd.service. 
# 启动buildkit镜像构建服务
# 挂载/run/containerd/containerd.sock方便containerd调用buildkitd
# 挂载/var/lib/buildkit,以便于将构建过程中下载的镜像持久化存储,方便下次构建时使用缓存
# 挂载/run/buildkit/目录方便nerdctl调用buildkitd
root@k8s-03:/var/lib/buildkit# nerdctl run -d --name buildkit \
  --privileged \
  -v /run/buildkit/:/run/buildkit/ \
  -v /var/lib/buildkit-container:/var/lib/buildkit \
  -v /run/containerd/containerd.sock:/run/containerd/containerd.sock \
  registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2
a9d371968d62cb654e50677bb7433b6c58ac25cf177c76dcc81dfb3a311b8289
root@k8s-03:/var/lib/buildkit# nerdctl ps
CONTAINER ID    IMAGE                                                                  COMMAND        CREATED          STATUS    PORTS    NAMES
a9d371968d62    registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2    "buildkitd"    7 seconds ago    Up                 buildkit
root@k8s-03:/var/lib/buildkit# buildctl --addr unix:///run/buildkit/buildkitd.sock debug workers
ID                          PLATFORMS
37647lrzalh7j20qy1hskzedp   linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/amd64/v4,linux/386
qi74wmika7i8tvdej2ob0ooj5   linux/amd64,linux/amd64/v2,linux/amd64/v3,linux/amd64/v4,linux/386
# 启动jenkins-slave容器
# 挂载/run/containerd/containerd.sock方便nerdctl操作containerd
# 挂载/run/buildkit/目录方便nerdctl调用buildkitd构建镜像
# 挂载/root/.kube/目录方便kubectl工具操作k8s
root@k8s-03:/var/lib/buildkit# nerdctl run --name jenkins-slave -it --privileged=true \
  -v /run/buildkit/:/run/buildkit/ \
  -v /root/.kube/:/root/.kube/ \
  -v /run/containerd/containerd.sock:/run/containerd/containerd.sock \
  registry.cn-guangzhou.aliyuncs.com/xingcangku/jenkins-cangku:v1 bash
# 测试镜像构建
root@d2f81acc6e0b:/home/jenkins# echo 'FROM registry.cn-guangzhou.aliyuncs.com/xingcangku/busybox-latest:latest' >> Dockerfile
root@d2f81acc6e0b:/home/jenkins# echo 'CMD ["echo","hello","container"]' >> Dockerfile
root@d2f81acc6e0b:/home/jenkins# cat Dockerfile
FROM registry.cn-guangzhou.aliyuncs.com/xingcangku/busybox-latest:latest
CMD ["echo","hello","container"]
root@d2f81acc6e0b:/home/jenkins# nerdctl build -t test-test:v1 .
[+] Building 12.5s (5/5) FINISHED
 => [internal] load build definition from Dockerfile    0.0s
 => => transferring dockerfile: 143B    0.0s
 => [internal] load metadata for registry.cn-guangzhou.aliyuncs.com/xingcangku/busybox-latest:latest    11.6s
 => [internal] load .dockerignore    0.0s
 => => transferring context: 2B    0.0s
 => [1/1] FROM registry.cn-guangzhou.aliyuncs.com/xingcangku/busybox-latest:latest@sha256:b41a05bd7a4a32e4c48c284cc2178abe8c11    0.8s
 => => resolve registry.cn-guangzhou.aliyuncs.com/xingcangku/busybox-latest:latest@sha256:b41a05bd7a4a32e4c48c284cc2178abe8c11    0.0s
 => => sha256:90b9666d4aed1893ff122f238948dfd5e8efdcf6c444fe92371ea0f01750bf8c 2.15MB / 2.15MB    0.8s
 => exporting to docker image format    0.8s
 => => exporting layers    0.0s
 => => exporting manifest sha256:05610df32232fdd6d6276d0aa50c628fc3acd75deb010cf15a4ac74cf35ea348    0.0s
 => => exporting config sha256:0b44030dca1d1504de8aa100696d5c86f19b06cec660cf55c2ba6c5c36d1fb89    0.0s
 => => sending tarball    0.0s
Loaded image: docker.io/library/test-test:v1
4.3 job任务创建与配置
配置webhook构建触发器,当分支代码提交时触发构建,具体配置如下:流水线选择SCM从代码仓库中获取jenkinsfile,脚本路径填写Jenkinsfile-k8s.groovy。手动触发后会生成一个pod。
4.4 部署总结
1. jenkinsfile中如果涉及yaml的代码需要注意权限。
2. 还需要查看ServiceAccount是否跟之前设置的一致。
3. 镜像尽量自己做成国内的地址。
4. 如果是自定义的Harbor仓库,需要提前创建Docker Registry Secret(即Harbor仓库的账号密码),然后在yaml中添加imagePullSecrets配置。
5.
提前把名称空间创建出来五、效果演示 5.1开发测试阶段模拟开发人员完成功能开发后提交代码至test分支,推送以后会自动触发gitlab的webhook然后自动调用拉取jenkins你设置好的job流水线。root@k8s-03:~/work/sprint-boot-demo# git branch -a main * test remotes/origin/HEAD -> origin/main remotes/origin/main root@k8s-03:~/work/sprint-boot-demo# cat src/main/java/com/example/springbootdemo/HelloWorldController.java package com.example.springbootdemo; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.ResponseBody; @Controller public class HelloWorldController { @RequestMapping("/") @ResponseBody public String hello() { // 获取环境变量 ENV_NAME,如果不存在则使用默认值 "default" String envName = System.getenv().getOrDefault("ENV_NAME", "default"); return String.format("<h1>Hello SpringBoot</h1><p>Version:v1 Env:%s</p>", envName); } @RequestMapping("/health") @ResponseBody public String healthy() { return "ok"; } } root@k8s-03:~/work/sprint-boot-demo# vi src/main/java/com/example/springbootdemo/HelloWorldController.java root@k8s-03:~/work/sprint-boot-demo# git add . root@k8s-03:~/work/sprint-boot-demo# git commit -m "test环境更新版本至v2" [test 368ff3d] test环境更新版本至v2 1 file changed, 1 insertion(+), 1 deletion(-) root@k8s-03:~/work/sprint-boot-demo# git push fatal: The current branch test has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin test root@k8s-03:~/work/sprint-boot-demo# root@k8s-03:~/work/sprint-boot-demo# git push --set-upstream origin test Username for 'http://192.168.30.181': root Password for 'http://root@192.168.30.181': Enumerating objects: 17, done. Counting objects: 100% (17/17), done. Delta compression using up to 8 threads Compressing objects: 100% (6/6), done. Writing objects: 100% (9/9), 697 bytes | 697.00 KiB/s, done. 
Total 9 (delta 2), reused 0 (delta 0), pack-reused 0 remote: remote: To create a merge request for test, visit: remote: http://192.168.30.181/develop/sprint-boot-demo/-/merge_requests/new?merge_request%5Bsource_branch%5D=test remote: To http://192.168.30.181/develop/sprint-boot-demo.git * [new branch] test -> test Branch 'test' set up to track remote branch 'test' from 'origin'.此时查看cicd名称空间下的pod信息,发现已经创建一个名为springbootdemo-275-rf832-h6jkq-630x8的pod,包含3个container,分别是jnlp、maven、buildkitd。root@k8s-02:~# kubectl get pods -n cicd NAME READY STATUS RESTARTS AGE jenkins-7d65887794-tsqs5 1/1 Running 5 (104m ago) 2d8h springbootdemo-40-p9smx-4mw1j-z0v30 3/3 Running 0 7m30s 查看jenkins任务信息,已顺利完成了集成部署工作。并且收到了jenkins自动发出的邮件,内容如下:查看SonarQube代码扫描信息,未发现异常代码。查看k8s,已成功创建相关资源root@k8s-01:~/gitlab# kubectl get all -n test NAME READY STATUS RESTARTS AGE pod/demo-7d4975d656-cdpk7 1/1 Running 0 4m28s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/demo ClusterIP 10.108.234.81 <none> 8888/TCP 14m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/demo 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/demo-7d4975d656 1 1 1 14m root@k8s-01:~/gitlab# 此时模拟测试人员,访问测试环境域名至此,开发测试阶段演示完成。5.2生产发布阶段接下来演示master分支代码提交后,触发生产环境版本发布流程。#Harbor仓库的权限要提前创建 kubectl create secret docker-registry harbor-pull-secret \ --namespace=test \ --docker-server=192.168.30.180:30003 \ --docker-username=admin \ --docker-password=Harbor12345root@k8s-03:~/work/sprint-boot-demo# git branch -a main * test remotes/origin/HEAD -> origin/main remotes/origin/main remotes/origin/test root@k8s-03:~/work/sprint-boot-demo# git checkout main Switched to branch 'main' Your branch is up to date with 'origin/main'. 
root@k8s-03:~/work/sprint-boot-demo# vi src/main/java/com/example/springbootdemo/HelloWorldController.java root@k8s-03:~/work/sprint-boot-demo# cat src/main/java/com/example/springbootdemo/HelloWorldController.java package com.example.springbootdemo; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.ResponseBody; @Controller public class HelloWorldController { @RequestMapping("/") @ResponseBody public String hello() { // 获取环境变量 ENV_NAME,如果不存在则使用默认值 "default" String envName = System.getenv().getOrDefault("ENV_NAME", "default"); return String.format("<h1>Hello SpringBoot</h1><p>Version:v5 Env:%s</p>", envName); } @RequestMapping("/health") @ResponseBody public String healthy() { return "ok"; } } root@k8s-03:~/work/sprint-boot-demo# git add . root@k8s-03:~/work/sprint-boot-demo# git commit -m "生产环境更新版本v5" [main 617325e] 生产环境更新版本v5 1 file changed, 1 insertion(+), 1 deletion(-) root@k8s-03:~/work/sprint-boot-demo# git push Username for 'http://192.168.30.181': root Password for 'http://root@192.168.30.181': Enumerating objects: 17, done. Counting objects: 100% (17/17), done. Delta compression using up to 8 threads Compressing objects: 100% (6/6), done. Writing objects: 100% (9/9), 704 bytes | 704.00 KiB/s, done. 
Total 9 (delta 2), reused 0 (delta 0), pack-reused 0 To http://192.168.30.181/develop/sprint-boot-demo.git 912f7b3..617325e main -> main 待收到邮件通知后,查看k8s资源,已经在prod名称空间下创建相关资源此时访问生产环境域名,服务可以正常访问。此时查看Harbor仓库镜像信息,其中p开头的为生产环境镜像,t开头的为测试环境镜像。jenkinsfile pipeline { agent { kubernetes { // 定义要在 Kubernetes 中运行的 Pod 模板 yaml ''' apiVersion: v1 kind: Pod metadata: labels: app: jenkins-slave spec: serviceAccountName: jenkins-admin containers: - name: jnlp image: registry.cn-guangzhou.aliyuncs.com/xingcangku/jenkins-cangku:v1 resources: limits: memory: "512Mi" cpu: "500m" securityContext: privileged: true volumeMounts: - name: buildkit mountPath: "/run/buildkit/" - name: containerd mountPath: "/run/containerd/containerd.sock" - name: kube-config mountPath: "/root/.kube/" readOnly: true - name: maven image: registry.cn-guangzhou.aliyuncs.com/xingcangku/maven3.9.3-openjdk-17:v2.0 resources: limits: memory: "512Mi" cpu: "500m" command: - 'sleep' args: - '9999' volumeMounts: - name: maven-data mountPath: "/root/.m2" - name: buildkitd image: registry.cn-guangzhou.aliyuncs.com/xingcangku/moby-buildkit:v0.13.2 resources: limits: memory: "256Mi" cpu: "500m" securityContext: privileged: true volumeMounts: - name: buildkit mountPath: "/run/buildkit/" - name: buildkit-data mountPath: "/var/lib/buildkit/" - name: containerd mountPath: "/run/containerd/containerd.sock" volumes: - name: maven-data persistentVolumeClaim: claimName: jenkins-maven - name: buildkit hostPath: path: /run/buildkit/ - name: buildkit-data hostPath: path: /var/lib/buildkit/ - name: containerd hostPath: path: /run/containerd/containerd.sock - name: kube-config secret: secretName: kube-config ''' retries 2 } } environment { // 全局变量 HARBOR_CRED = "harbor-admin" IMAGE_NAME = "" IMAGE_APP = "demo" branchName = "" } stages { stage('拉取代码') { environment { // gitlab仓库信息 GITLAB_CRED = "gitlab-xing-password" GITLAB_URL = "http://192.168.30.181/develop/sprint-boot-demo.git" } steps { echo '开始拉取代码' checkout scmGit(branches: [[name: 
'*/*']], extensions: [], userRemoteConfigs: [[credentialsId: "${GITLAB_CRED}", url: "${GITLAB_URL}"]]) // 获取当前拉取的分支名称 script { def branch = env.GIT_BRANCH ?: 'main' branchName = branch.split('/')[-1] } echo '拉取代码完成' } } stage('编译打包') { steps { container('maven') { // 指定使用maven container进行打包 echo '开始编译打包' sh 'mvn clean package' echo '编译打包完成' } } } stage('代码审查') { environment { // SonarQube信息 SONARQUBE_SCANNER = "SonarQube" SONARQUBE_SERVER = "SonarQube" } steps { echo '开始代码审查' script { def scannerHome = tool "${SONARQUBE_SCANNER}" withSonarQubeEnv("${SONARQUBE_SERVER}") { // 添加扫描参数 sh """ ${scannerHome}/bin/sonar-scanner \ -Dsonar.projectKey=springbootdemo \ -Dsonar.projectName=SpringBootDemo \ -Dsonar.java.binaries=target/classes \ -Dsonar.sources=src/main/java \ -Dsonar.sourceEncoding=UTF-8 """ } } echo '代码审查完成' } } stage('构建镜像') { environment { // harbor仓库信息 HARBOR_URL = "192.168.30.180:30003" HARBOR_PROJECT = "spring_boot_demo" // 镜像标签 IMAGE_TAG = '' // 镜像名称 IMAGE_NAME = '' } steps { echo '开始构建镜像' script { if (branchName == 'main') { IMAGE_TAG = VersionNumber versionPrefix: 'p', versionNumberString: '${BUILD_DATE_FORMATTED, "yyMMdd"}.${BUILDS_TODAY}' } else if (branchName == 'test') { IMAGE_TAG = VersionNumber versionPrefix: 't', versionNumberString: '${BUILD_DATE_FORMATTED, "yyMMdd"}.${BUILDS_TODAY}' } else { error("Unsupported branch: ${params.BRANCH}") } IMAGE_NAME = "${HARBOR_URL}/${HARBOR_PROJECT}/${IMAGE_APP}:${IMAGE_TAG}" sh "nerdctl build --insecure-registry -t ${IMAGE_NAME} . 
" } echo '构建镜像完成' echo '开始推送镜像' // 获取harbor账号密码 withCredentials([usernamePassword(credentialsId: "${HARBOR_CRED}", passwordVariable: 'HARBOR_PASSWORD', usernameVariable: 'HARBOR_USERNAME')]) { // 登录Harbor仓库 sh """nerdctl login --insecure-registry ${HARBOR_URL} -u ${HARBOR_USERNAME} -p ${HARBOR_PASSWORD} nerdctl push --insecure-registry ${IMAGE_NAME}""" } echo '推送镜像完成' echo '开始删除镜像' script { sh "nerdctl rmi -f ${IMAGE_NAME}" } echo '删除镜像完成' } } stage('项目部署') { environment { // 资源清单名称 YAML_NAME = "k8s.yaml" } steps { echo '开始修改资源清单' script { if (branchName == 'main' ) { NAME_SPACE = 'prod' DOMAIN_NAME = 'demo.local.com' } else if (branchName == 'test') { NAME_SPACE = 'test' DOMAIN_NAME = 'demo.test.com' } else { error("Unsupported branch: ${params.BRANCH}") } } // 使用Content Replace插件进行k8s资源清单内容替换 contentReplace(configs: [fileContentReplaceConfig(configs: [fileContentReplaceItemConfig(replace: "${IMAGE_NAME}", search: 'IMAGE_NAME'), fileContentReplaceItemConfig(replace: "${NAME_SPACE}", search: 'NAME_SPACE'), fileContentReplaceItemConfig(replace: "${DOMAIN_NAME}", search: 'DOMAIN_NAME')], fileEncoding: 'UTF-8', filePath: "${YAML_NAME}", lineSeparator: 'Unix')]) echo '修改资源清单完成' sh "cat ${YAML_NAME}" echo '开始部署资源清单' sh "kubectl apply -f ${YAML_NAME}" echo '部署资源清单完成' } } } post { always { echo '开始发送邮件通知' emailext(subject: '构建通知:${PROJECT_NAME} - Build # ${BUILD_NUMBER} - ${BUILD_STATUS}!', body: '${FILE,path="email.html"}', to: '7902731@qq.com') echo '邮件通知发送完成' } } }
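Jenkinsfile 中的镜像标签由分支前缀(main→p、test→t)加日期(yyMMdd)和当日构建次数组成,可以在本地用 shell 推演这一命名规则(BUILDS_TODAY 此处以 1 代替,仅作演示):

```shell
branch=main
builds_today=1
date_part=$(date +%y%m%d)
# 按分支决定前缀:main 为生产(p),test 为测试(t),其余分支报错
case "$branch" in
  main) prefix=p ;;
  test) prefix=t ;;
  *) echo "Unsupported branch: $branch" >&2; exit 1 ;;
esac
tag="${prefix}${date_part}.${builds_today}"
echo "$tag"
```

这与后文 Harbor 仓库中看到的现象一致:p 开头为生产环境镜像,t 开头为测试环境镜像。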
2025年08月04日