2025-06-14
OpenTelemetry Application Instrumentation
I. Deploying the sample applications

1. Deploy the Java application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo
spec:
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      labels:
        app: java-demo
    spec:
      containers:
      - name: java-demo
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/spring-petclinic:1.5.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "1Gi"    # increased memory
            cpu: "500m"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-demo
spec:
  type: ClusterIP    # ClusterIP is enough; Traefik uses service discovery
  selector:
    app: java-demo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: java-demo
spec:
  entryPoints:
  - web    # use the web entry point (port 8000)
  routes:
  - match: Host(`java-demo.local.cn`)    # change to the domain you need
    kind: Rule
    services:
    - name: java-demo
      port: 80

2. Deploy the Python application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-demo
spec:
  selector:
    matchLabels:
      app: python-demo
  template:
    metadata:
      labels:
        app: python-demo
    spec:
      containers:
      - name: python-demo
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/python-demoapp:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "500Mi"
            cpu: "200m"
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: python-demo
spec:
  selector:
    app: python-demo
  ports:
  - port: 5000
    targetPort: 5000
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: python-demo
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`python-demo.local.com`)
    kind: Rule
    services:
    - name: python-demo
      port: 5000

II. Instrumenting the applications

1. Automatic instrumentation for the Java application

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation    # Instrumentation resource, used for language auto-injection
metadata:
  name: java-instrumentation    # name of this Instrumentation resource (referenced by Deployments etc.)
  namespace: opentelemetry
spec:
  propagators:      # trace context propagation formats, several are supported
  - tracecontext    # W3C Trace Context (the most widely used cross-service format)
  - baggage         # propagates user-defined context key/value pairs
  - b3              # Zipkin B3 headers (for compatibility with Zipkin environments)
  sampler:          # sampling strategy (decides whether a trace is collected)
    type: always_on # sample every request (suitable for test/debug environments)
  java:
    # image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest    # Java auto-instrumentation agent image
    image: harbor.cuiliangblog.cn/otel/autoinstrumentation-java:latest
    env:
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: http://center-collector.opentelemetry.svc:4318

To enable auto-instrumentation, we update the deployment file and add annotations to it. This tells the OpenTelemetry Operator to inject the sidecar and the java-instrumentation into our application. The modified Deployment looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo
spec:
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      labels:
        app: java-demo
      annotations:
        instrumentation.opentelemetry.io/inject-java: "opentelemetry/java-instrumentation"    # name of the Instrumentation resource
        sidecar.opentelemetry.io/inject: "opentelemetry/sidecar"    # inject a sidecar-mode OpenTelemetry Collector
    spec:
      containers:
      - name: java-demo
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/spring-petclinic:1.5.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "500Mi"
            cpu: "200m"
        ports:
        - containerPort: 8080

Next, update the deployment; once the new pod rolls out, java-demo runs two containers.
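Before waiting for traces, it can help to confirm the Operator actually mutated the pod. A minimal sketch, assuming the names used above (java-demo.yaml is a hypothetical file name for the manifest; otc-container is the sidecar name visible in the logs below, and the init container name is the Operator's default):

# Apply the annotated deployment
kubectl apply -f java-demo.yaml

# The pod should now list two containers (java-demo plus the injected otc-container)
kubectl get pod -l app=java-demo \
  -o jsonpath='{.items[0].spec.containers[*].name}{"\n"}'

# The Java agent is delivered by an injected init container, hence the
# Init:0/1 status in the rollout output below
kubectl get pod -l app=java-demo \
  -o jsonpath='{.items[0].spec.initContainers[*].name}{"\n"}'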
root@k8s01:~/helm/opentelemetry# kubectl get pods
NAME                           READY   STATUS     RESTARTS        AGE
java-demo-5cdd74d47-vmqqx      0/2     Init:0/1   0               6s
java-demo-5f4d989b88-xrzg7     1/1     Running    0               42m
my-sonarqube-postgresql-0      1/1     Running    8 (2d21h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending    0               6d6h
python-demo-69c56c549c-jcgmj   1/1     Running    0               16m
redis-5ff4857944-v2vz5         1/1     Running    5 (2d21h ago)   6d2h
root@k8s01:~/helm/opentelemetry# kubectl get pods -w
NAME                           READY   STATUS            RESTARTS        AGE
java-demo-5cdd74d47-vmqqx      0/2     PodInitializing   0               9s
java-demo-5f4d989b88-xrzg7     1/1     Running           0               42m
my-sonarqube-postgresql-0      1/1     Running           8 (2d21h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending           0               6d6h
python-demo-69c56c549c-jcgmj   1/1     Running           0               17m
redis-5ff4857944-v2vz5         1/1     Running           5 (2d21h ago)   6d2h
java-demo-5cdd74d47-vmqqx      2/2     Running           0               23s
java-demo-5f4d989b88-xrzg7     1/1     Terminating       0               43m
java-demo-5f4d989b88-xrzg7     0/1     Terminating       0               43m
java-demo-5f4d989b88-xrzg7     0/1     Terminating       0               43m
root@k8s01:~/helm/opentelemetry# kubectl get opentelemetrycollectors -A
NAMESPACE       NAME      MODE         VERSION   READY   AGE     IMAGE                                                                                    MANAGEMENT
opentelemetry   center    deployment   0.127.0   1/1     3h22m   registry.cn-guangzhou.aliyuncs.com/xingcangku/opentelemetry-collector-0.127.0:0.127.0   managed
opentelemetry   sidecar   sidecar      0.127.0           3h19m                                                                                            managed
root@k8s01:~/helm/opentelemetry# kubectl get instrumentations -A
NAMESPACE       NAME                   AGE     ENDPOINT   SAMPLER     SAMPLER ARG
opentelemetry   java-instrumentation   2m26s              always_on

# Check the sidecar logs: it started normally and is sending span data
root@k8s01:~/helm/opentelemetry# kubectl logs java-demo-5cdd74d47-vmqqx -c otc-container
2025-06-14T15:31:35.013Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}}
2025-06-14T15:31:35.014Z debug builders/builders.go:24 Stable component. {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z info builders/builders.go:26 Development component. May change in the future. {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z debug builders/builders.go:24 Beta component. May change in the future. {"resource": {}, "otelcol.component.id": "batch", "otelcol.component.kind": "processor", "otelcol.pipeline.id": "traces", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z debug builders/builders.go:24 Stable component. {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z debug otlpreceiver@v0.127.0/otlp.go:58 created signal-agnostic logger {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver"}
2025-06-14T15:31:35.021Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8}
2025-06-14T15:31:35.021Z info extensions/extensions.go:41 Starting extensions... {"resource": {}}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:176 [core] original dial target is: "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:459 [core] [Channel #1]Channel created {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:207 [core] [Channel #1]parsed dial target is: resolver.Target{URL:url.URL{Scheme:"passthrough", Opaque:"", User:(*url.Userinfo)(nil), Host:"", Path:"/center-collector.opentelemetry.svc:4317", RawPath:"", OmitHost:false, ForceQuery:false, RawQuery:"", Fragment:"", RawFragment:""}} {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:208 [core] [Channel #1]Channel authority set to "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.022Z info grpc@v1.72.1/resolver_wrapper.go:210 [core] [Channel #1]Resolver state updated: {"Addresses": [{"Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null}], "Endpoints": [{"Addresses": [{"Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null}], "Attributes": null}], "ServiceConfig": null, "Attributes": null} (resolver returned new addresses) {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.022Z info grpc@v1.72.1/balancer_wrapper.go:122 [core] [Channel #1]Channel switches to new LB policy "pick_first" {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.023Z info gracefulswitch/gracefulswitch.go:194 [pick-first-leaf-lb] [pick-first-leaf-lb 0xc000bc6090] Received new config {"shuffleAddressList": false}, resolver state {"Addresses": [{"Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null}], "Endpoints": [{"Addresses": [{"Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null}], "Attributes": null}], "ServiceConfig": null, "Attributes": null} {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.023Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to CONNECTING {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.023Z info grpc@v1.72.1/balancer_wrapper.go:195 [core] [Channel #1 SubChannel #2]Subchannel created {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.023Z info grpc@v1.72.1/clientconn.go:364 [core] [Channel #1]Channel exiting idle mode {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.023Z info grpc@v1.72.1/clientconn.go:1224 [core] [Channel #1 SubChannel #2]Subchannel Connectivity change to CONNECTING {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.024Z info grpc@v1.72.1/clientconn.go:1343 [core] [Channel #1 SubChannel #2]Subchannel picks a new address "center-collector.opentelemetry.svc:4317" to connect {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.024Z info grpc@v1.72.1/server.go:690 [core] [Server #3]Server created {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.024Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"}
2025-06-14T15:31:35.025Z info grpc@v1.72.1/server.go:886 [core] [Server #3 ListenSocket #4]ListenSocket created {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.025Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"}
2025-06-14T15:31:35.026Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. {"resource": {}}
2025-06-14T15:31:35.034Z info grpc@v1.72.1/clientconn.go:1224 [core] [Channel #1 SubChannel #2]Subchannel Connectivity change to READY {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.034Z info pickfirstleaf/pickfirstleaf.go:197 [pick-first-leaf-lb] [pick-first-leaf-lb 0xc000bc6090] SubConn 0xc0008e1db0 reported connectivity state READY and the health listener is disabled. Transitioning SubConn to READY. {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.034Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to READY {"resource": {}, "grpc_log": true}

# Check the center collector pod and its logs
root@k8s01:~/helm/opentelemetry# kubectl get pod -n opentelemetry
NAME                                READY   STATUS    RESTARTS   AGE
center-collector-78f7bbdf45-j798s   1/1     Running   0          3h24m
root@k8s01:~/helm/opentelemetry# kubectl logs -n opentelemetry center-collector-78f7bbdf45-j798s
2025-06-14T12:09:21.290Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}}
2025-06-14T12:09:21.291Z info builders/builders.go:26 Development component. May change in the future. {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-06-14T12:09:21.294Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8}
2025-06-14T12:09:21.294Z info extensions/extensions.go:41 Starting extensions... {"resource": {}}
2025-06-14T12:09:21.294Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"}
2025-06-14T12:09:21.295Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"}
2025-06-14T12:09:21.295Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. {"resource": {}}
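The debug exporter on the center collector only logs Traces batches once the application actually serves requests. A quick way to generate some test traffic, sketched under the assumption that java-demo.local.cn resolves (for example via an /etc/hosts entry) to a node where Traefik listens on the web entry point, port 8000 per the comment in the IngressRoute above:

# Assumption: 192.168.3.131 is a node running Traefik; adjust the address
# and the port (use the NodePort if Traefik is exposed that way).
echo "192.168.3.131 java-demo.local.cn" | sudo tee -a /etc/hosts

# Every request produces spans that the injected agent exports to the
# sidecar, which forwards them to the center collector.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" "http://java-demo.local.cn:8000/"
done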
2. Automatic instrumentation for the Python application

Like the Java application, the Python application also supports automatic instrumentation. OpenTelemetry provides the opentelemetry-instrument CLI tool, which injects auto-instrumentation at Python startup via sitecustomize or environment variables. First create a python-instrumentation resource:

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation    # Instrumentation resource, used for language auto-injection
metadata:
  name: python-instrumentation    # name of this Instrumentation resource (referenced by Deployments etc.)
  namespace: opentelemetry
spec:
  propagators:      # trace context propagation formats, several are supported
  - tracecontext    # W3C Trace Context (the most widely used cross-service format)
  - baggage         # propagates user-defined context key/value pairs
  - b3              # Zipkin B3 headers (for compatibility with Zipkin environments)
  sampler:          # sampling strategy (decides whether a trace is collected)
    type: always_on # sample every request (suitable for test/debug environments)
  python:
    image: registry.cn-guangzhou.aliyuncs.com/xingcangku/autoinstrumentation-python:latest
    env:
    - name: OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED    # enable automatic instrumentation of logging
      value: "true"
    - name: OTEL_PYTHON_LOG_CORRELATION    # inject trace context into log records
      value: "true"
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: http://center-collector.opentelemetry.svc:4318

root@k8s01:~/helm/opentelemetry# cat new-python-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-demo
spec:
  selector:
    matchLabels:
      app: python-demo
  template:
    metadata:
      labels:
        app: python-demo
      annotations:
        instrumentation.opentelemetry.io/inject-python: "opentelemetry/python-instrumentation"    # name of the Instrumentation resource
        sidecar.opentelemetry.io/inject: "opentelemetry/sidecar"    # inject a sidecar-mode OpenTelemetry Collector
    spec:
      containers:
      - name: python-demo
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/python-demoapp:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: "500Mi"
            cpu: "200m"
        ports:
        - containerPort: 5000

root@k8s03:~# kubectl get pods
NAME                           READY   STATUS        RESTARTS        AGE
java-demo-5559f949b9-74p68     2/2     Running       0               2m14s
java-demo-5559f949b9-kwgpc     0/2     Terminating   0               14m
my-sonarqube-postgresql-0      1/1     Running       8 (2d22h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending       0               6d7h
python-demo-599fc7f8d6-lbhnr   2/2     Running       0               20m
redis-5ff4857944-v2vz5         1/1     Running       5 (2d22h ago)   6d3h

# The python-demo sidecar starts and connects to the center collector just like the Java one; key lines:
root@k8s03:~# kubectl logs python-demo-599fc7f8d6-lbhnr -c otc-container
2025-06-14T15:57:12.951Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}}
2025-06-14T15:57:12.953Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8}
2025-06-14T15:57:12.953Z info grpc@v1.72.1/clientconn.go:176 [core] original dial target is: "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true}
2025-06-14T15:57:12.954Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"}
2025-06-14T15:57:12.954Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"}
2025-06-14T15:57:12.954Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. {"resource": {}}
2025-06-14T15:57:12.962Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to READY {"resource": {}, "grpc_log": true}

# The center collector is now receiving traces
root@k8s03:~# kubectl logs -n opentelemetry center-collector-78f7bbdf45-j798s
2025-06-14T12:09:21.290Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}}
2025-06-14T12:09:21.294Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8}
2025-06-14T12:09:21.294Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"}
2025-06-14T12:09:21.295Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"}
2025-06-14T12:09:21.295Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. {"resource": {}}
2025-06-14T16:05:11.811Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter"}
2025-06-14T16:05:16.636Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter"}
2025-06-14T16:05:26.894Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter"}
2025-06-14T16:18:11.294Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter"}
2025-06-14T16:18:21.350Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter"}
2025-06-14
OpenTelemetry Deployment
We recommend deploying with the OpenTelemetry Operator, because it makes it easy to deploy and manage OpenTelemetry collectors and can also auto-instrument applications. See the documentation: https://opentelemetry.io/docs/platforms/kubernetes/operator/

I. Deploy cert-manager

The Operator uses admission webhooks to validate and mutate resources via HTTP callbacks. Kubernetes requires webhook services to use TLS, so the Operator needs certificates issued for its webhook server, which is why cert-manager must be installed first.

# wget https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
# kubectl apply -f cert-manager.yaml
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get -n cert-manager pod
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7bd494778-gs44k               1/1     Running   0          37s
cert-manager-cainjector-76474c8c48-w9r5p   1/1     Running   0          37s
cert-manager-webhook-6797c49f67-thvcz      1/1     Running   0          37s

II. Deploy the Operator

Using OpenTelemetry on Kubernetes mostly comes down to deploying the OpenTelemetry Collector.

# wget https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
# kubectl apply -f opentelemetry-operator.yaml
# kubectl get pod -n opentelemetry-operator-system
NAME                                                          READY   STATUS    RESTARTS   AGE
opentelemetry-operator-controller-manager-6d94c5db75-cz957    2/2     Running   0          74s
# kubectl get crd | grep opentelemetry
instrumentations.opentelemetry.io          2025-04-21T09:48:53Z
opampbridges.opentelemetry.io              2025-04-21T09:48:54Z
opentelemetrycollectors.opentelemetry.io   2025-04-21T09:48:54Z
targetallocators.opentelemetry.io          2025-04-21T09:48:54Z

root@k8s01:~/helm/opentelemetry/cert-manager# kubectl apply -f opentelemetry-operator.yaml
namespace/opentelemetry-operator-system created
customresourcedefinition.apiextensions.k8s.io/instrumentations.opentelemetry.io created
customresourcedefinition.apiextensions.k8s.io/opampbridges.opentelemetry.io created
customresourcedefinition.apiextensions.k8s.io/opentelemetrycollectors.opentelemetry.io created
customresourcedefinition.apiextensions.k8s.io/targetallocators.opentelemetry.io created
serviceaccount/opentelemetry-operator-controller-manager created
role.rbac.authorization.k8s.io/opentelemetry-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/opentelemetry-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/opentelemetry-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/opentelemetry-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/opentelemetry-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/opentelemetry-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/opentelemetry-operator-proxy-rolebinding created
service/opentelemetry-operator-controller-manager-metrics-service created
service/opentelemetry-operator-webhook-service created
deployment.apps/opentelemetry-operator-controller-manager created
Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
certificate.cert-manager.io/opentelemetry-operator-serving-cert created
issuer.cert-manager.io/opentelemetry-operator-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/opentelemetry-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/opentelemetry-operator-validating-webhook-configuration created
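The Operator's webhooks cannot serve until cert-manager has issued their certificate, so resources applied immediately afterwards can be rejected for a short while. A small wait helps; a sketch using the resource names from the apply output above:

# Wait for cert-manager to issue the webhook serving certificate
kubectl -n opentelemetry-operator-system wait --for=condition=Ready \
  certificate/opentelemetry-operator-serving-cert --timeout=120s

# Wait for the controller (which serves the webhooks) to become available
kubectl -n opentelemetry-operator-system rollout status \
  deployment/opentelemetry-operator-controller-manager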
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get pods -n opentelemetry-operator-system
NAME                                                        READY   STATUS    RESTARTS   AGE
opentelemetry-operator-controller-manager-f78fc55f7-xtjk2   2/2     Running   0          107s
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get crd | grep opentelemetry
instrumentations.opentelemetry.io          2025-06-14T11:30:01Z
opampbridges.opentelemetry.io              2025-06-14T11:30:01Z
opentelemetrycollectors.opentelemetry.io   2025-06-14T11:30:02Z
targetallocators.opentelemetry.io          2025-06-14T11:30:02Z

III. Deploy the Collector (center)

Next we deploy a stripped-down OpenTelemetry Collector that accepts OTLP-format trace data over gRPC or HTTP, runs it through batching, and prints it to the log for debugging.

root@k8s01:~/helm/opentelemetry/cert-manager# cat center-collector.yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: center              # this Collector is named center
  namespace: opentelemetry
spec:
  image: registry.cn-guangzhou.aliyuncs.com/xingcangku/opentelemetry-collector-0.127.0:0.127.0
  replicas: 1               # one replica
  config:                   # Collector configuration
    receivers:              # receivers ingest telemetry (traces, metrics, logs)
      otlp:                 # OTLP (OpenTelemetry Protocol) receiver
        protocols:          # protocols on which to accept data
          grpc:
            endpoint: 0.0.0.0:4317    # gRPC
          http:
            endpoint: 0.0.0.0:4318    # HTTP
    processors:             # processors transform the collected data
      batch: {}             # batching sends data in chunks, improving efficiency
    exporters:              # exporters ship processed data to backends
      debug: {}             # debug exporter prints data to the terminal (for testing/debugging)
    service:                # service section
      pipelines:            # processing pipelines
        traces:             # the traces pipeline
          receivers: [otlp]     # receive via OTLP
          processors: [batch]   # batch it
          exporters: [debug]    # print it to the terminal

root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get pod -n opentelemetry
NAME                                READY   STATUS        RESTARTS   AGE
center-collector-78f7bbdf45-j798s   1/1     Running       0          43s
center-collector-7b7b8b9b97-qwhdr   0/1     Terminating   0          12m
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get svc -n opentelemetry
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
center-collector              ClusterIP   10.105.241.233   <none>        4317/TCP,4318/TCP   49s
center-collector-headless     ClusterIP   None             <none>        4317/TCP,4318/TCP   49s
center-collector-monitoring   ClusterIP   10.96.61.65      <none>        8888/TCP            49s

IV. Deploy the Collector (agent)

We deploy the OpenTelemetry agent in sidecar mode. This agent forwards application traces to the central OpenTelemetry collector we just deployed.

root@k8s01:~/helm/opentelemetry/cert-manager# vi sidecar-collector.yaml
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl apply -f sidecar-collector.yaml
opentelemetrycollector.opentelemetry.io/sidecar created
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get opentelemetrycollectors -n opentelemetry
NAME      MODE         VERSION   READY   AGE    IMAGE                                                                                    MANAGEMENT
center    deployment   0.127.0   1/1     3m3s   registry.cn-guangzhou.aliyuncs.com/xingcangku/opentelemetry-collector-0.127.0:0.127.0   managed
sidecar   sidecar      0.127.0           7s                                                                                              managed
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get pod -n opentelemetry
NAME                                READY   STATUS        RESTARTS   AGE
center-collector-78f7bbdf45-j798s   1/1     Running       0          3m31s
center-collector-7b7b8b9b97-qwhdr   0/1     Terminating   0          15m

The sidecar agent only starts together with an application, so creating it does not start a pod by itself. We need to create an application that opts into this sidecar-mode collector.
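The contents of sidecar-collector.yaml are not reproduced above. Judging from the sidecar logs in the instrumentation article, an OTLP receiver on 4317/4318, a batch processor, a debug exporter, and a gRPC connection to center-collector.opentelemetry.svc:4317, a plausible sketch looks like this (treat the details, in particular the TLS setting, as assumptions):

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: sidecar
  namespace: opentelemetry
spec:
  mode: sidecar              # injected into pods, not run as its own Deployment
  config:
    receivers:
      otlp:                  # the auto-instrumentation agents send OTLP to localhost
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch: {}
    exporters:
      debug: {}
      otlp:                  # forward to the central collector over gRPC
        endpoint: center-collector.opentelemetry.svc:4317
        tls:
          insecure: true     # assumption: no TLS between sidecar and center
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug, otlp]

With mode: sidecar the Operator does not create pods for this Collector; it injects the container into any pod annotated with sidecar.opentelemetry.io/inject: "opentelemetry/sidecar", which is exactly how the demo deployments opt in.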
2025-06-08
Adding routes to Traefik
I. Route configuration (IngressRouteTCP)

1. TCP route (without a TLS certificate)

# Add the ports in the Traefik yaml file
ports:
- name: metrics
  containerPort: 9100
  protocol: TCP
- name: traefik
  containerPort: 9000
  protocol: TCP
- name: web
  containerPort: 8000
  protocol: TCP
- name: websecure
  containerPort: 8443
  protocol: TCP
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
    - NET_BIND_SERVICE
  readOnlyRootFilesystem: false
volumeMounts:
- name: data
  mountPath: /data
  readOnly: false    # allow writes
args:
- "--global.checknewversion"
- "--global.sendanonymoususage"
- "--entryPoints.metrics.address=:9100/tcp"
- "--entryPoints.traefik.address=:9000/tcp"
- "--entryPoints.web.address=:8000/tcp"
- "--entryPoints.websecure.address=:8443/tcp"
- "--api.dashboard=true"
- "--entryPoints.redistcp.address=:6379/tcp"    # add the entry point
- "--ping=true"
- "--metrics.prometheus=true"
- "--metrics.prometheus.entrypoint=metrics"
- "--providers.kubernetescrd"
- "--providers.kubernetescrd.allowEmptyServices=true"
- "--providers.kubernetesingress"
- "--providers.kubernetesingress.allowEmptyServices=true"
- "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik-release"
- "--entryPoints.websecure.http.tls=true"
- "--log.level=INFO"

# Example
root@k8s01:~/helm/traefik/test# cat redis.yaml
# redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/redis:1.0
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 6379
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

root@k8s01:~/helm/traefik/test# cat redis-IngressRoute.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
  namespace: default    # must be in Redis's namespace
spec:
  entryPoints:
  - redistcp            # use the newly defined entry point
  routes:
  - match: HostSNI(`*`)
    services:
    - name: redis
      port: 6379

# Query the port that 6379 is mapped to
kubectl -n traefik get endpoints

On a client outside the cluster, add a hosts entry: 192.168.93.128 redis.test.com (any domain works, as long as it resolves to a node where Traefik runs). Then access Redis with redis-cli, remembering to specify the port mapped for the redistcp entry point.

# redis-cli -h redis.test.com -p <mapped port from the query above>
redis.test.com:9200> set key_a value_a
OK
redis.test.com:9200> get key_a
"value_a"
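Instead of reading endpoints, the mapped port can also be taken straight from the published Traefik Service. A sketch: the service name traefik/traefik-release is inferred from the publishedservice argument above, and the port name redistcp is an assumption.

# Show all ports of the Traefik service (look for the redistcp entry)
kubectl -n traefik get svc traefik-release

# Or extract just the NodePort mapped to the redistcp entry point
kubectl -n traefik get svc traefik-release \
  -o jsonpath='{.spec.ports[?(@.name=="redistcp")].nodePort}{"\n"}'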
2. TCP route (with a TLS certificate)

Sometimes security requirements mean TCP traffic must also be encrypted with TLS certificates; Redis supports TLS connections since 6.0.

root@k8s01:~/helm/traefik/test/redis-ssl# openssl genrsa -out ca.key 4096
root@k8s01:~/helm/traefik/test/redis-ssl# openssl req -x509 -new -nodes -sha256 -key ca.key -days 3650 -subj '/O=Redis Test/CN=Certificate Authority' -out ca.crt
root@k8s01:~/helm/traefik/test/redis-ssl# openssl genrsa -out redis.key 2048
root@k8s01:~/helm/traefik/test/redis-ssl# openssl req -new -sha256 -key redis.key -subj '/O=Redis Test/CN=Server' | openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAserial ca.txt -CAcreateserial -days 365 -out redis.crt
Signature ok
subject=O = Redis Test, CN = Server
Getting CA Private Key
root@k8s01:~/helm/traefik/test/redis-ssl# openssl dhparam -out redis.dh 2048
root@k8s01:~/helm/traefik/test/redis-ssl# ll
total 24
-rw-r--r-- 1 root root 1895 Jun  8 06:56 ca.crt
-rw------- 1 root root 3243 Jun  8 06:55 ca.key
-rw-r--r-- 1 root root   41 Jun  8 06:57 ca.txt
-rw-r--r-- 1 root root 1407 Jun  8 06:57 redis.crt
-rw-r--r-- 1 root root  424 Jun  8 06:57 redis.dh
-rw------- 1 root root 1679 Jun  8 06:56 redis.key

ca.crt     # root certificate (public)
ca.key     # CA private key (keep strictly secret)
ca.txt     # certificate serial number record
redis.key  # Redis server private key (keep secret)
redis.crt  # Redis server certificate (public)
redis.dh   # Diffie-Hellman parameters (public)

What each command does:

1. Generate the CA private key: openssl genrsa -out ca.key 4096
   genrsa generates an RSA private key; -out ca.key writes it to ca.key; 4096 is the key length (high security). Output: ca.key (the CA private key).

2. Generate the CA root certificate: openssl req -x509 -new -nodes -sha256 -key ca.key -days 3650 -subj '/O=Redis Test/CN=Certificate Authority' -out ca.crt
   req -x509 creates a self-signed certificate (instead of a signing request); -new generates a new certificate; -nodes stores the key unencrypted; -sha256 selects SHA-256; -key ca.key points at the CA key; -days 3650 gives ten years of validity; -subj sets the subject (/O organization name, /CN common name marking it as a CA). Output: ca.crt (the trusted root certificate).

3. Generate the Redis server private key: openssl genrsa -out redis.key 2048
   Same as step 1, but 2048 bits (shorter, better performance). Output: redis.key.

4. Generate and sign the Redis server certificate:
   openssl req -new -sha256 -key redis.key -subj '/O=Redis Test/CN=Server' | openssl x509 -req -sha256 -CA ca.crt -CAkey ca.key -CAserial ca.txt -CAcreateserial -days 365 -out redis.crt
   The first half (req -new) creates a certificate signing request (CSR) with CN=Server identifying the Redis server. The second half (x509 -req) signs it: -CA and -CAkey point at the CA certificate and key, -CAserial ca.txt records serial numbers (-CAcreateserial creates the file if it does not exist), -days 365 gives one year of validity. Output: redis.crt (the server certificate) and, on first run, ca.txt (the serial record).

5. Generate Diffie-Hellman parameters: openssl dhparam -out redis.dh 2048
   dhparam creates key-exchange parameters of the given length, used for perfect forward secrecy (PFS). Output: redis.dh.

Create a Secret of type tls containing redis.crt and redis.key:

root@k8s01:~/helm/traefik/test/redis-ssl# kubectl create secret tls redis-tls --key=redis.key --cert=redis.crt
secret/redis-tls created
root@k8s01:~/helm/traefik/test/redis-ssl# kubectl describe secrets redis-tls
Name:         redis-tls
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:         kubernetes.io/tls
Data
====
tls.crt:  1407 bytes
tls.key:  1679 bytes

Create a Secret of type generic containing ca.crt:

root@k8s01:~/helm/traefik/test/redis-ssl# kubectl create secret generic redis-ca --from-file=ca.crt=ca.crt
secret/redis-ca created
root@k8s01:~/helm/traefik/test/redis-ssl# kubectl describe secrets redis-ca
Name:         redis-ca
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:         Opaque
Data
====
ca.crt:  1895 bytes

Update the Redis configuration to enable TLS and mount the certificate files:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis
  labels:
    app: redis
data:
  redis.conf: |-
    port 0
    tls-port 6379
    tls-cert-file /etc/tls/tls.crt
    tls-key-file /etc/tls/tls.key
    tls-ca-cert-file /etc/ca/ca.crt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: registry.cn-guangzhou.aliyuncs.com/xingcangku/redis:1.0
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 6379
          protocol: TCP
        volumeMounts:
        - name: config
          mountPath: /etc/redis
        - name: tls
          mountPath: /etc/tls
        - name: ca
          mountPath: /etc/ca
        args:
        - /etc/redis/redis.conf
      volumes:
      - name: config
        configMap:
          name: redis
      - name: tls
        secret:
          secretName: redis-tls
      - name: ca
        secret:
          secretName: redis-ca
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

root@k8s01:~/helm/traefik/test# kubectl apply -f redis.yaml
configmap/redis created
deployment.apps/redis configured
service/redis unchanged

# Generate a client certificate
root@k8s01:~/helm/traefik/test/redis-ssl# openssl req -newkey rsa:4096 -nodes \
  -keyout client.key \
  -subj "/CN=redis-client" \
  -out client.csr

# Sign it with the same CA
root@k8s01:~/helm/traefik/test/redis-ssl# openssl x509 -req -in client.csr \
  -CA ca.crt \
  -CAkey ca.key \
  -CAcreateserial \
  -out client.crt \
  -days 365
Signature ok
subject=CN = redis-client
Getting CA Private Key
root@k8s01:~/helm/traefik/test/redis-ssl# scp {client.crt,client.key,ca.crt} root@192.168.3.131:/tmp/redis-ssl/

root@k8s01:~/helm/traefik/test# cat redis-IngressRoute-tls.yaml
# redis-IngressRoute-tls.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
spec:
  entryPoints:
  - tcpep
  routes:
  - match: HostSNI(`redis.test.com`)
    services:
    - name: redis
      port: 6379
  tls:
    passthrough: true    # important: this is the correct place to enable passthrough

# Test the Redis connection from a machine outside the cluster.
# Older redis-cli builds do not support TLS; compile Redis 6.0+ with TLS enabled.
root@ubuntu02:~/redis-stable# ./src/redis-cli -h redis.test.com -p 31757 --tls --sni redis.test.com --cert /tmp/redis-ssl/client.crt --key /tmp/redis-ssl/client.key --cacert /tmp/redis-ssl/ca.crt
redis.test.com:31757> get key
"1"
redis.test.com:31757> set v1=1
(error) ERR wrong number of arguments for 'set' command
redis.test.com:31757> set key 1
OK

# Configuration comparison: the first configuration works, the second does not.
# Working:
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
spec:
  entryPoints:
  - tcpep
  routes:
  - match: HostSNI(`redis.test.com`)
    services:
    - name: redis
      port: 6379
  tls:
    passthrough: true

# Failing:
apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis
spec:
  entryPoints:
  - tcpep
  routes:
  - match: HostSNI(`redis.test.com`)
    services:
    - name: redis
      port: 6379
  tls:
    secretName: redis-tls

Working configuration:
client → [TLS handshake] → Traefik (passthrough) → Redis server (handles TLS itself)

Failing configuration:
client → [TLS handshake] → Traefik (terminates TLS) → Redis server (receives plaintext but expects TLS)

In the first configuration the TLS handshake passes straight through Traefik to Redis. Traefik never touches the TLS traffic; it forwards the encrypted bytes as-is. The client performs the handshake directly with the Redis server, which presents its own certificate, and the client verifies that certificate against its own ca.crt.

In the second configuration the client negotiates TLS with Traefik. Traefik terminates TLS at the entry point and forwards plaintext to Redis. The client therefore validates Traefik's certificate (from the redis-tls secret), not the Redis server's, while Redis receives unencrypted traffic it expects to be TLS.

The root cause of the failure is a certificate mismatch: with secretName: redis-tls, Traefik serves a certificate at the edge, but the client's CA file is meant to verify the Redis server's own certificate; the certificate the client sees from Traefik (CN=TRAEFIK DEFAULT CERT) does not pass the client's CA verification.
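A direct way to see the passthrough-versus-termination difference is to inspect which certificate the client is actually handed during the handshake. A sketch with openssl s_client, reusing the NodePort 31757 from the session above:

# With tls.passthrough: true, the chain comes from Redis itself and verifies
# against our own CA (subject O = Redis Test, CN = Server):
openssl s_client -connect redis.test.com:31757 \
  -servername redis.test.com -CAfile /tmp/redis-ssl/ca.crt </dev/null \
  | openssl x509 -noout -subject -issuer

# With tls.secretName, the same command would print whatever certificate
# Traefik serves at the edge, which is exactly why the client-side CA check fails.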
2025-06-05
SonarQube deployment and installation
一、部署1、下载下载地址:https://github.com/SonarSource/helm-chart-sonarqube/releases/download/sonarqube-2025.3.0-sonarqube-dce-2025.3.0/sonarqube-2025.3.0.tgz 解压:tar zxav sonarqube-2025.3.0.tgz 把模板输出: helm template my-sonarqube . > test.yaml 2、yaml文件--- # Source: sonarqube/charts/postgresql/templates/secrets.yaml apiVersion: v1 kind: Secret metadata: name: my-sonarqube-postgresql labels: app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.15.0 app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/managed-by: Helm namespace: default type: Opaque data: postgresql-postgres-password: "Tlp4MmJXa3hKbA==" postgresql-password: "c29uYXJQYXNz" --- # Source: sonarqube/templates/secret.yaml --- apiVersion: v1 kind: Secret metadata: name: my-sonarqube-sonarqube-monitoring-passcode labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm type: Opaque data: SONAR_WEB_SYSTEMPASSCODE: "MzMwNzA1OTVBYmNA" --- # Source: sonarqube/templates/secret.yaml --- apiVersion: v1 kind: Secret metadata: name: my-sonarqube-sonarqube-http-proxies labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm type: Opaque stringData: PLUGINS-HTTP-PROXY: "" PLUGINS-HTTPS-PROXY: "" PLUGINS-NO-PROXY: "" PROMETHEUS-EXPORTER-HTTP-PROXY: "" PROMETHEUS-EXPORTER-HTTPS-PROXY: "" PROMETHEUS-EXPORTER-NO-PROXY: "" --- # Source: sonarqube/templates/config.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-config labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm data: sonar.properties: | --- # Source: sonarqube/templates/init-fs.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-init-fs labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm data: init_fs.sh: |- chown -R 1000:0 /opt/sonarqube/data chown -R 1000:0 /opt/sonarqube/temp chown -R 1000:0 /opt/sonarqube/logs --- # Source: sonarqube/templates/init-sysctl.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-init-sysctl labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm data: init_sysctl.sh: |- set -o errexit set -o xtrace vmMaxMapCount=524288 if [[ "$(sysctl -n vm.max_map_count)" -lt $vmMaxMapCount ]]; then sysctl -w vm.max_map_count=$vmMaxMapCount if [[ "$(sysctl -n vm.max_map_count)" -lt $vmMaxMapCount ]]; then echo "Failed to set initSysctl.vmMaxMapCount"; exit 1 fi fi fsFileMax=131072 if [[ "$(sysctl -n fs.file-max)" -lt $fsFileMax ]]; then sysctl -w fs.file-max=$fsFileMax if [[ "$(sysctl -n fs.file-max)" -lt $fsFileMax ]]; then echo "Failed to set initSysctl.fsFileMax"; exit 1 fi fi nofile=131072 if [[ "$(ulimit -n)" != "unlimited" ]]; then if [[ "$(ulimit -n)" -lt $nofile ]]; then ulimit -n $nofile if [[ "$(ulimit -n)" -lt $nofile ]]; then echo "Failed to set initSysctl.nofile"; exit 1 fi fi fi nproc=8192 if [[ "$(ulimit -u)" != "unlimited" ]]; then if [[ "$(ulimit -u)" -lt $nproc ]]; then ulimit -u $nproc if [[ "$(ulimit -u)" -lt $nproc ]]; then echo "Failed to set initSysctl.nproc"; exit 1 fi fi fi --- # Source: sonarqube/templates/install-plugins.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-install-plugins labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm data: install_plugins.sh: |- --- # Source: sonarqube/templates/jdbc-config.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-jdbc-config labels: app: sonarqube chart: sonarqube-2025.3.0 release: 
my-sonarqube heritage: Helm data: SONAR_JDBC_USERNAME: "sonarUser" SONAR_JDBC_URL: "jdbc:postgresql://my-sonarqube-postgresql:5432/sonarDB" --- # Source: sonarqube/templates/prometheus-ce-config.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-prometheus-ce-config labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm data: prometheus-ce-config.yaml: |- rules: - pattern: .* --- # Source: sonarqube/templates/prometheus-config.yaml apiVersion: v1 kind: ConfigMap metadata: name: my-sonarqube-sonarqube-prometheus-config labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm data: prometheus-config.yaml: |- rules: - pattern: .* --- # Source: sonarqube/templates/pvc.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-sonarqube-sonarqube labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" storageClassName: "ceph-cephfs" --- # Source: sonarqube/charts/postgresql/templates/svc-headless.yaml apiVersion: v1 kind: Service metadata: name: my-sonarqube-postgresql-headless labels: app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.15.0 app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/managed-by: Helm # Use this annotation in addition to the actual publishNotReadyAddresses # field below because the annotation will stop being respected soon but the # field is broken in some versions of Kubernetes: # https://github.com/kubernetes/kubernetes/issues/58662 service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" namespace: default spec: type: ClusterIP clusterIP: None # We want all pods in the StatefulSet to have their addresses published for # the sake of the other Postgresql pods even before they're ready, since they # have to be able to talk to each other in order to become ready. 
publishNotReadyAddresses: true ports: - name: tcp-postgresql port: 5432 targetPort: tcp-postgresql selector: app.kubernetes.io/name: postgresql app.kubernetes.io/instance: my-sonarqube --- # Source: sonarqube/charts/postgresql/templates/svc.yaml apiVersion: v1 kind: Service metadata: name: my-sonarqube-postgresql labels: app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.15.0 app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/managed-by: Helm annotations: namespace: default spec: type: ClusterIP ports: - name: tcp-postgresql port: 5432 targetPort: tcp-postgresql selector: app.kubernetes.io/name: postgresql app.kubernetes.io/instance: my-sonarqube role: primary --- # Source: sonarqube/templates/service.yaml apiVersion: v1 kind: Service metadata: name: my-sonarqube-sonarqube labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm spec: type: ClusterIP ports: - port: 9000 targetPort: http protocol: TCP name: http selector: app: sonarqube release: my-sonarqube --- # Source: sonarqube/charts/postgresql/templates/statefulset.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: my-sonarqube-postgresql labels: app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.15.0 app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: primary annotations: namespace: default spec: serviceName: my-sonarqube-postgresql-headless replicas: 1 updateStrategy: type: RollingUpdate selector: matchLabels: app.kubernetes.io/name: postgresql app.kubernetes.io/instance: my-sonarqube role: primary template: metadata: name: my-sonarqube-postgresql labels: app.kubernetes.io/name: postgresql helm.sh/chart: postgresql-10.15.0 app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/managed-by: Helm role: primary app.kubernetes.io/component: primary spec: affinity: podAffinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/name: postgresql app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/component: primary namespaces: - "default" topologyKey: kubernetes.io/hostname weight: 1 nodeAffinity: securityContext: fsGroup: 1001 automountServiceAccountToken: false containers: - name: my-sonarqube-postgresql image: registry.cn-guangzhou.aliyuncs.com/xingcangku/bitnami-postgresql:11.14.0-debian-10-r22 imagePullPolicy: "IfNotPresent" resources: limits: cpu: 2 memory: 2Gi requests: cpu: 100m memory: 200Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL runAsNonRoot: true runAsUser: 1001 seccompProfile: type: RuntimeDefault env: - name: BITNAMI_DEBUG value: "false" - name: POSTGRESQL_PORT_NUMBER value: "5432" - name: POSTGRESQL_VOLUME_DIR value: "/bitnami/postgresql" - name: PGDATA value: "/bitnami/postgresql/data" - name: POSTGRES_POSTGRES_PASSWORD valueFrom: secretKeyRef: name: my-sonarqube-postgresql key: postgresql-postgres-password - name: POSTGRES_USER value: "sonarUser" - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: my-sonarqube-postgresql key: postgresql-password - name: POSTGRES_DB value: "sonarDB" - name: POSTGRESQL_ENABLE_LDAP value: "no" - name: POSTGRESQL_ENABLE_TLS value: "no" - name: POSTGRESQL_LOG_HOSTNAME value: "false" - name: POSTGRESQL_LOG_CONNECTIONS value: "false" - name: POSTGRESQL_LOG_DISCONNECTIONS value: "false" - name: POSTGRESQL_PGAUDIT_LOG_CATALOG value: "off" - name: POSTGRESQL_CLIENT_MIN_MESSAGES value: "error" - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES value: 
"pgaudit" ports: - name: tcp-postgresql containerPort: 5432 livenessProbe: exec: command: - /bin/sh - -c - exec pg_isready -U "sonarUser" -d "dbname=sonarDB" -h 127.0.0.1 -p 5432 initialDelaySeconds: 30 periodSeconds: 10 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 6 readinessProbe: exec: command: - /bin/sh - -c - -e - | exec pg_isready -U "sonarUser" -d "dbname=sonarDB" -h 127.0.0.1 -p 5432 [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] initialDelaySeconds: 5 periodSeconds: 10 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 6 volumeMounts: - name: dshm mountPath: /dev/shm - name: data mountPath: /bitnami/postgresql subPath: volumes: - name: dshm emptyDir: medium: Memory volumeClaimTemplates: - metadata: name: data spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "2Gi" storageClassName: ceph-cephfs --- # Source: sonarqube/templates/sonarqube-sts.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: my-sonarqube-sonarqube labels: app: sonarqube chart: sonarqube-2025.3.0 release: my-sonarqube heritage: Helm app.kubernetes.io/name: my-sonarqube app.kubernetes.io/instance: my-sonarqube app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: sonarqube app.kubernetes.io/component: my-sonarqube-sonarqube app.kubernetes.io/version: "25.5.0.107428-community" spec: replicas: 1 revisionHistoryLimit: 10 serviceName: my-sonarqube-sonarqube selector: matchLabels: app: sonarqube release: my-sonarqube template: metadata: annotations: checksum/config: 514ba5726581aabed2df14f0c3d95431e4f1150f3ee3c9790dae426c0b0effd3 checksum/init-fs: 2da6aac9b4e90ad2a2853245bcc71bf2b9a53bdf6db658a594551108671976e7 checksum/init-sysctl: a03f942e6089eda338af09ad886a4380f621c295548e9917a0e6113248ebb1aa checksum/plugins: 6b6fe750b5fb43bd030dbbe4e3ece53e5f37f595a480d504dd7e960bd5b9832a checksum/secret: 38377e36e39acacccf767e5fc68414a302d1868b7b9a99cb72e38f229023ca39 checksum/prometheus-config: c831c80bb8be92b75164340491b49ab104f5b865f53618ebcffe35fd03c4c034 checksum/prometheus-ce-config: a481713e44ccc5524e48597df39ba6f9a561fecd8b48fce7f6062602d8229613 labels: app: sonarqube release: my-sonarqube spec: automountServiceAccountToken: false securityContext: fsGroup: 0 initContainers: - name: "wait-for-db" image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true runAsGroup: 0 runAsNonRoot: true runAsUser: 1000 seccompProfile: type: RuntimeDefault command: ["/bin/bash", "-c"] args: ['set -o pipefail;for i in {1..200};do (echo > /dev/tcp/my-sonarqube-postgresql/5432) && exit 0; sleep 2;done; exit 1'] - name: init-sysctl image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community imagePullPolicy: IfNotPresent securityContext: privileged: true readOnlyRootFilesystem: true runAsUser: 0 command: ["/bin/bash", "-e", "/tmp/scripts/init_sysctl.sh"] volumeMounts: - name: init-sysctl mountPath: /tmp/scripts/ env: - name: SONAR_WEB_CONTEXT value: / - name: SONAR_WEB_JAVAOPTS value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml - name: SONAR_CE_JAVAOPTS value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml - name: inject-prometheus-exporter image: 
            registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          command: ["/bin/sh", "-c"]
          args: ["curl -s 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.17.2/jmx_prometheus_javaagent-0.17.2.jar' --output /data/jmx_prometheus_javaagent.jar -v"]
          volumeMounts:
            - mountPath: /data
              name: sonarqube
              subPath: data
          env:
            - name: http_proxy
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-http-proxies
                  key: PROMETHEUS-EXPORTER-HTTP-PROXY
            - name: https_proxy
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-http-proxies
                  key: PROMETHEUS-EXPORTER-HTTPS-PROXY
            - name: no_proxy
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-http-proxies
                  key: PROMETHEUS-EXPORTER-NO-PROXY
            - name: SONAR_WEB_CONTEXT
              value: /
            - name: SONAR_WEB_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
            - name: SONAR_CE_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
        - name: init-fs
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - CHOWN
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
            seccompProfile:
              type: RuntimeDefault
          command: ["sh", "-e", "/tmp/scripts/init_fs.sh"]
          volumeMounts:
            - name: init-fs
              mountPath: /tmp/scripts/
            - mountPath: /opt/sonarqube/data
              name: sonarqube
              subPath: data
            - mountPath: /opt/sonarqube/temp
              name: sonarqube
              subPath: temp
            - mountPath: /opt/sonarqube/logs
              name: sonarqube
              subPath: logs
            - mountPath: /tmp
              name: tmp-dir
            - mountPath: /opt/sonarqube/extensions
              name: sonarqube
              subPath: extensions
      containers:
        - name: sonarqube
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
            - name: monitoring-web
              containerPort: 8000
              protocol: TCP
            - name: monitoring-ce
              containerPort: 8001
              protocol: TCP
          resources:
            limits:
              cpu: 800m
              ephemeral-storage: 512000M
              memory: 6144M
            requests:
              cpu: 400m
              ephemeral-storage: 1536M
              memory: 2048M
          env:
            - name: SONAR_HELM_CHART_VERSION
              value: 2025.3.0
            - name: SONAR_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-postgresql
                  key: postgresql-password
            - name: SONAR_WEB_SYSTEMPASSCODE
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-monitoring-passcode
                  key: SONAR_WEB_SYSTEMPASSCODE
            - name: SONAR_WEB_CONTEXT
              value: /
            - name: SONAR_WEB_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
            - name: SONAR_CE_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
          envFrom:
            - configMapRef:
                name: my-sonarqube-sonarqube-jdbc-config
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  wget --no-proxy --quiet -O /dev/null --timeout=1 --header="X-Sonar-Passcode: $SONAR_WEB_SYSTEMPASSCODE" "http://localhost:9000/api/system/liveness"
            failureThreshold: 6
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  #!/bin/bash
                  # A Sonarqube container is considered ready if the status is UP,
                  # DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING
                  # status about migration are added to prevent the node to be kill while SonarQube is upgrading the database.
                  if wget --no-proxy -qO- http://localhost:9000/api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
                    exit 0
                  fi
                  exit 1
            failureThreshold: 6
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
          startupProbe:
            httpGet:
              scheme: HTTP
              path: /api/system/status
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 24
            timeoutSeconds: 1
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsGroup: 0
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - mountPath: /opt/sonarqube/data
              name: sonarqube
              subPath: data
            - mountPath: /opt/sonarqube/temp
              name: sonarqube
              subPath: temp
            - mountPath: /opt/sonarqube/logs
              name: sonarqube
              subPath: logs
            - mountPath: /tmp
              name: tmp-dir
            - mountPath: /opt/sonarqube/extensions
              name: sonarqube
              subPath: extensions
            - mountPath: /opt/sonarqube/conf/prometheus-config.yaml
              subPath: prometheus-config.yaml
              name: prometheus-config
            - mountPath: /opt/sonarqube/conf/prometheus-ce-config.yaml
              subPath: prometheus-ce-config.yaml
              name: prometheus-ce-config
      serviceAccountName: default
      volumes:
        - name: init-sysctl
          configMap:
            name: my-sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
        - name: init-fs
          configMap:
            name: my-sonarqube-sonarqube-init-fs
            items:
              - key: init_fs.sh
                path: init_fs.sh
        - name: prometheus-config
          configMap:
            name: my-sonarqube-sonarqube-prometheus-config
            items:
              - key: prometheus-config.yaml
                path: prometheus-config.yaml
        - name: prometheus-ce-config
          configMap:
            name: my-sonarqube-sonarqube-prometheus-ce-config
            items:
              - key: prometheus-ce-config.yaml
                path: prometheus-ce-config.yaml
        - name: sonarqube
          persistentVolumeClaim:
            claimName: my-sonarqube-sonarqube
        - name: tmp-dir
          emptyDir: {}
---
# Source: sonarqube/templates/tests/sonarqube-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "my-sonarqube-ui-test"
  annotations:
    "helm.sh/hook": test-success
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
spec:
  automountServiceAccountToken: false
  containers:
    - name: my-sonarqube-ui-test
      image: "registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community"
      imagePullPolicy: IfNotPresent
      command: ['wget']
      args: ['--retry-connrefused', '--waitretry=1', '--timeout=5', '-t', '12', '-qO-', 'my-sonarqube-sonarqube:9000/api/system/status']
      resources:
        limits:
          cpu: 500m
          ephemeral-storage: 1000M
          memory: 200M
        requests:
          cpu: 500m
          ephemeral-storage: 100M
          memory: 200M
  restartPolicy: Never

3、安装
kubectl apply -f test.yaml

4、创建 NodePort Service
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-nodeport
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 32309
  selector:
    app: sonarqube
    release: my-sonarqube

5、启动的时候慢是正常的,稍等片刻后访问 http://192.168.3.200:32309/

6、修改hosts文件
notepad C:\Windows\System32\drivers\etc\hosts

7、给其他业务 Pod 添加 IngressRoute
root@k8s01:~/helm/sonarqube# kubectl get svc -n traefik | grep traefik
traefik-crds   LoadBalancer   10.101.202.240   <pending>   80:31080/TCP,443:32480/TCP   87m
root@k8s01:~/helm/sonarqube# kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP          10d
my-sonarqube-postgresql            ClusterIP   10.102.103.88    <none>        5432/TCP         23h
my-sonarqube-postgresql-headless   ClusterIP   None             <none>        5432/TCP         23h
my-sonarqube-sonarqube             ClusterIP   10.107.136.0     <none>        9000/TCP         23h
sonarqube-nodeport                 NodePort    10.106.168.209   <none>        9000:32309/TCP   22h
test-app                           ClusterIP   10.101.249.224   <none>        80/TCP           6d23h

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: sonarqube-ingress
  namespace: default  # 确保与 SonarQube 服务同命名空间
spec:
  entryPoints:
    - web  # HTTP 入口(如需 HTTPS 使用 websecure)
  routes:
    - match: Host(`sonarqube.local.com`)
      kind: Rule
      services:
        - name: my-sonarqube-sonarqube  # 使用 ClusterIP 服务
          port: 9000

这样可以验证流量的两种走法:
1、直接走业务 Pod 本身暴露的 NodePort 端口;
2、先走 Traefik 入口,再由它转发给业务 Pod。
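下面是一段最小的验证脚本:节点 IP 192.168.3.200 和 NodePort 32309 来自上文,Traefik web 入口的 NodePort 这里假设为 31080,请以实际 kubectl get svc -n traefik 的输出为准:

# 路径 1:直接通过 NodePort 访问业务 Pod
NODE_IP=192.168.3.200
SONAR_PORT=32309
curl -s "http://${NODE_IP}:${SONAR_PORT}/api/system/status"

# 路径 2:经 Traefik 入口,由 Host 头匹配 IngressRoute 后转发到业务 Pod
TRAEFIK_WEB_PORT=31080   # 假设值,按实际 kubectl get svc -n traefik 输出调整
curl -s -H "Host: sonarqube.local.com" "http://${NODE_IP}:${TRAEFIK_WEB_PORT}/api/system/status"

两条路径正常时都会返回 {"status":"UP"} 之类的 JSON,说明 NodePort 直连和 Traefik 转发都已打通。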
2025年06月05日
11 阅读
0 评论
0 点赞
2025-05-30
k8s安装traefik与实操
一、安装traefik
先安装 traefik-crds,再应用下面的清单:

---
# Source: traefik/templates/rbac/serviceaccount.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-release
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  annotations:
automountServiceAccountToken: false
---
# Source: traefik/templates/rbac/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-release-default
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - nodes
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingressclasses
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - ingressroutes
      - ingressroutetcps
      - ingressrouteudps
      - middlewares
      - middlewaretcps
      - serverstransports
      - serverstransporttcps
      - tlsoptions
      - tlsstores
      - traefikservices
    verbs:
      - get
      - list
      - watch
---
# Source: traefik/templates/rbac/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-release-default
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-release-default
subjects:
  - kind: ServiceAccount
    name: traefik-release
    namespace: traefik
---
# 添加PVC定义
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: traefik-data-pvc
  namespace: traefik
spec:
  accessModes:
    - ReadWriteMany  # CephFS支持多节点读写
  storageClassName: ceph-cephfs
  resources:
    requests:
      storage: 1Gi  # 根据实际需求调整大小
---
# Source: traefik/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-release
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik-release-default
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 0
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9100"
      labels:
        app.kubernetes.io/name: traefik
        app.kubernetes.io/instance: traefik-release-default
        helm.sh/chart: traefik-35.4.0
        app.kubernetes.io/managed-by: Helm
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        fsGroup: 0
        capabilities:
          add: ["NET_BIND_SERVICE"]
      serviceAccountName: traefik-release
      automountServiceAccountToken: true
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
        - image: registry.cn-guangzhou.aliyuncs.com/xingcangku/traefik:v3.0.0
          imagePullPolicy: IfNotPresent
          name: traefik-release
          resources:
          readinessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            failureThreshold: 1
            initialDelaySeconds: 2
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            failureThreshold: 3
            initialDelaySeconds: 2
            periodSeconds:
              10
            successThreshold: 1
            timeoutSeconds: 2
          lifecycle:
          ports:
            - name: metrics
              containerPort: 9100
              protocol: TCP
            - name: traefik
              containerPort: 9000
              protocol: TCP
            - name: web
              containerPort: 8000
              protocol: TCP
            - name: websecure
              containerPort: 8443
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
                - NET_BIND_SERVICE
            readOnlyRootFilesystem: false
          volumeMounts:
            - name: data
              mountPath: /data
              readOnly: false  # 允许写入
          args:
            - "--global.checknewversion"
            - "--global.sendanonymoususage"
            - "--entryPoints.metrics.address=:9100/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--metrics.prometheus=true"
            - "--metrics.prometheus.entrypoint=metrics"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowEmptyServices=true"
            - "--providers.kubernetesingress"
            - "--providers.kubernetesingress.allowEmptyServices=true"
            - "--providers.kubernetesingress.ingressendpoint.publishedservice=traefik/traefik-release"
            - "--entryPoints.websecure.http.tls=true"
            - "--log.level=INFO"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        # 替换emptyDir为PVC
        - name: data
          persistentVolumeClaim:
            claimName: traefik-data-pvc
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
---
# Source: traefik/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
  ports:
    - port: 8000
      name: web
      targetPort: web
      protocol: TCP
    - port: 8443
      name: websecure
      targetPort: websecure
      protocol: TCP
---
# Source: traefik/templates/ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  name: traefik-release
spec:
  controller: traefik.io/ingress-controller

root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# cat dashboard.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
  namespace: traefik
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.local.com`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService

二、测试traefik
kubectl create ns test-ns
kubectl -n test-ns create deployment test-app --image=registry.cn-guangzhou.aliyuncs.com/xingcangku/nginx-alpine:1.0
kubectl -n test-ns expose deployment test-app --port=80

cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: test-ingress
>   namespace: test-ns
> spec:
>   ingressClassName: traefik
>   rules:
>     - http:
>         paths:
>           - path: /test
>             pathType: Prefix
>             backend:
>               service:
>                 name: test-app
>                 port:
>                   number: 80
> EOF

WEB_PORT=$(kubectl get svc -n traefik traefik -o jsonpath='{.spec.ports[?(@.name=="web")].nodePort}')

curl -v http://$NODE_IP:$WEB_PORT/test
*   Trying 192.168.3.200:32305...
* Connected to 192.168.3.200 (192.168.3.200) port 32305 (#0)
> GET /test HTTP/1.1
> Host: 192.168.3.200:32305
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Content-Length: 153
< Content-Type: text/html
< Date: Thu, 29 May 2025 18:06:51 GMT
< Server: nginx/1.27.5
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>
* Connection #0 to host 192.168.3.200 left intact

#更新路径
cat <<EOF | kubectl apply -f -
> apiVersion: networking.k8s.io/v1
> kind: Ingress
> metadata:
>   name: test-ingress
>   namespace: test-ns
> spec:
>   ingressClassName: traefik
>   rules:
>     - http:
>         paths:
>           - path: /
>             pathType: Prefix
>             backend:
>               service:
>                 name: test-app
>                 port:
>                   number: 80
> EOF

# 测试访问根路径
curl -v http://$NODE_IP:$WEB_PORT/
*   Trying 192.168.3.200:32305...
* Connected to 192.168.3.200 (192.168.3.200) port 32305 (#0)
> GET / HTTP/1.1
> Host: 192.168.3.200:32305
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 615
< Content-Type: text/html
< Date: Thu, 29 May 2025 18:08:48 GMT
< Etag: "67ffa8c6-267"
< Last-Modified: Wed, 16 Apr 2025 12:55:34 GMT
< Server: nginx/1.27.5
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host 192.168.3.200 left intact

#可以在集群内部访问
curl http://traefik-service.default.svc.cluster.local

root@k8s01:~/helm/traefik/test# kubectl get ingress -n test-ns
NAME           CLASS     HOSTS   ADDRESS   PORTS   AGE
test-ingress   traefik   *                 80      48m
root@k8s01:~/helm/traefik/test# kubectl describe ingress test-ingress -n test-ns
Name:             test-ingress
Labels:           <none>
Namespace:        test-ns
Address:
Ingress Class:    traefik
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           /     test-app:80 (10.244.1.13:80)
Annotations:  <none>
Events:       <none>

#命令查询
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# kubectl -n traefik describe svc traefik
Name:                     traefik
Namespace:                traefik
Labels:                   app.kubernetes.io/instance=traefik-release-default
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=traefik
                          helm.sh/chart=traefik-35.4.0
Annotations:              <none>
Selector:                 app.kubernetes.io/instance=traefik-release-default,app.kubernetes.io/name=traefik
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.186.47
IPs:                      10.103.186.47
Port:                     web  8000/TCP
TargetPort:               web/TCP
NodePort:                 web  30615/TCP
Endpoints:                10.244.2.30:8000
Port:                     websecure  8443/TCP
TargetPort:               websecure/TCP
NodePort:                 websecure  32113/TCP
Endpoints:                10.244.2.30:8443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# kubectl -n traefik get endpoints
NAME           ENDPOINTS                             AGE
traefik        10.244.2.30:8000,10.244.2.30:8443     26s
traefik-crds   10.244.0.169:8000,10.244.0.169:8443   2d4h
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# cat service.yaml
# Source: traefik/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
  ports:
    - port: 8000
      name: web
      targetPort: web
      protocol: TCP
    - port: 8443
      name: websecure
      targetPort: websecure
      protocol: TCP
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# kubectl get -n traefik svc
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
traefik        LoadBalancer   10.103.186.47    <pending>     8000:30615/TCP,8443:32113/TCP               7m11s
traefik-crds   LoadBalancer   10.101.202.240   <pending>     80:31080/TCP,443:32480/TCP,6379:30634/TCP   2d5h
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# kubectl get pods -n traefik -o wide
NAME                               READY   STATUS    RESTARTS        AGE    IP             NODE    NOMINATED NODE   READINESS GATES
traefik-crds-766d79b985-c2sbr      1/1     Running   3 (6h13m ago)   2d4h   10.244.0.169   k8s01   <none>           <none>
traefik-release-589c7ff647-pdjh4   1/1     Running   0               25m    10.244.2.30    k8s03   <none>           <none>
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik#
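补充一个小脚本,演示如何自动取节点 IP 和 web 入口的 NodePort,并带 Host 头验证 dashboard 这条 IngressRoute。NODE_IP 的取法只是示例写法(上文的 curl 命令直接使用了 $NODE_IP,未展示其赋值),traefik.local.com 来自上面的 dashboard.yaml:

# 取第一个节点的 InternalIP(多节点环境按需选择)
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# 取 traefik Service 中名为 web 的端口对应的 NodePort
WEB_PORT=$(kubectl get svc -n traefik traefik -o jsonpath='{.spec.ports[?(@.name=="web")].nodePort}')

# 带 Host 头访问,命中 dashboard 这条 IngressRoute
curl -s -H "Host: traefik.local.com" "http://${NODE_IP}:${WEB_PORT}/dashboard/" | head -n 5
# 不带 Host 头时走默认的 Ingress 规则,命中上面的 test-app
curl -s "http://${NODE_IP}:${WEB_PORT}/"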
2025年05月30日
7 阅读
0 评论
1 点赞