2026-03-17
OpenTelemetry in Practice
一、环境情况我现在这环境他有两个grafana,是因为自建机房服务器都是用的机械硬盘读写很慢。所以他是一个grafana看日志一个granfana看图。 这个环境其实已经有 Loki 了,访问的不是那套带 Loki 的 Grafana,或者说当前这个 Grafana 没把 Loki 数据源加进去。在看图这个granfana中加上Loki的数据即可。[root@k8s-node-35 ~]# kubectl get pod -n logs -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES loki-stack-0 1/1 Running 0 89d 172.20.185.120 192.168.169.36 <none> <none> loki-stack-grafana-6467d7c65b-27f9k 2/2 Running 3 129d 172.20.209.169 192.168.169.34 <none> <none> loki-stack-grafana-6467d7c65b-87p7g 2/2 Running 1 90d 172.20.166.108 192.168.169.38 <none> <none> loki-stack-promtail-2dsqv 1/1 Running 0 362d 172.20.166.98 192.168.169.38 <none> <none> loki-stack-promtail-2zdcz 1/1 Running 0 362d 172.20.209.147 192.168.169.34 <none> <none> loki-stack-promtail-4lpvj 1/1 Running 8 (40d ago) 362d 172.20.68.202 192.168.169.16 <none> <none> loki-stack-promtail-595dl 1/1 Running 0 362d 172.20.185.77 192.168.169.36 <none> <none> loki-stack-promtail-5dv2m 1/1 Running 0 362d 172.20.133.244 192.168.169.37 <none> <none> loki-stack-promtail-682l6 1/1 Running 0 362d 172.20.183.205 192.168.169.32 <none> <none> loki-stack-promtail-776jg 1/1 Running 0 362d 172.20.254.34 192.168.169.27 <none> <none> loki-stack-promtail-7pqgv 1/1 Running 21 (20d ago) 362d 172.20.71.125 192.168.169.28 <none> <none> loki-stack-promtail-8656v 1/1 Running 0 362d 172.20.246.213 192.168.169.33 <none> <none> loki-stack-promtail-nhczz 1/1 Running 8 (40d ago) 362d 172.20.161.235 192.168.169.26 <none> <none> loki-stack-promtail-nxs9p 1/1 Running 0 35s 172.20.104.100 192.168.169.35 <none> <none> loki-stack-promtail-rjxnc 1/1 Running 0 362d 172.20.215.145 192.168.169.31 <none> <none> loki-stack-promtail-rmd7s 1/1 Running 0 362d 172.20.170.155 192.168.169.14 <none> <none> loki-stack-promtail-tdqlp 1/1 Running 0 362d 172.20.13.175 192.168.169.30 <none> <none> loki-stack-promtail-wvmm5 1/1 Running 8 (40d ago) 362d 172.20.169.88 192.168.169.25 <none> <none> loki-stack-promtail-wzbv8 1/1 Running 8 (40d ago) 362d 
172.20.95.247 192.168.169.29 <none> <none> [root@k8s-node-35 ~]# kubectl get -n monitoring pod NAME READY STATUS RESTARTS AGE dingtalk-hook-869f4cd9d8-w2qbc 1/1 Running 0 286d grafana-6df5c7857b-qhqjb 1/1 Running 1 88d prometheus-alert-778f6866f5-pj76x 1/1 Running 0 92d prometheus-alertmanager-0 1/1 Running 0 89d prometheus-blackbox-exporter-6d9c9b4d8-rbf8n 1/1 Running 0 286d prometheus-kube-state-metrics-777f85f5f6-pzn6z 1/1 Running 0 90d prometheus-server-6cdb87f85f-dsmz8 2/2 Running 8 (30d ago) 30d You have mail in /var/spool/mail/root [root@k8s-node-35 ~]# kubectl get pod -n logs -o wide -w NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES loki-stack-0 1/1 Running 0 89d 172.20.185.120 192.168.169.36 <none> <none> loki-stack-grafana-6467d7c65b-27f9k 2/2 Running 3 129d 172.20.209.169 192.168.169.34 <none> <none> loki-stack-grafana-6467d7c65b-87p7g 2/2 Running 1 90d 172.20.166.108 192.168.169.38 <none> <none> loki-stack-promtail-2dsqv 1/1 Running 0 362d 172.20.166.98 192.168.169.38 <none> <none> loki-stack-promtail-2zdcz 1/1 Running 0 362d 172.20.209.147 192.168.169.34 <none> <none> loki-stack-promtail-4lpvj 1/1 Running 8 (40d ago) 362d 172.20.68.202 192.168.169.16 <none> <none> loki-stack-promtail-595dl 1/1 Running 0 362d 172.20.185.77 192.168.169.36 <none> <none> loki-stack-promtail-5dv2m 1/1 Running 0 362d 172.20.133.244 192.168.169.37 <none> <none> loki-stack-promtail-682l6 1/1 Running 0 362d 172.20.183.205 192.168.169.32 <none> <none> loki-stack-promtail-776jg 1/1 Running 0 362d 172.20.254.34 192.168.169.27 <none> <none> loki-stack-promtail-7pqgv 1/1 Running 21 (20d ago) 362d 172.20.71.125 192.168.169.28 <none> <none> loki-stack-promtail-8656v 1/1 Running 0 362d 172.20.246.213 192.168.169.33 <none> <none> loki-stack-promtail-nhczz 1/1 Running 8 (40d ago) 362d 172.20.161.235 192.168.169.26 <none> <none> loki-stack-promtail-nxs9p 1/1 Running 0 96s 172.20.104.100 192.168.169.35 <none> <none> loki-stack-promtail-rjxnc 1/1 Running 0 362d 
172.20.215.145 192.168.169.31 <none> <none> loki-stack-promtail-rmd7s 1/1 Running 0 362d 172.20.170.155 192.168.169.14 <none> <none> loki-stack-promtail-tdqlp 1/1 Running 0 362d 172.20.13.175 192.168.169.30 <none> <none> loki-stack-promtail-wvmm5 1/1 Running 8 (40d ago) 362d 172.20.169.88 192.168.169.25 <none> <none> loki-stack-promtail-wzbv8 1/1 Running 8 (40d ago) 362d 172.20.95.247 192.168.169.29 <none> <none> ^C[root@k8s-node-35 ~]# kubectl get -n monitoring all NAME READY STATUS RESTARTS AGE pod/dingtalk-hook-869f4cd9d8-w2qbc 1/1 Running 0 286d pod/grafana-6df5c7857b-qhqjb 1/1 Running 1 88d pod/prometheus-alert-778f6866f5-pj76x 1/1 Running 0 92d pod/prometheus-alertmanager-0 1/1 Running 0 89d pod/prometheus-blackbox-exporter-6d9c9b4d8-rbf8n 1/1 Running 0 286d pod/prometheus-kube-state-metrics-777f85f5f6-pzn6z 1/1 Running 0 90d pod/prometheus-server-6cdb87f85f-dsmz8 2/2 Running 8 (30d ago) 30d NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/dingtalk-hook ClusterIP 10.68.209.173 <none> 5000/TCP 2y296d service/grafana ClusterIP 10.68.61.176 <none> 80/TCP 2y296d service/prometheus-alert ClusterIP 10.68.168.221 <none> 8080/TCP 136d service/prometheus-alertmanager ClusterIP 10.68.226.233 <none> 9093/TCP 2y296d service/prometheus-alertmanager-headless ClusterIP None <none> 9093/TCP 2y296d service/prometheus-blackbox-exporter ClusterIP 10.68.89.122 <none> 9115/TCP 2y295d service/prometheus-kube-state-metrics ClusterIP 10.68.0.15 <none> 8080/TCP 2y296d service/prometheus-prometheus-node-exporter ClusterIP 10.68.106.135 <none> 9100/TCP 2y296d service/prometheus-server ClusterIP 10.68.233.242 <none> 80/TCP 2y296d NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/prometheus-prometheus-node-exporter 0 0 0 0 0 never-schedule=true 2y185d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/dingtalk-hook 1/1 1 1 2y296d deployment.apps/grafana 1/1 1 1 2y296d deployment.apps/prometheus-alert 1/1 1 1 136d 
deployment.apps/prometheus-blackbox-exporter 1/1 1 1 2y295d deployment.apps/prometheus-kube-state-metrics 1/1 1 1 2y296d deployment.apps/prometheus-server 1/1 1 1 2y296d NAME DESIRED CURRENT READY AGE replicaset.apps/dingtalk-hook-57f9854468 0 0 0 2y296d replicaset.apps/dingtalk-hook-869f4cd9d8 1 1 1 636d replicaset.apps/dingtalk-hook-fc597d99d 0 0 0 636d replicaset.apps/grafana-5b94cbf46f 0 0 0 2y296d replicaset.apps/grafana-6df5c7857b 1 1 1 636d replicaset.apps/prometheus-alert-58c5d4bc69 0 0 0 136d replicaset.apps/prometheus-alert-778f6866f5 1 1 1 136d replicaset.apps/prometheus-blackbox-exporter-6d9c9b4d8 1 1 1 2y282d replicaset.apps/prometheus-blackbox-exporter-865546f7c6 0 0 0 2y295d replicaset.apps/prometheus-kube-state-metrics-6dc44cc4d9 0 0 0 2y296d replicaset.apps/prometheus-kube-state-metrics-777f85f5f6 1 1 1 2y296d replicaset.apps/prometheus-server-54d74bcf44 0 0 0 129d replicaset.apps/prometheus-server-55f646d6dc 0 0 0 129d replicaset.apps/prometheus-server-6bf7d76745 0 0 0 2y235d replicaset.apps/prometheus-server-6cdb87f85f 1 1 1 47d replicaset.apps/prometheus-server-758c489655 0 0 0 129d replicaset.apps/prometheus-server-7667445855 0 0 0 129d replicaset.apps/prometheus-server-7749c4c4c4 0 0 0 2y202d replicaset.apps/prometheus-server-866fdcddf5 0 0 0 636d replicaset.apps/prometheus-server-f6885b6fc 0 0 0 129d replicaset.apps/prometheus-server-fb6797484 0 0 0 636d replicaset.apps/prometheus-server-fc98c9bc6 0 0 0 2y234d NAME READY AGE statefulset.apps/prometheus-alertmanager 1/1 2y296d [root@k8s-node-35 ~]# [root@k8s-node-35 ~]# kubectl get ing -n monitoring NAME CLASS HOSTS ADDRESS PORTS AGE grafana nginx grafana.telewave.tech 192.168.169.27,192.168.169.32,192.168.169.33,192.168.169.37,192.168.169.38 80 2y296d prometheus-server nginx prometheus-dev.telewave.tech 192.168.169.27,192.168.169.32,192.168.169.33,192.168.169.37,192.168.169.38 80 2y296d [root@k8s-node-35 ~]# [root@k8s-node-35 ~]# kubectl get svc -n logs NAME TYPE CLUSTER-IP EXTERNAL-IP 
PORT(S) AGE loki-stack ClusterIP 10.68.107.19 <none> 3100/TCP 3y92d loki-stack-grafana ClusterIP 10.68.203.99 <none> 80/TCP 3y92d loki-stack-headless ClusterIP None <none> 3100/TCP 3y92d loki-stack-memberlist ClusterIP None <none> 7946/TCP 3y92d#地址 http://loki-stack.logs.svc.cluster.local:3100二、配置OpenTelemetry + Tempo 2.1 部署Tempohelm repo add grafana https://grafana.github.io/helm-charts helm repo update kubectl create namespace tracing helm upgrade --install tempo grafana/tempo -n tracingkubectl get svc -n tracing kubectl get pod -n tracingTempo 文档里的典型端口是: 查询 HTTP:3200 OTLP/gRPC:4317 OTLP/HTTP:4318helm upgrade --install tempo grafana/tempo \ -n tracing \ --reuse-values \ --set tempo.repository=harbor.telewave.tech/monitoring/grafana/tempo \ --set tempo.tag=2.9.02.2 部署 OpenTelemetry Collectorhelm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts helm repo update kubectl create namespace observability# otel-values.yaml mode: deployment image: repository: otel/opentelemetry-collector-k8s config: receivers: otlp: protocols: grpc: endpoint: 0.0.0.0:4317 http: endpoint: 0.0.0.0:4318 processors: memory_limiter: check_interval: 5s limit_percentage: 80 spike_limit_percentage: 25 batch: {} exporters: otlp/tempo: endpoint: tempo.tracing.svc.cluster.local:4317 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [memory_limiter, batch] exporters: [otlp/tempo]helm upgrade --install otel-collector \ open-telemetry/opentelemetry-collector \ -n observability \ -f otel-values.yaml[root@k8s-node-35 ~]# kubectl get pod -n observability -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES otel-collector-opentelemetry-collector-7b45dbb5c5-hmcmj 1/1 Running 0 3h5m 172.20.161.252 192.168.169.26 <none> <none> You have new mail in /var/spool/mail/root [root@k8s-node-35 ~]# kubectl get svc -n observability NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE otel-collector-opentelemetry-collector ClusterIP 
10.68.248.222 <none> 6831/UDP,14250/TCP,14268/TCP,4317/TCP,4318/TCP,9411/TCP 5h12m [root@k8s-node-35 ~]# 三、pod加入可观行测因为是测试没有加入到jenkins流水线镜像里面 单纯测试[root@k8s-master-14 ~]# kubectl get -n efp-service-test pod NAME READY STATUS RESTARTS AGE efp-alarm-transfer-client-578d5487b-4n787 1/1 Running 0 287d efp-app-web-568fd9879f-cvxph 1/1 Running 0 287d efp-cti-client-5cd4fc7b79-lvkvf 1/1 Running 13 (286d ago) 286d efp-enterprise-web-67c885d5dc-rxcsr 1/1 Running 0 34d efp-event-bridge-client-69c7999bdb-42bps 1/1 Running 11 (132d ago) 133d efp-event-client-c6fb7977-bf8sz 1/1 Running 0 144d efp-external-gateway-client-7c9848664f-wtxrv 1/1 Running 1 (4d11h ago) 4d11h efp-fas-client-5bcd98b498-6gwqd 1/1 Running 0 306d efp-file-client-6cf59cd4c7-nj2xm 1/1 Running 94 (90d ago) 90d efp-frm-web-696f4987fd-6k7gt 1/1 Running 0 4d11h efp-handheld-web-64774f78-9hpcl 1/1 Running 0 286d efp-knowledge-client-5d57f9696d-z2cfv 1/1 Running 12 (287d ago) 287d efp-message-client-78594b6546-csxjw 1/1 Running 0 34d efp-nocoding-web-65846f8b87-kkf6z 1/1 Running 0 136d efp-system-client-5b5d44c55d-pz5r6 1/1 Running 0 30d efp-tenant-web-554df94f8d-xb6n2 1/1 Running 0 129d efp-training-client-6f88764b78-97z8q 1/1 Running 61 (136d ago) 136d efp-uac-client-54b46df589-fpbmt 1/1 Running 0 132d efp-workspace-client-797878974d-dbmcz 1/1 Running 0 4d11h efp-xxljob-web-service-d7676b9bb-7b7tj 1/1 Running 0 144d fireproof-linkage-web-579767c6d5-l2tgx 1/1 Running 0 90d ifpco-enterprise-web-65598d9868-b44dq 1/1 Running 0 90d ifpco-system-client-55d57856d-2lwpp 1/1 Running 60 (136d ago) 136d ifpco-xxj-web-75696cd97-qd8rh 1/1 Running 0 90d ipcc-cting-web-954569f84-cphls 1/1 Running 0 92d keycloak2-0 1/1 Running 0 90d#下载地址 需要加载进pod里面 https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar我选择固定要一个节点上,然后挂载。apiVersion: apps/v1 kind: Deployment metadata: annotations: meta.helm.sh/release-name: efp-message-client meta.helm.sh/release-namespace: 
efp-service-test labels: app: efp-message-client app.kubernetes.io/managed-by: Helm chart: efp-message-client-0.0.1 version: run name: efp-message-client namespace: efp-service-test spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 5 selector: matchLabels: app: efp-message-client version: run strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: annotations: kubesphere.io/restartedAt: '2025-11-05T02:53:55.251Z' prometheus.io/path: /metrics/prometheus prometheus.io/port: '80' prometheus.io/scrape: 'true' labels: app: efp-message-client version: run spec: nodeName: 192.168.169.26 volumes: - name: otel-agent hostPath: path: /root/opentelemetry-javaagent.jar type: File containers: - name: efp-message-client command: - java args: - '-jar' - /opt/app.jar - '--spring.profiles.active=k8s' - $(JAVA_OPTS) env: - name: SPRING_PROFIES_ACTIVE value: k8s - name: JAVA_OPTS value: '-Xms2G -Xmx2G -Xmn1512m -Xss512k' - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: JAVA_TOOL_OPTIONS value: "-javaagent:/otel/opentelemetry-javaagent.jar" - name: OTEL_SERVICE_NAME value: "efp-message-client" - name: OTEL_TRACES_EXPORTER value: "otlp" - name: OTEL_METRICS_EXPORTER value: "none" - name: OTEL_LOGS_EXPORTER value: "none" - name: OTEL_EXPORTER_OTLP_PROTOCOL value: "http/protobuf" - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otel-collector.observability.svc.cluster.local:4318" - name: OTEL_JAVAAGENT_DEBUG value: "true" envFrom: - configMapRef: name: configmap-efp-message-client-env image: harbor.telewave.tech/efp-service-test/efp-message-client:20251024154807-test-22 imagePullPolicy: IfNotPresent volumeMounts: - name: otel-agent mountPath: /otel/opentelemetry-javaagent.jar readOnly: true livenessProbe: failureThreshold: 10 httpGet: path: / port: 80 scheme: HTTP initialDelaySeconds: 200 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 ports: - containerPort: 80 name: 
server protocol: TCP readinessProbe: failureThreshold: 10 httpGet: path: / port: 80 scheme: HTTP initialDelaySeconds: 50 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 5 resources: requests: cpu: 200m memory: 1Gi terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst imagePullSecrets: - name: harbor-registry-secret restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: efp-message-client serviceAccountName: efp-message-client terminationGracePeriodSeconds: 20 --- apiVersion: v1 kind: Service metadata: annotations: meta.helm.sh/release-name: efp-message-client meta.helm.sh/release-namespace: efp-service-test labels: app: efp-message-client app.kubernetes.io/managed-by: Helm chart: efp-message-client-0.0.1 name: efp-message-client namespace: efp-service-test spec: clusterIP: 10.68.247.210 clusterIPs: - 10.68.247.210 externalTrafficPolicy: Cluster internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: http nodePort: 31003 port: 80 protocol: TCP targetPort: 80 selector: app: efp-message-client sessionAffinity: None type: NodePort[root@k8s-master-14 ~]# kubectl get pod -n efp-service-test -l app=efp-message-client -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES efp-message-client-78594b6546-csxjw 1/1 Running 0 34d 172.20.161.206 192.168.169.26 <none> <none> efp-message-client-86cc56867d-q4thn 0/1 Running 0 19s 172.20.161.215 192.168.169.26 <none> <none> [root@k8s-master-14 ~]# kubectl get pod -n efp-service-test -l app=efp-message-client -o wide -w NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES efp-message-client-78594b6546-csxjw 1/1 Running 0 34d 172.20.161.206 192.168.169.26 <none> <none> efp-message-client-86cc56867d-q4thn 0/1 Running 0 25s 172.20.161.215 192.168.169.26 <none> <none> efp-message-client-86cc56867d-q4thn 1/1 Running 0 61s 172.20.161.215 192.168.169.26 <none> <none> 
efp-message-client-78594b6546-csxjw 1/1 Terminating 0 34d 172.20.161.206 192.168.169.26 <none> <none> efp-message-client-78594b6546-csxjw 0/1 Terminating 0 34d 172.20.161.206 192.168.169.26 <none> <none> efp-message-client-78594b6546-csxjw 0/1 Terminating 0 34d 172.20.161.206 192.168.169.26 <none> <none> efp-message-client-78594b6546-csxjw 0/1 Terminating 0 34d 172.20.161.206 192.168.169.26 <none> <none> ^CYou have mail in /var/spool/mail/root [root@k8s-master-14 ~]# ^C [root@k8s-master-14 ~]# kubectl get pod -n efp-service-test -l app=efp-message-client -o wide -w NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES efp-message-client-86cc56867d-q4thn 1/1 Running 0 14m 172.20.161.215 192.168.169.26 <none> <none> ^C[root@k8s-master-14 ~]# ^C [root@k8s-master-14 ~]# kubectl describe pod -n efp-service-test -l app=efp-message-client Name: efp-message-client-86cc56867d-q4thn Namespace: efp-service-test Priority: 0 Node: 192.168.169.26/192.168.169.26 Start Time: Tue, 17 Mar 2026 16:59:36 +0800 Labels: app=efp-message-client pod-template-hash=86cc56867d version=run Annotations: kubesphere.io/restartedAt: 2025-11-05T02:53:55.251Z prometheus.io/path: /metrics/prometheus prometheus.io/port: 80 prometheus.io/scrape: true Status: Running IP: 172.20.161.215 IPs: IP: 172.20.161.215 Controlled By: ReplicaSet/efp-message-client-86cc56867d Containers: efp-message-client: Container ID: docker://92d27de3a4f43c54453dd9f29212ea308722359da7583405a23b2093600bcb78 Image: harbor.telewave.tech/efp-service-test/efp-message-client:20251024154807-test-22 Image ID: docker-pullable://harbor.telewave.tech/efp-service-test/efp-message-client@sha256:0d7aa4f85045b58b67da623cc24ae2b3c7c0be7fca1a47ac36acc9f37c6f28b9 Port: 80/TCP Host Port: 0/TCP Command: java Args: -jar /opt/app.jar --spring.profiles.active=k8s $(JAVA_OPTS) State: Running Started: Tue, 17 Mar 2026 16:59:37 +0800 Ready: True Restart Count: 0 Requests: cpu: 200m memory: 1Gi Liveness: http-get http://:80/ delay=200s 
timeout=5s period=10s #success=1 #failure=10 Readiness: http-get http://:80/ delay=50s timeout=5s period=10s #success=1 #failure=10 Environment Variables from: configmap-efp-message-client-env ConfigMap Optional: false Environment: SPRING_PROFIES_ACTIVE: k8s JAVA_OPTS: -Xms2G -Xmx2G -Xmn1512m -Xss512k POD_NAMESPACE: efp-service-test (v1:metadata.namespace) JAVA_TOOL_OPTIONS: -javaagent:/otel/opentelemetry-javaagent.jar OTEL_SERVICE_NAME: efp-message-client OTEL_TRACES_EXPORTER: otlp OTEL_METRICS_EXPORTER: none OTEL_LOGS_EXPORTER: none OTEL_EXPORTER_OTLP_PROTOCOL: http/protobuf OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector.observability.svc.cluster.local:4318 OTEL_JAVAAGENT_DEBUG: true Mounts: /otel/opentelemetry-javaagent.jar from otel-agent (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mz6sq (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: otel-agent: Type: HostPath (bare host directory volume) Path: /root/opentelemetry-javaagent.jar HostPathType: File kube-api-access-mz6sq: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 15m kubelet Container image "harbor.telewave.tech/efp-service-test/efp-message-client:20251024154807-test-22" already present on machine Normal Created 15m kubelet Created container efp-message-client Normal Started 15m kubelet Started container efp-message-client [root@k8s-master-14 ~]# [root@k8s-master-14 ~]# [root@k8s-master-14 ~]# kubectl logs -n efp-service-test deploy/efp-message-client --tail=200 at 
okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:85) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:74) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at io.opentelemetry.exporter.sender.okhttp.internal.RetryInterceptor.intercept(RetryInterceptor.java:96) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:226) at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:574) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) [otel.javaagent 2026-03-17 17:14:47:143 +0800] [BatchSpanProcessor_WorkerThread-1] DEBUG io.opentelemetry.sdk.trace.export.BatchSpanProcessor - Exporter failed [otel.javaagent 2026-03-17 17:14:49:592 +0800] [redisson-netty-2-26] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 46af0f1ba654c16fc0fd52dc5b9c32b0 3cce2c99c1955fde CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-9, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=37}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:51:356 +0800] [http-nio-80-exec-1] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'OnCommittedResponseWrapper.sendError' : 1270fa2a3df14fe897f25f19be07fa59 66ded01d450914f9 INTERNAL [tracer: io.opentelemetry.servlet-3.0:2.26.0-alpha] 
AttributesMap{data={thread.name=http-nio-80-exec-1, code.function=sendError, code.namespace=org.springframework.security.web.util.OnCommittedResponseWrapper, thread.id=232}, capacity=128, totalAddedValues=4} [otel.javaagent 2026-03-17 17:14:51:359 +0800] [http-nio-80-exec-1] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /error' : 1270fa2a3df14fe897f25f19be07fa59 3ef1cd2a59102cde SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-1, server.port=80, network.protocol.version=1.1, user_agent.original=Prometheus/2.44.0, http.response.status_code=401, thread.id=232, http.request.method=GET, network.peer.port=50478, http.route=/error, server.address=172.20.161.215, client.address=172.20.161.251, network.peer.address=172.20.161.251, url.path=/metrics/prometheus}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:14:54:597 +0800] [redisson-netty-2-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 335f5fa96c540a03bf3be7f6303dd265 9c029c117177f749 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-10, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=38}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:54:880 +0800] [redisson-netty-2-178] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : a45d076ac7f917cef89a1a22e14522dd 409180087bbaaa53 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-4, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=235}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:54:882 +0800] 
[redisson-netty-2-93] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : befd439e4291801962be29b30ce4db41 2d671b37217527b4 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-4, network.type=ipv4, db.operation=SET, db.statement=SET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=235}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:54:883 +0800] [redisson-netty-2-176] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : 3bddf66cd43b649a07deb5ea077b17b9 d17089c0072e60a7 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-4, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=235}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:748 +0800] [redisson-netty-2-12] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : f800a78e31d31e50e4c8228ed3b392a4 1f7fd7f5b53bf076 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6002, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:748 +0800] [redisson-netty-2-26] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 740d5945d96e5a25706d8c13e347e36b 285b374f7601abe4 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, 
totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:748 +0800] [redisson-netty-2-25] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 80a1d26e1ddbf414920db0fb8b640d69 6d54f053d8906f77 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:847 +0800] [redisson-netty-2-14] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c7e893183edb48dff3ade949a9ad46cf e1a1bf8947ba9d88 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:847 +0800] [redisson-netty-2-16] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 398d26c9caadf9229f7a58025bbc2834 ff501a2b403d005f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:847 +0800] [redisson-netty-2-7] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 0ec09af58fe468a89da54d0d4512866e 22c2ec6172ce5d61 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:847 
+0800] [redisson-netty-2-17] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 546fa3714339838a35043b6d95062259 dba38e80c815b363 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:848 +0800] [redisson-netty-2-61] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 25ae4af31c5eae259d404a900a5ff172 34f98a3f1f613a16 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:848 +0800] [redisson-netty-2-33] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 04c00ad94d650679c31db02ed17e1fa3 290a5d83c3ffe647 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:848 +0800] [redisson-netty-2-57] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : b06548c732d45f917e8b94aab5a4ec2e 04193bf33be3f56b CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6002, db.system=redis, network.peer.address=192.168.169.57, thread.id=28}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:14:55:849 +0800] [redisson-netty-2-20] INFO 
io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : ... (dozens of near-identical 'PING' CLIENT spans to 192.168.169.57 ports 6001-6006 omitted) ... [otel.javaagent 2026-03-17 17:14:59:603 +0800] [redisson-netty-2-26] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : b83aa31eb45ce7980f6305371f03ed60 8c6cd987b2fa686d CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-11, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=39}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:15:00:139 +0800] [OkHttp http://otel-collector.observability.svc.cluster.local:4318/...] ERROR io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export spans.
The request could not be executed. Full error message: otel-collector.observability.svc.cluster.local java.net.UnknownHostException: otel-collector.observability.svc.cluster.local at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:796) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1504) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1363) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1297) at okhttp3.Dns$Companion$DnsSystem.lookup(Dns.kt:50) at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.kt:170) at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.kt:132) at okhttp3.internal.connection.RouteSelector.next(RouteSelector.kt:70) at okhttp3.internal.connection.RealRoutePlanner.planConnect$okhttp(RealRoutePlanner.kt:164) at okhttp3.internal.connection.RealRoutePlanner.plan(RealRoutePlanner.kt:75) at okhttp3.internal.connection.FastFallbackExchangeFinder.launchTcpConnect(FastFallbackExchangeFinder.kt:119) at okhttp3.internal.connection.FastFallbackExchangeFinder.find(FastFallbackExchangeFinder.kt:62) at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:298) at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:101) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:85) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:74) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at io.opentelemetry.exporter.sender.okhttp.internal.RetryInterceptor.intercept(RetryInterceptor.java:96) at 
okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:226) at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:574) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) [otel.javaagent 2026-03-17 17:15:00:140 +0800] [BatchSpanProcessor_WorkerThread-1] DEBUG io.opentelemetry.sdk.trace.export.BatchSpanProcessor - Exporter failed [otel.javaagent 2026-03-17 17:15:04:610 +0800] [redisson-netty-2-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : ed583f36ea55ee7122444cad80435ac4 f58211348583b775 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-12, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=40}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:15:06:750 +0800] [http-nio-80-exec-6] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : c1669486f3de68151736a9a46aa7db12 dbc8171a974bacca SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-6, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=237, http.request.method=GET, network.peer.port=50684, http.route=/, server.address=172.20.161.215, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:15:06:752 +0800] [http-nio-80-exec-3] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : 4919beca73980fc69ce3742e2159be19 fbfd99c1642f5902 SERVER 
[tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-3, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=234, http.request.method=GET, network.peer.port=50690, http.route=/, server.address=172.20.161.215, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:15:09:620 +0800] [redisson-netty-2-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 94b131a09698af470317b486de6c091e 5b060bcb6680553f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-13, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=41}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:15:13:514 +0800] [OkHttp http://otel-collector.observability.svc.cluster.local:4318/...] ERROR io.opentelemetry.exporter.internal.http.HttpExporter - Failed to export spans. The request could not be executed. 
Full error message: otel-collector.observability.svc.cluster.local java.net.UnknownHostException: otel-collector.observability.svc.cluster.local at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:796) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1504) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1363) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1297) at okhttp3.Dns$Companion$DnsSystem.lookup(Dns.kt:50) at okhttp3.internal.connection.RouteSelector.resetNextInetSocketAddress(RouteSelector.kt:170) at okhttp3.internal.connection.RouteSelector.nextProxy(RouteSelector.kt:132) at okhttp3.internal.connection.RouteSelector.next(RouteSelector.kt:70) at okhttp3.internal.connection.RealRoutePlanner.planConnect$okhttp(RealRoutePlanner.kt:164) at okhttp3.internal.connection.RealRoutePlanner.plan(RealRoutePlanner.kt:75) at okhttp3.internal.connection.FastFallbackExchangeFinder.launchTcpConnect(FastFallbackExchangeFinder.kt:119) at okhttp3.internal.connection.FastFallbackExchangeFinder.find(FastFallbackExchangeFinder.kt:62) at okhttp3.internal.connection.RealCall.initExchange$okhttp(RealCall.kt:298) at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.kt:32) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.kt:101) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.kt:85) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.kt:74) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at io.opentelemetry.exporter.sender.okhttp.internal.RetryInterceptor.intercept(RetryInterceptor.java:96) at 
okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.kt:126) at okhttp3.internal.connection.RealCall.getResponseWithInterceptorChain$okhttp(RealCall.kt:226) at okhttp3.internal.connection.RealCall$AsyncCall.run(RealCall.kt:574) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) [otel.javaagent 2026-03-17 17:15:13:515 +0800] [BatchSpanProcessor_WorkerThread-1] DEBUG io.opentelemetry.sdk.trace.export.BatchSpanProcessor - Exporter failed [otel.javaagent 2026-03-17 17:15:14:625 +0800] [redisson-netty-2-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : afe4da9f86298af4600422431dcf9b3c a3ed9adab3b4eceb CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-14, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=42}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:15:16:751 +0800] [http-nio-80-exec-7] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : 51b6e82863601f50d4589847fc76a880 ea4a39adc558fd2b SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-7, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=238, http.request.method=GET, network.peer.port=37890, http.route=/, server.address=172.20.161.215, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:15:16:751 +0800] [http-nio-80-exec-9] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : 3c620b76752b30d61f70aac06fb4079d 0071e9214be378f8 SERVER 
[tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-9, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=240, http.request.method=GET, network.peer.port=37892, http.route=/, server.address=172.20.161.215, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14}
[otel.javaagent 2026-03-17 17:15:19:634 +0800] [redisson-netty-2-26] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : fb4d5f037414baaf3213080c2fdd7337 da468e02145e1ea4 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-15, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=43}, capacity=128, totalAddedValues=8}
[root@k8s-master-14 ~]#

1. The log above shows the Java agent is attached

The output is full of entries like:
otel.javaagent
LoggingSpanExporter
SERVER [tracer: io.opentelemetry.tomcat-7.0...]
CLIENT [tracer: io.opentelemetry.redisson-3.0...]

This tells us:
The application has been taken over by the OTel Java Agent
Traces/spans are being generated
Inbound Tomcat requests and Redis calls are both being captured

2. The key error is the export address: UnknownHostException: otel-collector.observability.svc.cluster.local

The application is trying to send traces to otel-collector.observability.svc.cluster.local
But that name cannot be resolved
So the exporter cannot deliver anything

#Fix
#The Helm-installed Collector's Service name is most likely not otel-collector but otel-collector-opentelemetry-collector, so the endpoint should be http://otel-collector-opentelemetry-collector.observability.svc.cluster.local:4318
kubectl set env deployment/efp-message-client -n efp-service-test \
  OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector-opentelemetry-collector.observability.svc.cluster.local:4318

[root@k8s-node-26 ~]# kubectl set env deployment/efp-message-client -n efp-service-test \
> OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector-opentelemetry-collector.observability.svc.cluster.local:4318
deployment.apps/efp-message-client env updated
You have new mail in /var/spool/mail/root
[root@k8s-node-26 ~]# kubectl rollout status deployment/efp-message-client -n efp-service-test
Waiting for deployment "efp-message-client" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "efp-message-client" rollout to finish: 1 old replicas are pending termination...
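Before (or instead of) guessing the Service name, it can be confirmed directly from the cluster. A minimal sketch, assuming the Collector was installed by Helm into the observability namespace and that a busybox image is pullable; the Service name otel-collector-opentelemetry-collector is this environment's, yours may differ:

```shell
# List candidate Collector Services in the observability namespace
kubectl get svc -n observability | grep -i otel

# Check that the Service actually exposes the OTLP ports (4317 gRPC / 4318 HTTP)
kubectl get svc otel-collector-opentelemetry-collector -n observability \
  -o jsonpath='{range .spec.ports[*]}{.name}={.port}{"\n"}{end}'

# Confirm the FQDN the agent will use resolves from inside the cluster
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup otel-collector-opentelemetry-collector.observability.svc.cluster.local
```

If nslookup succeeds here but the agent still reports UnknownHostException, the pod simply has not been restarted with the new OTEL_EXPORTER_OTLP_ENDPOINT value yet.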
deployment "efp-message-client" successfully rolled out [root@k8s-node-26 ~]# kubectl logs -n efp-service-test deploy/efp-message-client --tail=200 [otel.javaagent 2026-03-17 17:24:19:459 +0800] [redisson-netty-2-44] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c008adcc355241a295cfecff485ff58d 5d4781b6c9493a92 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:459 +0800] [redisson-netty-2-54] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : df9c5664a6e88e6c7c9603b9ff1a132d 2accc7d03e7fd9f5 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6002, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:460 +0800] [redisson-netty-2-22] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 52f9faf30383d6914aff4511aa7cac5b 91de52970a97f3c0 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:461 +0800] [redisson-netty-2-55] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : cacab1029fc438a40baaa30d290fe751 5dace87656801185 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6002, 
db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:554 +0800] [redisson-netty-2-39] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : aeb6deca158f4b041d910439a0f891ef e9356055bf94bae4 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:554 +0800] [redisson-netty-2-40] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 7a7fbf9c1fb013433b2f97d3762bdaed 7c3349a12197f830 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:554 +0800] [redisson-netty-2-41] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 92d57fbf90fa314161088212ae79c2d1 bdf4210b8387fed1 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:554 +0800] [redisson-netty-2-37] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c6071384e8bee086902717f6fe85ffad b3daf1bb36370db3 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, 
thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:555 +0800] [redisson-netty-2-93] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 15783906221eafce53616e4126a06281 d376b02ff3cc1e85 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:555 +0800] [redisson-netty-2-92] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 1df869dd5c6870a0cbb9eee7f8998aa1 97acebf4def6a137 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:654 +0800] [redisson-netty-2-98] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 32231f84e16fd77f74f8e1fcf23fd86c 5891e49f51bba0ac CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:654 +0800] [redisson-netty-2-101] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : d6241a80f97d7dfd6a58c9f90a84526e c1fa6a515165b992 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} 
[otel.javaagent 2026-03-17 17:24:19:654 +0800] [redisson-netty-2-95] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 8fb9ac8b27e622402e7ffcbe2c6b4407 21372e82ae41607b CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:655 +0800] [redisson-netty-2-104] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : ed17425e986e51fd0c28ba6b0501ffb6 65856a1d0dbacb99 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:655 +0800] [redisson-netty-2-94] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : db5928ea05461dacf1a5f2f75960589f a45f804d7a663e2e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:655 +0800] [redisson-netty-2-113] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 8b22fb3ce37e8e9f70844687ced7b255 3ce7ef84e079c089 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:655 +0800] 
[redisson-netty-2-115] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 77b526fd20f034f9a6ca6a3e76f40356 19e595dbcd6e9df8 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:655 +0800] [redisson-netty-2-114] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : e321b0de224108d7902bbe0ff9012f49 9cd8aa5aba25a61b CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:655 +0800] [redisson-netty-2-110] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 3f9c9d9426451cc90a247e4f7eabc97c 6c591f3122e5dd52 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:656 +0800] [redisson-netty-2-111] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 925f0753a5098cd3d9ef5f8ec953d550 fb16e64884ca52c6 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:656 +0800] [redisson-netty-2-109] INFO 
io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c5f2679a1a85c7ad0c14340412367d7b 8a0b3d788cf65555 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:656 +0800] [redisson-netty-2-124] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 3cbf3326e88248385d7c0f531c91fcec d7a8085885fc5bcc CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:656 +0800] [redisson-netty-2-112] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c6f0696ccf494b58eb70ad7a2f72084d df32022b95e5f70a CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:656 +0800] [redisson-netty-2-97] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 0316d16ab3dd4c4dba7c6b20ca3507ad 6895605aa0bd316c CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:656 +0800] [redisson-netty-2-151] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 
'PING' : dd358a69cebbc645bfa2ed6431c1294e a51da43f779baec4 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:657 +0800] [redisson-netty-2-158] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : d03cf028a7dedf64a1af8b91e4eb7ee4 95cd17e54dc5d978 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:657 +0800] [redisson-netty-2-163] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 149eae3b024204179a10d4ceff25cdb8 b482bc1a01c56913 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:657 +0800] [redisson-netty-2-160] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 042bf1d48554449f8661a8699eacf754 77fc04cc3c4870ec CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:657 +0800] [redisson-netty-2-162] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 5b22c271e25d0528437055fb74bdb5d9 
f65893a1cdf26588 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:657 +0800] [redisson-netty-2-156] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 62354c02d7f023e09b0ffdf23aca6fa0 d083d42eb34ceb40 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-146] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 431942f4226f829fd7c3fa698d5d32cd fd9e2b85557f8309 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-100] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 3f44337231eadb7e3e964ef2816df1b5 ce6bc301d7b1e93f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-96] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : f9c2add276d9155a2a4a5d352868a6ed e0c87384be3951c0 CLIENT [tracer: 
io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-107] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : ccaa01a74dac0863f65afcce0119b940 51a5e80878a2fac9 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-147] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 78babb3a65a10a52c8ba78f3621b64e8 23ab9402914ec165 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-106] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 80ddf0fd321146ec2e9776b8f00bd2c2 72c48a3c127687fb CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-103] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : cd311a2baa14935250cb80eb897f1b7e 22af9e6111ac08a8 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] 
AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-108] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : d71c6872548f9d90686ba5c3353f6298 89abeacd54fd0735 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:658 +0800] [redisson-netty-2-155] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 07f0282ab294c33e4a4921f191fdc1ac 99724263abf971d9 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:659 +0800] [redisson-netty-2-149] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : ee2709d8b00c90fc49c53c76e109a13d 95c67dbafb07e27e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:659 +0800] [redisson-netty-2-99] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 366c092b6593ce3d6f4ee0a617c44d12 05a56c90e2b1486a CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, 
network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:659 +0800] [redisson-netty-2-153] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 966ab1ea8cd91eee7721bcf023d6bf8e 4fba2ee68913d88c CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:661 +0800] [redisson-netty-2-4] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : b2daa866a0b63a2fa0c07c6df4ea9871 1169fbb89bd56227 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-37, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6002, db.system=redis, network.peer.address=192.168.169.57, thread.id=66}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:755 +0800] [redisson-netty-2-142] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 3fc989afb9e2ceaeaaf266790d557236 75d355bd9f0dbb2f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:755 +0800] [redisson-netty-2-143] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : a7022da55e9f9edafc4d1d94999bc89e f8acdd3df94825e2 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, 
db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:755 +0800] [redisson-netty-2-138] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c0394125583a3fd6e4b8ee4ce7d4e8c8 7a8ded25e8e8dd98 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:755 +0800] [redisson-netty-2-152] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 00181b26734d2ecf17ea4569e2889dd0 4368e3cd21da482d CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:755 +0800] [redisson-netty-2-116] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 156a2be0f7c70119dca238974a384762 9ab829f2a0c9357e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:756 +0800] [redisson-netty-2-177] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 6cffc3419cac73ed77f11cc54f402a9b 5a81bf14486217fc CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, 
db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:756 +0800] [redisson-netty-2-176] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : b1e4d9c42aad6be49186502d28504d8f 06e474cd1a0bcf5c CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:756 +0800] [redisson-netty-2-175] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 4387ff6cdcef3446714b734d601004ad 70a7354a8674664e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:756 +0800] [redisson-netty-2-139] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 09a95ab2b76719b38665ecd4de2c0343 650c9d3cf5f92618 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:19:756 +0800] [redisson-netty-2-171] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 17dc8e7b3c1e67beb310d473403ab8d5 0515a7882cc6bbba CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, 
thread.id=29}, capacity=128, totalAddedValues=8}
[otel.javaagent 2026-03-17 17:24:19:756 +0800] [redisson-netty-2-185] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : b7d89143f21f0019690def74cd9c9c30 48bb4eefef1013ed CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8}
...(a dozen more near-identical Redisson 'PING' heartbeat CLIENT spans against 192.168.169.57:6001-6006 omitted)...
[otel.javaagent 2026-03-17 17:24:23:596 +0800] [http-nio-80-exec-1] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : 25c637219bab98a2c740f703ae001df0 e9fb1a3f093524ff SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-1, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=233, http.request.method=GET, network.peer.port=42510, http.route=/, server.address=172.20.161.241, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14}
[otel.javaagent 2026-03-17 17:24:24:670 +0800] [redisson-netty-2-27] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 47e1d12ffd57ed35da4602ea22273a26 d7263a1a9bc5691e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-38, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=67}, capacity=128, totalAddedValues=8}
[otel.javaagent 2026-03-17 17:24:35:890 +0800] [redisson-netty-2-155] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : 1974b4174826024bc7dc23f9af2cc181 74cdc94bef802147 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-4, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=236}, capacity=128, totalAddedValues=8}
[otel.javaagent 2026-03-17 17:24:35:891 +0800] [redisson-netty-2-42] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : 7e248f4930bd95077e40196a75994929 ad7f5b314d89cd7e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-4, network.type=ipv4, db.operation=SET, db.statement=SET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=236}, capacity=128, totalAddedValues=8}
[otel.javaagent 2026-03-17 17:24:36:707 +0800] [http-nio-80-exec-5] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'OnCommittedResponseWrapper.sendError' : 15737c498eeb47ea35ae4ff28322af17 90c706288aa1130a INTERNAL [tracer: io.opentelemetry.servlet-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-5, code.function=sendError, code.namespace=org.springframework.security.web.util.OnCommittedResponseWrapper, thread.id=237}, capacity=128, totalAddedValues=4}
[otel.javaagent 2026-03-17 17:24:36:713 +0800] [http-nio-80-exec-5] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /error' : 15737c498eeb47ea35ae4ff28322af17 c301475ad3d69ef6 SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-5, server.port=80, network.protocol.version=1.1, user_agent.original=Prometheus/2.44.0, http.response.status_code=401, thread.id=237, http.request.method=GET, network.peer.port=33010, http.route=/error, server.address=172.20.161.241, client.address=172.20.161.251, network.peer.address=172.20.161.251, url.path=/metrics/prometheus}, capacity=128, totalAddedValues=14}
[otel.javaagent 2026-03-17 17:24:39:815 +0800] [redisson-netty-2-14] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'EVAL' : 3ea8889d2bbd3ad9181c6ba4eebb900e 24d757d3aa914102 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-35, network.type=ipv4, db.operation=EVAL, db.statement=EVAL if redis.call('setnx', KEYS[6], ARGV[4]) == 0 then return -1;end; ...(full Redisson map-cache eviction Lua script captured verbatim in db.statement, omitted)... , network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=64}, capacity=128, totalAddedValues=8}
...(the remaining output is further repeated 'PING' / 'CLUSTER NODES' heartbeat spans, omitted)...
[otel.javaagent 2026-03-17 17:24:49:755 +0800] [redisson-netty-2-101] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 2fffc61cba9c2bd99f7d7412191d35a5 8cce8ddb084706e6 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha]
AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:755 +0800] [redisson-netty-2-100] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 7dd5e9245fdb6b87c210e6a7fc89c89d 51b23bf188bc82a5 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:756 +0800] [redisson-netty-2-115] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : f6979f546fda72507467c7ba7b7cbf76 51e49f38ac7c6351 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:756 +0800] [redisson-netty-2-110] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 046c23b0cb3d7f710fac9a660c91fe14 daaacb12f88d2f51 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:756 +0800] [redisson-netty-2-114] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 03d8e519ad7e9ef6bf0e0983fa848752 18e0bf1a20653216 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, 
network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:756 +0800] [redisson-netty-2-103] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 1bd112120f4dfc78ea24d2617cf17fd6 ef99f45fdd96e549 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:756 +0800] [redisson-netty-2-109] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 984b3af1fb4574737fc4f0dc102d378b 8f399b04175878f1 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-106] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 2cff069e51095545411e695e00181d0f a17fb239860ff03e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-111] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 8448ca91b497d0745e84686c54efb638 3a494fdcdb36df4b CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, 
db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-112] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : bd7dbb2d6891d66f344ed750f31b43d4 a62c9337fdd7346c CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-124] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : b52251f3585f0421a3265d9bf48087a1 19856e0afeb94017 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-97] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 4bf4a7eba04a8e2d6f4cec93754524b2 84c8dd413d929229 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-158] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 8baeb638dc112b9c55a8ab62e3666c32 0a8cf2e5bb108f2f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, 
db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:757 +0800] [redisson-netty-2-162] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 5d7f1e1aa547ec3982f45c28b653cb43 76e2dadfb81dadb1 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-108] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 29160000a2aa4c3262bc6ef569b80eab 3343776e61854d12 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-156] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 0ad62a06b0b32fad2c09df885c62101c 506b9fb29089b736 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-160] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 6016969337ef8cfbc5f5b95b2903a695 2d7966f7efd07884 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, 
thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-163] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : a17e4155287dfa7e9d7c696655dd5ea9 f94bc8f95bb0dc6c CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-147] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 50b0c038636e5062edd6fd0d503aa0c4 1eb73458c562cb3f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-146] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 7305566e80afc3eac555c57375653f09 c63ce2f2bd71b2a3 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:758 +0800] [redisson-netty-2-151] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 5f9aa5e793010a2e98c6140d2dad32c5 0ea457d8f2231335 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} 
[otel.javaagent 2026-03-17 17:24:49:759 +0800] [redisson-netty-2-149] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : f5c7eb511191988ed900b8eab018862f 738b4c38cccaf536 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:759 +0800] [redisson-netty-2-107] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : fa38102c8f85fe6e3391dbe096390124 4ae4c39a3b6a9c68 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:759 +0800] [redisson-netty-2-155] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : b00c8a3a882dc57224cb19d0ef39143b 2f91563e80151bfc CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:759 +0800] [redisson-netty-2-153] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : fb7e6c6191f548c90e74299afff6c26e f73b824497a63e04 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:761 +0800] 
[redisson-netty-2-113] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 65739ffd9536154f2054a4e67cc15d1e 8991d2c7048211d2 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:854 +0800] [redisson-netty-2-152] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 2cced5fe4eb37528f86372a19d5b5004 9e0b75657190d54d CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:854 +0800] [redisson-netty-2-142] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : f6916f6235fd53af4f914db792a6cda7 be61271d7a5aad0e CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:854 +0800] [redisson-netty-2-116] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 3092ba8375986857e71ab8571ef797cc a44a4d8fa5c7b418 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-177] INFO 
io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 2a45734edc73b6b0fe24d960a988c654 880ed701b1255eb0 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-179] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 99841af6682ec151e166ae0810e8a23b d6588437c86deb95 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-175] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : b981d15b35a81649e4f7569598d15ae0 029b8f229b2bdd5a CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-174] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : a27eea52d9a3e96f63db07351b6ec015 82463f525070b166 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-183] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 
'PING' : 393f71c044f4f9627871c8ea81bc405f cf4b51a5844805bc CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-171] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 998f14c0fc6213038a153a9865f2e66f 837abaa13a8821d6 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-176] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 4ff6b05a9a92be6001bc300576117a99 68a19fdcd915ff9f CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:855 +0800] [redisson-netty-2-185] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : d1da215b5eb2ae7bf7883b6f9abbe937 c37d9ca9dcec4cb3 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-190] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 43cbc845e064dd448cb4af19895dc5e7 
80c5afba58b92693 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-1] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 8a590b1650924256cced3f322c85db1b 047fce1fc20b54f4 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-5] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 23a53bbbad83c0b6bee4721ea4352834 f40210eeb596f3aa CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-143] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 5fee43dc37490f807102c64d09ba4736 46cdd0771365e883 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6004, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-8] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 093f3dc56c9954afd22818433738fa87 ab818fbf450d1038 CLIENT [tracer: 
io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-165] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : c19b76723d98567b1443d566f57c68df 762c826a4d951833 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-138] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : a65d56d8e6cd68570bbbcc103289fd8a 4bd63c3dc415ec02 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-139] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 7c3a1809c031d745f06b88ae88ce0730 14f1e7c6fb28472b CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-191] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 739ad9a4b0743832be3430aefe713185 a1b0ff82600558d5 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] 
AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-9] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : db8f9f2b2a4926ef9d846b83cb99c401 5c1487be86f68ecc CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-181] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : 2c7b492aafc4dbdc42ecaa4c30ec65d0 32daff0d0db4cdd0 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6005, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:49:856 +0800] [redisson-netty-2-189] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'PING' : ddb75945f705a8e0ecd01542414fe2ca f6a61ac9102dcd77 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-timer-4-1, network.type=ipv4, db.operation=PING, db.statement=PING, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=29}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:53:596 +0800] [http-nio-80-exec-7] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : 24f092796426791cf8dcb8933d468c5a af560cfeb3adb5d2 SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, 
thread.name=http-nio-80-exec-7, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=239, http.request.method=GET, network.peer.port=34192, http.route=/, server.address=172.20.161.241, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:24:54:711 +0800] [redisson-netty-2-27] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 0df329ea8aacfe87dc1627b59d44eaa9 c398a222d1ecd958 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-44, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=73}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:853 +0800] [redisson-netty-2-165] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : f560463b86cf18ddc934181d072640a4 60c12f39abdc615d CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:856 +0800] [redisson-netty-2-43] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : 6c3fd60176d94f1919ec7ce1910151b0 2ea6c9a79cb9719d CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=SET, db.statement=SET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:857 
+0800] [redisson-netty-2-171] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : 0a2632b16124e4bbf4f0068faed3439a 3b37c0d6e27ce9cb CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:859 +0800] [redisson-netty-2-35] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : 4f0419ef707d15e0a37d16f4cdb06053 347024f1091788f3 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=SET, db.statement=SET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:860 +0800] [redisson-netty-2-5] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : 3adbada5c04dbe8d095e67af1f365dc5 2878e16dcc01e18d CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:861 +0800] [redisson-netty-2-45] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : d5e58b956f997b4e34b1aad733375f0a f783afce2c49d49c CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=SET, db.statement=SET 
ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:862 +0800] [redisson-netty-2-190] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : 86ec3c83d6fad91cc46765856146f32e 78e198218116cce0 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:54:863 +0800] [redisson-netty-2-92] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : 1e89df1d7efe3eb704814cf9adb4f9b8 ef1743d8cd55618b CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-8, network.type=ipv4, db.operation=SET, db.statement=SET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=240}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:24:59:720 +0800] [redisson-netty-2-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : c542d0d6506f7d94750a1703377d0714 5436a101d0dd0527 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-46, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=75}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:25:03:597 +0800] [http-nio-80-exec-10] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : d935acffe83fd1bd30ed026bd2d0fdde da8e52b62e549a92 SERVER [tracer: 
io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-10, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=242, http.request.method=GET, network.peer.port=38916, http.route=/, server.address=172.20.161.241, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:25:04:728 +0800] [redisson-netty-2-27] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 5d24bbcf3c6877cebd879ab7920b44ec 83aa68668898f171 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-47, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=76}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:25:05:879 +0800] [redisson-netty-2-8] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET' : a604be0fa585b3581a4afd342963574e 1fdb2b2f8fb82270 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-1, network.type=ipv4, db.operation=GET, db.statement=GET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a, network.peer.port=6006, db.system=redis, network.peer.address=192.168.169.57, thread.id=233}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:25:05:880 +0800] [redisson-netty-2-93] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : 49088f9a65a54b1e197fc69b38305c74 275a6803ea40e337 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=http-nio-80-exec-1, network.type=ipv4, db.operation=SET, db.statement=SET ws:efp-enterprise-web:2a1cdc67-2b58-4cd5-a7b6-5eeddd46a58a ?, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, 
thread.id=233}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:25:09:738 +0800] [redisson-netty-2-27] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 504c7bf6049183329155a56484023982 7fc95c0eb0483832 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-48, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6003, db.system=redis, network.peer.address=192.168.169.57, thread.id=77}, capacity=128, totalAddedValues=8} [otel.javaagent 2026-03-17 17:25:13:596 +0800] [http-nio-80-exec-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'GET /' : 4c3dbb707f2f3cea6b14d5594e9ca084 b4f810a9462cd48b SERVER [tracer: io.opentelemetry.tomcat-7.0:2.26.0-alpha] AttributesMap{data={url.scheme=http, thread.name=http-nio-80-exec-2, server.port=80, network.protocol.version=1.1, user_agent.original=kube-probe/1.22, http.response.status_code=200, thread.id=234, http.request.method=GET, network.peer.port=39784, http.route=/, server.address=172.20.161.241, client.address=192.168.169.26, network.peer.address=192.168.169.26, url.path=/}, capacity=128, totalAddedValues=14} [otel.javaagent 2026-03-17 17:25:14:746 +0800] [redisson-netty-2-2] INFO io.opentelemetry.exporter.logging.LoggingSpanExporter - 'CLUSTER' : 6a9795491a083f0aec9b2f8412556647 f6e680a13fa43526 CLIENT [tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha] AttributesMap{data={thread.name=redisson-netty-2-49, network.type=ipv4, db.operation=CLUSTER, db.statement=CLUSTER NODES, network.peer.port=6001, db.system=redis, network.peer.address=192.168.169.57, thread.id=78}, capacity=128, totalAddedValues=8} You have mail in /var/spool/mail/root [root@k8s-node-26 ~]# SERVER [tracer: io.opentelemetry.tomcat-7.0...] CLIENT [tracer: io.opentelemetry.redisson-3.0...] 
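Each `LoggingSpanExporter` line above follows the fixed pattern `'NAME' : <32-hex traceId> <16-hex spanId> KIND [tracer: …]`. As a quick sanity check that the agent emits well-formed trace/span IDs, a span can be pulled out of a log line with plain `grep`/`awk` (a sketch; the sample line is copied from the log above):

```shell
# One LoggingSpanExporter line, as printed by the OTel Java agent.
line="[otel.javaagent 2026-03-17 17:24:54:859 +0800] [redisson-netty-2-35] INFO \
io.opentelemetry.exporter.logging.LoggingSpanExporter - 'SET' : \
4f0419ef707d15e0a37d16f4cdb06053 347024f1091788f3 CLIENT \
[tracer: io.opentelemetry.redisson-3.0:2.26.0-alpha]"

# Extract "'NAME' : traceId spanId KIND" — trace id is 32 hex chars, span id 16.
span=$(echo "$line" | grep -oE "'[^']+' : [0-9a-f]{32} [0-9a-f]{16} [A-Z]+")
echo "$span"

# Field 3 of the match is the trace id — paste it into Grafana/Tempo search.
trace_id=$(echo "$span" | awk '{print $3}')
echo "$trace_id"
```

The extracted trace id is exactly what you paste into the Tempo search box in Grafana to jump from a log line to the full trace.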
GET /
CLUSTER
PING
SET
GET

This shows that:

- inbound HTTP requests are being captured
- Redis calls are being captured
- auto-instrumentation of efp-message-client is working

# The pipeline

efp-message-client → OTel Java Agent → OTel Collector → Tempo → Grafana

We can already see:

- inbound HTTP request spans
- Redis spans
- other client-call spans
- the trace list for the same service

That is the core value of OpenTelemetry + Tempo showing up.

# The debug logging above is very noisy and can be switched off

kubectl set env deployment/efp-message-client -n efp-service-test OTEL_JAVAAGENT_DEBUG=false

四、Configure Grafana

# Metrics

traces_spanmetrics_calls_total
traces_spanmetrics_latency_bucket
traces_service_graph_request_total

All three metric names can be selected, but every query returns "No data". That means the metrics the Service Graph depends on are not actually being written to Prometheus.

[root@k8s-node-26 ~]# kubectl get all -n tracing
NAME          READY   STATUS    RESTARTS   AGE
pod/tempo-0   1/1     Running   0          104m

NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                                                   AGE
service/tempo   ClusterIP   10.68.99.114   <none>        6831/UDP,6832/UDP,3200/TCP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP   120m

NAME                     READY   AGE
statefulset.apps/tempo   1/1     120m
[root@k8s-node-26 ~]# kubectl get cm -n tracing
NAME               DATA   AGE
kube-root-ca.crt   1      127m
tempo              2      120m
[root@k8s-node-26 ~]# kubectl get secret -n tracing
NAME                          TYPE                                  DATA   AGE
default-token-g4wpb           kubernetes.io/service-account-token   3      127m
sh.helm.release.v1.tempo.v1   helm.sh/release.v1                    1      120m
sh.helm.release.v1.tempo.v2   helm.sh/release.v1                    1      104m
tempo-token-5vrk8             kubernetes.io/service-account-token   3      120m
[root@k8s-node-26 ~]# kubectl exec -n tracing tempo-0 -- sh -c 'cat /conf/tempo.yaml | egrep -n "metrics_generator|remote_write|service-graphs|span-metrics|overrides" -A5 -B3'
40-  {}
41-query_frontend:
42-  {}
43:overrides:
44-  defaults: {}
45:  per_tenant_override_config: /conf/overrides.yaml
[root@k8s-node-26 ~]# kubectl exec -n tracing tempo-0 -- sh -c 'find / -name "*tempo*.yaml" 2>/dev/null'
/conf/..2026_03_17_08_02_47.597651194/tempo.yaml
/conf/tempo.yaml
command terminated with exit code 1
You have mail in /var/spool/mail/root
[root@k8s-node-26 ~]# kubectl get deploy prometheus-server -n monitoring -o yaml | egrep -n
"enable-remote-write-receiver|args:" -A5 -B3
46-  helm.sh/chart: prometheus-22.6.2
47-  spec:
48-    containers:
49:    - args:
50-      - --watched-dir=/etc/config
51-      - --reload-url=http://127.0.0.1:9090/-/reload
52-      image: harbor.telewave.tech/monitoring/prometheus-config-reloader:v0.65.1
53-      imagePullPolicy: IfNotPresent
54-      name: prometheus-server-configmap-reload
--
59-      - mountPath: /etc/config
60-        name: config-volume
61-        readOnly: true
62:    - args:
63-      - --storage.tsdb.retention.time=15d
64-      - --config.file=/etc/config/prometheus.yml
65-      - --storage.tsdb.path=/data
66-      - --web.console.libraries=/etc/prometheus/console_libraries
67-      - --web.console.templates=/etc/prometheus/consoles
[root@k8s-node-26 ~]#

Tempo has the metrics-generator disabled:

overrides:
  defaults: {}

None of the key settings are present:

- metrics_generator
- remote_write
- service-graphs
- span-metrics

So right now Tempo only stores traces and generates no service-graph / spanmetrics metrics. Prometheus almost certainly has no remote-write receiver enabled either: the prometheus-server args grepped above do not include --web.enable-remote-write-receiver, so even once Tempo starts writing metrics to Prometheus, Prometheus would not accept them.

#tempo-cm-metrics.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tempo
  namespace: tracing
data:
  overrides.yaml: |
    overrides: {}
  tempo.yaml: |
    memberlist:
      cluster_label: "tempo.tracing"
    multitenancy_enabled: false
    usage_report:
      reporting_enabled: true
    compactor:
      compaction:
        block_retention: 24h
    distributor:
      receivers:
        jaeger:
          protocols:
            grpc:
              endpoint: 0.0.0.0:14250
            thrift_binary:
              endpoint: 0.0.0.0:6832
            thrift_compact:
              endpoint: 0.0.0.0:6831
            thrift_http:
              endpoint: 0.0.0.0:14268
        opencensus: {}
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
        zipkin: {}
    ingester: {}
    server:
      http_listen_port: 3200
    storage:
      trace:
        backend: local
        local:
          path: /var/tempo/traces
        wal:
          path: /var/tempo/wal
    querier: {}
    query_frontend: {}
    overrides:
      defaults:
        metrics_generator:
          processors:
            - service-graphs
            - span-metrics
      per_tenant_override_config: /conf/overrides.yaml
    metrics_generator:
      registry:
        external_labels:
          source: tempo
          cluster: tracing
      storage:
        path: "/tmp/tempo"
      remote_write:
        - url: http://prometheus-server.monitoring.svc.cluster.local/api/v1/write
          send_exemplars: true
      traces_storage:
        path: "/tmp/traces"

kubectl apply -f tempo-cm-metrics.yaml
kubectl rollout restart sts/tempo -n tracing
kubectl rollout status sts/tempo -n tracing

[root@k8s-node-35 ~]# kubectl exec -n tracing tempo-0 -- sh -c 'cat /conf/tempo.yaml | egrep -n "metrics_generator|remote_write|service-graphs|span-metrics|overrides" -A5 -B3'
48-
49-query_frontend: {}
50-
51:overrides:
52-  defaults:
53:    metrics_generator:
54-      processors:
55:        - service-graphs
56:        - span-metrics
57:  per_tenant_override_config: /conf/overrides.yaml
58-
59:metrics_generator:
60-  registry:
61-    external_labels:
62-      source: tempo
63-      cluster: tracing
64-  storage:
65-    path: "/tmp/tempo"
66:  remote_write:
67-    - url: http://prometheus-server.monitoring.svc.cluster.local/api/v1/write
68-      send_exemplars: true
69-  traces_storage:
70-    path: "/tmp/traces"
[root@k8s-node-35 ~]#

Add --web.enable-remote-write-receiver to the args of the main prometheus-server container:

apiVersion: apps/v1 kind: Deployment metadata: annotations: meta.helm.sh/release-name: prometheus meta.helm.sh/release-namespace: monitoring labels: app.kubernetes.io/component: server app.kubernetes.io/instance: prometheus app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: prometheus app.kubernetes.io/version: v2.44.0 helm.sh/chart: prometheus-22.6.2 name: prometheus-server namespace: monitoring spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/component: server app.kubernetes.io/instance: prometheus app.kubernetes.io/name: prometheus strategy: type: Recreate template: metadata: annotations: kubectl.kubernetes.io/restartedAt: '2026-01-28T17:30:30+08:00' kubesphere.io/restartedAt: '2023-08-28T03:11:18.623Z' labels: app.kubernetes.io/component: server app.kubernetes.io/instance: prometheus
app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: prometheus app.kubernetes.io/version: v2.44.0 helm.sh/chart: prometheus-22.6.2 spec: containers: - args: - '--watched-dir=/etc/config' - '--reload-url=http://127.0.0.1:9090/-/reload' image: 'harbor.telewave.tech/monitoring/prometheus-config-reloader:v0.65.1' imagePullPolicy: IfNotPresent name: prometheus-server-configmap-reload resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/config name: config-volume readOnly: true - args: - '--storage.tsdb.retention.time=15d' - '--config.file=/etc/config/prometheus.yml' - '--storage.tsdb.path=/data' - '--web.console.libraries=/etc/prometheus/console_libraries' - '--web.console.templates=/etc/prometheus/consoles' - '--web.enable-lifecycle' - '--web.enable-remote-write-receiver' image: 'harbor.telewave.tech/monitoring/prometheus:v2.44.0' imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /-/healthy port: 9090 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 15 successThreshold: 1 timeoutSeconds: 10 name: prometheus-server ports: - containerPort: 9090 protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /-/ready port: 9090 scheme: HTTP initialDelaySeconds: 30 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 4 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/config name: config-volume - mountPath: /data name: storage-volume dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: fsGroup: 65534 runAsGroup: 65534 runAsNonRoot: true runAsUser: 65534 serviceAccount: prometheus-server serviceAccountName: prometheus-server terminationGracePeriodSeconds: 300 volumes: - configMap: defaultMode: 420 name: prometheus-server name: config-volume - name: storage-volume persistentVolumeClaim: 
claimName: prometheus-server --- apiVersion: v1 kind: Service metadata: annotations: meta.helm.sh/release-name: prometheus meta.helm.sh/release-namespace: monitoring labels: app.kubernetes.io/component: server app.kubernetes.io/instance: prometheus app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: prometheus app.kubernetes.io/version: v2.44.0 helm.sh/chart: prometheus-22.6.2 name: prometheus-server namespace: monitoring spec: clusterIP: 10.68.233.242 clusterIPs: - 10.68.233.242 internalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - name: http port: 80 protocol: TCP targetPort: 9090 selector: app.kubernetes.io/component: server app.kubernetes.io/instance: prometheus app.kubernetes.io/name: prometheus sessionAffinity: None type: ClusterIP --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: meta.helm.sh/release-name: prometheus meta.helm.sh/release-namespace: monitoring labels: app.kubernetes.io/component: server app.kubernetes.io/instance: prometheus app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: prometheus app.kubernetes.io/part-of: prometheus app.kubernetes.io/version: v2.44.0 helm.sh/chart: prometheus-22.6.2 name: prometheus-server namespace: monitoring spec: ingressClassName: nginx rules: - host: prometheus-dev.telewave.tech http: paths: - backend: service: name: prometheus-server port: number: 80 path: / pathType: Prefix

kubectl apply -f prometheus-server-rw.yaml
kubectl rollout status deploy/prometheus-server -n monitoring

[root@k8s-node-35 ~]# kubectl get deploy prometheus-server -n monitoring -o yaml | grep enable-remote-write-receiver
        - --web.enable-remote-write-receiver

Finally, in the Grafana Tempo data source settings, point the Service Graph option at the Prometheus data source.
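Once everything is applied, the two switches that matter are easy to re-check. A minimal sketch (the file path /tmp/tempo-check.yaml is arbitrary) that confirms a local copy of the Tempo config contains every key the service graph needs:

```shell
# Write the fragment of tempo.yaml that the service graph depends on,
# then verify every required key is present.
cat > /tmp/tempo-check.yaml <<'EOF'
overrides:
  defaults:
    metrics_generator:
      processors:
        - service-graphs
        - span-metrics
metrics_generator:
  storage:
    path: /tmp/tempo
  remote_write:
    - url: http://prometheus-server.monitoring.svc.cluster.local/api/v1/write
      send_exemplars: true
EOF

for key in service-graphs span-metrics remote_write send_exemplars; do
  if grep -q "$key" /tmp/tempo-check.yaml; then
    echo "ok: $key"
  else
    echo "MISSING: $key"
  fi
done
```

The same grep can be pointed at the live config with `kubectl exec -n tracing tempo-0 -- cat /conf/tempo.yaml`; on the Prometheus side, `kubectl get deploy prometheus-server -n monitoring -o yaml | grep enable-remote-write-receiver` must return the flag, as shown above.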
2026年03月17日
2025-06-20
基于OpenTelemetry+Grafana可观测性实践
一、Solution overview

OpenTelemetry + Prometheus + Loki + Tempo + Grafana is a modern, cloud-native observability stack covering the three core signals — traces, logs, and metrics — and provides a unified observability platform for applications in a microservice architecture.

二、Components

三、System architecture

四、Deploying the demo application

4.1 About the application

https://opentelemetry.io/docs/demo/kubernetes-deployment/

The OpenTelemetry project publishes an official opentelemetry-demo. It simulates a microservice-based online shop and includes, among others, the following services:

4.2 Deploying the application

4.2.1 Fetch the chart package

# helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
# helm pull open-telemetry/opentelemetry-demo --untar
# cd opentelemetry-demo
# ls
Chart.lock  Chart.yaml  examples  grafana-dashboards  README.md  UPGRADING.md  values.yaml
charts  ci  flagd  products  templates  values.schema.json

4.2.2 Customize the chart. The default chart bundles the opentelemetry-collector, prometheus, grafana, opensearch and jaeger components; we disable them first:

# vim values.yaml
default:
  # List of environment variables applied to all components
  env:
    - name: OTEL_COLLECTOR_NAME
      value: center-collector.opentelemetry.svc
opentelemetry-collector:
  enabled: false
jaeger:
  enabled: false
prometheus:
  enabled: false
grafana:
  enabled: false
opensearch:
  enabled: false

4.2.3 Install the demo application

# helm install demo . -f values.yaml
...
All services are available via the Frontend proxy: http://localhost:8080
by running these commands:
     kubectl --namespace default port-forward svc/frontend-proxy 8080:8080

Once the proxy is exposed, these paths are available:
  Webstore             http://localhost:8080/
  Jaeger UI            http://localhost:8080/jaeger/ui/
  Grafana              http://localhost:8080/grafana/
  Load Generator UI    http://localhost:8080/loadgen/
  Feature Flags UI     http://localhost:8080/feature/

# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
accounting-79cdcf89df-h8nnc        1/1     Running   0          2m15s
ad-dc6768b6-lvzcq                  1/1     Running   0          2m14s
cart-65c89fcdd7-8tcwp              1/1     Running   0          2m15s
checkout-7c45459f67-xvft2          1/1     Running   0          2m13s
currency-65dd8c8f6-pxxbb           1/1     Running   0          2m15s
email-5659b8d84f-9ljr9             1/1     Running   0          2m15s
flagd-57fdd95655-xrmsk             2/2     Running   0          2m14s
fraud-detection-7db9cbbd4d-znxq6   1/1     Running   0          2m15s
frontend-6bd764b6b9-gmstv          1/1     Running   0          2m15s
frontend-proxy-56977d5ddb-cl87k    1/1     Running   0          2m15s
image-provider-54b56c68b8-gdgnv    1/1     Running   0          2m15s
kafka-976bc899f-79vd7              1/1     Running   0          2m14s
load-generator-79dd9d8d58-hcw8c    1/1     Running   0          2m15s
payment-6d9748df64-46zwt           1/1     Running   0          2m15s
product-catalog-658d99b4d4-xpczv   1/1     Running   0          2m13s
quote-5dfbb544f5-6r8gr             1/1     Running   0          2m14s
recommendation-764b6c5cf8-lnkm6    1/1     Running   0          2m14s
shipping-5f65469746-zdr2g          1/1     Running   0          2m15s
valkey-cart-85ccb5db-kr74s         1/1     Running   0          2m15s

# kubectl get service
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ad                ClusterIP   10.103.72.85     <none>        8080/TCP                     2m19s
cart              ClusterIP   10.106.118.178   <none>        8080/TCP                     2m19s
checkout          ClusterIP   10.109.56.238    <none>        8080/TCP                     2m19s
currency          ClusterIP   10.96.112.137    <none>        8080/TCP                     2m19s
email             ClusterIP   10.103.214.222   <none>        8080/TCP                     2m19s
flagd             ClusterIP   10.101.48.231    <none>        8013/TCP,8016/TCP,4000/TCP   2m19s
frontend          ClusterIP   10.103.70.199    <none>        8080/TCP                     2m19s
frontend-proxy    ClusterIP   10.106.13.80     <none>        8080/TCP                     2m19s
image-provider    ClusterIP   10.109.69.146    <none>        8081/TCP                     2m19s
kafka             ClusterIP   10.104.9.210     <none>        9092/TCP,9093/TCP            2m19s
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                      176d
load-generator    ClusterIP   10.106.97.167    <none>        8089/TCP                     2m19s
payment           ClusterIP   10.102.143.196   <none>        8080/TCP                     2m19s
product-catalog   ClusterIP   10.109.219.138   <none>        8080/TCP                     2m19s
quote             ClusterIP   10.111.139.80    <none>        8080/TCP                     2m19s
recommendation    ClusterIP   10.97.118.12     <none>        8080/TCP                     2m19s
shipping          ClusterIP   10.107.102.160   <none>        8080/TCP                     2m19s
valkey-cart       ClusterIP   10.104.34.233    <none>        6379/TCP                     2m19s

4.2.4 Next, create an Ingress resource exposing port 8080 of the frontend-proxy service:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.cuiliangblog.cn`)
      kind: Rule
      services:
        - name: frontend-proxy
          port: 8080

4.2.5 After creating the Ingress resource, add a hosts entry and open the site to verify.

4.3 Configuring Ingress trace export

Taking the ingress as an example: starting with Traefik v2.6, Traefik has initial support for exporting traces via the OpenTelemetry protocol, which lets you send Traefik's data to any OTel-compatible backend. For Traefik deployment see https://www.cuiliangblog.cn/detail/section/140101250; for access-log configuration see https://doc.traefik.io/traefik/observability/access-logs/#opentelemetry

# vim values.yaml
experimental:                 # experimental features
  otlpLogs: true              # export logs in OTLP format
extraArguments:               # extra startup arguments
  - "--experimental.otlpLogs=true"
  - "--accesslog.otlp=true"
  - "--accesslog.otlp.grpc=true"
  - "--accesslog.otlp.grpc.endpoint=center-collector.opentelemetry.svc:4317"
  - "--accesslog.otlp.grpc.insecure=true"
metrics:                      # metrics
  addInternals: true          # include internal traffic
  otlp:
    enabled: true             # export in OTLP format
    grpc:                     # use the gRPC protocol
      endpoint: "center-collector.opentelemetry.svc:4317"   # OpenTelemetry collector address
      insecure: true          # skip certificate verification
tracing:                      # tracing
  addInternals: true          # trace internal traffic (e.g. redirects)
  otlp:
    enabled: true
    # export in OTLP format
    grpc:                     # use the gRPC protocol
      endpoint: "center-collector.opentelemetry.svc:4317"   # OpenTelemetry collector address
      insecure: true          # skip certificate verification

五、Deploying MinIO

5.1 Configuring MinIO object storage

5.1.1 Configure MinIO

[root@k8s-master minio]# cat > minio.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-pvc
  namespace: minio
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: minio
  name: minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: quay.io/minio/minio:latest
          command:
            - /bin/bash
            - -c
          args:
            - minio server /data --console-address :9090
          volumeMounts:
            - mountPath: /data
              name: data
          ports:
            - containerPort: 9090
              name: console
            - containerPort: 9000
              name: api
          env:
            - name: MINIO_ROOT_USER        # root user name
              value: "admin"
            - name: MINIO_ROOT_PASSWORD    # password, at least 8 characters
              value: "minioadmin"
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio
spec:
  type: NodePort
  selector:
    app: minio
  ports:
    - name: console
      port: 9090
      protocol: TCP
      targetPort: 9090
      nodePort: 30300
    - name: api
      port: 9000
      protocol: TCP
      targetPort: 9000
      nodePort: 30200
EOF
[root@k8s-master minio]# kubectl apply -f minio.yaml
deployment.apps/minio created
service/minio-service created

5.1.2 Access the web console via NodePort

[root@k8s-master minio]# kubectl get pod -n minio
NAME                     READY   STATUS    RESTARTS   AGE
minio-86577f8755-l65mf   1/1     Running   0          11m
[root@k8s-master minio]# kubectl get svc -n minio
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
minio-service   NodePort   10.102.223.132   <none>        9090:30300/TCP,9000:30200/TCP   10m

Browse to <k8s-node-ip>:30300 and log in with the credentials defined above (admin / minioadmin).

5.1.3 Access via Ingress

[root@k8s-master minio]# cat minio-ingress.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: minio-console
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`minio.test.com`)   # domain name
      kind: Rule
      services:
        - name:
            minio-service   # must match the Service name
          port: 9090        # must match the Service port
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: minio-api
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`minio-api.test.com`)   # domain name
      kind: Rule
      services:
        - name: minio-service   # must match the Service name
          port: 9000            # must match the Service port
[root@k8s-master minio]# kubectl apply -f minio-ingress.yaml
ingressroute.traefik.containo.us/minio-console created
ingressroute.traefik.containo.us/minio-api created

Add a hosts record:

192.168.10.10 minio.test.com

then open the domain in a browser.

5.2 Deploying a MinIO cluster with Helm

A MinIO cluster can be deployed with either the operator or Helm. For a single Kubernetes cluster, Helm is the recommended way to deploy MinIO; the operator is better suited to running multiple MinIO clusters across multiple server rooms. Helm deployment reference: https://artifacthub.io/packages/helm/bitnami/minio.

5.2.1 Resource and role planning

When deploying highly available MinIO in distributed mode, at least 4 drives in total are required for erasure coding. We store MinIO data under the data1 and data2 paths on k8s-work1 and k8s-work2, persisted with local PVs.

# Create the data directories
[root@k8s-work1 ~]# mkdir -p /data1/minio
[root@k8s-work1 ~]# mkdir -p /data2/minio
[root@k8s-work2 ~]# mkdir -p /data1/minio
[root@k8s-work2 ~]# mkdir -p /data2/minio

5.2.2 Download the Helm chart

[root@k8s-master ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@k8s-master ~]# helm search repo minio
NAME            CHART VERSION   APP VERSION   DESCRIPTION
bitnami/minio   14.1.4          2024.3.30     MinIO(R) is an object storage server, compatibl...
[root@k8s-master ~]# helm pull bitnami/minio --untar
[root@k8s-master ~]# cd minio
root@k8s01:~/helm/minio/minio-demo# ls
minio  minio-17.0.5.tgz
root@k8s01:~/helm/minio/minio-demo# cd minio/
root@k8s01:~/helm/minio/minio-demo/minio# ls
Chart.lock  Chart.yaml  ingress.yaml  pv.yaml     storageClass.yaml  values.yaml
charts      demo.yaml   pvc.yaml      README.md   templates          values.yaml.bak

5.2.3 Create the StorageClass

The provisioner field is set to no-provisioner because local volumes do not yet support dynamic provisioning, so the PVs must be created manually in advance. volumeBindingMode is set to WaitForFirstConsumer, a key feature of local persistent volumes: delayed binding. With delayed binding, when we submit the PVC, the StorageClass postpones binding the PV to the PVC until a pod consuming the PVC is scheduled.

root@k8s01:~/helm/minio/minio-demo/minio# cat storageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

5.2.4 Create the PVs

root@k8s01:~/helm/minio/minio-demo/minio# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv1
  labels:
    app: minio-0
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage   # must match the StorageClass created above
  local:
    path: /data1/minio              # local storage path
  nodeAffinity:                     # schedule to node k8s01
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s01
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv2
  labels:
    app: minio-1
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /data2/minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s01
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: minio-pv3
  labels:
    app: minio-2
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /data1/minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - k8s02
---
apiVersion: v1
kind:
PersistentVolume metadata: name: minio-pv4 labels: app: minio-3 spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-storage local: path: /data2/minio nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - k8s02 root@k8s01:~/helm/minio/minio-demo/minio# kubectl get pv | grep minio minio-pv1 10Gi RWO Retain Bound minio/data-0-minio-demo-1 local-storage 10d minio-pv2 10Gi RWO Retain Bound minio/data-1-minio-demo-1 local-storage 10d minio-pv3 10Gi RWO Retain Bound minio/data-0-minio-demo-0 local-storage 10d minio-pv4 10Gi RWO Retain Bound minio/data-1-minio-demo-0 local-storage 10d5.2.5创建pvc创建的时候注意pvc的名字的构成:pvc的名字 = volume_name-statefulset_name-序号,然后通过selector标签选择,强制将pvc与pv绑定。root@k8s01:~/helm/minio/minio-demo/minio# cat pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-0 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-0 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-1 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-1 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-2 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-2 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-3 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-3root@k8s01:~/helm/minio/minio-demo/minio# kubectl get pvc -n minio NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-0-minio-demo-0 Bound minio-pv3 10Gi RWO local-storage 10d data-0-minio-demo-1 Bound minio-pv1 10Gi 
RWO local-storage 10d data-1-minio-demo-0 Bound minio-pv4 10Gi RWO local-storage 10d data-1-minio-demo-1 Bound minio-pv2 10Gi RWO local-storage 10d data-minio-0 Pending local-storage 10d 5.2.6 修改配置68 image: 69 registry: docker.io 70 repository: bitnami/minio 71 tag: 2024.3.30-debian-12-r0 104 mode: distributed # 集群模式,单节点为standalone,分布式集群为distributed 197 statefulset: 215 replicaCount: 2 # 节点数 218 zones: 1 # 区域数,1个即可 221 drivesPerNode: 2 # 每个节点数据目录数.2节点×2目录组成4节点的mimio集群 558 #podAnnotations: {} # 导出Prometheus指标 559 podAnnotations: 560 prometheus.io/scrape: "true" 561 prometheus.io/path: "/minio/v2/metrics/cluster" 562 prometheus.io/port: "9000" 1049 persistence: 1052 enabled: true 1060 storageClass: "local-storage" 1063 mountPath: /bitnami/minio/data 1066 accessModes: 1067 - ReadWriteOnce 1070 size: 10Gi 1073 annotations: {} 1076 existingClaim: ""5.2.7 部署miniOkubectl create ns minioroot@k8s01:~/helm/minio/minio-demo/minio# cat demo.yaml --- # Source: minio/templates/console/networkpolicy.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: minio-demo-console namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2.0.1 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: podSelector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console app.kubernetes.io/part-of: minio policyTypes: - Ingress - Egress egress: - {} ingress: # Allow inbound connections - ports: - port: 9090 --- # Source: minio/templates/networkpolicy.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio 
app.kubernetes.io/part-of: minio spec: podSelector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio policyTypes: - Ingress - Egress egress: - {} ingress: # Allow inbound connections - ports: - port: 9000 --- # Source: minio/templates/console/pdb.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: minio-demo-console namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2.0.1 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: maxUnavailable: 1 selector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console app.kubernetes.io/part-of: minio --- # Source: minio/templates/pdb.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio spec: maxUnavailable: 1 selector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio --- # Source: minio/templates/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/part-of: minio automountServiceAccountToken: false secrets: - name: minio-demo --- # Source: minio/templates/secrets.yaml apiVersion: v1 kind: Secret metadata: name: minio-demo namespace: "minio" labels: 
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: minio
    app.kubernetes.io/version: 2025.5.24
    helm.sh/chart: minio-17.0.5
    app.kubernetes.io/component: minio
    app.kubernetes.io/part-of: minio
type: Opaque
data:
  root-user: "YWRtaW4="
  root-password: "OGZHWWlrY3lpNA=="
---
# Source: minio/templates/console/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-demo-console
  namespace: "minio"
  labels:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: minio
    app.kubernetes.io/version: 2.0.1
    helm.sh/chart: minio-17.0.5
    app.kubernetes.io/component: console
    app.kubernetes.io/part-of: minio
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9090
      targetPort: http
      nodePort: null
  selector:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/name: minio
    app.kubernetes.io/component: console
    app.kubernetes.io/part-of: minio
---
# Source: minio/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-demo-headless
  namespace: "minio"
  labels:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: minio
    app.kubernetes.io/version: 2025.5.24
    helm.sh/chart: minio-17.0.5
    app.kubernetes.io/component: minio
    app.kubernetes.io/part-of: minio
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: tcp-api
      port: 9000
      targetPort: api
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/name: minio
    app.kubernetes.io/component: minio
    app.kubernetes.io/part-of: minio
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: minio-demo
  namespace: "minio"
  labels:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: minio
    app.kubernetes.io/version: 2025.5.24
    helm.sh/chart: minio-17.0.5
    app.kubernetes.io/component: minio
    app.kubernetes.io/part-of: minio
spec:
  type: ClusterIP
  ports:
    - name: tcp-api
      port: 9000
      targetPort: api
      nodePort: null
  selector:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/name: minio
    app.kubernetes.io/component: minio
    app.kubernetes.io/part-of: minio
---
# Source: minio/templates/console/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio-demo-console
  namespace: "minio"
  labels:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: minio
    app.kubernetes.io/version: 2.0.1
    helm.sh/chart: minio-17.0.5
    app.kubernetes.io/component: console
    app.kubernetes.io/part-of: minio
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/instance: minio-demo
      app.kubernetes.io/name: minio
      app.kubernetes.io/component: console
      app.kubernetes.io/part-of: minio
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: minio-demo
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: minio
        app.kubernetes.io/version: 2025.5.24
        helm.sh/chart: minio-17.0.5
        app.kubernetes.io/component: console
        app.kubernetes.io/part-of: minio
    spec:
      serviceAccountName: minio-demo
      automountServiceAccountToken: false
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: minio-demo
                    app.kubernetes.io/name: minio
                    app.kubernetes.io/component: console
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: Always
        supplementalGroups: []
        sysctls: []
      containers:
        - name: console
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-bitnami-minio-object-browser:2.0.1-debian-12-r2
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          args:
            - server
            - --host
            - "0.0.0.0"
            - --port
            - "9090"
          env:
            - name: CONSOLE_MINIO_SERVER
              value: "http://minio-demo:9000"
          resources:
            limits:
              cpu: 150m
              ephemeral-storage: 2Gi
              memory: 192Mi
            requests:
              cpu: 100m
              ephemeral-storage: 50Mi
              memory: 128Mi
          ports:
            - name: http
              containerPort: 9090
          livenessProbe:
            failureThreshold: 5
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: http
          readinessProbe:
            failureThreshold: 5
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 5
            httpGet:
              path: /minio
              port: http
          volumeMounts:
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /.console
              subPath: app-console-dir
      volumes:
        - name: empty-dir
          emptyDir: {}
---
# Source: minio/templates/application.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio-demo
  namespace: "minio"
  labels:
    app.kubernetes.io/instance: minio-demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: minio
    app.kubernetes.io/version: 2025.5.24
    helm.sh/chart: minio-17.0.5
    app.kubernetes.io/component: minio
    app.kubernetes.io/part-of: minio
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: minio-demo
      app.kubernetes.io/name: minio
      app.kubernetes.io/component: minio
      app.kubernetes.io/part-of: minio
  podManagementPolicy: Parallel
  replicas: 2
  serviceName: minio-demo-headless
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: minio-demo
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: minio
        app.kubernetes.io/version: 2025.5.24
        helm.sh/chart: minio-17.0.5
        app.kubernetes.io/component: minio
        app.kubernetes.io/part-of: minio
      annotations:
        checksum/credentials-secret: b06d639ea8d96eecf600100351306b11b3607d0ae288f01fe3489b67b6cc4873
        prometheus.io/path: /minio/v2/metrics/cluster
        prometheus.io/port: "9000"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: minio-demo
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: minio-demo
                    app.kubernetes.io/name: minio
                    app.kubernetes.io/component: minio
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 1001
        fsGroupChangePolicy: OnRootMismatch
        supplementalGroups: []
        sysctls: []
      initContainers:
      containers:
        - name: minio
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-bitnami-minio:2025.5.24-debian-12-r6
          imagePullPolicy: "IfNotPresent"
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 1001
            runAsNonRoot: true
            runAsUser: 1001
            seLinuxOptions: {}
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MINIO_DISTRIBUTED_MODE_ENABLED
              value: "yes"
            - name: MINIO_DISTRIBUTED_NODES
              value: "minio-demo-{0...1}.minio-demo-headless.minio.svc.cluster.local:9000/bitnami/minio/data-{0...1}"
            - name: MINIO_SCHEME
              value: "http"
            - name: MINIO_FORCE_NEW_KEYS
              value: "no"
            - name: MINIO_ROOT_USER_FILE
              value: /opt/bitnami/minio/secrets/root-user
            - name: MINIO_ROOT_PASSWORD_FILE
              value: /opt/bitnami/minio/secrets/root-password
            - name: MINIO_SKIP_CLIENT
              value: "yes"
            - name: MINIO_API_PORT_NUMBER
              value: "9000"
            - name: MINIO_BROWSER
              value: "off"
            - name: MINIO_PROMETHEUS_AUTH_TYPE
              value: "public"
            - name: MINIO_DATA_DIR
              value: "/bitnami/minio/data-0"
          ports:
            - name: api
              containerPort: 9000
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: api
              scheme: "HTTP"
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            tcpSocket:
              port: api
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 5
          resources:
            limits:
              cpu: 375m
              ephemeral-storage: 2Gi
              memory: 384Mi
            requests:
              cpu: 250m
              ephemeral-storage: 50Mi
              memory: 256Mi
          volumeMounts:
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/minio/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /.mc
              subPath: app-mc-dir
            - name: minio-credentials
              mountPath: /opt/bitnami/minio/secrets/
            - name: data-0
              mountPath: /bitnami/minio/data-0
            - name: data-1
              mountPath: /bitnami/minio/data-1
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: minio-credentials
          secret:
            secretName: minio-demo
  volumeClaimTemplates:
    - metadata:
        name: data-0
        labels:
          app.kubernetes.io/instance: minio-demo
          app.kubernetes.io/name: minio
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "10Gi"
        storageClassName: local-storage
    - metadata:
        name: data-1
        labels:
          app.kubernetes.io/instance: minio-demo
          app.kubernetes.io/name: minio
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "10Gi"
        storageClassName: local-storage
5.2.8 Check the deployed resources
root@k8s01:~/helm/minio/minio-demo/minio# kubectl get all -n minio
NAME                                      READY   STATUS    RESTARTS         AGE
pod/minio-demo-0                          1/1     Running   10 (5h27m ago)   10d
pod/minio-demo-1                          1/1     Running   10 (5h27m ago)   27h
pod/minio-demo-console-7b586c5f9c-l8hnc   1/1     Running   9 (5h27m ago)    10d

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/minio-demo            ClusterIP   10.97.92.61      <none>        9000/TCP   10d
service/minio-demo-console    ClusterIP   10.101.127.112   <none>        9090/TCP   10d
service/minio-demo-headless   ClusterIP   None             <none>        9000/TCP   10d

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-demo-console   1/1     1            1           10d

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-demo-console-7b586c5f9c   1         1         1       10d

NAME                          READY   AGE
statefulset.apps/minio-demo   2/2     10d
5.2.9 Create the Ingress resources
# Using ingress-nginx as an example:
# cat > ingress.yaml << EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  namespace: minio
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: minio.local.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: minio-demo-console   # the console Service created by this chart
                port:
                  number: 9090             # the console port
EOF
# Using traefik as an example:
root@k8s01:~/helm/minio/minio-demo/minio# cat ingress.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: minio-console
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`minio.local.com`)
      kind: Rule
      services:
        - name: minio-demo-console   # corrected to the console Service name
          port: 9090                 # corrected to the console port
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: minio-api
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`minio-api.local.com`)
      kind: Rule
      services:
        - name: minio-demo           # the API Service name
          port: 9000                 # the API port
5.2.10 Retrieve the username and password
# Retrieve the username and password (the secret created by this chart is named minio-demo)
[root@k8s-master minio]# kubectl get secret --namespace minio minio-demo -o jsonpath="{.data.root-user}" | base64 -d
admin
[root@k8s-master minio]# kubectl get secret --namespace minio minio-demo -o jsonpath="{.data.root-password}" | base64 -d
8fGYikcyi4
5.2.11 Access the web console
5.3 Operator deployment
The operator-based deployment targets the MinIO Enterprise edition, which is a paid product.
6. Deploy Prometheus
If metrics-server is already installed, uninstall it first; otherwise it conflicts.
https://axzys.cn/index.php/archives/423/
7. Deploy Thanos monitoring [optional]
Thanos fills the gaps Prometheus leaves in persistent long-term storage and in cross-cluster queries across multiple Prometheus instances. See https://thanos.io/ for details and https://github.com/thanos-io/kube-thanos for deployment; this example uses the receive mode. To deploy in sidecar mode instead, see https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/platform/thanos.md
https://www.cuiliangblog.cn/detail/section/215968508
8. Deploy Grafana
https://axzys.cn/index.php/archives/423/
9. Deploy OpenTelemetry
https://www.cuiliangblog.cn/detail/section/215947486
root@k8s01:~/helm/opentelemetry/cert-manager# cat new-center-collector.yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: center                  # the Collector is named "center"
  namespace: opentelemetry
spec:
  replicas: 1                   # run a single replica
  # image: otel/opentelemetry-collector-contrib:latest   # contrib image with the extra exporters
  image: registry.cn-guangzhou.aliyuncs.com/xingcangku/otel-opentelemetry-collector-contrib-latest:latest
  config:                       # Collector configuration
    receivers:                  # receivers ingest telemetry (traces, metrics, logs)
      otlp:                     # OTLP (OpenTelemetry Protocol) receiver
        protocols:              # protocols to accept data on
          grpc:
            endpoint: 0.0.0.0:4317   # gRPC
          http:
            endpoint: 0.0.0.0:4318   # HTTP
    processors:                 # processors transform the collected data
      batch: {}                 # batch processor: sends data in batches for efficiency
    exporters:                  # exporters ship processed data to backends
      debug: {}                 # debug exporter prints data to the terminal (for testing/debugging)
      otlp:                     # send traces to Tempo's gRPC port
        endpoint: "tempo:4317"
        tls:
          insecure: true        # skip certificate verification
      prometheus:
        endpoint: "0.0.0.0:9464"   # Prometheus metrics exposition port
      loki:
        endpoint: http://loki-gateway.loki.svc/loki/api/v1/push
        headers:
          X-Scope-OrgID: "fake"    # must match the Grafana data source configuration
        labels:
          attributes:           # extracted from log attributes
            k8s.pod.name: "pod"
            k8s.container.name: "container"
            k8s.namespace.name: "namespace"
            app: "application"  # maps the label set in the application
          resource:             # extracted from SDK resource attributes
            service.name: "service"
    service:                    # service section
      telemetry:
        logs:
          level: "debug"        # set the Collector's own log level to debug for easier observation
      pipelines:                # processing pipelines
        traces:                 # trace pipeline
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]     # export to Tempo via OTLP
        metrics:                # metrics pipeline
          receivers: [otlp]
          processors: [batch]
          exporters: [prometheus]
        logs:                   # log pipeline
          receivers: [otlp]
          processors: [batch]
          exporters: [loki]
10. Deploy Tempo
10.1 About Tempo
Grafana Tempo is an open source, easy-to-use, large-scale distributed tracing backend. Tempo is cost-efficient, requiring only object storage to run, and it integrates deeply with Grafana, Prometheus, and Loki. Tempo works with any open source tracing protocol, including Jaeger, Zipkin, and OpenTelemetry. It supports only key/value lookups and is designed to work together with logs and metrics (exemplars) for discovery.
https://axzys.cn/index.php/archives/418/
11. Deploy Loki for log collection
11.1 About Loki
11.1.1 Components
Loki's architecture is very simple, consisting of three parts:
Loki is the main server, responsible for storing logs and processing queries.
Promtail is the agent, responsible for collecting logs and sending them to Loki.
Grafana provides the UI.
Install promtail on the application servers to collect logs and ship them to Loki for storage; then add Loki as a data source in Grafana and query the logs from its UI.
11.1.2 Architecture
Distributor (ingestion entry point): receives logs from clients, parses labels, preprocesses, computes shards, and forwards the logs to Ingesters.
Ingester (log buffering): processes logs sent by Distributors, buffers them in memory, periodically flushes them to object storage or local disk, and serves buffered data at query time.
Querier: handles query requests from Grafana or other clients, reading data from Ingesters and the store.
Index: in boltdb-shipper mode, the index provider; in a distributed deployment it reads and caches index data to avoid frequent requests to remote storage such as S3.
Chunks: a core data structure and storage format in Loki, produced and managed mainly by the ingester. It is not a deployable service like the distributor or querier, but it is central to Loki's architecture and storage.
11.1.3 Deploy Loki
Loki supports three deployment modes: monolithic, microservices, and simple scalable; see https://grafana.com/docs/loki/latest/setup/install/helm/concepts/ for details. This example uses the simple scalable mode.
For configuring Loki with MinIO object storage, see https://blog.min.io/how-to-grafana-loki-minio/
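The collector's loki exporter and promtail both ship logs through Loki's push endpoint (`/loki/api/v1/push`). As a minimal sketch of the payload shape that endpoint expects, the following standard-library-only snippet builds the JSON body; the label keys mirror the ones mapped in the collector config above, and the gateway URL and `fake` tenant in the comment are the values used in this deployment:

```python
import json
import time

def loki_push_payload(labels: dict, lines: list) -> str:
    """Build the JSON body for Loki's /loki/api/v1/push endpoint.

    Each stream carries one label set; "values" holds [timestamp, line]
    pairs, with the timestamp as a string of nanoseconds since the epoch.
    """
    now_ns = str(time.time_ns())
    return json.dumps({
        "streams": [
            {
                "stream": labels,
                "values": [[now_ns, line] for line in lines],
            }
        ]
    })

# Example with the label names the collector's loki exporter emits.
body = loki_push_payload(
    {"namespace": "demo", "pod": "demo-pod", "application": "demo-app"},
    ['level=info msg="hello loki"'],
)
# POST this body to http://loki-gateway.loki.svc/loki/api/v1/push with
# Content-Type: application/json and X-Scope-OrgID: fake (the tenant above);
# since auth_enabled is true, requests without a tenant header are rejected.
```

Pushing one such body manually (e.g. with curl through a port-forward to the gateway) is a quick way to confirm the tenant header and gateway routing before wiring up the collector.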
helm repo add grafana https://grafana.github.io/helm-charts "grafana" has been added to your repositories # helm pull grafana/loki --untar # ls charts Chart.yaml README.md requirements.lock requirements.yaml templates values.yaml--- # Source: loki/templates/backend/poddisruptionbudget-backend.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-backend namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend maxUnavailable: 1 --- # Source: loki/templates/chunks-cache/poddisruptionbudget-chunks-cache.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-memcached-chunks-cache namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-chunks-cache spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-chunks-cache maxUnavailable: 1 --- # Source: loki/templates/read/poddisruptionbudget-read.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-read namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: read spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read maxUnavailable: 1 --- # Source: loki/templates/results-cache/poddisruptionbudget-results-cache.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-memcached-results-cache namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-results-cache spec: selector: matchLabels: 
app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-results-cache maxUnavailable: 1 --- # Source: loki/templates/write/poddisruptionbudget-write.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-write namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write maxUnavailable: 1 --- # Source: loki/templates/loki-canary/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: loki-canary namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: canary automountServiceAccountToken: true --- # Source: loki/templates/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: loki namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" automountServiceAccountToken: true --- # Source: loki/templates/config.yaml apiVersion: v1 kind: ConfigMap metadata: name: loki namespace: loki data: config.yaml: | auth_enabled: true bloom_build: builder: planner_address: loki-backend-headless.loki.svc.cluster.local:9095 enabled: false bloom_gateway: client: addresses: dnssrvnoa+_grpc._tcp.loki-backend-headless.loki.svc.cluster.local enabled: false chunk_store_config: chunk_cache_config: background: writeback_buffer: 500000 writeback_goroutines: 1 writeback_size_limit: 500MB memcached: batch_size: 4 parallelism: 5 memcached_client: addresses: dnssrvnoa+_memcached-client._tcp.loki-chunks-cache.loki.svc consistent_hash: true max_idle_conns: 72 timeout: 2000ms common: compactor_address: 'http://loki-backend:3100' path_prefix: /var/loki 
replication_factor: 3 frontend: scheduler_address: "" tail_proxy_url: "" frontend_worker: scheduler_address: "" index_gateway: mode: simple limits_config: max_cache_freshness_per_query: 10m query_timeout: 300s reject_old_samples: true reject_old_samples_max_age: 168h split_queries_by_interval: 15m volume_enabled: true memberlist: join_members: - loki-memberlist pattern_ingester: enabled: false query_range: align_queries_with_step: true cache_results: true results_cache: cache: background: writeback_buffer: 500000 writeback_goroutines: 1 writeback_size_limit: 500MB memcached_client: addresses: dnssrvnoa+_memcached-client._tcp.loki-results-cache.loki.svc consistent_hash: true timeout: 500ms update_interval: 1m ruler: storage: s3: access_key_id: admin bucketnames: null endpoint: minio-demo.minio.svc:9000 insecure: true s3: s3://admin:8fGYikcyi4@minio-demo.minio.svc:9000/loki s3forcepathstyle: true secret_access_key: 8fGYikcyi4 type: s3 wal: dir: /var/loki/ruler-wal runtime_config: file: /etc/loki/runtime-config/runtime-config.yaml schema_config: configs: - from: "2024-04-01" index: period: 24h prefix: index_ object_store: s3 schema: v13 store: tsdb server: grpc_listen_port: 9095 http_listen_port: 3100 http_server_read_timeout: 600s http_server_write_timeout: 600s storage_config: aws: access_key_id: admin secret_access_key: 8fGYikcyi4 region: "" endpoint: minio-demo.minio.svc:9000 insecure: true s3forcepathstyle: true bucketnames: loki bloom_shipper: working_directory: /var/loki/data/bloomshipper boltdb_shipper: index_gateway_client: server_address: dns+loki-backend-headless.loki.svc.cluster.local:9095 hedging: at: 250ms max_per_second: 20 up_to: 3 tsdb_shipper: index_gateway_client: server_address: dns+loki-backend-headless.loki.svc.cluster.local:9095 tracing: enabled: false --- # Source: loki/templates/gateway/configmap-gateway.yaml apiVersion: v1 kind: ConfigMap metadata: name: loki-gateway namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: 
loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: gateway data: nginx.conf: | worker_processes 5; ## loki: 1 error_log /dev/stderr; pid /tmp/nginx.pid; worker_rlimit_nofile 8192; events { worker_connections 4096; ## loki: 1024 } http { client_body_temp_path /tmp/client_temp; proxy_temp_path /tmp/proxy_temp_path; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; client_max_body_size 4M; proxy_read_timeout 600; ## 10 minutes proxy_send_timeout 600; proxy_connect_timeout 600; proxy_http_version 1.1; #loki_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $status ' '"$request" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /dev/stderr main; sendfile on; tcp_nopush on; resolver kube-dns.kube-system.svc.cluster.local.; server { listen 8080; listen [::]:8080; location = / { return 200 'OK'; auth_basic off; } ######################################################## # Configure backend targets location ^~ /ui { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # Distributor location = /api/prom/push { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/push { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /distributor/ring { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /otlp/v1/logs { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # Ingester location = /flush { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location ^~ /ingester/ { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /ingester { internal; # to suppress 301 } # Ring location = /ring { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # MemberListKV location = /memberlist { proxy_pass 
http://loki-write.loki.svc.cluster.local:3100$request_uri; } # Ruler location = /ruler/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /api/prom/rules { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location ^~ /api/prom/rules/ { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/rules { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location ^~ /loki/api/v1/rules/ { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /prometheus/api/v1/alerts { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /prometheus/api/v1/rules { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # Compactor location = /compactor/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/delete { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/cache/generation_numbers { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # IndexGateway location = /indexgateway/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # QueryScheduler location = /scheduler/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # Config location = /config { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # QueryFrontend, Querier location = /api/prom/tail { proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location = /loki/api/v1/tail { proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location ^~ /api/prom/ { proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; } location = 
/api/prom { internal; # to suppress 301 } # if the X-Query-Tags header is empty, set a noop= without a value as empty values are not logged set $query_tags $http_x_query_tags; if ($query_tags !~* '') { set $query_tags "noop="; } location ^~ /loki/api/v1/ { # pass custom headers set by Grafana as X-Query-Tags which are logged as key/value pairs in metrics.go log messages proxy_set_header X-Query-Tags "${query_tags},user=${http_x_grafana_user},dashboard_id=${http_x_dashboard_uid},dashboard_title=${http_x_dashboard_title},panel_id=${http_x_panel_id},panel_title=${http_x_panel_title},source_rule_uid=${http_x_rule_uid},rule_name=${http_x_rule_name},rule_folder=${http_x_rule_folder},rule_version=${http_x_rule_version},rule_source=${http_x_rule_source},rule_type=${http_x_rule_type}"; proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1 { internal; # to suppress 301 } } } --- # Source: loki/templates/runtime-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: loki-runtime namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" data: runtime-config.yaml: | {} --- # Source: loki/templates/backend/clusterrole.yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" name: loki-clusterrole rules: - apiGroups: [""] # "" indicates the core API group resources: ["configmaps", "secrets"] verbs: ["get", "watch", "list"] --- # Source: loki/templates/backend/clusterrolebinding.yaml kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: loki-clusterrolebinding labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" subjects: - kind: ServiceAccount name: loki namespace: loki roleRef: kind: 
ClusterRole name: loki-clusterrole apiGroup: rbac.authorization.k8s.io --- # Source: loki/templates/backend/query-scheduler-discovery.yaml apiVersion: v1 kind: Service metadata: name: loki-query-scheduler-discovery namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None publishNotReadyAddresses: true ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend --- # Source: loki/templates/backend/service-backend-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-backend-headless namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend variant: headless prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP appProtocol: tcp selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend --- # Source: loki/templates/backend/service-backend.yaml apiVersion: v1 kind: Service metadata: name: loki-backend namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend --- # Source: loki/templates/chunks-cache/service-chunks-cache-headless.yaml apiVersion: v1 
kind: Service metadata: name: loki-chunks-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-chunks-cache" annotations: {} namespace: "loki" spec: type: ClusterIP clusterIP: None ports: - name: memcached-client port: 11211 targetPort: 11211 - name: http-metrics port: 9150 targetPort: 9150 selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-chunks-cache" --- # Source: loki/templates/gateway/service-gateway.yaml apiVersion: v1 kind: Service metadata: name: loki-gateway namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: gateway prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP ports: - name: http-metrics port: 80 targetPort: http-metrics protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: gateway --- # Source: loki/templates/loki-canary/service.yaml apiVersion: v1 kind: Service metadata: name: loki-canary namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: canary annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3500 targetPort: http-metrics protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: canary --- # Source: loki/templates/read/service-read-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-read-headless namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read variant: headless prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None ports: - name: http-metrics 
port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP appProtocol: tcp selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read --- # Source: loki/templates/read/service-read.yaml apiVersion: v1 kind: Service metadata: name: loki-read namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: read annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read --- # Source: loki/templates/results-cache/service-results-cache-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-results-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-results-cache" annotations: {} namespace: "loki" spec: type: ClusterIP clusterIP: None ports: - name: memcached-client port: 11211 targetPort: 11211 - name: http-metrics port: 9150 targetPort: 9150 selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-results-cache" --- # Source: loki/templates/service-memberlist.yaml apiVersion: v1 kind: Service metadata: name: loki-memberlist namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" annotations: spec: type: ClusterIP clusterIP: None ports: - name: tcp port: 7946 targetPort: http-memberlist protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/part-of: memberlist --- # Source: loki/templates/write/service-write-headless.yaml 
apiVersion: v1 kind: Service metadata: name: loki-write-headless namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write variant: headless prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP appProtocol: tcp selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write --- # Source: loki/templates/write/service-write.yaml apiVersion: v1 kind: Service metadata: name: loki-write namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write --- # Source: loki/templates/loki-canary/daemonset.yaml apiVersion: apps/v1 kind: DaemonSet metadata: name: loki-canary namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: canary spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: canary updateStrategy: rollingUpdate: maxUnavailable: 1 type: RollingUpdate template: metadata: labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: canary spec: serviceAccountName: loki-canary securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 containers: - name: loki-canary image: registry.cn-guangzhou.aliyuncs.com/xingcangku/grafana-loki-canary-3.5.0:3.5.0 
imagePullPolicy: IfNotPresent args: - -addr=loki-gateway.loki.svc.cluster.local.:80 - -labelname=pod - -labelvalue=$(POD_NAME) - -user=self-monitoring - -tenant-id=self-monitoring - -pass= - -push=true securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: ports: - name: http-metrics containerPort: 3500 protocol: TCP env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name readinessProbe: httpGet: path: /metrics port: http-metrics initialDelaySeconds: 15 timeoutSeconds: 1 volumes: --- # Source: loki/templates/gateway/deployment-gateway-nginx.yaml apiVersion: apps/v1 kind: Deployment metadata: name: loki-gateway namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: gateway spec: replicas: 1 strategy: type: RollingUpdate revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: gateway template: metadata: annotations: checksum/config: 440a9cd2e87de46e0aad42617818d58f1e2daacb1ae594bad1663931faa44ebc labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: gateway spec: serviceAccountName: loki enableServiceLinks: true securityContext: fsGroup: 101 runAsGroup: 101 runAsNonRoot: true runAsUser: 101 terminationGracePeriodSeconds: 30 containers: - name: nginx image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-nginxinc-nginx-unprivileged-1.28-alpine:1.28-alpine imagePullPolicy: IfNotPresent ports: - name: http-metrics containerPort: 8080 protocol: TCP readinessProbe: httpGet: path: / port: http-metrics initialDelaySeconds: 15 timeoutSeconds: 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: - name: config mountPath: /etc/nginx - name: tmp mountPath: /tmp - name: 
docker-entrypoint-d-override mountPath: /docker-entrypoint.d resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: gateway topologyKey: kubernetes.io/hostname volumes: - name: config configMap: name: loki-gateway - name: tmp emptyDir: {} - name: docker-entrypoint-d-override emptyDir: {} --- # Source: loki/templates/read/deployment-read.yaml apiVersion: apps/v1 kind: Deployment metadata: name: loki-read namespace: loki labels: app.kubernetes.io/part-of: memberlist helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: read spec: replicas: 3 strategy: rollingUpdate: maxSurge: 0 maxUnavailable: 1 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read template: metadata: annotations: checksum/config: 1616415aaf41d5dec62fea8a013eab1aa2a559579f5f72299f7041e5cd6ea4c7 labels: app.kubernetes.io/part-of: memberlist app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read spec: serviceAccountName: loki automountServiceAccountToken: true securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 terminationGracePeriodSeconds: 30 containers: - name: loki image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -config.file=/etc/loki/config/config.yaml - -target=read - -legacy-read-mode=false - -common.compactor-grpc-address=loki-backend.loki.svc.cluster.local:9095 ports: - name: http-metrics containerPort: 3100 protocol: TCP - name: grpc containerPort: 9095 protocol: TCP - name: http-memberlist containerPort: 7946 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true readinessProbe: httpGet: path: /ready 
port: http-metrics initialDelaySeconds: 30 timeoutSeconds: 1 volumeMounts: - name: config mountPath: /etc/loki/config - name: runtime-config mountPath: /etc/loki/runtime-config - name: tmp mountPath: /tmp - name: data mountPath: /var/loki resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: read topologyKey: kubernetes.io/hostname volumes: - name: tmp emptyDir: {} - name: data emptyDir: {} - name: config configMap: name: loki items: - key: "config.yaml" path: "config.yaml" - name: runtime-config configMap: name: loki-runtime --- # Source: loki/templates/backend/statefulset-backend.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-backend namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend app.kubernetes.io/part-of: memberlist spec: replicas: 3 podManagementPolicy: Parallel updateStrategy: rollingUpdate: partition: 0 serviceName: loki-backend-headless revisionHistoryLimit: 10 persistentVolumeClaimRetentionPolicy: whenDeleted: Delete whenScaled: Delete selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend template: metadata: annotations: checksum/config: 1616415aaf41d5dec62fea8a013eab1aa2a559579f5f72299f7041e5cd6ea4c7 labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend app.kubernetes.io/part-of: memberlist spec: serviceAccountName: loki automountServiceAccountToken: true securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 terminationGracePeriodSeconds: 300 containers: - name: loki-sc-rules image: "registry.cn-guangzhou.aliyuncs.com/xingcangku/kiwigrid-k8s-sidecar-1.30.3:1.30.3" imagePullPolicy: IfNotPresent env: - 
name: METHOD value: WATCH - name: LABEL value: "loki_rule" - name: FOLDER value: "/rules" - name: RESOURCE value: "both" - name: WATCH_SERVER_TIMEOUT value: "60" - name: WATCH_CLIENT_TIMEOUT value: "60" - name: LOG_LEVEL value: "INFO" securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: - name: sc-rules-volume mountPath: "/rules" - name: loki image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -config.file=/etc/loki/config/config.yaml - -target=backend - -legacy-read-mode=false ports: - name: http-metrics containerPort: 3100 protocol: TCP - name: grpc containerPort: 9095 protocol: TCP - name: http-memberlist containerPort: 7946 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true readinessProbe: httpGet: path: /ready port: http-metrics initialDelaySeconds: 30 timeoutSeconds: 1 volumeMounts: - name: config mountPath: /etc/loki/config - name: runtime-config mountPath: /etc/loki/runtime-config - name: tmp mountPath: /tmp - name: data mountPath: /var/loki - name: sc-rules-volume mountPath: "/rules" resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: backend topologyKey: kubernetes.io/hostname volumes: - name: tmp emptyDir: {} - name: config configMap: name: loki items: - key: "config.yaml" path: "config.yaml" - name: runtime-config configMap: name: loki-runtime - name: sc-rules-volume emptyDir: {} volumeClaimTemplates: - metadata: name: data spec: storageClassName: "ceph-cephfs" # 显式指定存储类 accessModes: - ReadWriteOnce resources: requests: storage: 10Gi --- # Source: loki/templates/chunks-cache/statefulset-chunks-cache.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-chunks-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki 
app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-chunks-cache" name: "memcached-chunks-cache" annotations: {} namespace: "loki" spec: podManagementPolicy: Parallel replicas: 1 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-chunks-cache" name: "memcached-chunks-cache" updateStrategy: type: RollingUpdate serviceName: loki-chunks-cache template: metadata: labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-chunks-cache" name: "memcached-chunks-cache" annotations: spec: serviceAccountName: loki securityContext: fsGroup: 11211 runAsGroup: 11211 runAsNonRoot: true runAsUser: 11211 initContainers: [] nodeSelector: {} affinity: {} topologySpreadConstraints: [] tolerations: [] terminationGracePeriodSeconds: 60 containers: - name: memcached image: registry.cn-guangzhou.aliyuncs.com/xingcangku/memcached-1.6.38-alpine:1.6.38-alpine imagePullPolicy: IfNotPresent resources: limits: memory: 4096Mi requests: cpu: 500m memory: 2048Mi ports: - containerPort: 11211 name: client args: - -m 4096 - --extended=modern,track_sizes - -I 5m - -c 16384 - -v - -u 11211 env: envFrom: securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true - name: exporter image: registry.cn-guangzhou.aliyuncs.com/xingcangku/prom-memcached-exporter-v0.15.2:v0.15.2 imagePullPolicy: IfNotPresent ports: - containerPort: 9150 name: http-metrics args: - "--memcached.address=localhost:11211" - "--web.listen-address=0.0.0.0:9150" resources: limits: {} requests: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true --- # Source: loki/templates/results-cache/statefulset-results-cache.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-results-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki 
app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-results-cache" name: "memcached-results-cache" annotations: {} namespace: "loki" spec: podManagementPolicy: Parallel replicas: 1 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-results-cache" name: "memcached-results-cache" updateStrategy: type: RollingUpdate serviceName: loki-results-cache template: metadata: labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-results-cache" name: "memcached-results-cache" annotations: spec: serviceAccountName: loki securityContext: fsGroup: 11211 runAsGroup: 11211 runAsNonRoot: true runAsUser: 11211 initContainers: [] nodeSelector: {} affinity: {} topologySpreadConstraints: [] tolerations: [] terminationGracePeriodSeconds: 60 containers: - name: memcached image: registry.cn-guangzhou.aliyuncs.com/xingcangku/memcached-1.6.38-alpine:1.6.38-alpine imagePullPolicy: IfNotPresent resources: limits: memory: 1229Mi requests: cpu: 500m memory: 1229Mi ports: - containerPort: 11211 name: client args: - -m 1024 - --extended=modern,track_sizes - -I 5m - -c 16384 - -v - -u 11211 env: envFrom: securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true - name: exporter image: registry.cn-guangzhou.aliyuncs.com/xingcangku/prom-memcached-exporter-v0.15.2:v0.15.2 imagePullPolicy: IfNotPresent ports: - containerPort: 9150 name: http-metrics args: - "--memcached.address=localhost:11211" - "--web.listen-address=0.0.0.0:9150" resources: limits: {} requests: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true --- # Source: loki/templates/write/statefulset-write.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-write namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki 
app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write app.kubernetes.io/part-of: memberlist spec: replicas: 3 podManagementPolicy: Parallel updateStrategy: rollingUpdate: partition: 0 serviceName: loki-write-headless revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write template: metadata: annotations: checksum/config: 1616415aaf41d5dec62fea8a013eab1aa2a559579f5f72299f7041e5cd6ea4c7 labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write app.kubernetes.io/part-of: memberlist spec: serviceAccountName: loki automountServiceAccountToken: true enableServiceLinks: true securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 terminationGracePeriodSeconds: 300 containers: - name: loki image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -config.file=/etc/loki/config/config.yaml - -target=write ports: - name: http-metrics containerPort: 3100 protocol: TCP - name: grpc containerPort: 9095 protocol: TCP - name: http-memberlist containerPort: 7946 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true readinessProbe: httpGet: path: /ready port: http-metrics initialDelaySeconds: 30 timeoutSeconds: 1 volumeMounts: - name: config mountPath: /etc/loki/config - name: runtime-config mountPath: /etc/loki/runtime-config - name: data mountPath: /var/loki resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: write topologyKey: kubernetes.io/hostname volumes: - name: config configMap: name: loki items: - key: "config.yaml" path: "config.yaml" - name: runtime-config configMap: name: 
loki-runtime volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: "10Gi" --- # Source: loki/templates/tests/test-canary.yaml apiVersion: v1 kind: Pod metadata: name: "loki-helm-test" namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: helm-test annotations: "helm.sh/hook": test spec: containers: - name: loki-helm-test image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-helm-test-ewelch-distributed-helm-chart-1:ewelch-distributed-helm-chart-17db5ee env: - name: CANARY_SERVICE_ADDRESS value: "http://loki-canary:3500/metrics" - name: CANARY_PROMETHEUS_ADDRESS value: "" - name: CANARY_TEST_TIMEOUT value: "1m" args: - -test.v restartPolicy: Never root@k8s01:~/helm/loki/loki# kubectl get pod -n loki NAME READY STATUS RESTARTS AGE loki-backend-0 2/2 Running 2 (6h13m ago) 30h loki-backend-1 2/2 Running 2 (6h13m ago) 30h loki-backend-2 2/2 Running 2 (6h13m ago) 30h loki-canary-62z48 1/1 Running 1 (6h13m ago) 30h loki-canary-lg62j 1/1 Running 1 (6h13m ago) 30h loki-canary-nrph4 1/1 Running 1 (6h13m ago) 30h loki-chunks-cache-0 2/2 Running 0 6h12m loki-gateway-75d8cf9754-nwpdw 1/1 Running 13 (6h12m ago) 30h loki-read-dc7bdc98-8kzwk 1/1 Running 1 (6h13m ago) 30h loki-read-dc7bdc98-lmzcd 1/1 Running 1 (6h13m ago) 30h loki-read-dc7bdc98-nrz5h 1/1 Running 1 (6h13m ago) 30h loki-results-cache-0 2/2 Running 2 (6h13m ago) 30h loki-write-0 1/1 Running 1 (6h13m ago) 30h loki-write-1 1/1 Running 1 (6h13m ago) 30h loki-write-2 1/1 Running 1 (6h13m ago) 30h root@k8s01:~/helm/loki/loki# kubectl get svc -n loki NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE loki-backend ClusterIP 10.101.131.151 <none> 3100/TCP,9095/TCP 30h loki-backend-headless ClusterIP None <none> 3100/TCP,9095/TCP 30h loki-canary ClusterIP 10.109.131.175 <none> 3500/TCP 30h 
loki-chunks-cache                ClusterIP   None             <none>        11211/TCP,9150/TCP   30h loki-gateway                     ClusterIP   10.98.126.160    <none>        80/TCP               30h loki-memberlist                  ClusterIP   None             <none>        7946/TCP             30h loki-query-scheduler-discovery   ClusterIP   None             <none>        3100/TCP,9095/TCP    30h loki-read                        ClusterIP   10.103.248.164   <none>        3100/TCP,9095/TCP    30h loki-read-headless               ClusterIP   None             <none>        3100/TCP,9095/TCP    30h loki-results-cache               ClusterIP   None             <none>        11211/TCP,9150/TCP   30h loki-write                       ClusterIP   10.108.223.18    <none>        3100/TCP,9095/TCP    30h loki-write-headless              ClusterIP   None             <none>        3100/TCP,9095/TCP    30h
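With the gateway Service up, a quick way to verify the write path end-to-end is to push a test log line through `loki-gateway` and then look for it in Grafana. The sketch below is a minimal smoke test, not part of the deployment above: the in-cluster gateway address and the `self-monitoring` tenant are assumptions taken from the Service listing and the canary args, so adjust both for your environment.

```python
import json
import time
import urllib.request

# Assumed values, taken from the manifests above -- adjust for your cluster.
LOKI_GATEWAY = "http://loki-gateway.loki.svc.cluster.local"  # gateway Service, port 80
TENANT = "self-monitoring"                                   # tenant id used by the canary


def build_loki_push_payload(labels, lines):
    """Build a Loki push-API body: one stream, nanosecond string timestamps."""
    now_ns = str(time.time_ns())
    return json.dumps({
        "streams": [{
            "stream": labels,  # label set identifying the stream
            "values": [[now_ns, line] for line in lines],
        }]
    }).encode()


def push(payload, gateway=LOKI_GATEWAY, tenant=TENANT):
    """POST the payload to /loki/api/v1/push; Loki answers 204 on success."""
    req = urllib.request.Request(
        f"{gateway}/loki/api/v1/push",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-Scope-OrgID": tenant,  # multi-tenant header routed by the gateway
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


if __name__ == "__main__":
    body = build_loki_push_payload({"app": "smoke-test"}, ["hello loki"])
    print(json.loads(body)["streams"][0]["stream"])
    # push(body)  # uncomment when running inside the cluster
```

After pushing, the line should be queryable in Grafana with `{app="smoke-test"}` against the Loki datasource.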
2025-06-20
2025-06-16
Trace Data Collection and Export
链路追踪数据收集与导出一、链路数据收集方案在 Kubernetes 中部署应用进行链路追踪数据收集,常见有两种方案: 1、基于 Instrumentation Operator 的自动注入(自动埋点) 通过部署 OpenTelemetry Operator,并创建 Instrumentation 自定义资源(CRD),实现对应用容器的自动注入 SDK 或 Sidecar,从而无需修改应用代码即可采集追踪数据。适合需要快速接入、统一管理、降低改造成本的场景。 2、手动在应用中集成 OpenTelemetry SDK(手动埋点) 在应用程序代码中直接引入 OpenTelemetry SDK,手动埋点关键业务逻辑,控制 trace span 的粒度和内容,并将数据通过 OTLP(OpenTelemetry Protocol)协议导出到后端(如 OpenTelemetry Collector、Jaeger、Tempo 等)。适合需要精准控制追踪数据质量或已有自定义采集需求的场景。 接下来以Instrumentation Operator自动注入方式演示如何收集并处理数据。二、部署测试应用接下来我们部署一个HotROD 演示程序,它内置了OpenTelemetry SDK,我们只需要配置 opentelemetry 接收地址既可,具体可参考文档: https://github.com/jaegertracing/jaeger/tree/main/examples/hotrodapiVersion: apps/v1 kind: Deployment metadata: name: go-demo spec: selector: matchLabels: app: go-demo template: metadata: labels: app: go-demo spec: containers: - name: go-demo image: jaegertracing/example-hotrod:latest imagePullPolicy: IfNotPresent resources: limits: memory: "500Mi" cpu: "200m" ports: - containerPort: 8080 env: - name: OTEL_EXPORTER_OTLP_ENDPOINT # opentelemetry服务地址 value: http://center-collector.opentelemetry.svc:4318 --- apiVersion: v1 kind: Service metadata: name: go-demo spec: selector: app: go-demo ports: - port: 8080 targetPort: 8080 --- apiVersion: traefik.io/v1alpha1 kind: IngressRoute metadata: name: go-demo spec: entryPoints: - web routes: - match: Host(`go-demo.cuiliangblog.cn`) kind: Rule services: - name: go-demo port: 8080接下来浏览器添加 hosts 解析后访问测试三、Jaeger方案 3.1Jaeger介绍 Jaeger 是Uber公司研发,后来贡献给CNCF的一个分布式链路追踪软件,主要用于微服务链路追踪。它优点是性能高(能处理大量追踪数据)、部署灵活(支持单节点和分布式部署)、集成方便(兼容 OpenTelemetry),并且可视化能力强,可以快速定位性能瓶颈和故障。基于上述示意图,我们简要解析下 Jaeger 各个组件以及组件间的关系: Client libraries(客户端库) 功能:将追踪信息(trace/span)插入到应用程序中。 说明: 支持多种语言,如 Go、Java、Python、Node.js 等。 通常使用 OpenTelemetry SDK 或 Jaeger Tracer。 将生成的追踪数据发送到 Agent 或 Collector。 Agent(代理) 功能:接收客户端发来的追踪数据,批量转发给 Collector。 说明: 接收 UDP 数据包(更轻量) 向 Collector 使用 gRPC 发送数据 Collector(收集器) 功能: 接收 Agent 或直接从 SDK 发送的追踪数据。 处理(转码、校验等)后写入存储后端。 可横向扩展,提高吞吐能力。 Ingester(摄取器)(可选) 功能:在使用 Kafka 
作为中间缓冲队列时,Ingester 从 Kafka 消费数据并写入存储。 用途:解耦收集与存储、提升稳定性。 Storage Backend(存储后端) 功能:保存追踪数据,供查询和分析使用。 支持: Elasticsearch Cassandra Kafka(用于异步摄取) Badger(仅用于开发) OpenSearch Query(查询服务) 功能:从存储中查询追踪数据,提供给前端 UI 使用。 提供 API 接口:供 UI 或其他系统(如 Grafana Tempo)调用。 UI(前端界面) 功能: 可视化展示 Trace、Span、服务依赖图。 支持搜索条件(服务名、时间范围、trace ID 等)。 常用用途: 查看慢请求 分析请求调用链 排查错误或瓶颈 在本示例中,指标数据采集与收集由 OpenTelemetry 实现,仅需要使用 jaeger-collector 组件接收输入,存入 elasticsearch,使用 jaeger-query 组件查询展示数据既可。3.2部署 Jaeger(all in one)apiVersion: apps/v1 kind: Deployment metadata: name: jaeger namespace: opentelemetry labels: app: jaeger spec: replicas: 1 selector: matchLabels: app: jaeger template: metadata: labels: app: jaeger spec: containers: - name: jaeger image: jaegertracing/all-in-one:latest args: - "--collector.otlp.enabled=true" # 启用 OTLP gRPC - "--collector.otlp.grpc.host-port=0.0.0.0:4317" resources: limits: memory: "2Gi" cpu: "1" ports: - containerPort: 6831 protocol: UDP - containerPort: 16686 protocol: TCP - containerPort: 4317 protocol: TCP --- apiVersion: v1 kind: Service metadata: name: jaeger namespace: opentelemetry labels: app: jaeger spec: selector: app: jaeger ports: - name: jaeger-udp port: 6831 targetPort: 6831 protocol: UDP - name: jaeger-ui port: 16686 targetPort: 16686 protocol: TCP - name: otlp-grpc port: 4317 targetPort: 4317 protocol: TCP --- apiVersion: traefik.io/v1alpha1 kind: IngressRoute metadata: name: jaeger namespace: opentelemetry spec: entryPoints: - web routes: - match: Host(`jaeger.cuiliangblog.cn`) kind: Rule services: - name: jaeger port: 166863.3部署 Jaeger(分布式)all in one 数据存放在内存中不具备高可用性,生产环境中建议使用Elasticsearch 或 OpenSearch 作为 Cassandra 的存储后端,以 ElasticSearch 为例,部署操作具体可参考文档:https://www.cuiliangblog.cn/detail/section/162609409导出 ca 证书# kubectl -n elasticsearch get secret elasticsearch-es-http-certs-public -o go-template='{{index .data "ca.crt" | base64decode }}' > ca.crt # kubectl create secret -n opentelemetry generic es-tls-secret --from-file=ca.crt=./ca.crt secret/es-tls-secret created获取 
chart 包# helm repo add jaegertracing https://jaegertracing.github.io/helm-charts "jaegertracing" has been added to your repositories # helm search repo jaegertracing NAME CHART VERSION APP VERSION DESCRIPTION jaegertracing/jaeger 3.4.1 1.53.0 A Jaeger Helm chart for Kubernetes jaegertracing/jaeger-operator 2.57.0 1.61.0 jaeger-operator Helm chart for Kubernetes # helm pull jaegertracing/jaeger --untar # cd jaeger # ls Chart.lock charts Chart.yaml README.md templates values.yaml修改安装参数apiVersion: v1 kind: ServiceAccount metadata: name: jaeger-collector labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: collector automountServiceAccountToken: false --- # Source: jaeger/templates/query-sa.yaml apiVersion: v1 kind: ServiceAccount metadata: name: jaeger-query labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: query automountServiceAccountToken: false --- # Source: jaeger/templates/spark-sa.yaml apiVersion: v1 kind: ServiceAccount metadata: name: jaeger-spark labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: spark automountServiceAccountToken: false --- # Source: jaeger/templates/collector-svc.yaml apiVersion: v1 kind: Service metadata: name: jaeger-collector labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: collector spec: ports: - name: grpc port: 14250 protocol: TCP targetPort: grpc appProtocol: grpc - name: http port: 14268 protocol: TCP targetPort: http appProtocol: http - 
name: otlp-grpc port: 4317 protocol: TCP targetPort: otlp-grpc - name: otlp-http port: 4318 protocol: TCP targetPort: otlp-http - name: admin port: 14269 targetPort: admin selector: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: collector type: ClusterIP --- # Source: jaeger/templates/query-svc.yaml apiVersion: v1 kind: Service metadata: name: jaeger-query labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: query spec: ports: - name: query port: 80 protocol: TCP targetPort: query - name: grpc port: 16685 protocol: TCP targetPort: grpc - name: admin port: 16687 protocol: TCP targetPort: admin selector: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: query type: ClusterIP --- # Source: jaeger/templates/collector-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: jaeger-collector labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: collector spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: collector template: metadata: annotations: checksum/config-env: 75a11da44c802486bc6f65640aa48a730f0f684c5c07a42ba3cd1735eb3fb070 labels: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: collector spec: securityContext: {} serviceAccountName: jaeger-collector containers: - name: jaeger-collector securityContext: {} image: registry.cn-guangzhou.aliyuncs.com/xingcangku/jaeger-collector:1.53.0 imagePullPolicy: IfNotPresent args: env: - name: COLLECTOR_OTLP_ENABLED value: "true" - name: SPAN_STORAGE_TYPE value: elasticsearch - name: ES_SERVER_URLS value: 
https://elasticsearch-client.elasticsearch.svc:9200 - name: ES_TLS_SKIP_HOST_VERIFY # 添加临时跳过主机名验证 value: "true" - name: ES_USERNAME value: elastic - name: ES_PASSWORD valueFrom: secretKeyRef: name: jaeger-elasticsearch key: password - name: ES_TLS_ENABLED value: "true" - name: ES_TLS_CA value: /es-tls/ca.crt ports: - containerPort: 14250 name: grpc protocol: TCP - containerPort: 14268 name: http protocol: TCP - containerPort: 14269 name: admin protocol: TCP - containerPort: 4317 name: otlp-grpc protocol: TCP - containerPort: 4318 name: otlp-http protocol: TCP readinessProbe: httpGet: path: / port: admin livenessProbe: httpGet: path: / port: admin resources: {} volumeMounts: - name: es-tls-secret mountPath: /es-tls/ca.crt subPath: ca-cert.pem readOnly: true dnsPolicy: ClusterFirst restartPolicy: Always volumes: - name: es-tls-secret secret: secretName: es-tls-secret --- # Source: jaeger/templates/query-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: jaeger-query labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: query spec: replicas: 1 selector: matchLabels: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: query template: metadata: labels: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: query spec: securityContext: {} serviceAccountName: jaeger-query containers: - name: jaeger-query securityContext: {} image: registry.cn-guangzhou.aliyuncs.com/xingcangku/jaegertracing-jaeger-query:1.53.0 imagePullPolicy: IfNotPresent args: env: - name: SPAN_STORAGE_TYPE value: elasticsearch - name: ES_SERVER_URLS value: https://elasticsearch-client.elasticsearch.svc:9200 - name: ES_TLS_SKIP_HOST_VERIFY # 添加临时跳过主机名验证 value: "true" - name: ES_USERNAME value: elastic - name: ES_PASSWORD valueFrom: secretKeyRef: name: 
jaeger-elasticsearch key: password - name: ES_TLS_ENABLED value: "true" - name: ES_TLS_CA value: /es-tls/ca.crt - name: QUERY_BASE_PATH value: "/" - name: JAEGER_AGENT_PORT value: "6831" ports: - name: query containerPort: 16686 protocol: TCP - name: grpc containerPort: 16685 protocol: TCP - name: admin containerPort: 16687 protocol: TCP resources: {} volumeMounts: - name: es-tls-secret mountPath: /es-tls/ca.crt subPath: ca-cert.pem readOnly: true livenessProbe: httpGet: path: / port: admin readinessProbe: httpGet: path: / port: admin - name: jaeger-agent-sidecar securityContext: {} image: registry.cn-guangzhou.aliyuncs.com/xingcangku/jaegertracing-jaeger-agent:1.53.0 imagePullPolicy: IfNotPresent args: env: - name: REPORTER_GRPC_HOST_PORT value: jaeger-collector:14250 ports: - name: admin containerPort: 14271 protocol: TCP resources: null volumeMounts: livenessProbe: httpGet: path: / port: admin readinessProbe: httpGet: path: / port: admin dnsPolicy: ClusterFirst restartPolicy: Always volumes: - name: es-tls-secret secret: secretName: es-tls-secret --- # Source: jaeger/templates/spark-cronjob.yaml apiVersion: batch/v1 kind: CronJob metadata: name: jaeger-spark labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: spark spec: schedule: "49 23 * * *" successfulJobsHistoryLimit: 5 failedJobsHistoryLimit: 5 concurrencyPolicy: Forbid jobTemplate: spec: template: metadata: labels: app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/component: spark spec: serviceAccountName: jaeger-spark securityContext: {} containers: - name: jaeger-spark image: registry.cn-guangzhou.aliyuncs.com/xingcangku/jaegertracing-spark-dependencies:latest imagePullPolicy: IfNotPresent args: env: - name: STORAGE value: elasticsearch - name: ES_SERVER_URLS value: https://elasticsearch-client.elasticsearch.svc:9200 - 
name: ES_USERNAME value: elastic - name: ES_PASSWORD valueFrom: secretKeyRef: name: jaeger-elasticsearch key: password - name: ES_TLS_ENABLED value: "true" - name: ES_TLS_CA value: /es-tls/ca.crt - name: ES_NODES value: https://elasticsearch-client.elasticsearch.svc:9200 - name: ES_NODES_WAN_ONLY value: "false" resources: {} volumeMounts: securityContext: {} restartPolicy: OnFailure volumes: --- # Source: jaeger/templates/elasticsearch-secret.yaml apiVersion: v1 kind: Secret metadata: name: jaeger-elasticsearch labels: helm.sh/chart: jaeger-3.4.1 app.kubernetes.io/name: jaeger app.kubernetes.io/instance: jaeger app.kubernetes.io/version: "1.53.0" app.kubernetes.io/managed-by: Helm annotations: "helm.sh/hook": pre-install,pre-upgrade "helm.sh/hook-weight": "-1" "helm.sh/hook-delete-policy": before-hook-creation "helm.sh/resource-policy": keep type: Opaque data: password: "ZWdvbjY2Ng=="安装 jaegerroot@k8s01:~/helm/jaeger/jaeger# kubectl delete -n opentelemetry -f test.yaml serviceaccount "jaeger-collector" deleted serviceaccount "jaeger-query" deleted serviceaccount "jaeger-spark" deleted service "jaeger-collector" deleted service "jaeger-query" deleted deployment.apps "jaeger-collector" deleted deployment.apps "jaeger-query" deleted cronjob.batch "jaeger-spark" deleted secret "jaeger-elasticsearch" deleted root@k8s01:~/helm/jaeger/jaeger# vi test.yaml root@k8s01:~/helm/jaeger/jaeger# kubectl apply -n opentelemetry -f test.yaml serviceaccount/jaeger-collector created serviceaccount/jaeger-query created serviceaccount/jaeger-spark created service/jaeger-collector created service/jaeger-query created deployment.apps/jaeger-collector created deployment.apps/jaeger-query created cronjob.batch/jaeger-spark created secret/jaeger-elasticsearch created root@k8s01:~/helm/jaeger/jaeger# kubectl get pods -n opentelemetry -w NAME READY STATUS RESTARTS AGE center-collector-78f7bbdf45-j798s 1/1 Running 2 (6h2m ago) 30h jaeger-7989549bb9-hn8jh 1/1 Running 2 (6h2m ago) 25h 
jaeger-collector-7f8fb4c946-nkg4m 1/1 Running 0 3s jaeger-query-5cdb7b68bd-xpftn 2/2 Running 0 3s ^Croot@k8s01:~/helm/jaeger/jaeger# kubectl get svc -n opentelemetry | grep jaeger jaeger ClusterIP 10.100.251.219 <none> 6831/UDP,16686/TCP,4317/TCP 25h jaeger-collector ClusterIP 10.111.17.41 <none> 14250/TCP,14268/TCP,4317/TCP,4318/TCP,14269/TCP 51s jaeger-query ClusterIP 10.98.118.118 <none> 80/TCP,16685/TCP,16687/TCP 51s创建 ingress 资源root@k8s01:~/helm/jaeger/jaeger# cat jaeger.yaml apiVersion: traefik.io/v1alpha1 kind: IngressRoute metadata: name: jaeger namespace: opentelemetry spec: entryPoints: - web routes: - match: Host(`jaeger.axinga.cn`) kind: Rule services: - name: jaeger port: 16686接下来配置 hosts 解析后浏览器访问既可。配置 CollectorapiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector # 元数据定义部分 metadata: name: center # Collector 的名称为 center namespace: opentelemetry # 具体的配置内容 spec: replicas: 1 # 设置副本数量为1 config: # 定义 Collector 配置 receivers: # 接收器,用于接收遥测数据(如 trace、metrics、logs) otlp: # 配置 OTLP(OpenTelemetry Protocol)接收器 protocols: # 启用哪些协议来接收数据 grpc: endpoint: 0.0.0.0:4317 # 启用 gRPC 协议 http: endpoint: 0.0.0.0:4318 # 启用 HTTP 协议 processors: # 处理器,用于处理收集到的数据 batch: {} # 批处理器,用于将数据分批发送,提高效率 exporters: # 导出器,用于将处理后的数据发送到后端系统 # debug: {} # 使用 debug 导出器,将数据打印到终端(通常用于测试或调试) otlp: # 数据发送到jaeger的grpc端口 endpoint: "jaeger-collector:4317" tls: # 跳过证书验证 insecure: true service: # 服务配置部分 pipelines: # 定义处理管道 traces: # 定义 trace 类型的管道 receivers: [otlp] # 接收器为 OTLP processors: [batch] # 使用批处理器 exporters: [otlp] # 将数据发送到otlp接下来我们随机访问 demo 应用,并在 jaeger 查看链路追踪数据。Jaeger 系统找到了一些 trace 并显示了一些关于该 trace 的元数据,包括参与该 trace 的不同服务的名称以及每个服务发送到 Jaeger 的 span 记录数。jaeger 使用具体可参考文章https://medium.com/jaegertracing/take-jaeger-for-a-hotrod-ride-233cf43e46c2四、Tempo 方案4.1Tempo 介绍Grafana Tempo是一个开源、易于使用的大规模分布式跟踪后端。Tempo具有成本效益,仅需要对象存储即可运行,并且与Grafana,Prometheus和Loki深度集成,Tempo可以与任何开源跟踪协议一起使用,包括Jaeger、Zipkin和OpenTelemetry。它仅支持键/值查找,并且旨在与用于发现的日志和度量标准(示例性)协同工作Distributors(分发器) 功能:接收客户端发送的追踪数据并进行初步验证 说明: 对 Trace 
进行分片、标签处理。 将数据转发给合适的 Ingesters。 Ingesters(摄取器) 功能:处理和持久化 Trace 数据 说明: 接收来自 Distributor 的数据。 在内存中缓存直到追踪完成(完整的 Trace)。 再写入后端对象存储。 Storage(对象存储) 功能:持久化存储 Trace 数据 说明: 支持多种对象存储(S3、GCS、MinIO、Azure Blob 等)。 Tempo 存储的是压缩的完整 Trace 文件,使用 trace ID 进行索引。 Compactor(数据压缩) 功能:合并 trace 数据,压缩多个小 block 成一个大 block。 说明: 可以单独运行 compactor 容器或进程。 通常以 后台任务 的方式运行,不参与实时 ingest 或 query。 Tempo Query(查询前端) 功能:处理来自用户或 Grafana 的查询请求 说明: 接收查询请求。 提供缓存、合并和调度功能,优化查询性能。 将请求转发给 Querier。 Querier(查询器) 功能:从存储中检索 Trace 数据 说明: 根据 trace ID 从对象存储中检索完整 trace。 解压和返回结构化的 Span 数据。 返回结果供 Grafana 或其他前端展示。4.2部署 Tempo推荐用Helm 安装,官方提供了tempo-distributed Helm chart 和 tempo Helm chart 两种部署模式,一般来说本地测试使用 tempo Helm chart,而生产环境可以使用 Tempo 的微服务部署方式 tempo-distributed。接下来以整体模式为例,具体可参考文档https://github.com/grafana/helm-charts/tree/main/charts/tempo 创建 s3 的 bucket、ak、sk 资源,并配置权限。具体可参考上面minio4.2.1获取 chart 包# helm repo add grafana https://grafana.github.io/helm-charts # helm pull grafana/tempo --untar # cd tempo # ls Chart.yaml README.md README.md.gotmpl templates values.yaml4.2.2修改配置,prometheus 默认未启用远程写入,可参考文章开启远程写入https://www.cuiliangblog.cn/detail/section/15189202# vim values.yaml tempo: storage: trace: # 默认使用本地文件存储,改为使用s3对象存储 backend: s3 s3: bucket: tempo # store traces in this bucket endpoint: minio-service.minio.svc:9000 # api endpoint access_key: zbsIQQnsp871ZnZ2AuKr # optional. access key when using static credentials. secret_key: zxL5EeXwU781M8inSBPcgY49mEbBVoR1lvFCX4JU # optional. secret key when using static credentials. 
insecure: true # 跳过证书验证4.2.3创建 temporoot@k8s01:~/helm/opentelemetry/tempo# cat test.yaml --- # Source: tempo/templates/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: tempo namespace: opentelemetry labels: helm.sh/chart: tempo-1.23.1 app.kubernetes.io/name: tempo app.kubernetes.io/instance: tempo app.kubernetes.io/version: "2.8.0" app.kubernetes.io/managed-by: Helm automountServiceAccountToken: true --- # Source: tempo/templates/configmap-tempo.yaml apiVersion: v1 kind: ConfigMap metadata: name: tempo namespace: opentelemetry labels: helm.sh/chart: tempo-1.23.1 app.kubernetes.io/name: tempo app.kubernetes.io/instance: tempo app.kubernetes.io/version: "2.8.0" app.kubernetes.io/managed-by: Helm data: overrides.yaml: | overrides: {} tempo.yaml: | memberlist: cluster_label: "tempo.opentelemetry" multitenancy_enabled: false usage_report: reporting_enabled: true compactor: compaction: block_retention: 24h distributor: receivers: jaeger: protocols: grpc: endpoint: 0.0.0.0:14250 thrift_binary: endpoint: 0.0.0.0:6832 thrift_compact: endpoint: 0.0.0.0:6831 thrift_http: endpoint: 0.0.0.0:14268 otlp: protocols: grpc: endpoint: 0.0.0.0:4317 http: endpoint: 0.0.0.0:4318 ingester: {} server: http_listen_port: 3200 storage: trace: backend: s3 s3: access_key: admin bucket: tempo endpoint: minio-demo.minio.svc:9000 secret_key: 8fGYikcyi4 insecure: true #tls: false wal: path: /var/tempo/wal querier: {} query_frontend: {} overrides: defaults: {} per_tenant_override_config: /conf/overrides.yaml --- # Source: tempo/templates/service.yaml apiVersion: v1 kind: Service metadata: name: tempo namespace: opentelemetry labels: helm.sh/chart: tempo-1.23.1 app.kubernetes.io/name: tempo app.kubernetes.io/instance: tempo app.kubernetes.io/version: "2.8.0" app.kubernetes.io/managed-by: Helm spec: type: ClusterIP ports: - name: tempo-jaeger-thrift-compact port: 6831 protocol: UDP targetPort: 6831 - name: tempo-jaeger-thrift-binary port: 6832 protocol: UDP targetPort: 6832 - 
```yaml
    - name: tempo-prom-metrics
      port: 3200
      protocol: TCP
      targetPort: 3200
    - name: tempo-jaeger-thrift-http
      port: 14268
      protocol: TCP
      targetPort: 14268
    - name: grpc-tempo-jaeger
      port: 14250
      protocol: TCP
      targetPort: 14250
    - name: tempo-zipkin
      port: 9411
      protocol: TCP
      targetPort: 9411
    - name: tempo-otlp-legacy
      port: 55680
      protocol: TCP
      targetPort: 55680
    - name: tempo-otlp-http-legacy
      port: 55681
      protocol: TCP
      targetPort: 55681
    - name: grpc-tempo-otlp
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: tempo-otlp-http
      port: 4318
      protocol: TCP
      targetPort: 4318
    - name: tempo-opencensus
      port: 55678
      protocol: TCP
      targetPort: 55678
  selector:
    app.kubernetes.io/name: tempo
    app.kubernetes.io/instance: tempo
---
# Source: tempo/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tempo
  namespace: opentelemetry
  labels:
    helm.sh/chart: tempo-1.23.1
    app.kubernetes.io/name: tempo
    app.kubernetes.io/instance: tempo
    app.kubernetes.io/version: "2.8.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: tempo
      app.kubernetes.io/instance: tempo
  serviceName: tempo-headless
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tempo
        app.kubernetes.io/instance: tempo
      annotations:
        checksum/config: 563d333fcd3b266c31add18d53e0fa1f5e6ed2e1588e6ed4c466a8227285129b
    spec:
      serviceAccountName: tempo
      automountServiceAccountToken: true
      containers:
        - args:
            - -config.file=/conf/tempo.yaml
            - -mem-ballast-size-mbs=1024
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/grafana-tempo-2.8.0:2.8.0
          imagePullPolicy: IfNotPresent
          name: tempo
          ports:
            - containerPort: 3200
              name: prom-metrics
            - containerPort: 6831
              name: jaeger-thrift-c
              protocol: UDP
            - containerPort: 6832
              name: jaeger-thrift-b
              protocol: UDP
            - containerPort: 14268
              name: jaeger-thrift-h
            - containerPort: 14250
              name: jaeger-grpc
            - containerPort: 9411
              name: zipkin
            - containerPort: 55680
              name: otlp-legacy
            - containerPort: 4317
              name: otlp-grpc
            - containerPort: 55681
              name: otlp-httplegacy
```
```yaml
            - containerPort: 4318
              name: otlp-http
            - containerPort: 55678
              name: opencensus
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 3200
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /ready
              port: 3200
            initialDelaySeconds: 20
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources: {}
          env:
          volumeMounts:
            - mountPath: /conf
              name: tempo-conf
      securityContext:
        fsGroup: 10001
        runAsGroup: 10001
        runAsNonRoot: true
        runAsUser: 10001
      volumes:
        - configMap:
            name: tempo
          name: tempo-conf
  updateStrategy:
    type: RollingUpdate
```

```shell
root@k8s01:~/helm/opentelemetry/tempo# kubectl get pod -n opentelemetry
NAME                                READY   STATUS    RESTARTS         AGE
center-collector-67dcddd7db-8hd98   1/1     Running   0                4h3m
tempo-0                             1/1     Running   35 (5h57m ago)   8d
root@k8s01:~/helm/opentelemetry/tempo# kubectl get svc -n opentelemetry | grep tempo
tempo   ClusterIP   10.105.249.189   <none>   6831/UDP,6832/UDP,3200/TCP,14268/TCP,14250/TCP,9411/TCP,55680/TCP,55681/TCP,4317/TCP,4318/TCP,55678/TCP   8d
```

4.2.4 Configure the Collector

Use the complete configuration from earlier; the snippet below is for reference. Tempo's OTLP receivers listen on 4317 (gRPC) and 4318 (HTTP). Update the OpenTelemetryCollector configuration to send data to Tempo's OTLP endpoint.

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: center                       # the Collector is named "center"
  namespace: opentelemetry
spec:
  replicas: 1
  config:                            # Collector configuration
    receivers:                       # receivers ingest telemetry (traces, metrics, logs)
      otlp:                          # OTLP (OpenTelemetry Protocol) receiver
        protocols:                   # protocols enabled for receiving data
          grpc:
            endpoint: 0.0.0.0:4317   # gRPC
          http:
            endpoint: 0.0.0.0:4318   # HTTP
    processors:                      # processors transform collected data
      batch: {}                      # batch data before export for efficiency
    exporters:                       # exporters ship processed data to backends
      # debug: {}                    # debug exporter prints data to the console (testing/debugging)
      otlp:                          # send data to Tempo's gRPC port
        endpoint: "tempo:4317"
        tls:
          insecure: true             # skip certificate verification
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]          # send the data via OTLP
```
4.2.5 Access verification

4.2.6 Service topology graph

Tempo Metrics Generator is an optional Grafana Tempo component that converts traces into metrics, providing trace-to-metrics (T2M). Tempo does not enable it by default.

4.2.6.1 Enable the remote-write-receiver feature in Prometheus; the key configuration:

```yaml
# vim prometheus-prometheus.yaml
spec:
  # enableFeatures:
  enableFeatures:            # enable remote write
    - remote-write-receiver
  externalLabels:
    web.enable-remote-write-receiver: "true"
# kubectl apply -f prometheus-prometheus.yaml
```

For details see https://m.cuiliangblog.cn/detail/section/15189202

4.2.6.2 Enable the metricsGenerator feature in Tempo; the key configuration:

```yaml
# vim values.yaml
global:
  per_tenant_override_config: /runtime-config/overrides.yaml
  metrics_generator_processors:
    - 'service-graphs'
    - 'span-metrics'
tempo:
  metricsGenerator:
    enabled: true    # auto-generate metrics from traces, used for the service-call graph
    remoteWriteUrl: "http://prometheus-k8s.monitoring.svc:9090/api/v1/write"  # prometheus address
overrides:           # enable metrics in the multi-tenant defaults
  defaults:
    metrics_generator:
      processors:
        - service-graphs
        - span-metrics
```

4.2.6.3 Prometheus queries now return trace-derived metrics. Enable the node graph and service graph in the Grafana data source, then view the service-graph data.
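Conceptually, the service-graphs processor builds graph edges by pairing parent and child spans that belong to different services. The sketch below illustrates that pairing on a simplified span model (field names `span_id`, `parent_id`, `service` are assumptions for illustration; this is not Tempo's actual implementation):

```python
from collections import Counter

def service_graph_edges(spans):
    """Derive caller->callee edges from a list of spans.

    An edge is counted whenever a span's parent belongs to a different
    service -- the same parent/child pairing the service-graphs
    processor performs on real trace data.
    """
    by_id = {s["span_id"]: s for s in spans}
    edges = Counter()
    for s in spans:
        parent = by_id.get(s.get("parent_id"))
        if parent and parent["service"] != s["service"]:
            edges[(parent["service"], s["service"])] += 1
    return dict(edges)

trace = [
    {"span_id": "a", "parent_id": None, "service": "java-demo"},
    {"span_id": "b", "parent_id": "a", "service": "python-demo"},
    {"span_id": "c", "parent_id": "b", "service": "python-demo"},  # internal span, no edge
    {"span_id": "d", "parent_id": "b", "service": "redis"},
]
print(service_graph_edges(trace))
# {('java-demo', 'python-demo'): 1, ('python-demo', 'redis'): 1}
```

In the real metrics generator these edge counts become `traces_service_graph_request_total` samples remote-written to Prometheus, which is what the Grafana service graph panel renders.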
June 16, 2025
2025-06-15
OpenTelemetry Data Collection
I. Collector configuration in detail

The OpenTelemetry Collector is a one-stop service for collecting, processing, and exporting observability data (traces, metrics, logs). Its configuration breaks down into four core modules: receivers (ingest data), processors (transform data), exporters (ship data), and service (wire up the workflow).

1. Configuration format

The full set of options is documented at https://opentelemetry.io/docs/collector/configuration/

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector  # resource type: OpenTelemetryCollector
metadata:
  name: sidecar               # Collector name
spec:
  mode: sidecar               # run as a sidecar (same Pod as the application container)
  config:                     # Collector configuration (structured YAML)
    receivers:                # data receivers (e.g. otlp, prometheus)
    processors:               # data processors (e.g. batch, resource, attributes)
    exporters:                # data exporters (e.g. otlp, logging, jaeger, prometheus)
    service:                  # service section (defines which pipelines are active)
      pipelines:
        traces:               # trace pipeline
        metrics:              # metric pipeline
        logs:                 # log pipeline
```

2. Receivers

Receivers ingest data; many types are supported.

otlp: receives data in the OTLP protocol.

```yaml
receivers:
  otlp:
    protocols:
      grpc:                    # high performance, recommended
        endpoint: 0.0.0.0:4317
      http:                    # for browsers or environments without gRPC support
        endpoint: 0.0.0.0:4318
```

prometheus: scrapes data from /metrics endpoints.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: my-service
          static_configs:
            - targets: ['my-app:8080']
```

filelog: reads logs from files.

```yaml
receivers:
  filelog:
    include: [ /var/log/myapp/*.log ]
    start_at: beginning
    operators:
      - type: json_parser
        parse_from: body
        timestamp:
          parse_from: attributes.time
```

3. Processors

Processors modify, enrich, or filter data before it is exported. Common ones include:

batch: batches data before export to raise throughput.

```yaml
processors:
  batch:
    timeout: 10s
    send_batch_size: 1024
```

resource: adds uniform labels to traces/metrics/logs.

```yaml
processors:
  resource:
    attributes:
      - key: service.namespace
        value: demo
        action: insert
```

attributes: adds, modifies, or deletes attributes.

```yaml
processors:
  attributes:
    actions:
      - key: http.method
        value: GET
        action: insert
```

Processor configuration reference: https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor

4. Exporters

Exporters send data to backend systems.

otlp: sends data to another OTel Collector, Jaeger, Tempo, Datadog, and so on.

```yaml
exporters:
  otlp:
    endpoint: tempo-collector:4317
    tls:
      insecure: true
```

prometheus: exposes a /metrics HTTP port for Prometheus to scrape.

```yaml
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
```

debug: for debugging; prints data to the console.

```yaml
exporters:
  debug:
    loglevel: debug
```

5. Service (pipelines)

service.pipelines is a routing graph: for each data type — traces, say — it tells the OpenTelemetry Collector which receiver ingests the data, which processors transform it, and which exporters it is ultimately sent to.

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, resource]
      exporters: [otlp, logging]
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [prometheus]
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp]
```

II. Collector distributions

opentelemetry-collector and opentelemetry-collector-contrib are two distributions of the OpenTelemetry Collector. They differ mainly in how rich the set of bundled components is and who maintains them.
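A practical consequence of the pipeline model: every name listed under service.pipelines must also be declared in the corresponding top-level section, or the Collector refuses to start. A small sketch of that cross-reference check (illustrative only, not the Collector's real validation code):

```python
def validate_pipelines(config):
    """Check that every pipeline references only declared components.

    Mirrors the sanity check the Collector performs at startup: a
    pipeline naming a receiver/processor/exporter that is missing from
    the corresponding top-level section is a configuration error.
    """
    errors = []
    for name, pipe in config.get("service", {}).get("pipelines", {}).items():
        for section in ("receivers", "processors", "exporters"):
            declared = set(config.get(section, {}))
            for component in pipe.get(section, []):
                if component not in declared:
                    errors.append(f"pipeline '{name}': undeclared {section[:-1]} '{component}'")
    return errors

config = {
    "receivers": {"otlp": {}},
    "processors": {"batch": {}},
    "exporters": {"otlp": {}},
    "service": {"pipelines": {"traces": {
        "receivers": ["otlp"], "processors": ["batch"], "exporters": ["otlp", "debug"]}}},
}
print(validate_pipelines(config))
# ["pipeline 'traces': undeclared exporter 'debug'"]
```

This is exactly the mistake that occurs when a pipeline keeps `exporters: [otlp, debug]` after the commented-out `# debug: {}` exporter is removed from the exporters section.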
June 15, 2025
2025-06-14
OpenTelemetry Application Instrumentation
I. Deploy the sample applications

1. Deploy the Java application

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo
spec:
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      labels:
        app: java-demo
    spec:
      containers:
        - name: java-demo
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/spring-petclinic:1.5.1
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "1Gi"    # increased memory
              cpu: "500m"
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-demo
spec:
  type: ClusterIP    # ClusterIP; Traefik uses service discovery
  selector:
    app: java-demo
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: java-demo
spec:
  entryPoints:
    - web    # use the web entry point (port 8000)
  routes:
    - match: Host(`java-demo.local.cn`)    # change to whatever domain you need
      kind: Rule
      services:
        - name: java-demo
          port: 80
```

2. Deploy the Python application

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-demo
spec:
  selector:
    matchLabels:
      app: python-demo
  template:
    metadata:
      labels:
        app: python-demo
    spec:
      containers:
        - name: python-demo
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/python-demoapp:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "500Mi"
              cpu: "200m"
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: python-demo
spec:
  selector:
    app: python-demo
  ports:
    - port: 5000
      targetPort: 5000
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: python-demo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`python-demo.local.com`)
      kind: Rule
      services:
        - name: python-demo
          port: 5000
```

II. Application instrumentation

1. Java auto-instrumentation

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation            # Instrumentation resource (language auto-injection)
metadata:
  name: java-instrumentation     # name referenced by Deployments etc.
  namespace: opentelemetry
spec:
  propagators:                   # trace-context propagation formats; several are supported
    - tracecontext               # W3C Trace Context (the most universal cross-service format)
    - baggage                    # propagates user-defined context key-value pairs
    - b3                         # Zipkin B3 headers (for Zipkin-compatible environments)
  sampler:                       # sampling strategy (decides whether to collect a trace)
    type: always_on              # sample every request (suitable for test/debug environments)
  java:
```
```yaml
    # image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest  # Java auto-injection agent image
    image: harbor.cuiliangblog.cn/otel/autoinstrumentation-java:latest
    env:
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://center-collector.opentelemetry.svc:4318
```

To enable auto-instrumentation, update the deployment file and add annotations to it. They tell the OpenTelemetry Operator to inject the sidecar and the java-instrumentation into our application. The modified Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-demo
spec:
  selector:
    matchLabels:
      app: java-demo
  template:
    metadata:
      labels:
        app: java-demo
      annotations:
        instrumentation.opentelemetry.io/inject-java: "opentelemetry/java-instrumentation"  # name of the Instrumentation resource
        sidecar.opentelemetry.io/inject: "opentelemetry/sidecar"  # inject a sidecar-mode OpenTelemetry Collector
    spec:
      containers:
        - name: java-demo
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/spring-petclinic:1.5.1
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "500Mi"
              cpu: "200m"
          ports:
            - containerPort: 8080
```

Next, apply the updated deployment and check the resources — the java-demo pod now runs two containers.

root@k8s01:~/helm/opentelemetry# kubectl get pods
NAME                           READY   STATUS     RESTARTS        AGE
java-demo-5cdd74d47-vmqqx      0/2     Init:0/1   0               6s
java-demo-5f4d989b88-xrzg7     1/1     Running    0               42m
my-sonarqube-postgresql-0      1/1     Running    8 (2d21h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending    0               6d6h
python-demo-69c56c549c-jcgmj   1/1     Running    0               16m
redis-5ff4857944-v2vz5         1/1     Running    5 (2d21h ago)   6d2h
root@k8s01:~/helm/opentelemetry# kubectl get pods -w
NAME                           READY   STATUS            RESTARTS        AGE
java-demo-5cdd74d47-vmqqx      0/2     PodInitializing   0               9s
java-demo-5f4d989b88-xrzg7     1/1     Running           0               42m
my-sonarqube-postgresql-0      1/1     Running           8 (2d21h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending           0               6d6h
python-demo-69c56c549c-jcgmj   1/1     Running           0               17m
redis-5ff4857944-v2vz5         1/1     Running           5 (2d21h ago)   6d2h
java-demo-5cdd74d47-vmqqx      2/2     Running           0               23s
java-demo-5f4d989b88-xrzg7     1/1     Terminating       0               43m
java-demo-5f4d989b88-xrzg7     0/1     Terminating       0               43m
java-demo-5f4d989b88-xrzg7     0/1     Terminating       0               43m
java-demo-5f4d989b88-xrzg7     0/1
Terminating       0               43m
java-demo-5f4d989b88-xrzg7     0/1     Terminating       0               43m
root@k8s01:~/helm/opentelemetry# kubectl get pods -w
NAME                           READY   STATUS    RESTARTS        AGE
java-demo-5cdd74d47-vmqqx      2/2     Running   0               28s
my-sonarqube-postgresql-0      1/1     Running   8 (2d21h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending   0               6d6h
python-demo-69c56c549c-jcgmj   1/1     Running   0               17m
redis-5ff4857944-v2vz5         1/1     Running   5 (2d21h ago)   6d2h
^Croot@k8s01:~/helm/opentelemetry# kubectl get opentelemetrycollectors -A
NAMESPACE       NAME      MODE         VERSION   READY   AGE     IMAGE                                                                                   MANAGEMENT
opentelemetry   center    deployment   0.127.0   1/1     3h22m   registry.cn-guangzhou.aliyuncs.com/xingcangku/opentelemetry-collector-0.127.0:0.127.0   managed
opentelemetry   sidecar   sidecar      0.127.0           3h19m                                                                                           managed
root@k8s01:~/helm/opentelemetry# kubectl get instrumentations -A
NAMESPACE       NAME                   AGE     ENDPOINT   SAMPLER     SAMPLER ARG
opentelemetry   java-instrumentation   2m26s              always_on
# Check the sidecar log — it has started normally and is sending spans
root@k8s01:~/helm/opentelemetry# kubectl logs java-demo-5cdd74d47-vmqqx -c otc-container
2025-06-14T15:31:35.013Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}}
2025-06-14T15:31:35.014Z debug builders/builders.go:24 Stable component. {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z info builders/builders.go:26 Development component. May change in the future. {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z debug builders/builders.go:24 Beta component. May change in the future. {"resource": {}, "otelcol.component.id": "batch", "otelcol.component.kind": "processor", "otelcol.pipeline.id": "traces", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z debug builders/builders.go:24 Stable component. {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "otelcol.signal": "traces"}
2025-06-14T15:31:35.014Z debug otlpreceiver@v0.127.0/otlp.go:58 created signal-agnostic logger {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver"}
2025-06-14T15:31:35.021Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8}
2025-06-14T15:31:35.021Z info extensions/extensions.go:41 Starting extensions... {"resource": {}}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:176 [core] original dial target is: "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:459 [core] [Channel #1]Channel created {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:207 [core] [Channel #1]parsed dial target is: resolver.Target{URL:url.URL{Scheme:"passthrough", Opaque:"", User:(*url.Userinfo)(nil), Host:"", Path:"/center-collector.opentelemetry.svc:4317", RawPath:"", OmitHost:false, ForceQuery:false, RawQuery:"", Fragment:"", RawFragment:""}} {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:208 [core] [Channel #1]Channel authority set to "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.022Z info grpc@v1.72.1/resolver_wrapper.go:210 [core] [Channel #1]Resolver state updated: { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Endpoints": [ { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Attributes": null } ], "ServiceConfig": null, "Attributes": null } (resolver returned new addresses) {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.022Z info
{"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "otelcol.signal": "traces"} 2025-06-14T15:31:35.014Z debug otlpreceiver@v0.127.0/otlp.go:58 created signal-agnostic logger {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver"} 2025-06-14T15:31:35.021Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8} 2025-06-14T15:31:35.021Z info extensions/extensions.go:41 Starting extensions... {"resource": {}} 2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:176 [core] original dial target is: "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:459 [core] [Channel #1]Channel created {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:207 [core] [Channel #1]parsed dial target is: resolver.Target{URL:url.URL{Scheme:"passthrough", Opaque:"", User:(*url.Userinfo)(nil), Host:"", Path:"/center-collector.opentelemetry.svc:4317", RawPath:"", OmitHost:false, ForceQuery:false, RawQuery:"", Fragment:"", RawFragment:""}} {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.021Z info grpc@v1.72.1/clientconn.go:208 [core] [Channel #1]Channel authority set to "center-collector.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.022Z info grpc@v1.72.1/resolver_wrapper.go:210 [core] [Channel #1]Resolver state updated: { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Endpoints": [ { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Attributes": null } ], "ServiceConfig": null, "Attributes": null } (resolver returned new addresses) {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.022Z info 
grpc@v1.72.1/balancer_wrapper.go:122 [core] [Channel #1]Channel switches to new LB policy "pick_first" {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.023Z info gracefulswitch/gracefulswitch.go:194 [pick-first-leaf-lb] [pick-first-leaf-lb 0xc000bc6090] Received new config { "shuffleAddressList": false }, resolver state { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Endpoints": [ { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Attributes": null } ], "ServiceConfig": null, "Attributes": null } {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.023Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to CONNECTING{"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.023Z info grpc@v1.72.1/balancer_wrapper.go:195 [core] [Channel #1 SubChannel #2]Subchannel created {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.023Z info grpc@v1.72.1/clientconn.go:364 [core] [Channel #1]Channel exiting idle mode {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.023Z info grpc@v1.72.1/clientconn.go:1224 [core] [Channel #1 SubChannel #2]Subchannel Connectivity change to CONNECTING {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.024Z info grpc@v1.72.1/clientconn.go:1343 [core] [Channel #1 SubChannel #2]Subchannel picks a new address "center-collector.opentelemetry.svc:4317" to connect {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.024Z info grpc@v1.72.1/server.go:690 [core] [Server #3]Server created {"resource": {}, "grpc_log": true} 2025-06-14T15:31:35.024Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"} 2025-06-14T15:31:35.025Z info grpc@v1.72.1/server.go:886 [core] [Server 
#3 ListenSocket #4]ListenSocket created {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.025Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"}
2025-06-14T15:31:35.026Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. {"resource": {}}
2025-06-14T15:31:35.034Z info grpc@v1.72.1/clientconn.go:1224 [core] [Channel #1 SubChannel #2]Subchannel Connectivity change to READY {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.034Z info pickfirstleaf/pickfirstleaf.go:197 [pick-first-leaf-lb] [pick-first-leaf-lb 0xc000bc6090] SubConn 0xc0008e1db0 reported connectivity state READY and the health listener is disabled. Transitioning SubConn to READY. {"resource": {}, "grpc_log": true}
2025-06-14T15:31:35.034Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to READY {"resource": {}, "grpc_log": true}
# Check the collector log — traces data has been received
root@k8s01:~/helm/opentelemetry# kubectl get pod -n opentelemetry
NAME                                READY   STATUS    RESTARTS   AGE
center-collector-78f7bbdf45-j798s   1/1     Running   0          3h24m
root@k8s01:~/helm/opentelemetry# kubectl logs -n opentelemetry center-collector-78f7bbdf45-j798s
2025-06-14T12:09:21.290Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}}
2025-06-14T12:09:21.291Z info builders/builders.go:26 Development component. May change in the future. {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"}
2025-06-14T12:09:21.294Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127.0", "NumCPU": 8}
2025-06-14T12:09:21.294Z info extensions/extensions.go:41 Starting extensions... {"resource": {}}
2025-06-14T12:09:21.294Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"}
2025-06-14T12:09:21.295Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.component.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"}
2025-06-14T12:09:21.295Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data.
{"resource": {}}
root@k8s01:~/helm/opentelemetry#

2. Python auto-instrumentation

Like the Java application, Python applications also support auto-instrumentation. OpenTelemetry provides the opentelemetry-instrument CLI tool, which injects auto-instrumentation via sitecustomize or environment variables when the Python application starts. First, create a python-instrumentation resource:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation            # Instrumentation resource (language auto-injection)
metadata:
  name: python-instrumentation   # name referenced by Deployments etc.
  namespace: opentelemetry
spec:
  propagators:                   # trace-context propagation formats; several are supported
    - tracecontext               # W3C Trace Context (the most universal cross-service format)
    - baggage                    # propagates user-defined context key-value pairs
    - b3                         # Zipkin B3 headers (for Zipkin-compatible environments)
  sampler:                       # sampling strategy (decides whether to collect a trace)
    type: always_on              # sample every request (suitable for test/debug environments)
  python:
    image: registry.cn-guangzhou.aliyuncs.com/xingcangku/autoinstrumentation-python:latest
    env:
      - name: OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED  # enable auto-instrumentation of logs
        value: "true"
      - name: OTEL_PYTHON_LOG_CORRELATION                       # inject trace context into logs
        value: "true"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://center-collector.opentelemetry.svc:4318
```

```yaml
root@k8s01:~/helm/opentelemetry# cat new-python-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-demo
spec:
  selector:
    matchLabels:
      app: python-demo
  template:
    metadata:
      labels:
        app: python-demo
      annotations:
        instrumentation.opentelemetry.io/inject-python: "opentelemetry/python-instrumentation"  # name of the Instrumentation resource
        sidecar.opentelemetry.io/inject: "opentelemetry/sidecar"  # inject a sidecar-mode OpenTelemetry Collector
    spec:
      containers:
        - name: python-demo
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/python-demoapp:latest
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "500Mi"
              cpu: "200m"
          ports:
            - containerPort: 5000
```

root@k8s03:~# kubectl get pods
NAME                           READY   STATUS        RESTARTS        AGE
java-demo-5559f949b9-74p68     2/2     Running       0               2m14s
java-demo-5559f949b9-kwgpc     0/2     Terminating   0               14m
my-sonarqube-postgresql-0      1/1     Running       8 (2d22h ago)   9d
my-sonarqube-sonarqube-0       0/1     Pending       0               6d7h
python-demo-599fc7f8d6-lbhnr   2/2     Running       0               20m
redis-5ff4857944-v2vz5
1/1 Running 5 (2d22h ago) 6d3h root@k8s03:~# kubectl logs python-demo-599fc7f8d6-lbhnr -c otc-container 2025-06-14T15:57:12.951Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}} 2025-06-14T15:57:12.952Z info builders/builders.go:26 Development component. May change in the future. {"resource{}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"} 2025-06-14T15:57:12.952Z debug builders/builders.go:24 Stable component. {"resource": {}, "otelcol.component.id": "p", "otelcol.component.kind": "exporter", "otelcol.signal": "traces"} 2025-06-14T15:57:12.952Z debug builders/builders.go:24 Beta component. May change in the future. {"resource": {}, "lcol.component.id": "batch", "otelcol.component.kind": "processor", "otelcol.pipeline.id": "traces", "otelcol.signal": "traces"} 2025-06-14T15:57:12.952Z debug builders/builders.go:24 Stable component. {"resource": {}, "otelcol.component.id": "p", "otelcol.component.kind": "receiver", "otelcol.signal": "traces"} 2025-06-14T15:57:12.952Z debug otlpreceiver@v0.127.0/otlp.go:58 created signal-agnostic logger {"resource": {}, "lcol.component.id": "otlp", "otelcol.component.kind": "receiver"} 2025-06-14T15:57:12.953Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127, "NumCPU": 8} 2025-06-14T15:57:12.953Z info extensions/extensions.go:41 Starting extensions... 
{"resource": {}} 2025-06-14T15:57:12.953Z info grpc@v1.72.1/clientconn.go:176 [core] original dial target is: "center-collector.opentelery.svc:4317" {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:459 [core] [Channel #1]Channel created {"resource": {}, "c_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:207 [core] [Channel #1]parsed dial target is: resolver.Target{:url.URL{Scheme:"passthrough", Opaque:"", User:(*url.Userinfo)(nil), Host:"", Path:"/center-collector.opentelemetry.svc:4317", Rawh:"", OmitHost:false, ForceQuery:false, RawQuery:"", Fragment:"", RawFragment:""}} {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:208 [core] [Channel #1]Channel authority set to "center-collec.opentelemetry.svc:4317" {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/resolver_wrapper.go:210 [core] [Channel #1]Resolver state updated: { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Endpoints": [ { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Attributes": null } ], "ServiceConfig": null, "Attributes": null } (resolver returned new addresses) {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/balancer_wrapper.go:122 [core] [Channel #1]Channel switches to new LB poli"pick_first" {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info gracefulswitch/gracefulswitch.go:194 [pick-first-leaf-lb] [pick-first-leaf-lb 0xc00046e] Received new config { "shuffleAddressList": false }, resolver state { "Addresses": [ { "Addr": "center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Endpoints": [ { "Addresses": [ { "Addr": 
"center-collector.opentelemetry.svc:4317", "ServerName": "", "Attributes": null, "BalancerAttributes": null, "Metadata": null } ], "Attributes": null } ], "ServiceConfig": null, "Attributes": null } {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to CONNECTI"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/balancer_wrapper.go:195 [core] [Channel #1 SubChannel #2]Subchannel create"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:364 [core] [Channel #1]Channel exiting idle mode {"resource{}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:1224 [core] [Channel #1 SubChannel #2]Subchannel Connectivity cge to CONNECTING {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/clientconn.go:1343 [core] [Channel #1 SubChannel #2]Subchannel picks a new adss "center-collector.opentelemetry.svc:4317" to connect {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.954Z info grpc@v1.72.1/server.go:690 [core] [Server #3]Server created {"resource": {}, "c_log": true} 2025-06-14T15:57:12.954Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol.ponent.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4317"} 2025-06-14T15:57:12.954Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol.ponent.id": "otlp", "otelcol.component.kind": "receiver", "endpoint": "0.0.0.0:4318"} 2025-06-14T15:57:12.954Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. 
{"ource": {}} 2025-06-14T15:57:12.955Z info grpc@v1.72.1/server.go:886 [core] [Server #3 ListenSocket #4]ListenSocket created {"ource": {}, "grpc_log": true} 2025-06-14T15:57:12.962Z info grpc@v1.72.1/clientconn.go:1224 [core] [Channel #1 SubChannel #2]Subchannel Connectivity cge to READY {"resource": {}, "grpc_log": true} 2025-06-14T15:57:12.962Z info pickfirstleaf/pickfirstleaf.go:197 [pick-first-leaf-lb] [pick-first-leaf-lb 0xc00046e] SubConn 0xc0005fccd0 reported connectivity state READY and the health listener is disabled. Transitioning SubConn to READY. {"ource": {}, "grpc_log": true} 2025-06-14T15:57:12.962Z info grpc@v1.72.1/clientconn.go:563 [core] [Channel #1]Channel Connectivity change to READY {"ource": {}, "grpc_log": true} root@k8s03:~# root@k8s03:~# kubectl logs -n opentelemetry center-collector-78f7bbdf45-j798s 2025-06-14T12:09:21.290Z info service@v0.127.0/service.go:199 Setting up own telemetry... {"resource": {}} 2025-06-14T12:09:21.291Z info builders/builders.go:26 Development component. May change in the future. {"resourceaces"} 2025-06-14T12:09:21.294Z info service@v0.127.0/service.go:266 Starting otelcol... {"resource": {}, "Version": "0.127 2025-06-14T12:09:21.294Z info extensions/extensions.go:41 Starting extensions... {"resource": {}} 2025-06-14T12:09:21.294Z info otlpreceiver@v0.127.0/otlp.go:116 Starting GRPC server {"resource": {}, "otelcol. 2025-06-14T12:09:21.295Z info otlpreceiver@v0.127.0/otlp.go:173 Starting HTTP server {"resource": {}, "otelcol. 2025-06-14T12:09:21.295Z info service@v0.127.0/service.go:289 Everything is ready. Begin running and processing data. 
{" 2025-06-14T16:05:11.811Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "expor 2025-06-14T16:05:16.636Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "expor 2025-06-14T16:05:26.894Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "expor 2025-06-14T16:18:11.294Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "expor 2025-06-14T16:18:21.350Z info Traces {"resource": {}, "otelcol.component.id": "debug", "otelcol.component.kind": "expor root@k8s03:~#
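Both Instrumentation resources list `tracecontext` first among the propagators. That propagator carries trace identity between services (java-demo, python-demo) in a single `traceparent` HTTP header; the sketch below shows its wire format (illustrative only, not the OpenTelemetry SDK implementation):

```python
import re
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 16 bytes -> 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 8 bytes  -> 16 hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Extract (trace_id, span_id, sampled) from a traceparent header."""
    m = re.fullmatch(r"00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        raise ValueError(f"malformed traceparent: {header!r}")
    return m.group(1), m.group(2), m.group(3) == "01"

# A downstream service parses the incoming header, then emits child
# spans that reuse the same trace_id -- that is what ties spans from
# different pods into one trace in the backend.
hdr = make_traceparent(trace_id="4bf92f3577b34da6a3ce929d0e0e4736",
                       span_id="00f067aa0ba902b7")
print(hdr)  # 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```

The injected agents do this automatically on every outbound HTTP call, which is why no application code changes were needed above.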
June 14, 2025