OpenTelemetry Deployment

axing
2025-06-14
It is recommended to deploy via the OpenTelemetry Operator, which makes it easy to deploy and manage OpenTelemetry Collectors and can also auto-instrument applications. See the documentation: https://opentelemetry.io/docs/platforms/kubernetes/operator/

1. Deploy cert-manager

The Operator uses admission webhooks, which validate and mutate resources through HTTP callbacks. Kubernetes requires webhook services to serve TLS, so the Operator needs a certificate issued for its webhook server; cert-manager must therefore be installed first.
# wget https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
# kubectl apply -f cert-manager.yaml
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get -n cert-manager pod
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7bd494778-gs44k               1/1     Running   0          37s
cert-manager-cainjector-76474c8c48-w9r5p   1/1     Running   0          37s
cert-manager-webhook-6797c49f67-thvcz      1/1     Running   0          37s
root@k8s01:~/helm/opentelemetry/cert-manager# 

2. Deploy the Operator

Using OpenTelemetry on Kubernetes mostly comes down to deploying OpenTelemetry Collectors.
# wget https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl apply -f opentelemetry-operator.yaml
namespace/opentelemetry-operator-system created
customresourcedefinition.apiextensions.k8s.io/instrumentations.opentelemetry.io created
customresourcedefinition.apiextensions.k8s.io/opampbridges.opentelemetry.io created
customresourcedefinition.apiextensions.k8s.io/opentelemetrycollectors.opentelemetry.io created
customresourcedefinition.apiextensions.k8s.io/targetallocators.opentelemetry.io created
serviceaccount/opentelemetry-operator-controller-manager created
role.rbac.authorization.k8s.io/opentelemetry-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/opentelemetry-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/opentelemetry-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/opentelemetry-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/opentelemetry-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/opentelemetry-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/opentelemetry-operator-proxy-rolebinding created
service/opentelemetry-operator-controller-manager-metrics-service created
service/opentelemetry-operator-webhook-service created
deployment.apps/opentelemetry-operator-controller-manager created
Warning: spec.privateKey.rotationPolicy: In cert-manager >= v1.18.0, the default value changed from `Never` to `Always`.
certificate.cert-manager.io/opentelemetry-operator-serving-cert created
issuer.cert-manager.io/opentelemetry-operator-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/opentelemetry-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/opentelemetry-operator-validating-webhook-configuration created
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get pods -n opentelemetry-operator-system
NAME                                                        READY   STATUS    RESTARTS   AGE
opentelemetry-operator-controller-manager-f78fc55f7-xtjk2   2/2     Running   0          107s
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get crd |grep opentelemetry
instrumentations.opentelemetry.io          2025-06-14T11:30:01Z
opampbridges.opentelemetry.io              2025-06-14T11:30:01Z
opentelemetrycollectors.opentelemetry.io   2025-06-14T11:30:02Z
targetallocators.opentelemetry.io          2025-06-14T11:30:02Z

3. Deploy the Collector (central)

Next we deploy a stripped-down OpenTelemetry Collector that receives OTLP-format trace data over gRPC or HTTP, batches it, and prints it to its log for debugging.
root@k8s01:~/helm/opentelemetry/cert-manager# cat center-collector.yaml 
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
# metadata
metadata:
  name: center        # the Collector is named "center"
  namespace: opentelemetry
# collector spec
spec:
  image: registry.cn-guangzhou.aliyuncs.com/xingcangku/opentelemetry-collector-0.127.0:0.127.0
  replicas: 1           # run a single replica
  config:               # Collector configuration
    receivers:          # receivers ingest telemetry (traces, metrics, logs)
      otlp:             # OTLP (OpenTelemetry Protocol) receiver
        protocols:      # protocols to accept data on
          grpc: 
            endpoint: 0.0.0.0:4317      # enable gRPC
          http: 
            endpoint: 0.0.0.0:4318      # enable HTTP

    processors:         # processors transform collected data
      batch: {}         # batch data before export for efficiency

    exporters:          # exporters send processed data to backends
      debug: {}         # debug exporter prints data to the log (for testing/debugging)

    service:            # service section wires everything together
      pipelines:        # pipeline definitions
        traces:         # trace pipeline
          receivers: [otlp]                      # receive via OTLP
          processors: [batch]                    # batch the data
          exporters: [debug]                     # print to the log
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get pod -n opentelemetry
NAME                                READY   STATUS        RESTARTS   AGE
center-collector-78f7bbdf45-j798s   1/1     Running       0          43s
center-collector-7b7b8b9b97-qwhdr   0/1     Terminating   0          12m
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get svc -n opentelemetry  
NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
center-collector              ClusterIP   10.105.241.233   <none>        4317/TCP,4318/TCP   49s
center-collector-headless     ClusterIP   None             <none>        4317/TCP,4318/TCP   49s
center-collector-monitoring   ClusterIP   10.96.61.65      <none>        8888/TCP            49s
root@k8s01:~/helm/opentelemetry/cert-manager# 
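With the `center-collector` Service up, you can exercise the OTLP/HTTP receiver on port 4318 by POSTing a span to `/v1/traces` and then checking the debug exporter's output with `kubectl logs`. A minimal sketch in Python using only the standard library (the in-cluster URL, service name, and trace/span IDs are illustrative assumptions; run it from a pod that can resolve the Service):

```python
import json
import time
import urllib.request

def make_payload():
    """Build a minimal OTLP/JSON trace payload with a single span."""
    now = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "smoke-test"}}
            ]},
            "scopeSpans": [{
                "spans": [{
                    "traceId": "5b8efff798038103d269b633813fc60c",  # 16-byte hex
                    "spanId": "eee19b7ec3c1b174",                   # 8-byte hex
                    "name": "manual-test-span",
                    "kind": 1,  # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": str(now - 1_000_000),
                    "endTimeUnixNano": str(now),
                }]
            }]
        }]
    }

def send(url="http://center-collector.opentelemetry:4318/v1/traces"):
    """POST the payload to the collector's OTLP/HTTP traces endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(make_payload()).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).status

if __name__ == "__main__":
    print(json.dumps(make_payload())[:80])
```

If the span arrives, the debug exporter prints it in the `center-collector` pod's log.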

4. Deploy the Collector (agent)

We deploy the OpenTelemetry agent in sidecar mode. This agent forwards application traces to the central OpenTelemetry Collector we just deployed.
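The contents of `sidecar-collector.yaml` are not shown in the transcript below; a minimal sketch, assuming the sidecar receives OTLP locally and forwards it to the `center-collector` Service via the OTLP gRPC exporter (the exporter endpoint and `tls.insecure` setting are assumptions based on the description above):

```yaml
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: sidecar
  namespace: opentelemetry
spec:
  mode: sidecar         # injected into application pods rather than run standalone
  config:
    receivers:
      otlp:             # accept OTLP from the app over localhost
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch: {}
    exporters:
      otlp:             # forward to the central collector
        endpoint: center-collector.opentelemetry.svc.cluster.local:4317
        tls:
          insecure: true   # assumes plaintext gRPC inside the cluster
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
```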
root@k8s01:~/helm/opentelemetry/cert-manager# vi sidecar-collector.yaml
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl apply -f sidecar-collector.yaml 
opentelemetrycollector.opentelemetry.io/sidecar created
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get opentelemetrycollectors -n opentelemetry 
NAME      MODE         VERSION   READY   AGE    IMAGE                                                                                   MANAGEMENT
center    deployment   0.127.0   1/1     3m3s   registry.cn-guangzhou.aliyuncs.com/xingcangku/opentelemetry-collector-0.127.0:0.127.0   managed
sidecar   sidecar      0.127.0           7s                                                                                             managed
root@k8s01:~/helm/opentelemetry/cert-manager# kubectl get pod -n opentelemetry 
NAME                                READY   STATUS        RESTARTS   AGE
center-collector-78f7bbdf45-j798s   1/1     Running       0          3m31s
center-collector-7b7b8b9b97-qwhdr   0/1     Terminating   0          15m
The sidecar agent's lifecycle is tied to the application, so it does not start right after creation; we need to create an application that opts in to this sidecar-mode collector.
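A workload opts in through the Operator's `sidecar.opentelemetry.io/inject` pod annotation, whose value can be `"true"`, a collector name, or `"namespace/name"`. A sketch of such a Deployment (the `demo-app` name and nginx image are hypothetical placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
  namespace: opentelemetry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        # ask the Operator to inject the collector named "sidecar"
        sidecar.opentelemetry.io/inject: "sidecar"
    spec:
      containers:
        - name: app
          image: nginx:1.27   # hypothetical demo image
```

After applying this, the pod should come up with two containers: the application and the injected `otc-container` sidecar.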