2025-07-26
Deploying the latest Harbor on k8s
一、创建资源root@k8s-master-01:~# mkdir harbor root@k8s-master-01:~# cd harbor/ root@k8s-master-01:~/harbor# ls root@k8s-master-01:~/harbor# helm repo add harbor https://helm.goharbor.io "harbor" has been added to your repositories root@k8s-master-01:~/harbor# helm repo list NAME URL nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner harbor https://helm.goharbor.io root@k8s-master-01:~/harbor# helm pull harbor/harbor root@k8s-master-01:~/harbor# ls harbor-1.17.1.tgz root@k8s-master-01:~/harbor# tar -zxvf harbor-1.17.1.tgz root@k8s-master-01:~/harbor# ls harbor harbor-1.17.1.tgzkubectl create ns harbor二、渲染&修改yaml文件helm template my-harbor ./test.yaml--- # Source: harbor/templates/core/core-secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: secretKey: "bm90LWEtc2VjdXJlLWtleQ==" secret: "Z3NncnBQWURsQ01hUjZlWg==" tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMWdRMlJMbHJ6bUQzM0ZkT2RvanNHU1hUaVNhSXhkam1maW9VWmJVNDIzSFB2bnRwCjBuRi9Vcm40Vlk4SWtSZU4vbGFjQVBCVm1BQys0czBCYkxXSytrbC9wVUxlMFZHTUI0UElCMFF5YW5hNW9wQm0KY25sb3pEcVVOTTVpMlBVMnFXUEFHWG1BenBhTmpuZlBycnlRS0ZCY0ZkOTM4NW9GSmJCdlhmZ00ycS9pU09MOQp1dzdBUDd1UUFhQ0xjMzYvSlFtb1N0ZE9tNWZtYU9idDZ5ZW9JaU50ZXNUaXF1eVN3Y0k5RFZqdXk4SDhuamp5CnNOWTdWZlBwaXhSemw0K1EzZVgxNU5Lb0NOWm4yL0lNb2x4aUk3Q3FnQTkzSU1aSkFvVy9zMnhqSVp4aDFVZ3oKRW93YkNsOXdjT1ZRR2g4eUV5a0MzRTlKQjZrTjhkY2pKT3N2VlFJREFRQUJBb0lCQVFDWEN2LzEvdHNjRzRteQo0NWRIeHhqQ0l0VXBqWjJuN0kyMzZ5RGNLMHRHYlF1T1J2R0hpWHl2dVBxUC85T3UrdTNHMi85Y0ZrS0NkYnhDCnV5YlBQMDBubWFuUnkrRVAzN3F4THd1RVBWaExsU0VzbnpiK2diczVyL29iVHJHcXAxMTlyUjNObk5nUWRXYlEKYnJTUGdSdElxSFpsSllNMTFMVGZSYWREcmFYOHpDeG1ZSkwxc3JtMHBGNXk4Q1E0SG1Ka0ZtMG0rVlMxSFk4NAptazR3OGE3SGpoZWZJSE15Z3dUREluREZ0VVFlaDNyTkRYMzI3cmw5c0NBV1NMZnpaZDBuUGc2K2lZVTE0amxmCnA1Mi8yd1E2Y3lMSFY4am1QTDNoRUVIWXJlNVp6cFEzM0sybVFWVVpTR2c2SFRIOGszdWxQS2c1UXRJTmc1K3MKR3A0aS9EREJBb0dCQVBWMVVNSjFWZ3RKTENYM1pRQVArZTBVbEhLa2dWNnZhS2R3d2tpU0tjbTBWa1g0NUlIUwpFQ3ZhaTEzWHpsNkQ0S0tlbHFRUHY2VHc0TXBRcE4xSGwwd25PVnBDTW01dGc5bmF1WDhEdlNpcGJQaVE4anFMCnhSTzhaa2NJaW15ZXdXYVUxRFVqQ281Qk5DMWd4bUV4RnFjdjVVMlVQM1RzWllmQVFTMm1NWWR4QW9HQkFOODEKTmRsRnFDWnZZT0NYMm1ud3pxZzlyQ1Y4YWFHMnQ1WjVKUkZTL2gvL2dyZFZNd25VWHZ6cXVsRmM4dHM1Q0Y2eAoxSDJwVTJFTnpJYjBCM0dKNTBCZnJBUUxwdjdXRThjaFBzUTZpRW0yeTlmLzVIeThTdVJWblRCU295NktpRk13Ci9ycTE1blBXVVJzZkF6aFYvUVgvMCtvSFFxdFVkNnY4bFVtMWRsd2xBb0dBQlhmYm1MbHNkVXZvQStDREM0RlAKbkF4OVVpQ0FFVS92RU92ZUtDZTVicGpwNHgwc1dnZ0gvREllTUxVQ0QvRDRMQ2RFUzl0ZDlacTRKMG1zb3BGWgp1WVNXTG9DVEJ3ckJpVFRxTlA0c1ZKK1JvZWY0dlgwbm9zenJxbUZ5VkFFbFpkZWk4cHdaUEJvUHc0TUlhRm5qCm0wM2gyZHlYblU4MjQ5TlFvR2UzYXNFQ2dZQjdrdDcwSWg5YzRBN1hhTnJRQ2pTdmFpMXpOM1RYeGV2UUQ5UFkKeW9UTXZFM25KL0V3d1BXeHVsWmFrMFlVM25kbXpiY2h0dXZsY0psS0liSTVScXJUdGVQcS9YUi80NDloa0dOSwppa2xIM2o3dW44b2swSzM1eWZoVGQzekdXSVh1NE5JMkZseTJ4dkZ5UFhJdjcxTTh6Z3pKcFNsZzUwdTEyUW5oCm0rZ2lUUUtCZ0R3dHBnMjNkWEh2WFBtOEt4ZmhseElGOVB3SkF4TnF2SDNNT0NRMnJCYmE2cWhseWdSa2UrSVMKeXFUWFlmSjVISFdtYTZDZ1ZMOG9nSnA2emZiS2FvOFg5VSsyOUNuaUpvK2RsM1VtTVR6VDFXcFdlRS9xWTk2LwoySVFVQVR0U1EyU2Z6YUFaQm5JL29EK1FlcTBlOEszUXZnZlc3S2Y0bzdsSnlJUGRKS2NCCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==" tls.crt: 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJRENDQWdpZ0F3SUJBZ0lSQUpMU3p5cTVFc05aQzV6TDZURmhpWFF3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQXhNUGFHRnlZbTl5TFhSdmEyVnVMV05oTUI0WERUSTFNRGN5TmpFME1UUXhNbG9YRFRJMgpNRGN5TmpFME1UUXhNbG93R2pFWU1CWUdBMVVFQXhNUGFHRnlZbTl5TFhSdmEyVnVMV05oTUlJQklqQU5CZ2txCmhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBMWdRMlJMbHJ6bUQzM0ZkT2RvanNHU1hUaVNhSXhkam0KZmlvVVpiVTQyM0hQdm50cDBuRi9Vcm40Vlk4SWtSZU4vbGFjQVBCVm1BQys0czBCYkxXSytrbC9wVUxlMFZHTQpCNFBJQjBReWFuYTVvcEJtY25sb3pEcVVOTTVpMlBVMnFXUEFHWG1BenBhTmpuZlBycnlRS0ZCY0ZkOTM4NW9GCkpiQnZYZmdNMnEvaVNPTDl1dzdBUDd1UUFhQ0xjMzYvSlFtb1N0ZE9tNWZtYU9idDZ5ZW9JaU50ZXNUaXF1eVMKd2NJOURWanV5OEg4bmpqeXNOWTdWZlBwaXhSemw0K1EzZVgxNU5Lb0NOWm4yL0lNb2x4aUk3Q3FnQTkzSU1aSgpBb1cvczJ4aklaeGgxVWd6RW93YkNsOXdjT1ZRR2g4eUV5a0MzRTlKQjZrTjhkY2pKT3N2VlFJREFRQUJvMkV3Clh6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUMKTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkhJcFJZZklNZkd5NkV1cU9iVSs0bGdhYVJOSApNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJDK3plNi9yUjQ3WmJZUGN4dU00L2FXdUtkeWZpMUhhLzlGNitGCk4rMmxKT1JOTjRYeS9KT2VQWmE0NlQxYzd2OFFNNFpVMHhBbnlqdXdRN1E5WE1GcmlucEF6NVRYV1ZZcW44dWIKNlNBd2YrT01qVTVhWGM3VVZtdzJoMzdBc0svTFRYOGo3NmtMUXVyVGVNOHdFTjBDbXI5NlF2Rnk5d2d2ZThlTgpiZldac1A3c0FwYklBeDdOVmc3RUl2czQxdDgvY2dxQWVvaGpYa1UyQVZkdFNLbzh0TFU5ZzNVdTdUNGtWWXpLCm9IQzJiQ3lpbkRPRFZIOHVtcHBCbVRubEg0TDRTR0lnSGFWcVZoOUhoOGFjTmgvbE1ST0dtU1RpVC9DNDRkUS8KcXlZWWxrcXQ3Ymk0dkhjaXZJZXBzckIvdkZ1ZXlQSzZGbGNTMjEvN3BVS0FUOU1YCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" HARBOR_ADMIN_PASSWORD: "SGFyYm9yMTIzNDU=" POSTGRESQL_PASSWORD: "Y2hhbmdlaXQ=" REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk" CSRF_KEY: "dXRzODI2MDNCZGJyVWFBRlhudFNqaWpSbUZPVUR5Mmg=" --- # Source: harbor/templates/database/database-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-database" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: POSTGRES_PASSWORD: "Y2hhbmdlaXQ=" --- # Source: harbor/templates/jobservice/jobservice-secrets.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: JOBSERVICE_SECRET: "Z3M3T2k4dTBTUk1GRzVlTg==" REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk" --- # Source: harbor/templates/registry/registry-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: REGISTRY_HTTP_SECRET: "U0FnY21vVFNQVjlIdnlTRw==" REGISTRY_REDIS_PASSWORD: "" --- # Source: harbor/templates/registry/registry-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-registry-htpasswd" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: REGISTRY_HTPASSWD: "aGFyYm9yX3JlZ2lzdHJ5X3VzZXI6JDJhJDEwJGNJZTk1ek9aQnp4RG1aMTQzem1HUmVWUkFmc0VMazZ0azNObWZXZmZjYkZtOU1ja1lrbnYy" --- # Source: harbor/templates/registry/registryctl-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-registryctl" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: --- # Source: harbor/templates/trivy/trivy-secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-harbor-trivy namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: redisURL: cmVkaXM6Ly9yZWxlYXNlLW5hbWUtaGFyYm9yLXJlZGlzOjYzNzkvNT9pZGxlX3RpbWVvdXRfc2Vjb25kcz0zMA== gitHubToken: "" --- # Source: harbor/templates/core/core-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: app.conf: |+ appname = Harbor runmode = prod enablegzip = true [prod] httpport = 8080 PORT: "8080" DATABASE_TYPE: "postgresql" POSTGRESQL_HOST: "release-name-harbor-database" POSTGRESQL_PORT: "5432" POSTGRESQL_USERNAME: "postgres" POSTGRESQL_DATABASE: "registry" POSTGRESQL_SSLMODE: "disable" POSTGRESQL_MAX_IDLE_CONNS: "100" POSTGRESQL_MAX_OPEN_CONNS: "900" EXT_ENDPOINT: "http://192.168.3.160:30002" CORE_URL: "http://release-name-harbor-core:80" JOBSERVICE_URL: "http://release-name-harbor-jobservice" REGISTRY_URL: "http://release-name-harbor-registry:5000" TOKEN_SERVICE_URL: "http://release-name-harbor-core:80/service/token" CORE_LOCAL_URL: "http://127.0.0.1:8080" WITH_TRIVY: "true" TRIVY_ADAPTER_URL: "http://release-name-harbor-trivy:8080" REGISTRY_STORAGE_PROVIDER_NAME: "filesystem" LOG_LEVEL: "info" CONFIG_PATH: "/etc/core/app.conf" CHART_CACHE_DRIVER: "redis" _REDIS_URL_CORE: "redis://release-name-harbor-redis:6379/0?idle_timeout_seconds=30" _REDIS_URL_REG: "redis://release-name-harbor-redis:6379/2?idle_timeout_seconds=30" PORTAL_URL: "http://release-name-harbor-portal" REGISTRY_CONTROLLER_URL: "http://release-name-harbor-registry:8080" REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user" HTTP_PROXY: "" HTTPS_PROXY: "" NO_PROXY: "release-name-harbor-core,release-name-harbor-jobservice,release-name-harbor-database,release-name-harbor-registry,release-name-harbor-portal,release-name-harbor-trivy,release-name-harbor-exporter,127.0.0.1,localhost,.local,.internal" PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE: "docker-hub,harbor,azure-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory" QUOTA_UPDATE_PROVIDER: "db" --- # Source: harbor/templates/jobservice/jobservice-cm-env.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-jobservice-env" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name 
app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: CORE_URL: "http://release-name-harbor-core:80" TOKEN_SERVICE_URL: "http://release-name-harbor-core:80/service/token" REGISTRY_URL: "http://release-name-harbor-registry:5000" REGISTRY_CONTROLLER_URL: "http://release-name-harbor-registry:8080" REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user" JOBSERVICE_WEBHOOK_JOB_MAX_RETRY: "3" JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT: "3" LOG_LEVEL: "info" HTTP_PROXY: "" HTTPS_PROXY: "" NO_PROXY: "release-name-harbor-core,release-name-harbor-jobservice,release-name-harbor-database,release-name-harbor-registry,release-name-harbor-portal,release-name-harbor-trivy,release-name-harbor-exporter,127.0.0.1,localhost,.local,.internal" --- # Source: harbor/templates/jobservice/jobservice-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: config.yml: |+ #Server listening port protocol: "http" port: 8080 worker_pool: workers: 10 backend: "redis" redis_pool: redis_url: "redis://release-name-harbor-redis:6379/1" namespace: "harbor_job_service_namespace" idle_timeout_second: 3600 job_loggers: - name: "FILE" level: INFO settings: # Customized settings of logger base_dir: "/var/log/jobs" sweeper: duration: 14 #days settings: # Customized settings of sweeper work_dir: "/var/log/jobs" metric: enabled: false path: /metrics port: 8001 #Loggers for the job service loggers: - name: "STD_OUTPUT" level: INFO reaper: # the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24 max_update_hours: 24 # the max time for execution in running state without new task created max_dangling_hours: 168 --- # Source: harbor/templates/nginx/configmap-http.yaml apiVersion: v1 kind: ConfigMap metadata: name: release-name-harbor-nginx namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: nginx.conf: |+ worker_processes auto; pid /tmp/nginx.pid; events { worker_connections 3096; use epoll; multi_accept on; } http { client_body_temp_path /tmp/client_body_temp; proxy_temp_path /tmp/proxy_temp; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; tcp_nodelay on; # this is necessary for us to be able to disable request buffering in all cases proxy_http_version 1.1; upstream core { server "release-name-harbor-core:80"; } upstream portal { server release-name-harbor-portal:80; } log_format timed_combined '[$time_local]:$remote_addr - ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' '$request_time $upstream_response_time $pipe'; access_log /dev/stdout timed_combined; map $http_x_forwarded_proto $x_forwarded_proto { default $http_x_forwarded_proto; "" $scheme; } server { listen 8080; listen [::]:8080; server_tokens off; # disable any limits to avoid HTTP 413 for large image uploads client_max_body_size 0; # Add extra headers add_header X-Frame-Options 
DENY; add_header Content-Security-Policy "frame-ancestors 'none'"; location / { proxy_pass http://portal/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /api/ { proxy_pass http://core/api/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /c/ { proxy_pass http://core/c/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /v1/ { return 404; } location /v2/ { proxy_pass http://core/v2/; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; proxy_send_timeout 900; proxy_read_timeout 900; } location /service/ { proxy_pass http://core/service/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /service/notifications { return 404; } } } --- # Source: harbor/templates/portal/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-portal" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: nginx.conf: |+ worker_processes auto; pid /tmp/nginx.pid; events { worker_connections 1024; } http { client_body_temp_path /tmp/client_body_temp; proxy_temp_path /tmp/proxy_temp; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; server { listen 8080; listen [::]:8080; server_name localhost; root /usr/share/nginx/html; index index.html index.htm; include /etc/nginx/mime.types; gzip on; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript; location /devcenter-api-2.0 { try_files $uri $uri/ /swagger-ui-index.html; } location / { try_files $uri $uri/ /index.html; } location = /index.html { add_header Cache-Control "no-store, no-cache, must-revalidate"; } } } --- # Source: harbor/templates/registry/registry-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: config.yml: |+ version: 0.1 log: level: info fields: service: registry storage: filesystem: rootdirectory: /storage cache: layerinfo: redis maintenance: uploadpurging: enabled: true age: 168h interval: 24h dryrun: false delete: 
enabled: true redirect: disable: false redis: addr: release-name-harbor-redis:6379 db: 2 readtimeout: 10s writetimeout: 10s dialtimeout: 10s enableTLS: false pool: maxidle: 100 maxactive: 500 idletimeout: 60s http: addr: :5000 relativeurls: false # set via environment variable # secret: placeholder debug: addr: localhost:5001 auth: htpasswd: realm: harbor-registry-basic-realm path: /etc/registry/passwd validation: disabled: true compatibility: schema1: enabled: true ctl-config.yml: |+ --- protocol: "http" port: 8080 log_level: info registry_config: "/etc/registry/config.yml" --- # Source: harbor/templates/registry/registryctl-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-registryctl" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: --- # Source: harbor/templates/jobservice/jobservice-pvc.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: release-name-harbor-jobservice namespace: "harbor" annotations: helm.sh/resource-policy: keep labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: jobservice app.kubernetes.io/component: jobservice spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi storageClassName: nfs-sc --- # Source: harbor/templates/registry/registry-pvc.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: release-name-harbor-registry namespace: "harbor" annotations: helm.sh/resource-policy: keep labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: registry app.kubernetes.io/component: registry spec: accessModes: - ReadWriteMany resources: requests: storage: 5Gi storageClassName: nfs-sc --- # Source: harbor/templates/core/core-svc.yaml apiVersion: v1 kind: Service metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-web port: 80 targetPort: 8080 selector: release: release-name app: "harbor" component: core --- # Source: harbor/templates/database/database-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-database" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - port: 5432 selector: release: release-name app: "harbor" component: database --- # Source: harbor/templates/jobservice/jobservice-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-jobservice port: 80 targetPort: 8080 selector: release: release-name app: "harbor" component: jobservice --- # Source: harbor/templates/nginx/service.yaml apiVersion: v1 kind: Service metadata: name: harbor labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: type: NodePort ports: - name: http port: 80 targetPort: 8080 nodePort: 30002 selector: release: release-name app: "harbor" component: nginx --- # Source: harbor/templates/portal/service.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-portal" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - port: 80 targetPort: 8080 selector: release: release-name app: "harbor" component: portal --- # Source: harbor/templates/redis/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-harbor-redis namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - port: 6379 selector: release: release-name app: "harbor" component: redis --- # Source: harbor/templates/registry/registry-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-registry port: 5000 - name: http-controller port: 8080 selector: release: release-name app: "harbor" component: registry --- # Source: harbor/templates/trivy/trivy-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-trivy" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-trivy protocol: TCP port: 8080 selector: release: release-name app: "harbor" component: trivy --- # Source: harbor/templates/core/core-dpl.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: core app.kubernetes.io/component: core spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: release: release-name app: "harbor" component: core template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm 
app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: core app.kubernetes.io/component: core annotations: checksum/configmap: bf9940a91f31ccd8db1c6b0aa6a6cdbd27483d0ef0400b0478b766dba1e8778f checksum/secret: 2d768cea8bf3f359707036a5230c42ab503ee35988f494ed8e36cf09fbd7f04b checksum/secret-jobservice: c3bcac00f13ee5b6d0346fbcbabe8a495318a0b4f860f1d0d594652e0c3cfcdf spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: core image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-core:v2.13.0 imagePullPolicy: IfNotPresent startupProbe: httpGet: path: /api/v2.0/ping scheme: HTTP port: 8080 failureThreshold: 360 initialDelaySeconds: 10 periodSeconds: 10 livenessProbe: httpGet: path: /api/v2.0/ping scheme: HTTP port: 8080 failureThreshold: 2 periodSeconds: 10 readinessProbe: httpGet: path: /api/v2.0/ping scheme: HTTP port: 8080 failureThreshold: 2 periodSeconds: 10 envFrom: - configMapRef: name: "release-name-harbor-core" - secretRef: name: "release-name-harbor-core" env: - name: CORE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-core key: secret - name: JOBSERVICE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-jobservice key: JOBSERVICE_SECRET securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault ports: - containerPort: 8080 volumeMounts: - name: config mountPath: /etc/core/app.conf subPath: app.conf - name: secret-key mountPath: /etc/core/key subPath: key - name: token-service-private-key mountPath: /etc/core/private_key.pem subPath: tls.key - name: psc mountPath: /etc/core/token volumes: - name: config configMap: name: release-name-harbor-core items: - key: app.conf path: app.conf - name: secret-key secret: secretName: release-name-harbor-core items: - key: secretKey path: key - name: token-service-private-key secret: secretName: release-name-harbor-core - name: psc emptyDir: {} --- # Source: harbor/templates/jobservice/jobservice-dpl.yaml apiVersion: apps/v1 kind: Deployment metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: jobservice app.kubernetes.io/component: jobservice spec: replicas: 1 revisionHistoryLimit: 10 strategy: type: RollingUpdate selector: matchLabels: release: release-name app: "harbor" component: jobservice template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: jobservice app.kubernetes.io/component: jobservice annotations: checksum/configmap: 13935e266ee9ce33b7c4d1c769e78a85da9b0519505029a9b6098a497d9a1220 checksum/configmap-env: 9773f3ab781f37f25b82e63ed4e8cad53fbd5b6103a82128e4d793265eed1a1d checksum/secret: dc0d310c734ec96a18f24f108a124db429517b46f48067f10f45b653925286ea checksum/secret-core: 515f10ad6f291e46fa3c7e570e5fc27dd32089185e1306e0943e667ad083d334 spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: jobservice image: 
registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-jobservice:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /api/v1/stats scheme: HTTP port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: /api/v1/stats scheme: HTTP port: 8080 initialDelaySeconds: 20 periodSeconds: 10 env: - name: CORE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-core key: secret securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault envFrom: - configMapRef: name: "release-name-harbor-jobservice-env" - secretRef: name: "release-name-harbor-jobservice" ports: - containerPort: 8080 volumeMounts: - name: jobservice-config mountPath: /etc/jobservice/config.yml subPath: config.yml - name: job-logs mountPath: /var/log/jobs subPath: volumes: - name: jobservice-config configMap: name: "release-name-harbor-jobservice" - name: job-logs persistentVolumeClaim: claimName: release-name-harbor-jobservice --- # Source: harbor/templates/nginx/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-harbor-nginx namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: nginx app.kubernetes.io/component: nginx spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: release: release-name app: "harbor" component: nginx template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: nginx app.kubernetes.io/component: nginx annotations: checksum/configmap: a9da1570c68479a856aa8cba7fa5ca3cc7f57eb28fb7180ea8630e1e96fbbcb0 spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false containers: - name: nginx image: "registry.cn-guangzhou.aliyuncs.com/xingcangku/nginx-photon:v2.13.0" imagePullPolicy: "IfNotPresent" livenessProbe: httpGet: scheme: HTTP path: / port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: scheme: HTTP path: / port: 8080 initialDelaySeconds: 1 periodSeconds: 10 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault ports: - containerPort: 8080 volumeMounts: - name: config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf volumes: - name: config configMap: name: release-name-harbor-nginx --- # Source: harbor/templates/portal/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: "release-name-harbor-portal" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: portal app.kubernetes.io/component: portal spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: release: release-name app: "harbor" component: portal template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm 
app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: portal app.kubernetes.io/component: portal annotations: checksum/configmap: 92a534063aacac0294c6aefc269663fd4b65e2f3aabd23e05a7485cbb28cdc72 spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false containers: - name: portal image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-portal:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault livenessProbe: httpGet: path: / scheme: HTTP port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: / scheme: HTTP port: 8080 initialDelaySeconds: 1 periodSeconds: 10 ports: - containerPort: 8080 volumeMounts: - name: portal-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf volumes: - name: portal-config configMap: name: "release-name-harbor-portal" --- # Source: harbor/templates/registry/registry-dpl.yaml apiVersion: apps/v1 kind: Deployment metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: registry app.kubernetes.io/component: registry spec: replicas: 1 revisionHistoryLimit: 10 strategy: type: RollingUpdate selector: matchLabels: release: release-name app: "harbor" component: registry template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: registry app.kubernetes.io/component: registry annotations: checksum/configmap: e7cdc1d01e8e65c3e6a380a112730c8030cc0634a401cc2eda63a84d2098d2d0 checksum/secret: 0c29e3fdc2300f19ecf0d055e2d25e75bd856adf5dfb4eda338c647d90bdfca0 checksum/secret-jobservice: 4dd5bbe6b81b66aa7b11a43a7470bf3f04cbc1650f51e2b5ace05c9dc2c81151 checksum/secret-core: 8d2c730b5b3fa7401c1f5e78b4eac13f3b02a968843ce8cb652d795e5e810692 spec: securityContext: runAsUser: 10000 fsGroup: 10000 fsGroupChangePolicy: OnRootMismatch automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: registry image: registry.cn-guangzhou.aliyuncs.com/xingcangku/registry-photon:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: / scheme: HTTP port: 5000 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: / scheme: HTTP port: 5000 initialDelaySeconds: 1 periodSeconds: 10 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault envFrom: - secretRef: name: "release-name-harbor-registry" env: ports: - containerPort: 5000 - containerPort: 5001 volumeMounts: - name: registry-data mountPath: /storage subPath: - name: registry-htpasswd mountPath: /etc/registry/passwd subPath: passwd - name: registry-config mountPath: /etc/registry/config.yml subPath: config.yml - name: registryctl image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-registryctl:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /api/health scheme: HTTP port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: 
/api/health scheme: HTTP port: 8080 initialDelaySeconds: 1 periodSeconds: 10 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault envFrom: - configMapRef: name: "release-name-harbor-registryctl" - secretRef: name: "release-name-harbor-registry" - secretRef: name: "release-name-harbor-registryctl" env: - name: CORE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-core key: secret - name: JOBSERVICE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-jobservice key: JOBSERVICE_SECRET ports: - containerPort: 8080 volumeMounts: - name: registry-data mountPath: /storage subPath: - name: registry-config mountPath: /etc/registry/config.yml subPath: config.yml - name: registry-config mountPath: /etc/registryctl/config.yml subPath: ctl-config.yml volumes: - name: registry-htpasswd secret: secretName: release-name-harbor-registry-htpasswd items: - key: REGISTRY_HTPASSWD path: passwd - name: registry-config configMap: name: "release-name-harbor-registry" - name: registry-data persistentVolumeClaim: claimName: release-name-harbor-registry --- # Source: harbor/templates/database/database-ss.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: "release-name-harbor-database" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: database app.kubernetes.io/component: database spec: replicas: 1 serviceName: "release-name-harbor-database" selector: matchLabels: release: release-name app: "harbor" component: database template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: database app.kubernetes.io/component: database annotations: checksum/secret: 4ae67a99eb6eba38dcf86bbd000a763abf20cbb3cd0e2c11d2780167980b7c08 spec: securityContext: runAsUser: 999 fsGroup: 999 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 initContainers: # with "fsGroup" set, each time a volume is mounted, Kubernetes must recursively chown() and chmod() all the files and directories inside the volume # this causes the postgresql reports the "data directory /var/lib/postgresql/data/pgdata has group or world access" issue when using some CSIs e.g. 
Ceph # use this init container to correct the permission # as "fsGroup" applied before the init container running, the container has enough permission to execute the command - name: "data-permissions-ensurer" image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-db:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault command: ["/bin/sh"] args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"] volumeMounts: - name: database-data mountPath: /var/lib/postgresql/data subPath: containers: - name: database image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-db:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault livenessProbe: exec: command: - /docker-healthcheck.sh initialDelaySeconds: 300 periodSeconds: 10 timeoutSeconds: 1 readinessProbe: exec: command: - /docker-healthcheck.sh initialDelaySeconds: 1 periodSeconds: 10 timeoutSeconds: 1 envFrom: - secretRef: name: "release-name-harbor-database" env: # put the data into a sub directory to avoid the permission issue in k8s with restricted psp enabled # more detail refer to https://github.com/goharbor/harbor-helm/issues/756 - name: PGDATA value: "/var/lib/postgresql/data/pgdata" volumeMounts: - name: database-data mountPath: /var/lib/postgresql/data subPath: - name: shm-volume mountPath: /dev/shm volumes: - name: shm-volume emptyDir: medium: Memory sizeLimit: 512Mi volumeClaimTemplates: - metadata: name: "database-data" labels: heritage: Helm release: release-name chart: harbor app: "harbor" annotations: spec: accessModes: ["ReadWriteMany"] storageClassName: "nfs-sc" resources: requests: storage: "1Gi" --- # Source: harbor/templates/redis/statefulset.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: release-name-harbor-redis namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: redis app.kubernetes.io/component: redis spec: replicas: 1 serviceName: release-name-harbor-redis selector: matchLabels: release: release-name app: "harbor" component: redis template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: redis app.kubernetes.io/component: redis spec: securityContext: runAsUser: 999 fsGroup: 999 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: redis image: registry.cn-guangzhou.aliyuncs.com/xingcangku/redis-photon:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault livenessProbe: tcpSocket: port: 6379 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: tcpSocket: port: 6379 initialDelaySeconds: 1 periodSeconds: 10 volumeMounts: - name: data mountPath: /var/lib/redis subPath: volumeClaimTemplates: - metadata: name: data labels: heritage: Helm release: release-name chart: harbor app: "harbor" annotations: spec: accessModes: 
["ReadWriteMany"] storageClassName: "nfs-sc" resources: requests: storage: "1Gi" --- # Source: harbor/templates/trivy/trivy-sts.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: release-name-harbor-trivy namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: trivy app.kubernetes.io/component: trivy spec: replicas: 1 serviceName: release-name-harbor-trivy selector: matchLabels: release: release-name app: "harbor" component: trivy template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: trivy app.kubernetes.io/component: trivy annotations: checksum/secret: 44be12495ce86a4d9182302ace8a923cf60e791c072dddc10aab3dc17a54309f spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false containers: - name: trivy image: registry.cn-guangzhou.aliyuncs.com/xingcangku/trivy-adapter-photon:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault env: - name: HTTP_PROXY value: "" - name: HTTPS_PROXY value: "" - name: NO_PROXY value: "release-name-harbor-core,release-name-harbor-jobservice,release-name-harbor-database,release-name-harbor-registry,release-name-harbor-portal,release-name-harbor-trivy,release-name-harbor-exporter,127.0.0.1,localhost,.local,.internal" - name: "SCANNER_LOG_LEVEL" value: "info" - name: "SCANNER_TRIVY_CACHE_DIR" value: "/home/scanner/.cache/trivy" - name: "SCANNER_TRIVY_REPORTS_DIR" value: "/home/scanner/.cache/reports" - name: "SCANNER_TRIVY_DEBUG_MODE" value: "false" - name: "SCANNER_TRIVY_VULN_TYPE" value: "os,library" - name: "SCANNER_TRIVY_TIMEOUT" value: "5m0s" - name: "SCANNER_TRIVY_GITHUB_TOKEN" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: gitHubToken - name: "SCANNER_TRIVY_SEVERITY" value: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL" - name: "SCANNER_TRIVY_IGNORE_UNFIXED" value: "false" - name: "SCANNER_TRIVY_SKIP_UPDATE" value: "false" - name: "SCANNER_TRIVY_SKIP_JAVA_DB_UPDATE" value: "false" - name: "SCANNER_TRIVY_OFFLINE_SCAN" value: "false" - name: "SCANNER_TRIVY_SECURITY_CHECKS" value: "vuln" - name: "SCANNER_TRIVY_INSECURE" value: "false" - name: SCANNER_API_SERVER_ADDR value: ":8080" - name: "SCANNER_REDIS_URL" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: redisURL - name: "SCANNER_STORE_REDIS_URL" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: redisURL - name: "SCANNER_JOB_QUEUE_REDIS_URL" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: redisURL ports: - name: api-server containerPort: 8080 volumeMounts: - name: data mountPath: /home/scanner/.cache subPath: readOnly: false livenessProbe: httpGet: scheme: HTTP path: /probe/healthy port: api-server initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 10 readinessProbe: httpGet: scheme: HTTP path: /probe/ready port: api-server initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 resources: limits: cpu: 1 memory: 1Gi requests: cpu: 200m memory: 512Mi volumeClaimTemplates: - metadata: name: data labels: 
heritage: Helm release: release-name chart: harbor app: "harbor" annotations: spec: accessModes: ["ReadWriteMany"] storageClassName: "nfs-sc" resources: requests: storage: "5Gi"root@k8s-master-01:~/harbor# kubectl apply -f test.yaml -n harbor secret/release-name-harbor-core created secret/release-name-harbor-database created secret/release-name-harbor-jobservice created secret/release-name-harbor-registry created secret/release-name-harbor-registry-htpasswd created secret/release-name-harbor-registryctl created secret/release-name-harbor-trivy created configmap/release-name-harbor-core created configmap/release-name-harbor-jobservice-env created configmap/release-name-harbor-jobservice created configmap/release-name-harbor-nginx created configmap/release-name-harbor-portal created configmap/release-name-harbor-registry created configmap/release-name-harbor-registryctl created persistentvolumeclaim/release-name-harbor-jobservice created persistentvolumeclaim/release-name-harbor-registry created service/release-name-harbor-core created service/release-name-harbor-database created service/release-name-harbor-jobservice created service/harbor created service/release-name-harbor-portal created service/release-name-harbor-redis created service/release-name-harbor-registry created service/release-name-harbor-trivy created deployment.apps/release-name-harbor-core created deployment.apps/release-name-harbor-jobservice created deployment.apps/release-name-harbor-nginx created deployment.apps/release-name-harbor-portal created deployment.apps/release-name-harbor-registry created statefulset.apps/release-name-harbor-database created statefulset.apps/release-name-harbor-redis created statefulset.apps/release-name-harbor-trivy created root@k8s-master-01:~/harbor# kubectl get all -n harbor NAME READY STATUS RESTARTS AGE pod/release-name-harbor-core-849974d76-f4wqp 1/1 Running 0 83s pod/release-name-harbor-database-0 1/1 Running 0 83s pod/release-name-harbor-jobservice-75f59fcb64-29sp8 1/1 Running 3 (57s ago) 83s pod/release-name-harbor-nginx-b67dcbfc6-wlvtv 1/1 Running 0 83s pod/release-name-harbor-portal-59b9cfd58c-l4wpj 1/1 Running 0 83s pod/release-name-harbor-redis-0 1/1 Running 0 83s pod/release-name-harbor-registry-659f59fcb5-wj8zm 2/2 Running 0 83s pod/release-name-harbor-trivy-0 1/1 Running 0 83s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/harbor NodePort 10.102.128.215 <none> 80:30002/TCP 83s service/release-name-harbor-core ClusterIP 10.108.245.194 <none> 80/TCP 83s service/release-name-harbor-database ClusterIP 10.97.69.58 <none> 5432/TCP 83s service/release-name-harbor-jobservice ClusterIP 10.106.125.16 <none> 80/TCP 83s service/release-name-harbor-portal ClusterIP 10.101.153.231 <none> 80/TCP 83s service/release-name-harbor-redis ClusterIP 10.101.144.182 <none> 6379/TCP 83s service/release-name-harbor-registry ClusterIP 10.97.84.52 <none> 5000/TCP,8080/TCP 83s service/release-name-harbor-trivy ClusterIP 10.106.250.151 <none> 8080/TCP 83s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/release-name-harbor-core 1/1 1 1 83s deployment.apps/release-name-harbor-jobservice 1/1 1 1 83s deployment.apps/release-name-harbor-nginx 1/1 1 1 83s deployment.apps/release-name-harbor-portal 1/1 1 1 83s deployment.apps/release-name-harbor-registry 1/1 1 1 83s NAME DESIRED CURRENT READY AGE replicaset.apps/release-name-harbor-core-849974d76 1 1 1 83s replicaset.apps/release-name-harbor-jobservice-75f59fcb64 1 1 1 83s replicaset.apps/release-name-harbor-nginx-b67dcbfc6 1 1 1 83s 
replicaset.apps/release-name-harbor-portal-59b9cfd58c 1 1 1 83s replicaset.apps/release-name-harbor-registry-659f59fcb5 1 1 1 83s NAME READY AGE statefulset.apps/release-name-harbor-database 1/1 83s statefulset.apps/release-name-harbor-redis 1/1 83s statefulset.apps/release-name-harbor-trivy 1/1 83s
Account: admin  Password: Harbor12345 (the HARBOR_ADMIN_PASSWORD rendered above)
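With every pod Running, a quick end-to-end check is to log in to the registry through the NodePort and push a test image. The commands below are only a sketch: they assume the http://192.168.3.160:30002 endpoint and admin/Harbor12345 credentials rendered above, Harbor's default library project, a throwaway busybox image, and a Docker client whose daemon already trusts the plain-HTTP endpoint via insecure-registries.

# /etc/docker/daemon.json on the client (then restart dockerd):
#   { "insecure-registries": ["192.168.3.160:30002"] }
docker login 192.168.3.160:30002 -u admin -p Harbor12345        # authenticate against Harbor
docker pull busybox:latest                                      # any small image works as a test
docker tag busybox:latest 192.168.3.160:30002/library/busybox:test
docker push 192.168.3.160:30002/library/busybox:test            # the image should appear under the library project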
2025-07-26
Deploying GitLab on k8s
一、创建资源 1.1 pvccat > gitlab-pvc.yaml << EOF apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gitlab-data-pvc namespace: cicd spec: storageClassName: nfs-sc accessModes: - ReadWriteOnce resources: requests: storage: 50Gi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gitlab-config-pvc namespace: cicd spec: storageClassName: nfs-sc accessModes: - ReadWriteOnce resources: requests: storage: 5Gi EOF1.2 deploymentcat > gitlab-deployment.yaml << EOF apiVersion: apps/v1 kind: Deployment metadata: name: gitlab namespace: cicd spec: selector: matchLabels: app: gitlab replicas: 1 template: metadata: labels: app: gitlab spec: containers: - name: gitlab image: registry.cn-guangzhou.aliyuncs.com/xingcangku/gitlab-gitlab-ce-16.11.1-ce.0:16.11.1-ce.0 env: - name: GITLAB_SKIP_UNMIGRATED_DATA_CHECK value: "true" - name: GITLAB_OMNIBUS_CONFIG value: | external_url = 'http://gitlab.local.com/' prometheus['enable'] = false alertmanager['enable'] = false gitlab_rails['time_zone'] = 'Asia/Shanghai' gitlab_rails['gitlab_email_enabled'] = false gitlab_rails['smtp_enable'] = false gitlab_rails['gravatar_plain_url'] = 'http://gravatar.loli.net/avatar/%{hash}?s=%{size}&d=identicon' gitlab_rails['gravatar_ssl_url'] = 'https://gravatar.loli.net/avatar/%{hash}?s=%{size}&d=identicon' nginx['worker_processes'] = 2 postgresql['max_connections'] = 100 postgresql['shared_buffers'] = "128MB" ports: - containerPort: 80 name: http - containerPort: 443 name: https - containerPort: 22 name: ssh readinessProbe: exec: command: ["sh", "-c", "curl -s http://127.0.0.1/-/health"] livenessProbe: exec: command: ["sh", "-c", "curl -s http://127.0.0.1/-/health"] timeoutSeconds: 5 failureThreshold: 3 periodSeconds: 60 startupProbe: exec: command: ["sh", "-c", "curl -s http://127.0.0.1/-/health"] failureThreshold: 20 periodSeconds: 120 resources: requests: memory: "4Gi" cpu: "2" limits: memory: "8Gi" cpu: "4" volumeMounts: - name: data mountPath: /var/opt/gitlab - name: config mountPath: /etc/gitlab - name: log mountPath: /var/log/gitlab - mountPath: /dev/shm name: cache-volume volumes: - name: data persistentVolumeClaim: claimName: gitlab-data-pvc - name: config persistentVolumeClaim: claimName: gitlab-config-pvc - name: log emptyDir: {} - name: cache-volume emptyDir: medium: Memory sizeLimit: 256Mi EOF1.3 SVCcat > gitlab-svc.yaml << EOF apiVersion: v1 kind: Service metadata: name: gitlab-svc namespace: cicd spec: type: NodePort # 修改服务类型为 NodePort selector: app: gitlab ports: - port: 80 targetPort: 80 name: http nodePort: 30080 # 添加 NodePort 端口映射 (范围 30000-32767) - port: 443 targetPort: 443 name: https nodePort: 30443 # 添加 NodePort 端口映射 - port: 22 targetPort: 22 name: ssh nodePort: 30022 # 添加 NodePort 端口映射 EOF二、访问验证root@k8s-01:~/gitlab# kubectl get all -n cicd NAME READY STATUS RESTARTS AGE pod/gitlab-75dcff8b46-bl5mm 1/1 Running 0 10m pod/jenkins-c884498c6-jt5rd 1/1 Running 0 13m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/gitlab-svc NodePort 10.101.0.24 <none> 80:30080/TCP,443:30443/TCP,22:30022/TCP 10m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/gitlab 1/1 1 1 10m deployment.apps/jenkins 1/1 1 1 13m NAME DESIRED CURRENT READY AGE replicaset.apps/gitlab-75dcff8b46 1 1 1 10m replicaset.apps/jenkins-c884498c6 1 1 1 13m 验证访问客户端新增hosts记录 192.168.3.160 gitlab.local.com 账号默认:root 密码需要去容器里面的 这里路径查看cat /etc/gitlab/initial_root_password root@k8s-master-01:~/gitlab# kubectl exec -it -n cicd gitlab-6fb47c476-vb6wf -- bash root@gitlab-6fb47c476-vb6wf:/# cat /etc/gitlab/initial_root_password # WARNING: This 
value is valid only in the following conditions # 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run). # 2. Password hasn't been changed manually, either via UI or via command line. # # If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password. Password: 8cF7BzixYvRbvtDI1sQjxr+PDMQ1sohG7a+WEiX42bY= # NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.
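Two follow-up checks are often useful here; the sketch below reuses the assumptions from this post (namespace cicd, node IP 192.168.3.160, NodePort 30080, pod label app=gitlab): hit the health endpoint through the NodePort, and reset the root password once initial_root_password has been auto-deleted.

# Liveness check through the NodePort (the hosts entry maps gitlab.local.com to the node IP):
curl -s -H "Host: gitlab.local.com" http://192.168.3.160:30080/-/health    # expect: GitLab OK

# If /etc/gitlab/initial_root_password is already gone, reset root interactively inside the pod:
POD=$(kubectl -n cicd get pod -l app=gitlab -o jsonpath='{.items[0].metadata.name}')
kubectl -n cicd exec -it "$POD" -- gitlab-rake "gitlab:password:reset[root]"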
2025-07-24
Installing NFS for k8s
一、本地部署 NFS 服务端(以 Ubuntu 为例) 1.1 安装 NFS 服务端sudo apt update sudo apt install nfs-kernel-server1.2 创建共享目录并配置权限sudo mkdir -p /srv/nfs/k8s-pv # 共享目录 sudo chown nobody:nogroup /srv/nfs/k8s-pv # 设置权限(允许匿名访问) sudo chmod 777 /srv/nfs/k8s-pv1.3 配置NFS导出目录#编辑 /etc/exports: sudo vim /etc/exports #添加以下内容(允许所有客户端访问): /srv/nfs/k8s-pv *(rw,sync,no_subtree_check,no_root_squash) #参数说明: #rw:读写权限 #sync:同步写入磁盘 #no_root_squash:允许 root 用户访问 #no_subtree_check:禁用子树检查(提高性能)1.4 应用配置并启动服务sudo exportfs -ra # 重新加载配置 sudo systemctl restart nfs-kernel-server sudo systemctl enable nfs-kernel-server1.5 验证 NFS 共享showmount -e localhost # 应显示共享目录二、在所有Kubernetes节点安装NFS客户端 2.1 根据节点系统类型执行# Ubuntu/Debian 节点 sudo apt-get update sudo apt-get install nfs-common -y # CentOS/RHEL 节点 sudo yum install nfs-utils -y2.2 验证NFS服务器可访问性#在节点执行测试(替换您的NFS服务器IP和路径 mkdir -p /mnt/nfs-test mount -t nfs 192.168.3.160:/srv/nfs/k8s-pv /mnt/nfs-test umount /mnt/nfs-test三、在 Kubernetes 中部署 NFS StorageClass 3.1 安装 NFS 客户端驱动#使用 nfs-subdir-external-provisioner 动态创建 PV helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner helm install nfs-sc nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ --set nfs.server=<NFS_SERVER_IP> \ # 替换为 NFS 服务端 IP --set nfs.path=/srv/nfs/k8s-pv \ --set storageClass.name=nfs-sc \ --set storageClass.defaultClass=true #如果拉取不下来就把文件下载到本地然后改yaml文件镜像helm install nfs-sc nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \ --set nfs.server=192.168.30.180 \ --set nfs.path=/srv/nfs/k8s-pv \ --set storageClass.name=nfs-sc \ --set storageClass.defaultClass=true \ --set image.repository=registry.cn-guangzhou.aliyuncs.com/xingcangku/nfs-subdir-external-provisioner \ --set image.tag=v4.0.2 \ --set image.pullPolicy=IfNotPresent3.2 验证安装root@k8s-master-01:~/helm/nfs-subdir-external-provisioner# kubectl get storageclass # 应看到 nfs-sc 且为默认 NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE nfs-sc cluster.local/nfs-provisioner-nfs-subdir-external-provisioner Delete Immediate true 7m21s root@k8s-master-01:~/helm/nfs-subdir-external-provisioner# kubectl get pods -l app=nfs-subdir-external-provisioner # 检查 Pod 状态 NAME READY STATUS RESTARTS AGE nfs-provisioner-nfs-subdir-external-provisioner-6cbb5bbf-57tqn 1/1 Running 0 7m24s
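To confirm that dynamic provisioning works end to end, it helps to create a throwaway PVC against the new class and watch it bind; the snippet below is a minimal sketch (the nfs-test-pvc name is made up for illustration, and since nfs-sc uses the Immediate binding mode shown above, the PVC binds without needing a pod).

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
spec:
  storageClassName: nfs-sc
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 100Mi
EOF

kubectl get pvc nfs-test-pvc        # STATUS should become Bound and a PV is created automatically
ls /srv/nfs/k8s-pv                  # on the NFS server, a <namespace>-<pvc>-<pv> directory appears
kubectl delete pvc nfs-test-pvc     # cleanup; the provisioner removes or archives the directory depending on archiveOnDelete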
2025-06-20
Observability in practice with OpenTelemetry + Grafana
1. Solution overview
OpenTelemetry + Prometheus + Loki + Tempo + Grafana is a modern, cloud-native observability stack covering the three core dimensions of traces, logs, and metrics, and it gives applications in a microservice architecture a unified observability platform.
2. Components
3. System architecture
4. Deploying the demo application
4.1 About the application
https://opentelemetry.io/docs/demo/kubernetes-deployment/
The OpenTelemetry project ships an official opentelemetry-demo. It simulates a microservice-based web store made up of a number of services.
4.2 Deploy the application
4.2.1 Fetch the chart
# helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
# helm pull open-telemetry/opentelemetry-demo --untar
# cd opentelemetry-demo
# ls
Chart.lock  Chart.yaml  examples  grafana-dashboards  README.md  UPGRADING.md  values.yaml  charts  ci  flagd  products  templates  values.schema.json
4.2.2 Customize the chart. The default chart bundles its own opentelemetry-collector, prometheus, grafana, opensearch and jaeger components; disable them first:
# vim values.yaml
default:
  # List of environment variables applied to all components
  env:
    - name: OTEL_COLLECTOR_NAME
      value: center-collector.opentelemetry.svc
opentelemetry-collector:
  enabled: false
jaeger:
  enabled: false
prometheus:
  enabled: false
grafana:
  enabled: false
opensearch:
  enabled: false
4.2.3 Install the demo application
# helm install demo . -f values.yaml
All services are available via the frontend proxy: http://localhost:8080
by running these commands:
  kubectl --namespace default port-forward svc/frontend-proxy 8080:8080

With the frontend-proxy service exposed via port-forward, the following services are reachable:
  Web store            http://localhost:8080/
  Jaeger UI            http://localhost:8080/jaeger/ui/
  Grafana              http://localhost:8080/grafana/
  Load Generator UI    http://localhost:8080/loadgen/
  Feature Flags UI     http://localhost:8080/feature/
# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
accounting-79cdcf89df-h8nnc        1/1     Running   0          2m15s
ad-dc6768b6-lvzcq                  1/1     Running   0          2m14s
cart-65c89fcdd7-8tcwp              1/1     Running   0          2m15s
checkout-7c45459f67-xvft2          1/1     Running   0          2m13s
currency-65dd8c8f6-pxxbb           1/1     Running   0          2m15s
email-5659b8d84f-9ljr9             1/1     Running   0          2m15s
flagd-57fdd95655-xrmsk             2/2     Running   0          2m14s
fraud-detection-7db9cbbd4d-znxq6   1/1     Running   0          2m15s
frontend-6bd764b6b9-gmstv          1/1     Running   0          2m15s
frontend-proxy-56977d5ddb-cl87k    1/1     Running   0          2m15s
image-provider-54b56c68b8-gdgnv    1/1     Running   0          2m15s
kafka-976bc899f-79vd7              1/1     Running   0          2m14s
load-generator-79dd9d8d58-hcw8c    1/1     Running   0          2m15s
payment-6d9748df64-46zwt           1/1     Running   0          2m15s
product-catalog-658d99b4d4-xpczv   1/1     Running   0          2m13s
quote-5dfbb544f5-6r8gr             1/1     Running   0          2m14s
recommendation-764b6c5cf8-lnkm6    1/1     Running   0          2m14s
shipping-5f65469746-zdr2g          1/1     Running   0          2m15s
valkey-cart-85ccb5db-kr74s         1/1     Running   0          2m15s
# kubectl get service
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ad                ClusterIP   10.103.72.85     <none>        8080/TCP                     2m19s
cart              ClusterIP   10.106.118.178   <none>        8080/TCP                     2m19s
checkout          ClusterIP   10.109.56.238    <none>        8080/TCP                     2m19s
currency          ClusterIP   10.96.112.137    <none>        8080/TCP                     2m19s
email             ClusterIP   10.103.214.222   <none>        8080/TCP                     2m19s
flagd             ClusterIP   10.101.48.231    <none>        8013/TCP,8016/TCP,4000/TCP   2m19s
frontend          ClusterIP   10.103.70.199    <none>        8080/TCP                     2m19s
frontend-proxy    ClusterIP   10.106.13.80     <none>        8080/TCP                     2m19s
image-provider    ClusterIP   10.109.69.146    <none>        8081/TCP                     2m19s
kafka             ClusterIP   10.104.9.210     <none>        9092/TCP,9093/TCP            2m19s
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP                      176d
load-generator    ClusterIP   10.106.97.167    <none>        8089/TCP                     2m19s
payment           ClusterIP   10.102.143.196   <none>        8080/TCP                     2m19s
product-catalog   ClusterIP   10.109.219.138   <none>        8080/TCP                     2m19s
quote             ClusterIP   10.111.139.80    <none>        8080/TCP                     2m19s
recommendation    ClusterIP   10.97.118.12     <none>        8080/TCP                     2m19s
shipping          ClusterIP   10.107.102.160   <none>        8080/TCP                     2m19s
valkey-cart       ClusterIP   10.104.34.233    <none>        6379/TCP                     2m19s
4.2.4 Next, create an Ingress resource that exposes port 8080 of the frontend-proxy service:
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: demo
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`demo.cuiliangblog.cn`)
      kind: Rule
      services:
        - name: frontend-proxy
          port: 8080
4.2.5 After the Ingress resource is created, add a hosts entry and verify access.
4.3 Configure Ingress export
Taking Ingress as an example: starting with Traefik v2.6, Traefik has initial support for exporting traces over the OpenTelemetry protocol, so Traefik's data can be sent to any OTel-compatible backend. Traefik deployment is covered in https://www.cuiliangblog.cn/detail/section/140101250, and access-log configuration in https://doc.traefik.io/traefik/observability/access-logs/#opentelemetry
# vim values.yaml
experimental:              # experimental feature switches
  otlpLogs: true           # export logs in OTLP format
extraArguments:            # extra startup arguments
4.3 Configuring Traefik (ingress) output

Taking the ingress layer as an example: starting with v2.6, Traefik has initial support for exporting traces over the OpenTelemetry protocol, so Traefik's traces, metrics and access logs can be sent to any OTel-compatible backend. Traefik deployment reference: https://www.cuiliangblog.cn/detail/section/140101250, access-log configuration reference: https://doc.traefik.io/traefik/observability/access-logs/#opentelemetry

# vim values.yaml
experimental:                        # experimental features
  otlpLogs: true                     # export logs in OTLP format
additionalArguments:                 # custom start-up arguments
  - "--experimental.otlpLogs=true"
  - "--accesslog.otlp=true"
  - "--accesslog.otlp.grpc=true"
  - "--accesslog.otlp.grpc.endpoint=center-collector.opentelemetry.svc:4317"
  - "--accesslog.otlp.grpc.insecure=true"
metrics:                             # metrics
  addInternals: true                 # include internal traffic
  otlp:
    enabled: true                    # export in OTLP format
    grpc:                            # use the gRPC protocol
      endpoint: "center-collector.opentelemetry.svc:4317"   # OpenTelemetry collector address
      insecure: true                 # skip certificate verification
tracing:                             # distributed tracing
  addInternals: true                 # trace internal traffic (e.g. redirects)
  otlp:
    enabled: true                    # export in OTLP format
    grpc:                            # use the gRPC protocol
      endpoint: "center-collector.opentelemetry.svc:4317"   # OpenTelemetry collector address
      insecure: true                 # skip certificate verification

五、Deploying MinIO

5.1 Single-node MinIO object storage

5.1.1 Deploy MinIO

[root@k8s-master minio]# cat > minio.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: minio-pvc
  namespace: minio
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: minio
  name: minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
      - name: minio
        image: quay.io/minio/minio:latest
        command:
        - /bin/bash
        - -c
        args:
        - minio server /data --console-address :9090
        volumeMounts:
        - mountPath: /data
          name: data
        ports:
        - containerPort: 9090
          name: console
        - containerPort: 9000
          name: api
        env:
        - name: MINIO_ROOT_USER        # admin user name
          value: "admin"
        - name: MINIO_ROOT_PASSWORD    # admin password, at least 8 characters
          value: "minioadmin"
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minio-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: minio
spec:
  type: NodePort
  selector:
    app: minio
  ports:
  - name: console
    port: 9090
    protocol: TCP
    targetPort: 9090
    nodePort: 30300
  - name: api
    port: 9000
    protocol: TCP
    targetPort: 9000
    nodePort: 30200
EOF
[root@k8s-master minio]# kubectl apply -f minio.yaml
deployment.apps/minio created
service/minio-service created

5.1.2 Access the web console via NodePort

[root@k8s-master minio]# kubectl get pod -n minio
NAME                     READY   STATUS    RESTARTS   AGE
minio-86577f8755-l65mf   1/1     Running   0          11m
[root@k8s-master minio]# kubectl get svc -n minio
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
minio-service   NodePort   10.102.223.132   <none>        9090:30300/TCP,9000:30200/TCP   10m

Open <k8s node IP>:30300 and log in with the user name (admin) and password (minioadmin) set in the Deployment above.

5.1.3 Access through an IngressRoute

[root@k8s-master minio]# cat minio-ingress.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: minio-console
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`minio.test.com`)       # console domain
    kind: Rule
    services:
    - name: minio-service               # must match the Service name
      port: 9090                        # must match the Service port
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: minio-api
  namespace: minio
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`minio-api.test.com`)   # API domain
    kind: Rule
    services:
    - name: minio-service               # must match the Service name
      port: 9000                        # must match the Service port
[root@k8s-master minio]# kubectl apply -f minio-ingress.yaml
ingressroute.traefik.containo.us/minio-console created
ingressroute.traefik.containo.us/minio-api created

Add a hosts record:
192.168.10.10 minio.test.com
and the console can then be reached at that domain.

5.2 Deploying a MinIO cluster with Helm

A MinIO cluster can be deployed either with the operator or with Helm. For a single Kubernetes cluster the Helm chart is the recommended approach; the operator is better suited to running multiple MinIO clusters (multi-tenant scenarios). Helm deployment reference: https://artifacthub.io/packages/helm/bitnami/minio.

5.2.1 Resource and role planning

When MinIO is deployed in distributed mode for high availability, a total of at least 4 drives is required for erasure coding. Here the MinIO data lives in the data1 and data2 directories on k8s-work1 and k8s-work2 and is persisted with local PVs.

# create the data directories
[root@k8s-work1 ~]# mkdir -p /data1/minio
[root@k8s-work1 ~]# mkdir -p /data2/minio
[root@k8s-work2 ~]# mkdir -p /data1/minio
[root@k8s-work2 ~]# mkdir -p /data2/minio

5.2.2 Download the Helm chart

[root@k8s-master ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
[root@k8s-master ~]# helm search repo minio
NAME CHART VERSION APP VERSION DESCRIPTION bitnami/minio 14.1.4 2024.3.30 MinIO(R) is an object storage server, compatibl... [root@k8s-master ~]# helm pull bitnami/minio --untar [root@k8s-master ~]# cd minio root@k8s01:~/helm/minio/minio-demo# ls minio minio-17.0.5.tgz root@k8s01:~/helm/minio/minio-demo# cd minio/ root@k8s01:~/helm/minio/minio-demo/minio# ls Chart.lock Chart.yaml ingress.yaml pv.yaml storageClass.yaml values.yaml charts demo.yaml pvc.yaml README.md templates values.yaml.bak 5.2.3创建scprovisioner 字段定义为 no-provisioner,这是尚不支持动态预配置动态生成 PV,所以我们需要提前手动创建 PV。volumeBindingMode 因为关系 定义为 WaitForFirstConsumer,是本地持久卷里一个非常重要的特性,即:延迟绑定。延迟绑定就是在我们提交 PVC 文件时,StorageClass 为我们延迟绑定 PV 与 PVC 的对应。root@k8s01:~/helm/minio/minio-demo/minio# cat storageClass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer5.2.4创建pvroot@k8s01:~/helm/minio/minio-demo/minio# cat pv.yaml apiVersion: v1 kind: PersistentVolume metadata: name: minio-pv1 labels: app: minio-0 spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-storage # storageClass名称,与前面创建的storageClass保持一致 local: path: /data1/minio # 本地存储路径 nodeAffinity: # 调度至work1节点 required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - k8s01 --- apiVersion: v1 kind: PersistentVolume metadata: name: minio-pv2 labels: app: minio-1 spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-storage local: path: /data2/minio nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - k8s01 --- apiVersion: v1 kind: PersistentVolume metadata: name: minio-pv3 labels: app: minio-2 spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-storage local: path: /data1/minio nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - k8s02 --- apiVersion: v1 kind: PersistentVolume metadata: name: minio-pv4 labels: app: minio-3 spec: capacity: storage: 10Gi volumeMode: Filesystem accessModes: - ReadWriteOnce storageClassName: local-storage local: path: /data2/minio nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - k8s02 root@k8s01:~/helm/minio/minio-demo/minio# kubectl get pv | grep minio minio-pv1 10Gi RWO Retain Bound minio/data-0-minio-demo-1 local-storage 10d minio-pv2 10Gi RWO Retain Bound minio/data-1-minio-demo-1 local-storage 10d minio-pv3 10Gi RWO Retain Bound minio/data-0-minio-demo-0 local-storage 10d minio-pv4 10Gi RWO Retain Bound minio/data-1-minio-demo-0 local-storage 10d5.2.5创建pvc创建的时候注意pvc的名字的构成:pvc的名字 = volume_name-statefulset_name-序号,然后通过selector标签选择,强制将pvc与pv绑定。root@k8s01:~/helm/minio/minio-demo/minio# cat pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-0 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-0 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-1 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-1 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-2 namespace: minio spec: accessModes: - 
ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-2 --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-minio-3 namespace: minio spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: local-storage selector: matchLabels: app: minio-3root@k8s01:~/helm/minio/minio-demo/minio# kubectl get pvc -n minio NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-0-minio-demo-0 Bound minio-pv3 10Gi RWO local-storage 10d data-0-minio-demo-1 Bound minio-pv1 10Gi RWO local-storage 10d data-1-minio-demo-0 Bound minio-pv4 10Gi RWO local-storage 10d data-1-minio-demo-1 Bound minio-pv2 10Gi RWO local-storage 10d data-minio-0 Pending local-storage 10d 5.2.6 修改配置68 image: 69 registry: docker.io 70 repository: bitnami/minio 71 tag: 2024.3.30-debian-12-r0 104 mode: distributed # 集群模式,单节点为standalone,分布式集群为distributed 197 statefulset: 215 replicaCount: 2 # 节点数 218 zones: 1 # 区域数,1个即可 221 drivesPerNode: 2 # 每个节点数据目录数.2节点×2目录组成4节点的mimio集群 558 #podAnnotations: {} # 导出Prometheus指标 559 podAnnotations: 560 prometheus.io/scrape: "true" 561 prometheus.io/path: "/minio/v2/metrics/cluster" 562 prometheus.io/port: "9000" 1049 persistence: 1052 enabled: true 1060 storageClass: "local-storage" 1063 mountPath: /bitnami/minio/data 1066 accessModes: 1067 - ReadWriteOnce 1070 size: 10Gi 1073 annotations: {} 1076 existingClaim: ""5.2.7 部署miniOkubectl create ns minioroot@k8s01:~/helm/minio/minio-demo/minio# cat demo.yaml --- # Source: minio/templates/console/networkpolicy.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: minio-demo-console namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2.0.1 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: podSelector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console app.kubernetes.io/part-of: minio policyTypes: - Ingress - Egress egress: - {} ingress: # Allow inbound connections - ports: - port: 9090 --- # Source: minio/templates/networkpolicy.yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio spec: podSelector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio policyTypes: - Ingress - Egress egress: - {} ingress: # Allow inbound connections - ports: - port: 9000 --- # Source: minio/templates/console/pdb.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: minio-demo-console namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2.0.1 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: maxUnavailable: 1 selector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console app.kubernetes.io/part-of: minio --- # Source: minio/templates/pdb.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: 
minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio spec: maxUnavailable: 1 selector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio --- # Source: minio/templates/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/part-of: minio automountServiceAccountToken: false secrets: - name: minio-demo --- # Source: minio/templates/secrets.yaml apiVersion: v1 kind: Secret metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio type: Opaque data: root-user: "YWRtaW4=" root-password: "OGZHWWlrY3lpNA==" --- # Source: minio/templates/console/service.yaml apiVersion: v1 kind: Service metadata: name: minio-demo-console namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2.0.1 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: type: ClusterIP ports: - name: http port: 9090 targetPort: http nodePort: null selector: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console app.kubernetes.io/part-of: minio --- # Source: minio/templates/headless-svc.yaml apiVersion: v1 kind: Service metadata: name: minio-demo-headless namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio spec: type: ClusterIP clusterIP: None ports: - name: tcp-api port: 9000 targetPort: api publishNotReadyAddresses: true selector: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio --- # Source: minio/templates/service.yaml apiVersion: v1 kind: Service metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio spec: type: ClusterIP ports: - name: tcp-api port: 9000 targetPort: api nodePort: null selector: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio --- # Source: minio/templates/console/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: minio-demo-console namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2.0.1 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: 
replicas: 1 strategy: type: RollingUpdate selector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console app.kubernetes.io/part-of: minio template: metadata: labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: console app.kubernetes.io/part-of: minio spec: serviceAccountName: minio-demo automountServiceAccountToken: false affinity: podAffinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: console topologyKey: kubernetes.io/hostname weight: 1 nodeAffinity: securityContext: fsGroup: 1001 fsGroupChangePolicy: Always supplementalGroups: [] sysctls: [] containers: - name: console image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-bitnami-minio-object-browser:2.0.1-debian-12-r2 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true runAsGroup: 1001 runAsNonRoot: true runAsUser: 1001 seLinuxOptions: {} seccompProfile: type: RuntimeDefault args: - server - --host - "0.0.0.0" - --port - "9090" env: - name: CONSOLE_MINIO_SERVER value: "http://minio-demo:9000" resources: limits: cpu: 150m ephemeral-storage: 2Gi memory: 192Mi requests: cpu: 100m ephemeral-storage: 50Mi memory: 128Mi ports: - name: http containerPort: 9090 livenessProbe: failureThreshold: 5 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 tcpSocket: port: http readinessProbe: failureThreshold: 5 initialDelaySeconds: 5 periodSeconds: 5 successThreshold: 1 timeoutSeconds: 5 httpGet: path: /minio port: http volumeMounts: - name: empty-dir mountPath: /tmp subPath: tmp-dir - name: empty-dir mountPath: /.console subPath: app-console-dir volumes: - name: empty-dir emptyDir: {} --- # Source: minio/templates/application.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: minio-demo namespace: "minio" labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio spec: selector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio podManagementPolicy: Parallel replicas: 2 serviceName: minio-demo-headless updateStrategy: type: RollingUpdate template: metadata: labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: minio app.kubernetes.io/version: 2025.5.24 helm.sh/chart: minio-17.0.5 app.kubernetes.io/component: minio app.kubernetes.io/part-of: minio annotations: checksum/credentials-secret: b06d639ea8d96eecf600100351306b11b3607d0ae288f01fe3489b67b6cc4873 prometheus.io/path: /minio/v2/metrics/cluster prometheus.io/port: "9000" prometheus.io/scrape: "true" spec: serviceAccountName: minio-demo affinity: podAffinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio app.kubernetes.io/component: minio topologyKey: kubernetes.io/hostname weight: 1 nodeAffinity: 
automountServiceAccountToken: false securityContext: fsGroup: 1001 fsGroupChangePolicy: OnRootMismatch supplementalGroups: [] sysctls: [] initContainers: containers: - name: minio image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-bitnami-minio:2025.5.24-debian-12-r6 imagePullPolicy: "IfNotPresent" securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false readOnlyRootFilesystem: true runAsGroup: 1001 runAsNonRoot: true runAsUser: 1001 seLinuxOptions: {} seccompProfile: type: RuntimeDefault env: - name: BITNAMI_DEBUG value: "false" - name: MINIO_DISTRIBUTED_MODE_ENABLED value: "yes" - name: MINIO_DISTRIBUTED_NODES value: "minio-demo-{0...1}.minio-demo-headless.minio.svc.cluster.local:9000/bitnami/minio/data-{0...1}" - name: MINIO_SCHEME value: "http" - name: MINIO_FORCE_NEW_KEYS value: "no" - name: MINIO_ROOT_USER_FILE value: /opt/bitnami/minio/secrets/root-user - name: MINIO_ROOT_PASSWORD_FILE value: /opt/bitnami/minio/secrets/root-password - name: MINIO_SKIP_CLIENT value: "yes" - name: MINIO_API_PORT_NUMBER value: "9000" - name: MINIO_BROWSER value: "off" - name: MINIO_PROMETHEUS_AUTH_TYPE value: "public" - name: MINIO_DATA_DIR value: "/bitnami/minio/data-0" ports: - name: api containerPort: 9000 livenessProbe: httpGet: path: /minio/health/live port: api scheme: "HTTP" initialDelaySeconds: 5 periodSeconds: 5 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: tcpSocket: port: api initialDelaySeconds: 5 periodSeconds: 5 timeoutSeconds: 1 successThreshold: 1 failureThreshold: 5 resources: limits: cpu: 375m ephemeral-storage: 2Gi memory: 384Mi requests: cpu: 250m ephemeral-storage: 50Mi memory: 256Mi volumeMounts: - name: empty-dir mountPath: /tmp subPath: tmp-dir - name: empty-dir mountPath: /opt/bitnami/minio/tmp subPath: app-tmp-dir - name: empty-dir mountPath: /.mc subPath: app-mc-dir - name: minio-credentials mountPath: /opt/bitnami/minio/secrets/ - name: data-0 mountPath: /bitnami/minio/data-0 - name: data-1 mountPath: /bitnami/minio/data-1 volumes: - name: empty-dir emptyDir: {} - name: minio-credentials secret: secretName: minio-demo volumeClaimTemplates: - metadata: name: data-0 labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "10Gi" storageClassName: local-storage - metadata: name: data-1 labels: app.kubernetes.io/instance: minio-demo app.kubernetes.io/name: minio spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "10Gi" storageClassName: local-storage 5.2.8查看资源信息root@k8s01:~/helm/minio/minio-demo/minio# kubectl get all -n minio NAME READY STATUS RESTARTS AGE pod/minio-demo-0 1/1 Running 10 (5h27m ago) 10d pod/minio-demo-1 1/1 Running 10 (5h27m ago) 27h pod/minio-demo-console-7b586c5f9c-l8hnc 1/1 Running 9 (5h27m ago) 10d NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/minio-demo ClusterIP 10.97.92.61 <none> 9000/TCP 10d service/minio-demo-console ClusterIP 10.101.127.112 <none> 9090/TCP 10d service/minio-demo-headless ClusterIP None <none> 9000/TCP 10d NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/minio-demo-console 1/1 1 1 10d NAME DESIRED CURRENT READY AGE replicaset.apps/minio-demo-console-7b586c5f9c 1 1 1 10d NAME READY AGE statefulset.apps/minio-demo 2/2 10d 5.2.9创建ingress资源#以ingrss-nginx为例: # cat > ingress.yaml << EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: minio-ingreess namespace: minio annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: 
ingressClassName: nginx rules: - host: minio.local.com http: paths: - path: / pathType: Prefix backend: service: name: minio port: number: 9001 EOF#以traefik为例: root@k8s01:~/helm/minio/minio-demo/minio# cat ingress.yaml apiVersion: traefik.io/v1alpha1 kind: IngressRoute metadata: name: minio-console namespace: minio spec: entryPoints: - web routes: - match: Host(`minio.local.com`) kind: Rule services: - name: minio-demo-console # 修正为 Console Service 名称 port: 9090 # 修正为 Console 端口 --- apiVersion: traefik.io/v1alpha1 kind: IngressRoute metadata: name: minio-api namespace: minio spec: entryPoints: - web routes: - match: Host(`minio-api.local.com`) kind: Rule services: - name: minio-demo # 保持 API Service 名称 port: 9000 # 保持 API 端口5.2.10获取用户名密码# 获取用户名和密码 [root@k8s-master minio]# kubectl get secret --namespace minio minio -o jsonpath="{.data.root-user}" | base64 -d admin [root@k8s-master minio]# kubectl get secret --namespace minio minio -o jsonpath="{.data.root-password}" | base64 -d HWLLGMhgkp5.2.11访问web管理页5.3operator部署minIO企业版需要收费六、部署 Prometheus如果已安装metrics-server需要先卸载,否则冲突https://axzys.cn/index.php/archives/423/七、部署Thanos监控[可选]Thanos 很好的弥补了 Prometheus 在持久化存储和 多个 prometheus 集群之间跨集群查询方面的不足的问题。具体可参考文档https://thanos.io/, 部署参考文档:https://github.com/thanos-io/kube-thanos,本实例使用 receive 模式部署。 如果需要使用 sidecar 模式部署,可参考文档:https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/platform/thanos.mdhttps://www.cuiliangblog.cn/detail/section/215968508八、部署 Grafanahttps://axzys.cn/index.php/archives/423/九、部署 OpenTelemetryhttps://www.cuiliangblog.cn/detail/section/215947486root@k8s01:~/helm/opentelemetry/cert-manager# cat new-center-collector.yaml apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector # 元数据定义部分 metadata: name: center # Collector 的名称为 center namespace: opentelemetry # 具体的配置内容 spec: replicas: 1 # 设置副本数量为1 # image: otel/opentelemetry-collector-contrib:latest # 使用支持 elasticsearch 的镜像 image: registry.cn-guangzhou.aliyuncs.com/xingcangku/otel-opentelemetry-collector-contrib-latest:latest config: # 定义 Collector 配置 receivers: # 接收器,用于接收遥测数据(如 trace、metrics、logs) otlp: # 配置 OTLP(OpenTelemetry Protocol)接收器 protocols: # 启用哪些协议来接收数据 grpc: endpoint: 0.0.0.0:4317 # 启用 gRPC 协议 http: endpoint: 0.0.0.0:4318 # 启用 HTTP 协议 processors: # 处理器,用于处理收集到的数据 batch: {} # 批处理器,用于将数据分批发送,提高效率 exporters: # 导出器,用于将处理后的数据发送到后端系统 debug: {} # 使用 debug 导出器,将数据打印到终端(通常用于测试或调试) otlp: # 数据发送到tempo的grpc端口 endpoint: "tempo:4317" tls: # 跳过证书验证 insecure: true prometheus: endpoint: "0.0.0.0:9464" # prometheus指标暴露端口 loki: endpoint: http://loki-gateway.loki.svc/loki/api/v1/push headers: X-Scope-OrgID: "fake" # 与Grafana配置一致 labels: attributes: # 从日志属性提取 k8s.pod.name: "pod" k8s.container.name: "container" k8s.namespace.name: "namespace" app: "application" # 映射应用中设置的标签 resource: # 从SDK资源属性提取 service.name: "service" service: # 服务配置部分 telemetry: logs: level: "debug" # 设置 Collector 自身日志等级为 debug(方便观察日志) pipelines: # 定义处理管道 traces: # 定义 trace 类型的管道 receivers: [otlp] # 接收器为 OTLP processors: [batch] # 使用批处理器 exporters: [otlp] # 将数据导出到OTLP metrics: # 定义 metrics 类型的管道 receivers: [otlp] # 接收器为 OTLP processors: [batch] # 使用批处理器 exporters: [prometheus] # 将数据导出到prometheus logs: receivers: [otlp] processors: [batch] # 使用批处理器 exporters: [loki] 十、部署 Tempo 10.1Tempo 介绍Grafana Tempo是一个开源、易于使用的大规模分布式跟踪后端。Tempo具有成本效益,仅需要对象存储即可运行,并且与Grafana,Prometheus和Loki深度集成,Tempo可以与任何开源跟踪协议一起使用,包括Jaeger、Zipkin和OpenTelemetry。它仅支持键/值查找,并且旨在与用于发现的日志和度量标准(示例性)协同工作。https://axzys.cn/index.php/archives/418/十一、部署Loki日志收集 11.1 loki 介绍 
11.1.1组件功能Loki架构十分简单,由以下三个部分组成: Loki 是主服务器,负责存储日志和处理查询 。 promtail 是代理,负责收集日志并将其发送给 loki 。 Grafana 用于 UI 展示。 只要在应用程序服务器上安装promtail来收集日志然后发送给Loki存储,就可以在Grafana UI界面通过添加Loki为数据源进行日志查询11.1.2系统架构Distributor(接收日志入口):负责接收客户端发送的日志,进行标签解析、预处理、分片计算,转发给 Ingester。 Ingester(日志暂存处理):处理 Distributor 发送的日志,缓存到内存,定期刷写到对象存储或本地。支持查询时返回缓存数据。 Querier(日志查询器):负责处理来自 Grafana 或其他客户端的查询请求,并从 Ingester 和 Store 中读取数据。 Index:boltdb-shipper 模式的 Index 提供者 在分布式部署中,读取和缓存 index 数据,避免 S3 等远程存储频繁请求。 Chunks 是Loki 中一种核心的数据结构和存储形式,主要由 ingester 负责生成和管理。它不是像 distributor、querier 那样的可部署服务,但在 Loki 架构和存储中极其关键。11.1.3 部署 lokiloki 也分为整体式 、微服务式、可扩展式三种部署模式,具体可参考文档https://grafana.com/docs/loki/latest/setup/install/helm/concepts/,此处以可扩展式为例: loki 使用 minio 对象存储配置可参考文档:https://blog.min.io/how-to-grafana-loki-minio/# helm repo add grafana https://grafana.github.io/helm-charts "grafana" has been added to your repositories # helm pull grafana/loki --untar # ls charts Chart.yaml README.md requirements.lock requirements.yaml templates values.yaml--- # Source: loki/templates/backend/poddisruptionbudget-backend.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-backend namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend maxUnavailable: 1 --- # Source: loki/templates/chunks-cache/poddisruptionbudget-chunks-cache.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-memcached-chunks-cache namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-chunks-cache spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-chunks-cache maxUnavailable: 1 --- # Source: loki/templates/read/poddisruptionbudget-read.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-read namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: read spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read maxUnavailable: 1 --- # Source: loki/templates/results-cache/poddisruptionbudget-results-cache.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-memcached-results-cache namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-results-cache spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: memcached-results-cache maxUnavailable: 1 --- # Source: loki/templates/write/poddisruptionbudget-write.yaml apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: loki-write namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write maxUnavailable: 1 --- # Source: loki/templates/loki-canary/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: loki-canary namespace: loki labels: helm.sh/chart: loki-6.30.1 
app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: canary automountServiceAccountToken: true --- # Source: loki/templates/serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: loki namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" automountServiceAccountToken: true --- # Source: loki/templates/config.yaml apiVersion: v1 kind: ConfigMap metadata: name: loki namespace: loki data: config.yaml: | auth_enabled: true bloom_build: builder: planner_address: loki-backend-headless.loki.svc.cluster.local:9095 enabled: false bloom_gateway: client: addresses: dnssrvnoa+_grpc._tcp.loki-backend-headless.loki.svc.cluster.local enabled: false chunk_store_config: chunk_cache_config: background: writeback_buffer: 500000 writeback_goroutines: 1 writeback_size_limit: 500MB memcached: batch_size: 4 parallelism: 5 memcached_client: addresses: dnssrvnoa+_memcached-client._tcp.loki-chunks-cache.loki.svc consistent_hash: true max_idle_conns: 72 timeout: 2000ms common: compactor_address: 'http://loki-backend:3100' path_prefix: /var/loki replication_factor: 3 frontend: scheduler_address: "" tail_proxy_url: "" frontend_worker: scheduler_address: "" index_gateway: mode: simple limits_config: max_cache_freshness_per_query: 10m query_timeout: 300s reject_old_samples: true reject_old_samples_max_age: 168h split_queries_by_interval: 15m volume_enabled: true memberlist: join_members: - loki-memberlist pattern_ingester: enabled: false query_range: align_queries_with_step: true cache_results: true results_cache: cache: background: writeback_buffer: 500000 writeback_goroutines: 1 writeback_size_limit: 500MB memcached_client: addresses: dnssrvnoa+_memcached-client._tcp.loki-results-cache.loki.svc consistent_hash: true timeout: 500ms update_interval: 1m ruler: storage: s3: access_key_id: admin bucketnames: null endpoint: minio-demo.minio.svc:9000 insecure: true s3: s3://admin:8fGYikcyi4@minio-demo.minio.svc:9000/loki s3forcepathstyle: true secret_access_key: 8fGYikcyi4 type: s3 wal: dir: /var/loki/ruler-wal runtime_config: file: /etc/loki/runtime-config/runtime-config.yaml schema_config: configs: - from: "2024-04-01" index: period: 24h prefix: index_ object_store: s3 schema: v13 store: tsdb server: grpc_listen_port: 9095 http_listen_port: 3100 http_server_read_timeout: 600s http_server_write_timeout: 600s storage_config: aws: access_key_id: admin secret_access_key: 8fGYikcyi4 region: "" endpoint: minio-demo.minio.svc:9000 insecure: true s3forcepathstyle: true bucketnames: loki bloom_shipper: working_directory: /var/loki/data/bloomshipper boltdb_shipper: index_gateway_client: server_address: dns+loki-backend-headless.loki.svc.cluster.local:9095 hedging: at: 250ms max_per_second: 20 up_to: 3 tsdb_shipper: index_gateway_client: server_address: dns+loki-backend-headless.loki.svc.cluster.local:9095 tracing: enabled: false --- # Source: loki/templates/gateway/configmap-gateway.yaml apiVersion: v1 kind: ConfigMap metadata: name: loki-gateway namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: gateway data: nginx.conf: | worker_processes 5; ## loki: 1 error_log /dev/stderr; pid /tmp/nginx.pid; worker_rlimit_nofile 8192; events { worker_connections 4096; ## loki: 1024 } http { client_body_temp_path /tmp/client_temp; 
proxy_temp_path /tmp/proxy_temp_path; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; client_max_body_size 4M; proxy_read_timeout 600; ## 10 minutes proxy_send_timeout 600; proxy_connect_timeout 600; proxy_http_version 1.1; #loki_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] $status ' '"$request" $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /dev/stderr main; sendfile on; tcp_nopush on; resolver kube-dns.kube-system.svc.cluster.local.; server { listen 8080; listen [::]:8080; location = / { return 200 'OK'; auth_basic off; } ######################################################## # Configure backend targets location ^~ /ui { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # Distributor location = /api/prom/push { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/push { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /distributor/ring { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /otlp/v1/logs { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # Ingester location = /flush { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location ^~ /ingester/ { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } location = /ingester { internal; # to suppress 301 } # Ring location = /ring { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # MemberListKV location = /memberlist { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # Ruler location = /ruler/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /api/prom/rules { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location ^~ /api/prom/rules/ { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/rules { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location ^~ /loki/api/v1/rules/ { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /prometheus/api/v1/alerts { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /prometheus/api/v1/rules { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # Compactor location = /compactor/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/delete { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1/cache/generation_numbers { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # IndexGateway location = /indexgateway/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # QueryScheduler location = /scheduler/ring { proxy_pass http://loki-backend.loki.svc.cluster.local:3100$request_uri; } # Config location = /config { proxy_pass http://loki-write.loki.svc.cluster.local:3100$request_uri; } # QueryFrontend, Querier location = /api/prom/tail { proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } location = /loki/api/v1/tail { proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 
"upgrade"; } location ^~ /api/prom/ { proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; } location = /api/prom { internal; # to suppress 301 } # if the X-Query-Tags header is empty, set a noop= without a value as empty values are not logged set $query_tags $http_x_query_tags; if ($query_tags !~* '') { set $query_tags "noop="; } location ^~ /loki/api/v1/ { # pass custom headers set by Grafana as X-Query-Tags which are logged as key/value pairs in metrics.go log messages proxy_set_header X-Query-Tags "${query_tags},user=${http_x_grafana_user},dashboard_id=${http_x_dashboard_uid},dashboard_title=${http_x_dashboard_title},panel_id=${http_x_panel_id},panel_title=${http_x_panel_title},source_rule_uid=${http_x_rule_uid},rule_name=${http_x_rule_name},rule_folder=${http_x_rule_folder},rule_version=${http_x_rule_version},rule_source=${http_x_rule_source},rule_type=${http_x_rule_type}"; proxy_pass http://loki-read.loki.svc.cluster.local:3100$request_uri; } location = /loki/api/v1 { internal; # to suppress 301 } } } --- # Source: loki/templates/runtime-configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: loki-runtime namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" data: runtime-config.yaml: | {} --- # Source: loki/templates/backend/clusterrole.yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" name: loki-clusterrole rules: - apiGroups: [""] # "" indicates the core API group resources: ["configmaps", "secrets"] verbs: ["get", "watch", "list"] --- # Source: loki/templates/backend/clusterrolebinding.yaml kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: loki-clusterrolebinding labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" subjects: - kind: ServiceAccount name: loki namespace: loki roleRef: kind: ClusterRole name: loki-clusterrole apiGroup: rbac.authorization.k8s.io --- # Source: loki/templates/backend/query-scheduler-discovery.yaml apiVersion: v1 kind: Service metadata: name: loki-query-scheduler-discovery namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None publishNotReadyAddresses: true ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend --- # Source: loki/templates/backend/service-backend-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-backend-headless namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend variant: headless prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP appProtocol: tcp selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend --- # Source: loki/templates/backend/service-backend.yaml apiVersion: v1 kind: Service metadata: name: loki-backend 
namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend --- # Source: loki/templates/chunks-cache/service-chunks-cache-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-chunks-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-chunks-cache" annotations: {} namespace: "loki" spec: type: ClusterIP clusterIP: None ports: - name: memcached-client port: 11211 targetPort: 11211 - name: http-metrics port: 9150 targetPort: 9150 selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-chunks-cache" --- # Source: loki/templates/gateway/service-gateway.yaml apiVersion: v1 kind: Service metadata: name: loki-gateway namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: gateway prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP ports: - name: http-metrics port: 80 targetPort: http-metrics protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: gateway --- # Source: loki/templates/loki-canary/service.yaml apiVersion: v1 kind: Service metadata: name: loki-canary namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: canary annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3500 targetPort: http-metrics protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: canary --- # Source: loki/templates/read/service-read-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-read-headless namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read variant: headless prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP appProtocol: tcp selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read --- # Source: loki/templates/read/service-read.yaml apiVersion: v1 kind: Service metadata: name: loki-read namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: read annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read --- # Source: loki/templates/results-cache/service-results-cache-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-results-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki 
app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-results-cache" annotations: {} namespace: "loki" spec: type: ClusterIP clusterIP: None ports: - name: memcached-client port: 11211 targetPort: 11211 - name: http-metrics port: 9150 targetPort: 9150 selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-results-cache" --- # Source: loki/templates/service-memberlist.yaml apiVersion: v1 kind: Service metadata: name: loki-memberlist namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" annotations: spec: type: ClusterIP clusterIP: None ports: - name: tcp port: 7946 targetPort: http-memberlist protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/part-of: memberlist --- # Source: loki/templates/write/service-write-headless.yaml apiVersion: v1 kind: Service metadata: name: loki-write-headless namespace: loki labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write variant: headless prometheus.io/service-monitor: "false" annotations: spec: type: ClusterIP clusterIP: None ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP appProtocol: tcp selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write --- # Source: loki/templates/write/service-write.yaml apiVersion: v1 kind: Service metadata: name: loki-write namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write annotations: spec: type: ClusterIP ports: - name: http-metrics port: 3100 targetPort: http-metrics protocol: TCP - name: grpc port: 9095 targetPort: grpc protocol: TCP selector: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write --- # Source: loki/templates/loki-canary/daemonset.yaml apiVersion: apps/v1 kind: DaemonSet metadata: name: loki-canary namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: canary spec: selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: canary updateStrategy: rollingUpdate: maxUnavailable: 1 type: RollingUpdate template: metadata: labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: canary spec: serviceAccountName: loki-canary securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 containers: - name: loki-canary image: registry.cn-guangzhou.aliyuncs.com/xingcangku/grafana-loki-canary-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -addr=loki-gateway.loki.svc.cluster.local.:80 - -labelname=pod - -labelvalue=$(POD_NAME) - -user=self-monitoring - -tenant-id=self-monitoring - -pass= - -push=true securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: ports: - name: http-metrics containerPort: 3500 protocol: TCP env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name readinessProbe: httpGet: path: /metrics port: http-metrics initialDelaySeconds: 15 timeoutSeconds: 1 volumes: --- # Source: 
loki/templates/gateway/deployment-gateway-nginx.yaml apiVersion: apps/v1 kind: Deployment metadata: name: loki-gateway namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: gateway spec: replicas: 1 strategy: type: RollingUpdate revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: gateway template: metadata: annotations: checksum/config: 440a9cd2e87de46e0aad42617818d58f1e2daacb1ae594bad1663931faa44ebc labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: gateway spec: serviceAccountName: loki enableServiceLinks: true securityContext: fsGroup: 101 runAsGroup: 101 runAsNonRoot: true runAsUser: 101 terminationGracePeriodSeconds: 30 containers: - name: nginx image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-nginxinc-nginx-unprivileged-1.28-alpine:1.28-alpine imagePullPolicy: IfNotPresent ports: - name: http-metrics containerPort: 8080 protocol: TCP readinessProbe: httpGet: path: / port: http-metrics initialDelaySeconds: 15 timeoutSeconds: 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: - name: config mountPath: /etc/nginx - name: tmp mountPath: /tmp - name: docker-entrypoint-d-override mountPath: /docker-entrypoint.d resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: gateway topologyKey: kubernetes.io/hostname volumes: - name: config configMap: name: loki-gateway - name: tmp emptyDir: {} - name: docker-entrypoint-d-override emptyDir: {} --- # Source: loki/templates/read/deployment-read.yaml apiVersion: apps/v1 kind: Deployment metadata: name: loki-read namespace: loki labels: app.kubernetes.io/part-of: memberlist helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: read spec: replicas: 3 strategy: rollingUpdate: maxSurge: 0 maxUnavailable: 1 revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read template: metadata: annotations: checksum/config: 1616415aaf41d5dec62fea8a013eab1aa2a559579f5f72299f7041e5cd6ea4c7 labels: app.kubernetes.io/part-of: memberlist app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: read spec: serviceAccountName: loki automountServiceAccountToken: true securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 terminationGracePeriodSeconds: 30 containers: - name: loki image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -config.file=/etc/loki/config/config.yaml - -target=read - -legacy-read-mode=false - -common.compactor-grpc-address=loki-backend.loki.svc.cluster.local:9095 ports: - name: http-metrics containerPort: 3100 protocol: TCP - name: grpc containerPort: 9095 protocol: TCP - name: http-memberlist containerPort: 7946 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true readinessProbe: httpGet: path: /ready port: http-metrics initialDelaySeconds: 30 timeoutSeconds: 1 volumeMounts: - name: config mountPath: /etc/loki/config - name: runtime-config 
mountPath: /etc/loki/runtime-config - name: tmp mountPath: /tmp - name: data mountPath: /var/loki resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: read topologyKey: kubernetes.io/hostname volumes: - name: tmp emptyDir: {} - name: data emptyDir: {} - name: config configMap: name: loki items: - key: "config.yaml" path: "config.yaml" - name: runtime-config configMap: name: loki-runtime --- # Source: loki/templates/backend/statefulset-backend.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-backend namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend app.kubernetes.io/part-of: memberlist spec: replicas: 3 podManagementPolicy: Parallel updateStrategy: rollingUpdate: partition: 0 serviceName: loki-backend-headless revisionHistoryLimit: 10 persistentVolumeClaimRetentionPolicy: whenDeleted: Delete whenScaled: Delete selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: backend template: metadata: annotations: checksum/config: 1616415aaf41d5dec62fea8a013eab1aa2a559579f5f72299f7041e5cd6ea4c7 labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: backend app.kubernetes.io/part-of: memberlist spec: serviceAccountName: loki automountServiceAccountToken: true securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 terminationGracePeriodSeconds: 300 containers: - name: loki-sc-rules image: "registry.cn-guangzhou.aliyuncs.com/xingcangku/kiwigrid-k8s-sidecar-1.30.3:1.30.3" imagePullPolicy: IfNotPresent env: - name: METHOD value: WATCH - name: LABEL value: "loki_rule" - name: FOLDER value: "/rules" - name: RESOURCE value: "both" - name: WATCH_SERVER_TIMEOUT value: "60" - name: WATCH_CLIENT_TIMEOUT value: "60" - name: LOG_LEVEL value: "INFO" securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true volumeMounts: - name: sc-rules-volume mountPath: "/rules" - name: loki image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -config.file=/etc/loki/config/config.yaml - -target=backend - -legacy-read-mode=false ports: - name: http-metrics containerPort: 3100 protocol: TCP - name: grpc containerPort: 9095 protocol: TCP - name: http-memberlist containerPort: 7946 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true readinessProbe: httpGet: path: /ready port: http-metrics initialDelaySeconds: 30 timeoutSeconds: 1 volumeMounts: - name: config mountPath: /etc/loki/config - name: runtime-config mountPath: /etc/loki/runtime-config - name: tmp mountPath: /tmp - name: data mountPath: /var/loki - name: sc-rules-volume mountPath: "/rules" resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: backend topologyKey: kubernetes.io/hostname volumes: - name: tmp emptyDir: {} - name: config configMap: name: loki items: - key: "config.yaml" path: "config.yaml" - name: runtime-config configMap: name: loki-runtime - name: sc-rules-volume emptyDir: {} volumeClaimTemplates: - metadata: name: data spec: storageClassName: 
"ceph-cephfs" # 显式指定存储类 accessModes: - ReadWriteOnce resources: requests: storage: 10Gi --- # Source: loki/templates/chunks-cache/statefulset-chunks-cache.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-chunks-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-chunks-cache" name: "memcached-chunks-cache" annotations: {} namespace: "loki" spec: podManagementPolicy: Parallel replicas: 1 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-chunks-cache" name: "memcached-chunks-cache" updateStrategy: type: RollingUpdate serviceName: loki-chunks-cache template: metadata: labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-chunks-cache" name: "memcached-chunks-cache" annotations: spec: serviceAccountName: loki securityContext: fsGroup: 11211 runAsGroup: 11211 runAsNonRoot: true runAsUser: 11211 initContainers: [] nodeSelector: {} affinity: {} topologySpreadConstraints: [] tolerations: [] terminationGracePeriodSeconds: 60 containers: - name: memcached image: registry.cn-guangzhou.aliyuncs.com/xingcangku/memcached-1.6.38-alpine:1.6.38-alpine imagePullPolicy: IfNotPresent resources: limits: memory: 4096Mi requests: cpu: 500m memory: 2048Mi ports: - containerPort: 11211 name: client args: - -m 4096 - --extended=modern,track_sizes - -I 5m - -c 16384 - -v - -u 11211 env: envFrom: securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true - name: exporter image: registry.cn-guangzhou.aliyuncs.com/xingcangku/prom-memcached-exporter-v0.15.2:v0.15.2 imagePullPolicy: IfNotPresent ports: - containerPort: 9150 name: http-metrics args: - "--memcached.address=localhost:11211" - "--web.listen-address=0.0.0.0:9150" resources: limits: {} requests: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true --- # Source: loki/templates/results-cache/statefulset-results-cache.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-results-cache labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: "memcached-results-cache" name: "memcached-results-cache" annotations: {} namespace: "loki" spec: podManagementPolicy: Parallel replicas: 1 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-results-cache" name: "memcached-results-cache" updateStrategy: type: RollingUpdate serviceName: loki-results-cache template: metadata: labels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: "memcached-results-cache" name: "memcached-results-cache" annotations: spec: serviceAccountName: loki securityContext: fsGroup: 11211 runAsGroup: 11211 runAsNonRoot: true runAsUser: 11211 initContainers: [] nodeSelector: {} affinity: {} topologySpreadConstraints: [] tolerations: [] terminationGracePeriodSeconds: 60 containers: - name: memcached image: registry.cn-guangzhou.aliyuncs.com/xingcangku/memcached-1.6.38-alpine:1.6.38-alpine imagePullPolicy: IfNotPresent resources: limits: memory: 1229Mi requests: cpu: 500m memory: 1229Mi ports: - containerPort: 11211 name: client args: - -m 1024 - --extended=modern,track_sizes - -I 5m - -c 16384 - -v - -u 11211 env: 
envFrom: securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true - name: exporter image: registry.cn-guangzhou.aliyuncs.com/xingcangku/prom-memcached-exporter-v0.15.2:v0.15.2 imagePullPolicy: IfNotPresent ports: - containerPort: 9150 name: http-metrics args: - "--memcached.address=localhost:11211" - "--web.listen-address=0.0.0.0:9150" resources: limits: {} requests: {} securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true --- # Source: loki/templates/write/statefulset-write.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: loki-write namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write app.kubernetes.io/part-of: memberlist spec: replicas: 3 podManagementPolicy: Parallel updateStrategy: rollingUpdate: partition: 0 serviceName: loki-write-headless revisionHistoryLimit: 10 selector: matchLabels: app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/component: write template: metadata: annotations: checksum/config: 1616415aaf41d5dec62fea8a013eab1aa2a559579f5f72299f7041e5cd6ea4c7 labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: write app.kubernetes.io/part-of: memberlist spec: serviceAccountName: loki automountServiceAccountToken: true enableServiceLinks: true securityContext: fsGroup: 10001 runAsGroup: 10001 runAsNonRoot: true runAsUser: 10001 terminationGracePeriodSeconds: 300 containers: - name: loki image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-3.5.0:3.5.0 imagePullPolicy: IfNotPresent args: - -config.file=/etc/loki/config/config.yaml - -target=write ports: - name: http-metrics containerPort: 3100 protocol: TCP - name: grpc containerPort: 9095 protocol: TCP - name: http-memberlist containerPort: 7946 protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL readOnlyRootFilesystem: true readinessProbe: httpGet: path: /ready port: http-metrics initialDelaySeconds: 30 timeoutSeconds: 1 volumeMounts: - name: config mountPath: /etc/loki/config - name: runtime-config mountPath: /etc/loki/runtime-config - name: data mountPath: /var/loki resources: {} affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app.kubernetes.io/component: write topologyKey: kubernetes.io/hostname volumes: - name: config configMap: name: loki items: - key: "config.yaml" path: "config.yaml" - name: runtime-config configMap: name: loki-runtime volumeClaimTemplates: - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data spec: accessModes: - ReadWriteOnce resources: requests: storage: "10Gi" --- # Source: loki/templates/tests/test-canary.yaml apiVersion: v1 kind: Pod metadata: name: "loki-helm-test" namespace: loki labels: helm.sh/chart: loki-6.30.1 app.kubernetes.io/name: loki app.kubernetes.io/instance: loki app.kubernetes.io/version: "3.5.0" app.kubernetes.io/component: helm-test annotations: "helm.sh/hook": test spec: containers: - name: loki-helm-test image: registry.cn-guangzhou.aliyuncs.com/xingcangku/docker.io-grafana-loki-helm-test-ewelch-distributed-helm-chart-1:ewelch-distributed-helm-chart-17db5ee env: - name: CANARY_SERVICE_ADDRESS value: "http://loki-canary:3500/metrics" - name: CANARY_PROMETHEUS_ADDRESS value: "" - name: 
CANARY_TEST_TIMEOUT
          value: "1m"
      args:
        - -test.v
  restartPolicy: Never
root@k8s01:~/helm/loki/loki# kubectl get pod -n loki
NAME                            READY   STATUS    RESTARTS         AGE
loki-backend-0                  2/2     Running   2 (6h13m ago)    30h
loki-backend-1                  2/2     Running   2 (6h13m ago)    30h
loki-backend-2                  2/2     Running   2 (6h13m ago)    30h
loki-canary-62z48               1/1     Running   1 (6h13m ago)    30h
loki-canary-lg62j               1/1     Running   1 (6h13m ago)    30h
loki-canary-nrph4               1/1     Running   1 (6h13m ago)    30h
loki-chunks-cache-0             2/2     Running   0                6h12m
loki-gateway-75d8cf9754-nwpdw   1/1     Running   13 (6h12m ago)   30h
loki-read-dc7bdc98-8kzwk        1/1     Running   1 (6h13m ago)    30h
loki-read-dc7bdc98-lmzcd        1/1     Running   1 (6h13m ago)    30h
loki-read-dc7bdc98-nrz5h        1/1     Running   1 (6h13m ago)    30h
loki-results-cache-0            2/2     Running   2 (6h13m ago)    30h
loki-write-0                    1/1     Running   1 (6h13m ago)    30h
loki-write-1                    1/1     Running   1 (6h13m ago)    30h
loki-write-2                    1/1     Running   1 (6h13m ago)    30h
root@k8s01:~/helm/loki/loki# kubectl get svc -n loki
NAME                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
loki-backend                     ClusterIP   10.101.131.151   <none>        3100/TCP,9095/TCP    30h
loki-backend-headless            ClusterIP   None             <none>        3100/TCP,9095/TCP    30h
loki-canary                      ClusterIP   10.109.131.175   <none>        3500/TCP             30h
loki-chunks-cache                ClusterIP   None             <none>        11211/TCP,9150/TCP   30h
loki-gateway                     ClusterIP   10.98.126.160    <none>        80/TCP               30h
loki-memberlist                  ClusterIP   None             <none>        7946/TCP             30h
loki-query-scheduler-discovery   ClusterIP   None             <none>        3100/TCP,9095/TCP    30h
loki-read                        ClusterIP   10.103.248.164   <none>        3100/TCP,9095/TCP    30h
loki-read-headless               ClusterIP   None             <none>        3100/TCP,9095/TCP    30h
loki-results-cache               ClusterIP   None             <none>        11211/TCP,9150/TCP   30h
loki-write                       ClusterIP   10.108.223.18    <none>        3100/TCP,9095/TCP    30h
loki-write-headless              ClusterIP   None             <none>        3100/TCP,9095/TCP    30h
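The service list above shows that clients reach Loki through the loki-gateway Service on port 80. As a minimal sketch (not part of the chart output), a Grafana datasource provisioning file for an in-cluster Grafana could point at that gateway; the file name and datasource display name below are illustrative:

# loki-datasource.yaml -- hypothetical provisioning file, assumes Grafana runs in the same cluster
apiVersion: 1
datasources:
  - name: Loki                                            # display name in Grafana, arbitrary
    type: loki
    access: proxy                                         # Grafana proxies queries server-side
    url: http://loki-gateway.loki.svc.cluster.local:80    # gateway Service from `kubectl get svc -n loki`

Log shippers such as Promtail or Alloy would typically push to the same gateway at http://loki-gateway.loki.svc.cluster.local/loki/api/v1/push.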
2025年06月20日
14 阅读
0 评论
0 点赞
2025-06-18
Deploying Prometheus Monitoring
I. Component overview
# If metrics-server is already installed, uninstall it first, otherwise it will conflict
1. MetricsServer: the aggregator of cluster resource-usage data; it serves in-cluster consumers such as kubectl, the HPA and the scheduler.
2. PrometheusOperator: a system monitoring and alerting toolkit that manages and stores the monitoring data.
3. NodeExporter: exposes the key metrics of each node.
4. KubeStateMetrics: collects data about the resource objects in the cluster, used for alerting rules.
5. Prometheus: pulls metrics from the apiserver, scheduler, controller-manager and kubelet components over HTTP.
6. Grafana: the platform for visualizing the statistics and monitoring data.

II. Installation
Project: https://github.com/prometheus-operator/kube-prometheus

III. Version selection
See the official compatibility matrix at https://github.com/prometheus-operator/kube-prometheus?tab=readme-ov-file#compatibility. For example, for Kubernetes 1.30 the recommended kube-prometheus release is release-0.14.

IV. Clone the project
git clone -b release-0.13 https://github.com/prometheus-operator/kube-prometheus.git

V. Create the resources
# If you are in mainland China, change the image addresses first
[root@master1 k8s-install]# kubectl create namespace monitoring
[root@master1 k8s-install]# cd kube-prometheus/
[root@master1 kube-prometheus]# kubectl apply --server-side -f manifests/setup
[root@master1 kube-prometheus]# kubectl wait \
    --for condition=Established \
    --all CustomResourceDefinition \
    --namespace=monitoring
[root@master1 kube-prometheus]# kubectl apply -f manifests/

root@k8s01:~/helm/prometheus/kube-prometheus# kubectl apply --server-side -f manifests/setup
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com serverside-applied
customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com serverside-applied
namespace/monitoring serverside-applied
root@k8s01:~/helm/prometheus/kube-prometheus# kubectl wait \
> --for condition=Established \
> --all CustomResourceDefinitions \
> --namespace=monitoring
customresourcedefinition.apiextensions.k8s.io/alertmanagerconfigs.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com condition met
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io condition met
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io condition met
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io condition met
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io condition met
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.io condition met
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.io condition met
customresourcedefinition.apiextensions.k8s.io/ingressrouteudps.traefik.io condition met
customresourcedefinition.apiextensions.k8s.io/instrumentations.opentelemetry.io condition met
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io condition met
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.io condition met
customresourcedefinition.apiextensions.k8s.io/middlewaretcps.traefik.io condition met
customresourcedefinition.apiextensions.k8s.io/opampbridges.opentelemetry.io condition met customresourcedefinition.apiextensions.k8s.io/opentelemetrycollectors.opentelemetry.io condition met customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io condition met customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/policybindings.sts.min.io condition met customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/prometheusagents.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/scrapeconfigs.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/serverstransports.traefik.io condition met customresourcedefinition.apiextensions.k8s.io/serverstransporttcps.traefik.io condition met customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/targetallocators.opentelemetry.io condition met customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com condition met customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.io condition met customresourcedefinition.apiextensions.k8s.io/tlsstores.traefik.io condition met customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.io condition met root@k8s01:~/helm/prometheus/kube-prometheus# kubectl apply -f manifests/ alertmanager.monitoring.coreos.com/main created networkpolicy.networking.k8s.io/alertmanager-main created poddisruptionbudget.policy/alertmanager-main created prometheusrule.monitoring.coreos.com/alertmanager-main-rules created secret/alertmanager-main created service/alertmanager-main created serviceaccount/alertmanager-main created servicemonitor.monitoring.coreos.com/alertmanager-main created clusterrole.rbac.authorization.k8s.io/blackbox-exporter created clusterrolebinding.rbac.authorization.k8s.io/blackbox-exporter created configmap/blackbox-exporter-configuration created deployment.apps/blackbox-exporter created networkpolicy.networking.k8s.io/blackbox-exporter created service/blackbox-exporter created serviceaccount/blackbox-exporter created servicemonitor.monitoring.coreos.com/blackbox-exporter created secret/grafana-config created secret/grafana-datasources created configmap/grafana-dashboard-alertmanager-overview created configmap/grafana-dashboard-apiserver created configmap/grafana-dashboard-cluster-total created configmap/grafana-dashboard-controller-manager created configmap/grafana-dashboard-grafana-overview created configmap/grafana-dashboard-k8s-resources-cluster created configmap/grafana-dashboard-k8s-resources-multicluster created configmap/grafana-dashboard-k8s-resources-namespace created configmap/grafana-dashboard-k8s-resources-node created configmap/grafana-dashboard-k8s-resources-pod created configmap/grafana-dashboard-k8s-resources-workload created configmap/grafana-dashboard-k8s-resources-workloads-namespace created configmap/grafana-dashboard-kubelet created configmap/grafana-dashboard-namespace-by-pod created configmap/grafana-dashboard-namespace-by-workload created configmap/grafana-dashboard-node-cluster-rsrc-use created configmap/grafana-dashboard-node-rsrc-use 
created configmap/grafana-dashboard-nodes-aix created configmap/grafana-dashboard-nodes-darwin created configmap/grafana-dashboard-nodes created configmap/grafana-dashboard-persistentvolumesusage created configmap/grafana-dashboard-pod-total created configmap/grafana-dashboard-prometheus-remote-write created configmap/grafana-dashboard-prometheus created configmap/grafana-dashboard-proxy created configmap/grafana-dashboard-scheduler created configmap/grafana-dashboard-workload-total created configmap/grafana-dashboards created deployment.apps/grafana created networkpolicy.networking.k8s.io/grafana created prometheusrule.monitoring.coreos.com/grafana-rules created service/grafana created serviceaccount/grafana created servicemonitor.monitoring.coreos.com/grafana created prometheusrule.monitoring.coreos.com/kube-prometheus-rules created clusterrole.rbac.authorization.k8s.io/kube-state-metrics created clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created deployment.apps/kube-state-metrics created networkpolicy.networking.k8s.io/kube-state-metrics created prometheusrule.monitoring.coreos.com/kube-state-metrics-rules created service/kube-state-metrics created serviceaccount/kube-state-metrics created servicemonitor.monitoring.coreos.com/kube-state-metrics created prometheusrule.monitoring.coreos.com/kubernetes-monitoring-rules created servicemonitor.monitoring.coreos.com/kube-apiserver created servicemonitor.monitoring.coreos.com/coredns created servicemonitor.monitoring.coreos.com/kube-controller-manager created servicemonitor.monitoring.coreos.com/kube-scheduler created servicemonitor.monitoring.coreos.com/kubelet created clusterrole.rbac.authorization.k8s.io/node-exporter created clusterrolebinding.rbac.authorization.k8s.io/node-exporter created daemonset.apps/node-exporter created networkpolicy.networking.k8s.io/node-exporter created prometheusrule.monitoring.coreos.com/node-exporter-rules created service/node-exporter created serviceaccount/node-exporter created servicemonitor.monitoring.coreos.com/node-exporter created clusterrole.rbac.authorization.k8s.io/prometheus-k8s created clusterrolebinding.rbac.authorization.k8s.io/prometheus-k8s created networkpolicy.networking.k8s.io/prometheus-k8s created poddisruptionbudget.policy/prometheus-k8s created prometheus.monitoring.coreos.com/k8s created prometheusrule.monitoring.coreos.com/prometheus-k8s-prometheus-rules created rolebinding.rbac.authorization.k8s.io/prometheus-k8s-config created rolebinding.rbac.authorization.k8s.io/prometheus-k8s created rolebinding.rbac.authorization.k8s.io/prometheus-k8s created rolebinding.rbac.authorization.k8s.io/prometheus-k8s created role.rbac.authorization.k8s.io/prometheus-k8s-config created role.rbac.authorization.k8s.io/prometheus-k8s created role.rbac.authorization.k8s.io/prometheus-k8s created role.rbac.authorization.k8s.io/prometheus-k8s created service/prometheus-k8s created serviceaccount/prometheus-k8s created servicemonitor.monitoring.coreos.com/prometheus-k8s created apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created clusterrole.rbac.authorization.k8s.io/prometheus-adapter created clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created clusterrolebinding.rbac.authorization.k8s.io/prometheus-adapter created clusterrolebinding.rbac.authorization.k8s.io/resource-metrics:system:auth-delegator created clusterrole.rbac.authorization.k8s.io/resource-metrics-server-resources created configmap/adapter-config created deployment.apps/prometheus-adapter 
created
networkpolicy.networking.k8s.io/prometheus-adapter created
poddisruptionbudget.policy/prometheus-adapter created
rolebinding.rbac.authorization.k8s.io/resource-metrics-auth-reader created
service/prometheus-adapter created
serviceaccount/prometheus-adapter created
servicemonitor.monitoring.coreos.com/prometheus-adapter created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
networkpolicy.networking.k8s.io/prometheus-operator created
prometheusrule.monitoring.coreos.com/prometheus-operator-rules created
service/prometheus-operator created
serviceaccount/prometheus-operator created
servicemonitor.monitoring.coreos.com/prometheus-operator created
root@k8s01:~/helm/prometheus/kube-prometheus#

VI. Verification
# Check pod status
root@k8s03:~# kubectl get pod -n monitoring
NAME                                   READY   STATUS    RESTARTS   AGE
alertmanager-main-0                    2/2     Running   0          50m
alertmanager-main-1                    2/2     Running   0          50m
alertmanager-main-2                    2/2     Running   0          50m
blackbox-exporter-57bb665766-d9kwj     3/3     Running   0          50m
grafana-fdf8c48f-f6cck                 1/1     Running   0          50m
kube-state-metrics-5ffdd9685c-hg5hc    3/3     Running   0          50m
node-exporter-8l29v                    2/2     Running   0          31m
node-exporter-gdclz                    2/2     Running   0          28m
node-exporter-j5r76                    2/2     Running   0          50m
prometheus-adapter-7945bdf5d7-dh75k    1/1     Running   0          50m
prometheus-adapter-7945bdf5d7-nbp94    1/1     Running   0          50m
prometheus-k8s-0                       2/2     Running   0          50m
prometheus-k8s-1                       2/2     Running   0          50m
prometheus-operator-85c5ffc677-jk8c9   2/2     Running   0          50m
# Check node resource usage
root@k8s03:~# kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s01   3277m        40%    6500Mi          66%
k8s02   6872m        85%    4037Mi          36%
k8s03   362m         4%     6407Mi          65%

VII. Add Ingress resources
# Example with ingress-nginx
[root@master1 manifests]# cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alertmanager
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: alertmanager.local.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: alertmanager-main
            port:
              number: 9093
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.local.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.local.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-k8s
            port:
              number: 9090
# Example with traefik:
[root@master1 manifests]# cat ingress.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`alertmanager.local.com`)
    kind: Rule
    services:
    - name: alertmanager-main
      port: 9093
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: grafana
  namespace: monitoring
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`grafana.local.com`)
    kind: Rule
    services:
    - name: grafana
      port: 3000
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: prometheus
  namespace: monitoring
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`prometheus.local.com`)
    kind: Rule
    services:
    - name: prometheus-k8s
      port: 9090
[root@master1 manifests]# kubectl apply -f ingress.yaml
ingressroute.traefik.containo.us/alertmanager created
ingressroute.traefik.containo.us/grafana created
ingressroute.traefik.containo.us/prometheus created

VIII. Web access verification
# Add hosts entries (on Windows: notepad $env:windir\System32\drivers\etc\hosts)
192.168.3.200 alertmanager.local.com
192.168.3.200 prometheus.local.com
192.168.3.200 grafana.local.com
Open http://alertmanager.local.com:30080 to view the currently active alerts.
Open http://prometheus.local.com:30080/targets and check that all targets are up.
Open http://grafana.local.com:30080/login; the default username and password are admin/admin. Under data sources you will see that the Prometheus data source has already been configured for us.

IX. Fixing abnormal targets
On the targets page, two scrape jobs have no corresponding instance. This is caused by their ServiceMonitor objects, which select Services in kube-system that a kubeadm cluster does not create by default.

# Create prometheus-kubeControllerManagerService.yaml with the content below and apply it
root@k8s01:~/helm/prometheus/kube-prometheus# cat prometheus-kubeControllerManagerService.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-controller-manager
  labels:
    app.kubernetes.io/name: kube-controller-manager
spec:
  selector:
    component: kube-controller-manager
  type: ClusterIP
  ports:
  - name: https-metrics
    port: 10257
    targetPort: 10257
    protocol: TCP

If the target is still down and, as in the output below, the metrics port is only bound to 127.0.0.1, the kube-controller-manager configuration needs to be changed: set --bind-address=0.0.0.0 in the static pod manifest (the file shown here already contains the corrected value); the kubelet recreates the pod automatically once the file changes.

root@k8s-01:~# sudo ss -lntp | grep 10257
LISTEN 0      4096       127.0.0.1:10257      0.0.0.0:*    users:(("kube-controller",pid=168489,fd=3))
root@k8s-01:~# cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.27.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
root@k8s-01:~# cd /etc/kubernetes/manifests/
root@k8s-01:/etc/kubernetes/manifests# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-controller-manager.yaml.bak  kube-scheduler.yaml
root@k8s-01:/etc/kubernetes/manifests# mv kube-controller-manager.yaml.bak /root
root@k8s-01:/etc/kubernetes/manifests# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
root@k8s-01:/etc/kubernetes/manifests# sudo ss -lntp | grep 10257
LISTEN 0      4096               *:10257              *:*    users:(("kube-controller",pid=169372,fd=3))
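On a kubeadm cluster the two targets without instances are usually kube-controller-manager and kube-scheduler, and the post only shows the Service for the former. The scheduler needs the same treatment; below is a minimal sketch, assuming the scheduler's default secure metrics port 10259 and the kubeadm label component: kube-scheduler (the file name prometheus-kubeSchedulerService.yaml is illustrative, apply it with kubectl apply -f):

# prometheus-kubeSchedulerService.yaml -- sketch only; 10259 is the scheduler's default secure port
apiVersion: v1
kind: Service
metadata:
  namespace: kube-system
  name: kube-scheduler
  labels:
    app.kubernetes.io/name: kube-scheduler   # must match the kube-scheduler ServiceMonitor's selector
spec:
  selector:
    component: kube-scheduler                # label kubeadm puts on the static pod
  type: ClusterIP
  ports:
  - name: https-metrics
    port: 10259
    targetPort: 10259
    protocol: TCP

As with kube-controller-manager, the scheduler's --bind-address may also need to be changed from 127.0.0.1 to 0.0.0.0 in /etc/kubernetes/manifests/kube-scheduler.yaml before Prometheus can scrape it.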
2025年06月18日
9 阅读
1 评论
0 点赞