2025-06-05
SonarQube Deployment and Installation
1. Deployment

1.1 Download

Download URL: https://github.com/SonarSource/helm-chart-sonarqube/releases/download/sonarqube-2025.3.0-sonarqube-dce-2025.3.0/sonarqube-2025.3.0.tgz

Extract the archive and render the chart templates into a single file:

```shell
tar zxvf sonarqube-2025.3.0.tgz
helm template my-sonarqube . > test.yaml
```

1.2 The rendered YAML

```yaml
---
# Source: sonarqube/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-sonarqube-postgresql
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.15.0
    app.kubernetes.io/instance: my-sonarqube
    app.kubernetes.io/managed-by: Helm
  namespace: default
type: Opaque
data:
  postgresql-postgres-password: "Tlp4MmJXa3hKbA=="
  postgresql-password: "c29uYXJQYXNz"
---
# Source: sonarqube/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-sonarqube-sonarqube-monitoring-passcode
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
type: Opaque
data:
  SONAR_WEB_SYSTEMPASSCODE: "MzMwNzA1OTVBYmNA"
---
# Source: sonarqube/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-sonarqube-sonarqube-http-proxies
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
type: Opaque
stringData:
  PLUGINS-HTTP-PROXY: ""
  PLUGINS-HTTPS-PROXY: ""
  PLUGINS-NO-PROXY: ""
  PROMETHEUS-EXPORTER-HTTP-PROXY: ""
  PROMETHEUS-EXPORTER-HTTPS-PROXY: ""
  PROMETHEUS-EXPORTER-NO-PROXY: ""
---
# Source: sonarqube/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-config
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  sonar.properties: |
---
# Source: sonarqube/templates/init-fs.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-init-fs
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  init_fs.sh: |-
    chown -R 1000:0 /opt/sonarqube/data
    chown -R 1000:0 /opt/sonarqube/temp
    chown -R 1000:0 /opt/sonarqube/logs
---
# Source: sonarqube/templates/init-sysctl.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-init-sysctl
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  init_sysctl.sh: |-
    set -o errexit
    set -o xtrace
    vmMaxMapCount=524288
    if [[ "$(sysctl -n vm.max_map_count)" -lt $vmMaxMapCount ]]; then
      sysctl -w vm.max_map_count=$vmMaxMapCount
      if [[ "$(sysctl -n vm.max_map_count)" -lt $vmMaxMapCount ]]; then
        echo "Failed to set initSysctl.vmMaxMapCount"; exit 1
      fi
    fi
    fsFileMax=131072
    if [[ "$(sysctl -n fs.file-max)" -lt $fsFileMax ]]; then
      sysctl -w fs.file-max=$fsFileMax
      if [[ "$(sysctl -n fs.file-max)" -lt $fsFileMax ]]; then
        echo "Failed to set initSysctl.fsFileMax"; exit 1
      fi
    fi
    nofile=131072
    if [[ "$(ulimit -n)" != "unlimited" ]]; then
      if [[ "$(ulimit -n)" -lt $nofile ]]; then
        ulimit -n $nofile
        if [[ "$(ulimit -n)" -lt $nofile ]]; then
          echo "Failed to set initSysctl.nofile"; exit 1
        fi
      fi
    fi
    nproc=8192
    if [[ "$(ulimit -u)" != "unlimited" ]]; then
      if [[ "$(ulimit -u)" -lt $nproc ]]; then
        ulimit -u $nproc
        if [[ "$(ulimit -u)" -lt $nproc ]]; then
          echo "Failed to set initSysctl.nproc"; exit 1
        fi
      fi
    fi
---
# Source: sonarqube/templates/install-plugins.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-install-plugins
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  install_plugins.sh: |-
---
# Source: sonarqube/templates/jdbc-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-jdbc-config
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  SONAR_JDBC_USERNAME: "sonarUser"
  SONAR_JDBC_URL: "jdbc:postgresql://my-sonarqube-postgresql:5432/sonarDB"
---
# Source: sonarqube/templates/prometheus-ce-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-prometheus-ce-config
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  prometheus-ce-config.yaml: |-
    rules:
      - pattern: .*
---
# Source: sonarqube/templates/prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-sonarqube-sonarqube-prometheus-config
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
data:
  prometheus-config.yaml: |-
    rules:
      - pattern: .*
---
# Source: sonarqube/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-sonarqube-sonarqube
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "5Gi"
  storageClassName: "ceph-cephfs"
---
# Source: sonarqube/charts/postgresql/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-sonarqube-postgresql-headless
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.15.0
    app.kubernetes.io/instance: my-sonarqube
    app.kubernetes.io/managed-by: Helm
  annotations:
    # Use this annotation in addition to the actual publishNotReadyAddresses
    # field below because the annotation will stop being respected soon but the
    # field is broken in some versions of Kubernetes:
    # https://github.com/kubernetes/kubernetes/issues/58662
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  namespace: default
spec:
  type: ClusterIP
  clusterIP: None
  # We want all pods in the StatefulSet to have their addresses published for
  # the sake of the other Postgresql pods even before they're ready, since they
  # have to be able to talk to each other in order to become ready.
  publishNotReadyAddresses: true
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/instance: my-sonarqube
---
# Source: sonarqube/charts/postgresql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-sonarqube-postgresql
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.15.0
    app.kubernetes.io/instance: my-sonarqube
    app.kubernetes.io/managed-by: Helm
  annotations:
  namespace: default
spec:
  type: ClusterIP
  ports:
    - name: tcp-postgresql
      port: 5432
      targetPort: tcp-postgresql
  selector:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/instance: my-sonarqube
    role: primary
---
# Source: sonarqube/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-sonarqube-sonarqube
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
spec:
  type: ClusterIP
  ports:
    - port: 9000
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: sonarqube
    release: my-sonarqube
---
# Source: sonarqube/charts/postgresql/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sonarqube-postgresql
  labels:
    app.kubernetes.io/name: postgresql
    helm.sh/chart: postgresql-10.15.0
    app.kubernetes.io/instance: my-sonarqube
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: primary
  annotations:
  namespace: default
spec:
  serviceName: my-sonarqube-postgresql-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: postgresql
      app.kubernetes.io/instance: my-sonarqube
      role: primary
  template:
    metadata:
      name: my-sonarqube-postgresql
      labels:
        app.kubernetes.io/name: postgresql
        helm.sh/chart: postgresql-10.15.0
        app.kubernetes.io/instance: my-sonarqube
        app.kubernetes.io/managed-by: Helm
        role: primary
        app.kubernetes.io/component: primary
    spec:
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: postgresql
                    app.kubernetes.io/instance: my-sonarqube
                    app.kubernetes.io/component: primary
                namespaces:
                  - "default"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      securityContext:
        fsGroup: 1001
      automountServiceAccountToken: false
      containers:
        - name: my-sonarqube-postgresql
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/bitnami-postgresql:11.14.0-debian-10-r22
          imagePullPolicy: "IfNotPresent"
          resources:
            limits:
              cpu: 2
              memory: 2Gi
            requests:
              cpu: 100m
              memory: 200Mi
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsNonRoot: true
            runAsUser: 1001
            seccompProfile:
              type: RuntimeDefault
          env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: POSTGRESQL_PORT_NUMBER
              value: "5432"
            - name: POSTGRESQL_VOLUME_DIR
              value: "/bitnami/postgresql"
            - name: PGDATA
              value: "/bitnami/postgresql/data"
            - name: POSTGRES_POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-postgresql
                  key: postgresql-postgres-password
            - name: POSTGRES_USER
              value: "sonarUser"
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-postgresql
                  key: postgresql-password
            - name: POSTGRES_DB
              value: "sonarDB"
            - name: POSTGRESQL_ENABLE_LDAP
              value: "no"
            - name: POSTGRESQL_ENABLE_TLS
              value: "no"
            - name: POSTGRESQL_LOG_HOSTNAME
              value: "false"
            - name: POSTGRESQL_LOG_CONNECTIONS
              value: "false"
            - name: POSTGRESQL_LOG_DISCONNECTIONS
              value: "false"
            - name: POSTGRESQL_PGAUDIT_LOG_CATALOG
              value: "off"
            - name: POSTGRESQL_CLIENT_MIN_MESSAGES
              value: "error"
            - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
              value: "pgaudit"
          ports:
            - name: tcp-postgresql
              containerPort: 5432
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - exec pg_isready -U "sonarUser" -d "dbname=sonarDB" -h 127.0.0.1 -p 5432
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - -e
                - |
                  exec pg_isready -U "sonarUser" -d "dbname=sonarDB" -h 127.0.0.1 -p 5432
                  [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm
            - name: data
              mountPath: /bitnami/postgresql
              subPath:
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "2Gi"
        storageClassName: ceph-cephfs
---
# Source: sonarqube/templates/sonarqube-sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sonarqube-sonarqube
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
    app.kubernetes.io/name: my-sonarqube
    app.kubernetes.io/instance: my-sonarqube
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: sonarqube
    app.kubernetes.io/component: my-sonarqube-sonarqube
    app.kubernetes.io/version: "25.5.0.107428-community"
spec:
  replicas: 1
  revisionHistoryLimit: 10
  serviceName: my-sonarqube-sonarqube
  selector:
    matchLabels:
      app: sonarqube
      release: my-sonarqube
  template:
    metadata:
      annotations:
        checksum/config: 514ba5726581aabed2df14f0c3d95431e4f1150f3ee3c9790dae426c0b0effd3
        checksum/init-fs: 2da6aac9b4e90ad2a2853245bcc71bf2b9a53bdf6db658a594551108671976e7
        checksum/init-sysctl: a03f942e6089eda338af09ad886a4380f621c295548e9917a0e6113248ebb1aa
        checksum/plugins: 6b6fe750b5fb43bd030dbbe4e3ece53e5f37f595a480d504dd7e960bd5b9832a
        checksum/secret: 38377e36e39acacccf767e5fc68414a302d1868b7b9a99cb72e38f229023ca39
        checksum/prometheus-config: c831c80bb8be92b75164340491b49ab104f5b865f53618ebcffe35fd03c4c034
        checksum/prometheus-ce-config: a481713e44ccc5524e48597df39ba6f9a561fecd8b48fce7f6062602d8229613
      labels:
        app: sonarqube
        release: my-sonarqube
    spec:
      automountServiceAccountToken: false
      securityContext:
        fsGroup: 0
      initContainers:
        - name: "wait-for-db"
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          command: ["/bin/bash", "-c"]
          args: ['set -o pipefail;for i in {1..200};do (echo > /dev/tcp/my-sonarqube-postgresql/5432) && exit 0; sleep 2;done; exit 1']
        - name: init-sysctl
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
            readOnlyRootFilesystem: true
            runAsUser: 0
          command: ["/bin/bash", "-e", "/tmp/scripts/init_sysctl.sh"]
          volumeMounts:
            - name: init-sysctl
              mountPath: /tmp/scripts/
          env:
            - name: SONAR_WEB_CONTEXT
              value: /
            - name: SONAR_WEB_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
            - name: SONAR_CE_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
        - name: inject-prometheus-exporter
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          command: ["/bin/sh", "-c"]
          args: ["curl -s 'https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.17.2/jmx_prometheus_javaagent-0.17.2.jar' --output /data/jmx_prometheus_javaagent.jar -v"]
          volumeMounts:
            - mountPath: /data
              name: sonarqube
              subPath: data
          env:
            - name: http_proxy
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-http-proxies
                  key: PROMETHEUS-EXPORTER-HTTP-PROXY
            - name: https_proxy
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-http-proxies
                  key: PROMETHEUS-EXPORTER-HTTPS-PROXY
            - name: no_proxy
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-http-proxies
                  key: PROMETHEUS-EXPORTER-NO-PROXY
            - name: SONAR_WEB_CONTEXT
              value: /
            - name: SONAR_WEB_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
            - name: SONAR_CE_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
        - name: init-fs
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - CHOWN
              drop:
                - ALL
            privileged: false
            readOnlyRootFilesystem: true
            runAsGroup: 0
            runAsNonRoot: false
            runAsUser: 0
            seccompProfile:
              type: RuntimeDefault
          command: ["sh", "-e", "/tmp/scripts/init_fs.sh"]
          volumeMounts:
            - name: init-fs
              mountPath: /tmp/scripts/
            - mountPath: /opt/sonarqube/data
              name: sonarqube
              subPath: data
            - mountPath: /opt/sonarqube/temp
              name: sonarqube
              subPath: temp
            - mountPath: /opt/sonarqube/logs
              name: sonarqube
              subPath: logs
            - mountPath: /tmp
              name: tmp-dir
            - mountPath: /opt/sonarqube/extensions
              name: sonarqube
              subPath: extensions
      containers:
        - name: sonarqube
          image: registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
            - name: monitoring-web
              containerPort: 8000
              protocol: TCP
            - name: monitoring-ce
              containerPort: 8001
              protocol: TCP
          resources:
            limits:
              cpu: 800m
              ephemeral-storage: 512000M
              memory: 6144M
            requests:
              cpu: 400m
              ephemeral-storage: 1536M
              memory: 2048M
          env:
            - name: SONAR_HELM_CHART_VERSION
              value: 2025.3.0
            - name: SONAR_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-postgresql
                  key: postgresql-password
            - name: SONAR_WEB_SYSTEMPASSCODE
              valueFrom:
                secretKeyRef:
                  name: my-sonarqube-sonarqube-monitoring-passcode
                  key: SONAR_WEB_SYSTEMPASSCODE
            - name: SONAR_WEB_CONTEXT
              value: /
            - name: SONAR_WEB_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8000:/opt/sonarqube/conf/prometheus-config.yaml
            - name: SONAR_CE_JAVAOPTS
              value: -javaagent:/opt/sonarqube/data/jmx_prometheus_javaagent.jar=8001:/opt/sonarqube/conf/prometheus-ce-config.yaml
          envFrom:
            - configMapRef:
                name: my-sonarqube-sonarqube-jdbc-config
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  wget --no-proxy --quiet -O /dev/null --timeout=1 --header="X-Sonar-Passcode: $SONAR_WEB_SYSTEMPASSCODE" "http://localhost:9000/api/system/liveness"
            failureThreshold: 6
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  #!/bin/bash
                  # A SonarQube container is considered ready if the status is UP, DB_MIGRATION_NEEDED or DB_MIGRATION_RUNNING.
                  # The migration statuses are included to prevent the pod from being killed while SonarQube is upgrading the database.
                  if wget --no-proxy -qO- http://localhost:9000/api/system/status | grep -q -e '"status":"UP"' -e '"status":"DB_MIGRATION_NEEDED"' -e '"status":"DB_MIGRATION_RUNNING"'; then
                    exit 0
                  fi
                  exit 1
            failureThreshold: 6
            initialDelaySeconds: 60
            periodSeconds: 30
            timeoutSeconds: 1
          startupProbe:
            httpGet:
              scheme: HTTP
              path: /api/system/status
              port: http
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 24
            timeoutSeconds: 1
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            runAsGroup: 0
            runAsNonRoot: true
            runAsUser: 1000
            seccompProfile:
              type: RuntimeDefault
          volumeMounts:
            - mountPath: /opt/sonarqube/data
              name: sonarqube
              subPath: data
            - mountPath: /opt/sonarqube/temp
              name: sonarqube
              subPath: temp
            - mountPath: /opt/sonarqube/logs
              name: sonarqube
              subPath: logs
            - mountPath: /tmp
              name: tmp-dir
            - mountPath: /opt/sonarqube/extensions
              name: sonarqube
              subPath: extensions
            - mountPath: /opt/sonarqube/conf/prometheus-config.yaml
              subPath: prometheus-config.yaml
              name: prometheus-config
            - mountPath: /opt/sonarqube/conf/prometheus-ce-config.yaml
              subPath: prometheus-ce-config.yaml
              name: prometheus-ce-config
      serviceAccountName: default
      volumes:
        - name: init-sysctl
          configMap:
            name: my-sonarqube-sonarqube-init-sysctl
            items:
              - key: init_sysctl.sh
                path: init_sysctl.sh
        - name: init-fs
          configMap:
            name: my-sonarqube-sonarqube-init-fs
            items:
              - key: init_fs.sh
                path: init_fs.sh
        - name: prometheus-config
          configMap:
            name: my-sonarqube-sonarqube-prometheus-config
            items:
              - key: prometheus-config.yaml
                path: prometheus-config.yaml
        - name: prometheus-ce-config
          configMap:
            name: my-sonarqube-sonarqube-prometheus-ce-config
            items:
              - key: prometheus-ce-config.yaml
                path: prometheus-ce-config.yaml
        - name: sonarqube
          persistentVolumeClaim:
            claimName: my-sonarqube-sonarqube
        - name: tmp-dir
          emptyDir: {}
---
# Source: sonarqube/templates/tests/sonarqube-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "my-sonarqube-ui-test"
  annotations:
    "helm.sh/hook": test-success
  labels:
    app: sonarqube
    chart: sonarqube-2025.3.0
    release: my-sonarqube
    heritage: Helm
spec:
  automountServiceAccountToken: false
  containers:
    - name: my-sonarqube-ui-test
      image: "registry.cn-guangzhou.aliyuncs.com/xingcangku/sonarqube-community:25.5.0.107428-community"
      imagePullPolicy: IfNotPresent
      command: ['wget']
      args: [
        '--retry-connrefused',
        '--waitretry=1',
        '--timeout=5',
        '-t',
        '12',
        '-qO-',
        'my-sonarqube-sonarqube:9000/api/system/status'
      ]
      resources:
        limits:
          cpu: 500m
          ephemeral-storage: 1000M
          memory: 200M
        requests:
          cpu: 500m
          ephemeral-storage: 100M
          memory: 200M
  restartPolicy: Never
```

1.3 Install

```shell
kubectl apply -f test.yaml
```

1.4 NodePort Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sonarqube-nodeport
spec:
  type: NodePort
  ports:
    - port: 9000
      targetPort: 9000
      nodePort: 32309
  selector:
    app: sonarqube
    release: my-sonarqube
```

1.5 Slow startup is normal. Then open http://192.168.3.200:32309/

1.6 Edit the hosts file

```shell
notepad C:\Windows\System32\drivers\etc\hosts
```

1.7 Exposing it through the ingress used by other workload pods

```shell
root@k8s01:~/helm/sonarqube# kubectl get svc -n traefik | grep traefik
traefik-crds   LoadBalancer   10.101.202.240   <pending>   80:31080/TCP,443:32480/TCP   87m
root@k8s01:~/helm/sonarqube# kubectl get svc
NAME                               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes                         ClusterIP   10.96.0.1        <none>        443/TCP          10d
my-sonarqube-postgresql            ClusterIP   10.102.103.88    <none>        5432/TCP         23h
my-sonarqube-postgresql-headless   ClusterIP   None             <none>        5432/TCP         23h
my-sonarqube-sonarqube             ClusterIP   10.107.136.0     <none>        9000/TCP         23h
sonarqube-nodeport                 NodePort    10.106.168.209   <none>        9000:32309/TCP   22h
test-app                           ClusterIP   10.101.249.224   <none>        80/TCP           6d23h
```

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: sonarqube-ingress
  namespace: default        # must be in the same namespace as the SonarQube Service
spec:
  entryPoints:
    - web                   # HTTP entrypoint (use websecure for HTTPS)
  routes:
    - match: Host(`sonarqube.local.com`)
      kind: Rule
      services:
        - name: my-sonarqube-sonarqube   # the ClusterIP Service
          port: 9000
```

This lets you exercise both traffic paths: 1. straight to the workload pod's own NodePort; 2. through Traefik first, which then routes the request to the workload pod.
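The Secrets at the top of the rendered manifest store passwords as plain base64, which is encoding, not encryption. A quick sketch of how such a `data` value is produced and checked; `sonarPass` here is the `postgresql-password` value from the Secret above:

```shell
# Encode a password for a Secret's `data` field (printf avoids a trailing newline)
printf 'sonarPass' | base64
# -> c29uYXJQYXNz

# Decode what the chart rendered, to double-check
printf 'c29uYXJQYXNz' | base64 -d
# -> sonarPass
```

Since anyone with the rendered test.yaml can decode these values, treat that file itself as sensitive.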
2025-05-30
Installing Traefik on k8s, with Hands-on Testing
1. Installing Traefik

Install the traefik-crds chart first, then apply the manifests below.

```yaml
---
# Source: traefik/templates/rbac/serviceaccount.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-release
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  annotations:
automountServiceAccountToken: false
---
# Source: traefik/templates/rbac/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-release-default
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - nodes
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingressclasses
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - ingressroutes
      - ingressroutetcps
      - ingressrouteudps
      - middlewares
      - middlewaretcps
      - serverstransports
      - serverstransporttcps
      - tlsoptions
      - tlsstores
      - traefikservices
    verbs:
      - get
      - list
      - watch
---
# Source: traefik/templates/rbac/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-release-default
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-release-default
subjects:
  - kind: ServiceAccount
    name: traefik-release
    namespace: traefik
---
# Added: a PVC definition
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: traefik-data-pvc
  namespace: traefik
spec:
  accessModes:
    - ReadWriteMany            # CephFS supports multi-node read/write
  storageClassName: ceph-cephfs
  resources:
    requests:
      storage: 1Gi             # size to actual needs
---
# Source: traefik/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik-release
  namespace: traefik
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik-release-default
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  minReadySeconds: 0
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9100"
      labels:
        app.kubernetes.io/name: traefik
        app.kubernetes.io/instance: traefik-release-default
        helm.sh/chart: traefik-35.4.0
        app.kubernetes.io/managed-by: Helm
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
        fsGroup: 0
        capabilities:
          add: ["NET_BIND_SERVICE"]
      serviceAccountName: traefik-release
      automountServiceAccountToken: true
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
        - image: registry.cn-guangzhou.aliyuncs.com/xingcangku/traefik:v3.0.0
          imagePullPolicy: IfNotPresent
          name: traefik-release
          resources:
          readinessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            failureThreshold: 1
            initialDelaySeconds: 2
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            httpGet:
              path: /ping
              port: 9000
              scheme: HTTP
            failureThreshold: 3
            initialDelaySeconds: 2
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 2
          lifecycle:
          ports:
            - name: metrics
              containerPort: 9100
              protocol: TCP
            - name: traefik
              containerPort: 9000
              protocol: TCP
            - name: web
              containerPort: 8000
              protocol: TCP
            - name: websecure
              containerPort: 8443
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
                - NET_BIND_SERVICE
            readOnlyRootFilesystem: false
          volumeMounts:
            - name: data
              mountPath: /data
              readOnly: false    # allow writes
          args:
            - "--global.checknewversion"
            - "--global.sendanonymoususage"
            - "--entryPoints.metrics.address=:9100/tcp"
            - "--entryPoints.traefik.address=:9000/tcp"
            - "--entryPoints.web.address=:8000/tcp"
            - "--entryPoints.websecure.address=:8443/tcp"
            - "--api.dashboard=true"
            - "--ping=true"
            - "--metrics.prometheus=true"
            - "--metrics.prometheus.entrypoint=metrics"
            - "--providers.kubernetescrd"
            - "--providers.kubernetescrd.allowEmptyServices=true"
            - "--providers.kubernetesingress"
            - "--providers.kubernetesingress.allowEmptyServices=true"
            - "--providers.kubernetesingress.ingressendpoint.publishedservice=default/traefik-release"
            - "--entryPoints.websecure.http.tls=true"
            - "--log.level=INFO"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        # replaced emptyDir with a PVC
        - name: data
          persistentVolumeClaim:
            claimName: traefik-data-pvc
      securityContext:
        runAsGroup: 65532
        runAsNonRoot: true
        runAsUser: 65532
---
# Source: traefik/templates/ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-release-default
    helm.sh/chart: traefik-35.4.0
    app.kubernetes.io/managed-by: Helm
  name: traefik-release
spec:
  controller: traefik.io/ingress-controller
```

The dashboard route:

```shell
root@k8s01:~/helm/traefik/traefik-helm-chart-35.4.0/traefik# cat dashboard.yaml
```

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard
  namespace: traefik
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.local.com`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
```

2. Testing Traefik

```shell
kubectl create ns test-ns
kubectl -n test-ns create deployment test-app --image=registry.cn-guangzhou.aliyuncs.com/xingcangku/nginx-alpine:1.0
kubectl -n test-ns expose deployment test-app --port=80
```

```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test-ns
spec:
  ingressClassName: traefik
  rules:
  - http:
      paths:
      - path: /test
        pathType: Prefix
        backend:
          service:
            name: test-app
            port:
              number: 80
EOF
```

```shell
WEB_PORT=$(kubectl get svc -n traefik traefik -o jsonpath='{.spec.ports[?(@.name=="web")].nodePort}')
curl -v http://$NODE_IP:$WEB_PORT/test
*   Trying 192.168.3.200:32305...
* Connected to 192.168.3.200 (192.168.3.200) port 32305 (#0)
> GET /test HTTP/1.1
> Host: 192.168.3.200:32305
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Content-Length: 153
< Content-Type: text/html
< Date: Thu, 29 May 2025 18:06:51 GMT
< Server: nginx/1.27.5
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>
* Connection #0 to host 192.168.3.200 left intact
```

```shell
# Update the path
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: test-ns
spec:
  ingressClassName: traefik
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app
            port:
              number: 80
EOF

# Test the root path
curl -v http://$NODE_IP:$WEB_PORT/
*   Trying 192.168.3.200:32305...
* Connected to 192.168.3.200 (192.168.3.200) port 32305 (#0)
> GET / HTTP/1.1
> Host: 192.168.3.200:32305
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Accept-Ranges: bytes
< Content-Length: 615
< Content-Type: text/html
< Date: Thu, 29 May 2025 18:08:48 GMT
< Etag: "67ffa8c6-267"
< Last-Modified: Wed, 16 Apr 2025 12:55:34 GMT
< Server: nginx/1.27.5
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host 192.168.3.200 left intact
```

```shell
# It can also be reached from inside the cluster
curl http://traefik-service.default.svc.cluster.local
```

```shell
root@k8s01:~/helm/traefik/test# kubectl get ingress -n test-ns
NAME           CLASS     HOSTS   ADDRESS   PORTS   AGE
test-ingress   traefik   *                 80      48m
root@k8s01:~/helm/traefik/test# kubectl describe ingress test-ingress -n test-ns
Name:             test-ingress
Labels:           <none>
Namespace:        test-ns
Address:
Ingress Class:    traefik
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           /     test-app:80 (10.244.1.13:80)
Annotations:  <none>
Events:       <none>
```
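The first 404 above is expected: Traefik forwards the full `/test` path, and the nginx container has no `/test` location. Instead of moving the Ingress to `/`, the prefix can be stripped before proxying. A sketch reusing the same namespace; the Middleware name `strip-test` is made up for illustration:

```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: strip-test        # hypothetical name
  namespace: test-ns
spec:
  stripPrefix:
    prefixes:
      - /test             # remove this prefix before forwarding to the backend
```

A plain Ingress can then opt in via the annotation `traefik.ingress.kubernetes.io/router.middlewares: test-ns-strip-test@kubernetescrd`, or an IngressRoute can list it under `middlewares`.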
2025-05-27
Postmortem of a k8s Incident
What happened: I had created a Jenkins YAML whose image came from the Harbor registry running inside k8s (Harbor was down at the time). Meanwhile, another pod created with a direct command kept pulling that image, and the download speed dropped from a few MB/s to a few hundred KB/s.

First problem: the pod would not start because the config file needed changes; Harbor serves over HTTP, which apparently conflicted with another setting in that file. I deleted the stuck pod by hand and it hung. I then renamed the pod in the YAML and re-applied. After that the whole cluster hung, etcd hung, and kube-apiserver.yaml was simply gone!!! Every repair attempt failed, so I had to reinstall k8s.

After the reinstall, one setting was missing:

```shell
kubectl get node k8s01 -o jsonpath='{.spec.podCIDR}'
# expected output: 10.244.0.0/24
```

Because no pod CIDR was allocated, the Flannel network plugin never came up, and so the system's CoreDNS never came up either. The fix was to edit `vim /etc/kubernetes/manifests/kube-controller-manager.yaml` and add:

```yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true    # make sure this is enabled
    - --cluster-cidr=10.244.0.0/16  # must match Flannel's Network setting
    - --node-cidr-mask-size=24      # optional, defaults to 24
    # ...keep the other existing flags...
```

```shell
# 1. Temporarily move the manifest out to restart kube-controller-manager
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/
sleep 10  # wait for the pod to terminate
# 2. Put the manifest back
mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```

Takeaways:
- Back up the files under /etc/kubernetes/manifests/!!!
- Don't reinstall the system lightly; treat every k8s config change with caution.
- This cost me a whole night, and the Ceph, Jenkins and Harbor instances inside k8s are probably all gone. Containerized Ceph inside k8s deserves caution: had Ceph been deployed on bare metal this time, reinstalling k8s would not have lost the data.
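The takeaway about backing up /etc/kubernetes/manifests/ is easy to script. A minimal sketch, assuming bash and GNU tar; the destination path in the usage comment is an arbitrary example:

```shell
#!/usr/bin/env bash
# Snapshot a static-pod manifest directory into a timestamped tarball.
set -euo pipefail

backup_manifests() {
  local src_dir="$1" dest_dir="$2"
  mkdir -p "$dest_dir"
  # Archive the directory contents so restoring is a plain extract back into place
  tar czf "$dest_dir/manifests-$(date +%Y%m%d-%H%M%S).tar.gz" -C "$src_dir" .
}

# On a control-plane node (example paths):
# backup_manifests /etc/kubernetes/manifests /root/k8s-backups
```

Running this before any risky change means a lost kube-apiserver.yaml is a one-line restore: extract the tarball back into the manifests directory and kubelet recreates the static pods.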
2025-05-25
Jenkins Email Configuration
一、安装插件在jenkins的插件管理中安装Email Extension插件二、配置邮件相关参数依次点击manage jenkins——>system,找到jenkins Location项,填写系统管理员邮件地址。配置邮件服务器相关参数,然后点击通过发送测试邮件测试配置,填写收件人邮箱号。配置Extended E-mail Notification配置,内容如下登录收件人邮件,看到有测试邮件。三、自动风格任务配置 3.1修改任务配置构建后操作内容3.2构建测试3.2.1点击立即构建,查看收件人邮箱四、流水线任务配置 4.1修改pipeline添加邮件发送在项目根目录编写email.html,并推送至项目仓库。邮件模板如下所示:<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>${ENV, var="JOB_NAME"}-第${BUILD_NUMBER}次构建日志</title> </head> <body leftmargin="8" marginwidth="0" topmargin="8" marginheight="4" offset="0"> <table width="95%" cellpadding="0" cellspacing="0" style="font-size: 11pt; font-family: Tahoma, Arial, Helvetica, sans-serif"> <tr> 本邮件由系统自动发出,无需回复!<br/> 各位同事,大家好,以下为${PROJECT_NAME }项目构建信息</br> <td><font color="#CC0000">构建结果 - ${BUILD_STATUS}</font></td> </tr> <tr> <td><br /> <b><font color="#0B610B">构建信息</font></b> <hr size="2" width="100%" align="center" /></td> </tr> <tr> <td> <ul> <li>项目名称 : ${PROJECT_NAME}</li> <li>构建编号 : 第${BUILD_NUMBER}次构建</li> <li>触发原因: ${CAUSE}</li> <li>构建状态: ${BUILD_STATUS}</li> <li>构建日志: <a href="${BUILD_URL}console">${BUILD_URL}console</a></li> <li>构建Url : <a href="${BUILD_URL}">${BUILD_URL}</a></li> <li>工作目录 : <a href="${PROJECT_URL}ws">${PROJECT_URL}ws</a></li> <li>项目Url : <a href="${PROJECT_URL}">${PROJECT_URL}</a></li> </ul> <h4><font color="#0B610B">失败用例</font></h4> <hr size="2" width="100%" /> $FAILED_TESTS<br/> <h4><font color="#0B610B">最近提交(#$SVN_REVISION)</font></h4> <hr size="2" width="100%" /> <ul> ${CHANGES_SINCE_LAST_SUCCESS, reverse=true, format="%c", changesFormat="<li>%d [%a] %m</li>"} </ul> 详细提交: <a href="${PROJECT_URL}changes">${PROJECT_URL}changes</a><br/> </td> </tr> </table> </body> </html> 4.2修改pipeline添加邮件发送pipeline { agent any stages { stage('拉取代码') { steps { checkout scmGit(branches: [[name: '*/${branch}']], extensions: [], userRemoteConfigs: [[credentialsId: 'gitee-cuiliang0302', url: 'https://gitee.com/cuiliang0302/sprint_boot_demo.git']]) } } stage('编译构建') { steps { sh 'mvn clean package' } } 
        stage('Deploy') {
            steps {
                sh 'nohup java -jar target/SpringBootDemo-0.0.1-SNAPSHOT.jar &'
                sh 'sleep 10'
            }
        }
    }
    post {
        always {
            emailext(
                subject: 'Build notification: ${PROJECT_NAME} - Build # ${BUILD_NUMBER} - ${BUILD_STATUS}!',
                body: '${FILE,path="email.html"}',
                to: 'cuiliang0302@qq.com'
            )
        }
    }
}

4.3 Build test
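Before pushing a change to a template like the one above, it can help to list which Email-Ext tokens the file actually references and compare them against the token reference in the plugin docs. A minimal sketch; the /tmp path and the trimmed-down template content are illustrative, not from the article:

```shell
# Write a trimmed-down copy of the template (illustrative content only)
cat > /tmp/email.html <<'EOF'
<li>Project : ${PROJECT_NAME}</li>
<li>Build   : #${BUILD_NUMBER}</li>
<li>Log     : <a href="${BUILD_URL}console">${BUILD_URL}console</a></li>
EOF

# List the distinct ${...} tokens Email-Ext will be asked to expand
grep -oE '\$\{[A-Z_]+\}?' /tmp/email.html | tr -d '${}' | sort -u
# -> BUILD_NUMBER, BUILD_URL, PROJECT_NAME (one per line)
```

The same grep can run as a lightweight CI lint step so a typo like `${BULID_URL}` is caught before the first broken notification goes out.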
2025-05-25
2 views
0 comments
0 likes
2025-05-22
GitLab Installation
1. gpg.key
Save the following GitLab package signing key to a file (the commands in step 2 read it from /tmp/gpg.key):

-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF5dI2sBEACyGx5isuXqEV2zJGIx8rlJFCGw6A9g5Zk/9Hj50UpXNuOXlvQl
7vq91m2CAh88Jad7OiMHIJJhX3ZJEOf/pUx/16QKumsaEyBk9CegxUG9jAQXsjL3
WLyP0/l27UzNrOAFB+IUGjsoP+32gsSPiF5P485mirIJNojIAFzDQl3Uo4FbvqYU
9AIRk5kV4nEYz1aKXAovIUsyqrztMtwlAG2xqdwVpGD2A4/w8I143qPGjjhEQmf4
/EeS4CP9ztyLAx+01t2Acwa7Bygsb5KQPuT25UlevuxdDy/Rd5Zn/Lzwr2GQqjUs
6GbM0t1HYjh57e4V+p0qMf6jxXfrDCbehgzFvGS0cx/d7hWHm5sXZIt3gxpjBQU2
8MQWtrR8Y3nTBkCHwOKsXdsdD+YHxTq/yuvxl1Bcyshp29cGWv1es3wn2Z6i9tWe
asGfVewJZiXFSEqSBGguEmLyCAZcWgXvHOV2kc66wG4d4TGIxmoo9GBqEtBftCVH
MGDHt7zeg2hg6EIsx8/nj1duO5nBnbnik5iG8Xv46e/aw2p4DfTdfxHpjvyJudyN
+UI5eSuuuXhyTZWedd5K1Q3+0CmACJ39t/NA6g7cZaw3boFKw3fTWIgOVTvC3y5v
d7wsuyGUk9xNhHLcu6HjB4VPGzcTwQWMFf6+I4qGAUykU5mjTJchQeqmQwARAQAB
tEJHaXRMYWIgQi5WLiAocGFja2FnZSByZXBvc2l0b3J5IHNpZ25pbmcga2V5KSA8
cGFja2FnZXNAZ2l0bGFiLmNvbT6JAlQEEwEKAD4CGwMFCwkIBwIGFQoJCAsCBBYC
AwECHgECF4AWIQT2QD9lRKOIY9qgtuA/AWGKUTEvPwUCZd+UbQUJC0TYAgAKCRA/
AWGKUTEvPzeVEACDxFTCWdSe6S6sWhRTRCx4c/NF1WGHx2IUnCxMJqam5ij+xE+E
4dRAuBO3gD3bO4MAZJzvnAOC8RE9uMgAW7CS9+kpwdnXtS7/30P2sl0Lb3sXw57t
ZtoYdZXr2H2/5E67k1SiEIpLeGyx5nnS1Irb3+b5DYwovAQQMgGF0jhJqjvaHulp
nKlFegYBw1tVYPx+WKDqTcDu+57hVNuH2TSDXAjX7xL02PpmWkBQdfW1DMYiUkDy
vrgrjVIggYCxyNEK+by8kuJ0EndB5n1VO98IAFrb321Ze8PTiRcgEi7wvZqMZCKw
TkV4lNGpQs8AE6eXcCsaucWIz/Mm1Qu7t/uCfVbJ8k6R1VrngsPL+xl/4+zNxtI2
DHITvlkOgIMLaa+7JWiW6bQ+tXpLpMkKvgUWneLTwzjGWCl9p3byTg/pBNAc8qzJ
XR2CRviNgV4xGVRreBDGPzaOKalVicSNcEu6nGNpe1Np1WtXMBf5Ed4Je4P1v6wL
CjSIvxe6S68koIOwdX73a7d+yQA+bEegsN/su3Tp/jp/aDSOR+93UCPjXHLd0q3Y
6C/dvh3wyEC5topIc8XJFfP1mCtGV5WG1rY87AwALhc+2c+AEtShX7rKw/5rHUCY
WeDt5skjByqaFtr4JSjEwQSY7G1a0IaISFkP+qhV+CkN12orAjpvZKxmwbkCDQRe
XSNrARAApHc0R4tfPntr5bhTuXU/iVLyxlAlzdEv1XsdDC8YBYehT72Jpvpphtq7
sKVsuC59l8szojgO/gW//yKSuc3Gm5h58+HpIthjviGcvZXf/JcN7Pps0UGkLeQN
2+IRZgbA6CAAPh2njE60v5iXgS91bxlSJi8GVHq1h28kbKQeqUYthu9yA2+8J4Fz
ivYV2VImKLSxbQlc86tl6rMKKIIOph+N4WujJgd5HZ80n2qp1608X3+9CXvtBasX
VCI2ZqCuWjffVCOQzsqRbJ6LQyMbgti/23F4Yqjqp+8eyiDNL6MyWJCBbtkW3Imi
FHfR0sQIM6I7fk0hvt9ljx9SG6az/s3qWK5ceQ7XbJgCAVS4yVixfgIjWvNE5ggE
QNOmeF9r76t0+0xsdMYJR6lxdaQI8AAYaoMXTkCXX2DrASOjjEP65Oq/d42xpSf9
tG6XIq+xtRQyFWSMc+HfTlEHbfGReAEBlJBZhNoAwpuDckOC08vw7v2ybS5PYjJ4
5Kzdwej0ga03Wg9hrAFd/lVa5eO4pzMLuexLplhpIbJjYwCUGS4cc/LQ2jq4fue5
oxDpWPN+JrBH8oyqy91b10e70ohHppN8dQoCa79ySgMxDim92oHCkGnaVyULYDqJ
zy0zqbi3tJu639c4pbcggxtAAr0I3ot8HPhKiNJRA6u8HTm//xEAEQEAAYkCPAQY
AQoAJgIbDBYhBPZAP2VEo4hj2qC24D8BYYpRMS8/BQJl35S0BQkLRNhJAAoJED8B
YYpRMS8/QHwP/3g6Mcdn47OK55Dx5YD5zI1DuuqhSFP0xak59jT7pVJm5Yu55Bai
XS4+59IYrqaZ+CvbAr1TJzDMnwP3U2fBOyRIFpypURw+Q1efAnzKtP8aF2YIpd06
NhHEr1EZZMQytI5NcDaDly1Idwj5FX0m23AzvgVg7QbTcNOH2bOcXal++WWQ10TT
b1gsnATz+Tw84EBugjk3vML5yoAWc77L3SA8KxMTcUEGhDkhm1kuct4PGIuHXmp+
qUKVh9XwvmcQIcu2fr3qmm0Bw3khwYNhGczSDjGDrnLmE5u/5R/AHgod/d0+SkHW
2uI8gPbunkLZPHc2Xaf1EUiZq/8n91FONusykZX+CizleS8AvMQmstuUcf48V2rv
v7rsUtRflxf5IGH1P/X/tQ+WewD2VIHDQu+dyXvkos6LHFnxz6irNM90QqmcihYd
vBvvrdeW6t5HoT2Lfhv/Xj7fzjKF5ye21WJpWFSK9PFrGb/tqPypUQspnE5cUtAa
A9fP5AurEmjpDDZPaoPGG27N3m/95Dak0Q+BEx3r7VeRu4ZFX31Df/tocM5ADsXR
eADwVh1H+R9vhOrc1EVPPYPWHzdjXlLZKVTiRd7uLLRXzhCp4yFfOmq1FFewlqH0
2AcgVTGaAOT65penu7y+sQJyCMHISsV15vIQXcHwL94As5MvV+mD0pGR
=0Y9y
-----END PGP PUBLIC KEY BLOCK-----

2. Run the install commands
(Note: a GitLab apt source entry referencing this keyring must also exist under /etc/apt/sources.list.d/, otherwise apt cannot find the gitlab-ce package.)
sudo cat /tmp/gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/gitlab-archive-keyring.gpg
sudo apt update
sudo apt install -y gitlab-ce

3. Edit the configuration (external access URL)
vi /etc/gitlab/gitlab.rb
external_url 'http://192.168.1.100'  # replace with the actual IP or domain
unicorn['listen_port'] = 8080        # optional: change the default port

4. Open the firewall
sudo ufw allow http
sudo ufw allow https
sudo ufw allow ssh
sudo ufw enable

5. Start GitLab
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart

6. Look up the initial root password
sudo cat /etc/gitlab/initial_root_password
zlh2xR7fA814Z2TG0gx+QmHBZSFLrZJ1v6Lk2NEYk4w=
Log in at 192.168.3.201 with account root and the password above.

7. Add an SSH key
root@k8s02:~# ssh-keygen -t ed25519 -C "jenkins@your-server"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/root/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_ed25519
Your public key has been saved in /root/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256:9ddgsERNTkCNMf0mG9LOXtVamYkuewg1xnj+NUlaCUw jenkins@your-server
The key's randomart image is:
+--[ED25519 256]--+
|            o@Eo |
|            ..O+ |
|          + ..*.*|
|         o B.o+X*|
|        S = ++=B+|
|         . o +=+.|
|          . =....|
|            o o. |
|               . |
+----[SHA256]-----+
root@k8s02:~# cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILncgKrxDBMvO8zW0WaBymGLbKIRjUo2ZBsdacdayP03 jenkins@your-server
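The interactive prompts above can be skipped when this step is scripted: -f sets the output path, -N "" gives an empty passphrase, and -q suppresses the banner. A minimal sketch; the /tmp/ci_key path is illustrative (in practice you would use ~/.ssh/id_ed25519 as above):

```shell
# Remove any leftover key from a previous run (ssh-keygen refuses to overwrite)
rm -f /tmp/ci_key /tmp/ci_key.pub

# Generate the key non-interactively (illustrative output path)
ssh-keygen -t ed25519 -N "" -C "jenkins@your-server" -f /tmp/ci_key -q

# Print the fingerprint, matching the interactive session above
ssh-keygen -lf /tmp/ci_key.pub
```

The resulting public key (/tmp/ci_key.pub here) is what gets pasted into GitLab under the user's SSH Keys settings page.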
2025-05-22
2 views
0 comments
0 likes