2025-07-27
Upgrading Harbor to HTTPS and using it from k8s
一、Modify the values.yaml file

Switch the exposed service to `nodePort` and keep TLS enabled with an automatically generated certificate whose common name is the registry IP:

```yaml
expose:
  # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  # and fill the information in the corresponding section
  type: nodePort
  tls:
    # Enable TLS or not.
    # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress"
    # Note: if the "expose.type" is "ingress" and TLS is disabled,
    # the port must be included in the command when pulling/pushing images.
    # Refer to https://github.com/goharbor/harbor/issues/5291 for details.
    enabled: true
    # The source of the tls certificate. Set as "auto", "secret"
    # or "none" and fill the information in the corresponding section
    # 1) auto: generate the tls certificate automatically
    # 2) secret: read the tls certificate from the specified secret.
    # The tls certificate can be generated manually or by cert manager
    # 3) none: configure no tls certificate for the ingress. If the default
    # tls certificate is configured in the ingress controller, choose this option
    certSource: auto
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: "192.168.3.200"
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: ""
  ingress:
    hosts:
      core: 192.168.3.200
    # set to the type of ingress controller if it has specific requirements.
    # leave as `default` for most ingress controllers.
    # set to `gce` if using the GCE ingress controller
    # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller
    # set to `alb` if using the ALB ingress controller
    # set to `f5-bigip` if using the F5 BIG-IP ingress controller
    controller: default
    ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress
    kubeVersionOverride: ""
    className: ""
    annotations:
      # note different ingress controllers may require a different ssl-redirect annotation
      # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    # ingress-specific labels
    labels: {}
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    # The ip address of the ClusterIP service (leave empty for acquiring dynamic ip)
    staticClusterIP: ""
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
    # Annotations on the ClusterIP service
    annotations: {}
    # ClusterIP-specific labels
    labels: {}
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving HTTP
        port: 80
        # The node port Harbor listens on when serving HTTP
        nodePort: 30002
      https:
        # The service port Harbor listens on when serving HTTPS
        port: 443
        # The node port Harbor listens on when serving HTTPS
        nodePort: 30003
    # Annotations on the nodePort service
    annotations: {}
    # nodePort-specific labels
    labels: {}
  loadBalancer:
    # The name of LoadBalancer service
    name: harbor
    # Set the IP if the LoadBalancer supports assigning IP
    IP: ""
    ports:
      # The service port Harbor listens on when serving HTTP
      httpPort: 80
      # The service port Harbor listens on when serving HTTPS
      httpsPort: 443
    # Annotations on the loadBalancer service
    annotations: {}
    # loadBalancer-specific labels
    labels: {}
    sourceRanges: []
```

二、Once the configuration above is applied, a CA certificate is generated automatically (download it for later use).
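Besides the portal download, the CA can also be exported straight from the cluster. This is only a minimal sketch: the release name `my-harbor`, namespace `harbor`, and the exact secret name are assumptions, so list the secrets first and adjust the names.

```bash
# Assumption: release "my-harbor" in namespace "harbor"; with certSource: auto the
# generated certificate lands in a chart-managed secret -- confirm its name first.
kubectl -n harbor get secrets | grep -i nginx

# Export ca.crt for the Docker / containerd trust steps below
kubectl -n harbor get secret my-harbor-harbor-nginx \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

# Optional: check the certificate now served on the HTTPS NodePort
openssl s_client -connect 192.168.3.200:30003 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```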
三、Configure Docker to use it (a fresh login is required)

```bash
# 1. Create the directory for this registry under Docker's trusted-certificate path
sudo mkdir -p /etc/docker/certs.d/192.168.30.180:30003

# 2. Copy the CA certificate into the Docker trust directory
sudo cp ca.crt /etc/docker/certs.d/192.168.30.180:30003/ca.crt

# 3. Reload the Docker daemon
sudo systemctl reload docker
# or
sudo systemctl restart docker

# 4. Test the login (you will be prompted for the password)
docker login 192.168.30.180:30003 -u admin
```

A push still fails at first because the client credentials are stored for the old HTTP NodePort 30002; log out of 30002 and log in against the HTTPS NodePort 30003:

```bash
root@k8s01:~/helm/harbor# docker push 192.168.3.200:30003/nginx/nginx:latest
The push refers to repository [192.168.3.200:30003/nginx/nginx]
f17478b6e8f3: Layer already exists
0662742b23b2: Layer already exists
5c91a024d899: Layer already exists
6b1b97dc9285: Layer already exists
a6b19c3d00b1: Layer already exists
30837a0774b9: Layer already exists
7cc7fe68eff6: Layer already exists
unauthorized: unauthorized to access repository: nginx/nginx, action: push: unauthorized to access repository: nginx/nginx, action: push
root@k8s01:~/helm/harbor# cat ~/.docker/config.json
{
        "auths": {
                "192.168.3.200:30002": {
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="
                }
        }
}
root@k8s01:~/helm/harbor# docker logout 192.168.3.200:30002
Removing login credentials for 192.168.3.200:30002
root@k8s01:~/helm/harbor# docker login 192.168.3.200:30003
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
root@k8s01:~/helm/harbor# cat ~/.docker/config.json
{
        "auths": {
                "192.168.3.200:30003": {
                        "auth": "YWRtaW46SGFyYm9yMTIzNDU="
                }
        }
}
```

四、Configure k8s to use it (repeat on every k8s node)

Add the CA to the node trust store and restart containerd:

```bash
root@k8s02:~# sudo cp ca.crt /usr/local/share/ca-certificates/192.168.3.200-ca.crt
root@k8s02:~# sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
rehash: warning: skipping ca-certificates.crt, it does not contain exactly one certificate or CRL
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
root@k8s02:~# sudo systemctl restart containerd
```

Create an image-pull secret in the cicd namespace (replace `admin` and `Harbor12345` with your actual username and password):

```bash
kubectl create secret docker-registry harbor-cred \
  --docker-server=192.168.30.180:30003 \
  --docker-username=admin \
  --docker-password=Harbor12345 \
  -n cicd
```

Reference the secret from the workload via `imagePullSecrets`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  namespace: cicd
spec:
  selector:
    matchLabels:
      app: gitlab
  replicas: 1
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      imagePullSecrets:      # add this section
        - name: harbor-cred
      containers:
        - name: gitlab
          image: 192.168.30.180:30003/axing_demo/jenkins:16.11.1-ce.0
```

五、Test

```bash
root@k8s01:~/helm# kubectl apply -f nginx-test.yaml
pod/nginx-pod created
root@k8s01:~/helm# kubectl get pods -w
NAME                        READY   STATUS              RESTARTS        AGE
my-sonarqube-postgresql-0   1/1     Running             23 (110m ago)   52d
my-sonarqube-sonarqube-0    0/1     Pending             0               48d
nginx-pod                   0/1     ContainerCreating   0               4s
nginx-pod                   1/1     Running             0               4s
^C
root@k8s01:~/helm# cat nginx-test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container
    image: 192.168.3.200:30003/nginx/nginx:latest
    ports:
    - containerPort: 80
```

The pod events confirm the image was pulled from the Harbor HTTPS NodePort:

```text
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m1s   default-scheduler  Successfully assigned default/nginx-pod to k8s02
  Normal  Pulling    6m     kubelet            Pulling image "192.168.3.200:30003/nginx/nginx:latest"
  Normal  Pulled     5m58s  kubelet            Successfully pulled image "192.168.3.200:30003/nginx/nginx:latest" in 2.486104494s (2.486113121s including waiting)
  Normal  Created    5m58s  kubelet            Created container nginx-container
  Normal  Started    5m58s  kubelet            Started container nginx-container
```

For reference, the complete values.yaml follows.

expose: # Set how to expose the service.
Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer" # and fill the information in the corresponding section type: nodePort tls: # Enable TLS or not. # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress" # Note: if the "expose.type" is "ingress" and TLS is disabled, # the port must be included in the command when pulling/pushing images. # Refer to https://github.com/goharbor/harbor/issues/5291 for details. enabled: true # The source of the tls certificate. Set as "auto", "secret" # or "none" and fill the information in the corresponding section # 1) auto: generate the tls certificate automatically # 2) secret: read the tls certificate from the specified secret. # The tls certificate can be generated manually or by cert manager # 3) none: configure no tls certificate for the ingress. If the default # tls certificate is configured in the ingress controller, choose this option certSource: auto auto: # The common name used to generate the certificate, it's necessary # when the type isn't "ingress" commonName: "192.168.30.180" secret: # The name of secret which contains keys named: # "tls.crt" - the certificate # "tls.key" - the private key secretName: "" ingress: hosts: core: 192.168.30.180 # set to the type of ingress controller if it has specific requirements. # leave as `default` for most ingress controllers. # set to `gce` if using the GCE ingress controller # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller # set to `alb` if using the ALB ingress controller # set to `f5-bigip` if using the F5 BIG-IP ingress controller controller: default ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress kubeVersionOverride: "" className: "" annotations: # note different ingress controllers may require a different ssl-redirect annotation # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below ingress.kubernetes.io/ssl-redirect: "true" ingress.kubernetes.io/proxy-body-size: "0" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/proxy-body-size: "0" # ingress-specific labels labels: {} clusterIP: # The name of ClusterIP service name: harbor # The ip address of the ClusterIP service (leave empty for acquiring dynamic ip) staticClusterIP: "" ports: # The service port Harbor listens on when serving HTTP httpPort: 80 # The service port Harbor listens on when serving HTTPS httpsPort: 443 # Annotations on the ClusterIP service annotations: {} # ClusterIP-specific labels labels: {} nodePort: # The name of NodePort service name: harbor ports: http: # The service port Harbor listens on when serving HTTP port: 80 # The node port Harbor listens on when serving HTTP nodePort: 30002 https: # The service port Harbor listens on when serving HTTPS port: 443 # The node port Harbor listens on when serving HTTPS nodePort: 30003 # Annotations on the nodePort service annotations: {} # nodePort-specific labels labels: {} loadBalancer: # The name of LoadBalancer service name: harbor # Set the IP if the LoadBalancer supports assigning IP IP: "" ports: # The service port Harbor listens on when serving HTTP httpPort: 80 # The service port Harbor listens on when serving HTTPS httpsPort: 443 # Annotations on the loadBalancer service annotations: {} # loadBalancer-specific labels labels: {} sourceRanges: [] # The external URL for Harbor core service. 
It is used to # 1) populate the docker/helm commands showed on portal # 2) populate the token service URL returned to docker client # # Format: protocol://domain[:port]. Usually: # 1) if "expose.type" is "ingress", the "domain" should be # the value of "expose.ingress.hosts.core" # 2) if "expose.type" is "clusterIP", the "domain" should be # the value of "expose.clusterIP.name" # 3) if "expose.type" is "nodePort", the "domain" should be # the IP address of k8s node # # If Harbor is deployed behind the proxy, set it as the URL of proxy externalURL: https://192.168.30.180:30003 # The persistence is enabled by default and a default StorageClass # is needed in the k8s cluster to provision volumes dynamically. # Specify another StorageClass in the "storageClass" or set "existingClaim" # if you already have existing persistent volumes to use # # For storing images and charts, you can also use "azure", "gcs", "s3", # "swift" or "oss". Set it in the "imageChartStorage" section persistence: enabled: true # Setting it to "keep" to avoid removing PVCs during a helm delete # operation. Leaving it empty will delete PVCs after the chart deleted # (this does not apply for PVCs that are created for internal database # and redis components, i.e. they are never deleted automatically) resourcePolicy: "keep" persistentVolumeClaim: registry: # Use the existing PVC which must be created manually before bound, # and specify the "subPath" if the PVC is shared with other components existingClaim: "" # Specify the "storageClass" used to provision the volume. Or the default # StorageClass will be used (the default). # Set it to "-" to disable dynamic provisioning storageClass: "nfs-sc" subPath: "" accessMode: ReadWriteOnce size: 5Gi annotations: {} jobservice: jobLog: existingClaim: "" storageClass: "nfs-sc" subPath: "" accessMode: ReadWriteOnce size: 1Gi annotations: {} # If external database is used, the following settings for database will # be ignored database: existingClaim: "" storageClass: "nfs-sc" subPath: "" accessMode: ReadWriteOnce size: 1Gi annotations: {} # If external Redis is used, the following settings for Redis will # be ignored redis: existingClaim: "" storageClass: "nfs-sc" subPath: "" accessMode: ReadWriteOnce size: 1Gi annotations: {} trivy: existingClaim: "" storageClass: "nfs-sc" subPath: "" accessMode: ReadWriteOnce size: 5Gi annotations: {} # Define which storage backend is used for registry to store # images and charts. Refer to # https://github.com/distribution/distribution/blob/release/2.8/docs/configuration.md#storage # for the detail. imageChartStorage: # Specify whether to disable `redirect` for images and chart storage, for # backends which not supported it (such as using minio for `s3` storage type), please disable # it. To disable redirects, simply set `disableredirect` to `true` instead. # Refer to # https://github.com/distribution/distribution/blob/release/2.8/docs/configuration.md#redirect # for the detail. disableredirect: false # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate. # The secret must contain keys named "ca.crt" which will be injected into the trust store # of registry's containers. # caBundleSecretName: # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift", # "oss" and fill the information needed in the corresponding section. 
The type # must be "filesystem" if you want to use persistent volumes for registry type: filesystem filesystem: rootdirectory: /storage #maxthreads: 100 azure: accountname: accountname accountkey: base64encodedaccountkey container: containername #realm: core.windows.net # To use existing secret, the key must be AZURE_STORAGE_ACCESS_KEY existingSecret: "" gcs: bucket: bucketname # The base64 encoded json file which contains the key encodedkey: base64-encoded-json-key-file #rootdirectory: /gcs/object/name/prefix #chunksize: "5242880" # To use existing secret, the key must be GCS_KEY_DATA existingSecret: "" useWorkloadIdentity: false s3: # Set an existing secret for S3 accesskey and secretkey # keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry #existingSecret: "" region: us-west-1 bucket: bucketname #accesskey: awsaccesskey #secretkey: awssecretkey #regionendpoint: http://myobjects.local #encrypt: false #keyid: mykeyid #secure: true #skipverify: false #v4auth: true #chunksize: "5242880" #rootdirectory: /s3/object/name/prefix #storageclass: STANDARD #multipartcopychunksize: "33554432" #multipartcopymaxconcurrency: 100 #multipartcopythresholdsize: "33554432" swift: authurl: https://storage.myprovider.com/v3/auth username: username password: password container: containername # keys in existing secret must be REGISTRY_STORAGE_SWIFT_PASSWORD, REGISTRY_STORAGE_SWIFT_SECRETKEY, REGISTRY_STORAGE_SWIFT_ACCESSKEY existingSecret: "" #region: fr #tenant: tenantname #tenantid: tenantid #domain: domainname #domainid: domainid #trustid: trustid #insecureskipverify: false #chunksize: 5M #prefix: #secretkey: secretkey #accesskey: accesskey #authversion: 3 #endpointtype: public #tempurlcontainerkey: false #tempurlmethods: oss: accesskeyid: accesskeyid accesskeysecret: accesskeysecret region: regionname bucket: bucketname # key in existingSecret must be REGISTRY_STORAGE_OSS_ACCESSKEYSECRET existingSecret: "" #endpoint: endpoint #internal: false #encrypt: false #secure: true #chunksize: 10M #rootdirectory: rootdirectory # The initial password of Harbor admin. Change it from portal after launching Harbor # or give an existing secret for it # key in secret is given via (default to HARBOR_ADMIN_PASSWORD) # existingSecretAdminPassword: existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD harborAdminPassword: "Harbor12345" # The internal TLS used for harbor components secure communicating. In order to enable https # in each component tls cert files need to provided in advance. 
internalTLS: # If internal TLS enabled enabled: false # enable strong ssl ciphers (default: false) strong_ssl_ciphers: false # There are three ways to provide tls # 1) "auto" will generate cert automatically # 2) "manual" need provide cert file manually in following value # 3) "secret" internal certificates from secret certSource: "auto" # The content of trust ca, only available when `certSource` is "manual" trustCa: "" # core related cert configuration core: # secret name for core's tls certs secretName: "" # Content of core's TLS cert file, only available when `certSource` is "manual" crt: "" # Content of core's TLS key file, only available when `certSource` is "manual" key: "" # jobservice related cert configuration jobservice: # secret name for jobservice's tls certs secretName: "" # Content of jobservice's TLS key file, only available when `certSource` is "manual" crt: "" # Content of jobservice's TLS key file, only available when `certSource` is "manual" key: "" # registry related cert configuration registry: # secret name for registry's tls certs secretName: "" # Content of registry's TLS key file, only available when `certSource` is "manual" crt: "" # Content of registry's TLS key file, only available when `certSource` is "manual" key: "" # portal related cert configuration portal: # secret name for portal's tls certs secretName: "" # Content of portal's TLS key file, only available when `certSource` is "manual" crt: "" # Content of portal's TLS key file, only available when `certSource` is "manual" key: "" # trivy related cert configuration trivy: # secret name for trivy's tls certs secretName: "" # Content of trivy's TLS key file, only available when `certSource` is "manual" crt: "" # Content of trivy's TLS key file, only available when `certSource` is "manual" key: "" ipFamily: # ipv6Enabled set to true if ipv6 is enabled in cluster, currently it affected the nginx related component ipv6: enabled: true # ipv4Enabled set to true if ipv4 is enabled in cluster, currently it affected the nginx related component ipv4: enabled: true imagePullPolicy: IfNotPresent # Use this set to assign a list of default pullSecrets imagePullSecrets: # - name: docker-registry-secret # - name: internal-registry-secret # The update strategy for deployments with persistent volumes(jobservice, registry): "RollingUpdate" or "Recreate" # Set it as "Recreate" when "RWM" for volumes isn't supported updateStrategy: type: RollingUpdate # debug, info, warning, error or fatal logLevel: info # The name of the secret which contains key named "ca.crt". Setting this enables the # download link on portal to download the CA certificate when the certificate isn't # generated automatically caSecretName: "" # The secret key used for encryption. Must be a string of 16 chars. 
secretKey: "not-a-secure-key" # If using existingSecretSecretKey, the key must be secretKey existingSecretSecretKey: "" # The proxy settings for updating trivy vulnerabilities from the Internet and replicating # artifacts from/to the registries that cannot be reached directly proxy: httpProxy: httpsProxy: noProxy: 127.0.0.1,localhost,.local,.internal components: - core - jobservice - trivy # Run the migration job via helm hook enableMigrateHelmHook: false # The custom ca bundle secret, the secret must contain key named "ca.crt" # which will be injected into the trust store for core, jobservice, registry, trivy components # caBundleSecretName: "" ## UAA Authentication Options # If you're using UAA for authentication behind a self-signed # certificate you will need to provide the CA Cert. # Set uaaSecretName below to provide a pre-created secret that # contains a base64 encoded CA Certificate named `ca.crt`. # uaaSecretName: metrics: enabled: false core: path: /metrics port: 8001 registry: path: /metrics port: 8001 jobservice: path: /metrics port: 8001 exporter: path: /metrics port: 8001 ## Create prometheus serviceMonitor to scrape harbor metrics. ## This requires the monitoring.coreos.com/v1 CRD. Please see ## https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md ## serviceMonitor: enabled: false additionalLabels: {} # Scrape interval. If not set, the Prometheus default scrape interval is used. interval: "" # Metric relabel configs to apply to samples before ingestion. metricRelabelings: [] # - action: keep # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+' # sourceLabels: [__name__] # Relabel configs to apply to samples before ingestion. relabelings: [] # - sourceLabels: [__meta_kubernetes_pod_node_name] # separator: ; # regex: ^(.*)$ # targetLabel: nodename # replacement: $1 # action: replace trace: enabled: false # trace provider: jaeger or otel # jaeger should be 1.26+ provider: jaeger # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth sample_rate: 1 # namespace used to differentiate different harbor services # namespace: # attributes is a key value dict contains user defined attributes used to initialize trace provider # attributes: # application: harbor jaeger: # jaeger supports two modes: # collector mode(uncomment endpoint and uncomment username, password if needed) # agent mode(uncomment agent_host and agent_port) endpoint: http://hostname:14268/api/traces # username: # password: # agent_host: hostname # export trace data by jaeger.thrift in compact mode # agent_port: 6831 otel: endpoint: hostname:4318 url_path: /v1/traces compression: false insecure: true # timeout is in seconds timeout: 10 # cache layer configurations # if this feature enabled, harbor will cache the resource # `project/project_metadata/repository/artifact/manifest` in the redis # which help to improve the performance of high concurrent pulling manifest. cache: # default is not enabled. enabled: false # default keep cache for one day. 
expireHours: 24 ## set Container Security Context to comply with PSP restricted policy if necessary ## each of the conatiner will apply the same security context ## containerSecurityContext:{} is initially an empty yaml that you could edit it on demand, we just filled with a common template for convenience containerSecurityContext: privileged: false allowPrivilegeEscalation: false seccompProfile: type: RuntimeDefault runAsNonRoot: true capabilities: drop: - ALL # If service exposed via "ingress", the Nginx will not be used nginx: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/nginx-photon tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## The priority class to run the pod as priorityClassName: portal: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-portal tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## Additional service annotations serviceAnnotations: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] core: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-core tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 ## Startup probe values startupProbe: enabled: true initialDelaySeconds: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## Additional service annotations serviceAnnotations: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. 
initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] ## User settings configuration json string configureUserSettings: # The provider for updating project quota(usage), there are 2 options, redis or db. # By default it is implemented by db but you can configure it to redis which # can improve the performance of high concurrent pushing to the same project, # and reduce the database connections spike and occupies. # Using redis will bring up some delay for quota usage updation for display, so only # suggest switch provider to redis if you were ran into the db connections spike around # the scenario of high concurrent pushing to same project, no improvment for other scenes. quotaUpdateProvider: db # Or redis # Secret is used when core server communicates with other components. # If a secret key is not specified, Helm will generate one. Alternatively set existingSecret to use an existing secret # Must be a string of 16 chars. secret: "" # Fill in the name of a kubernetes secret if you want to use your own # If using existingSecret, the key must be secret existingSecret: "" # Fill the name of a kubernetes secret if you want to use your own # TLS certificate and private key for token encryption/decryption. # The secret must contain keys named: # "tls.key" - the private key # "tls.crt" - the certificate secretName: "" # If not specifying a preexisting secret, a secret can be created from tokenKey and tokenCert and used instead. # If none of secretName, tokenKey, and tokenCert are specified, an ephemeral key and certificate will be autogenerated. # tokenKey and tokenCert must BOTH be set or BOTH unset. # The tokenKey value is formatted as a multiline string containing a PEM-encoded RSA key, indented one more than tokenKey on the following line. tokenKey: | # If tokenKey is set, the value of tokenCert must be set as a PEM-encoded certificate signed by tokenKey, and supplied as a multiline string, indented one more than tokenCert on the following line. tokenCert: | # The XSRF key. Will be generated automatically if it isn't specified # While you specified, Please make sure it is 32 characters, otherwise would have validation issue at the harbor-core runtime # https://github.com/goharbor/harbor/pull/21154 xsrfKey: "" # If using existingSecret, the key is defined by core.existingXsrfSecretKey existingXsrfSecret: "" # If using existingSecret, the key existingXsrfSecretKey: CSRF_KEY # The time duration for async update artifact pull_time and repository # pull_count, the unit is second. Will be 10 seconds if it isn't set. # eg. 
artifactPullAsyncFlushDuration: 10 artifactPullAsyncFlushDuration: gdpr: deleteUser: false auditLogsCompliant: false jobservice: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-jobservice tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] maxJobWorkers: 10 # The logger for jobs: "file", "database" or "stdout" jobLoggers: - file # - database # - stdout # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`) loggerSweeperDuration: 14 #days notification: webhook_job_max_retry: 3 webhook_job_http_client_timeout: 3 # in seconds reaper: # the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24 max_update_hours: 24 # the max time for execution in running state without new task created max_dangling_hours: 168 # Secret is used when job service communicates with other components. # If a secret key is not specified, Helm will generate one. # Must be a string of 16 chars. secret: "" # Use an existing secret resource existingSecret: "" # Key within the existing secret for the job service secret existingSecretKey: JOBSERVICE_SECRET registry: registry: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/registry-photon tag: v2.13.0 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] controller: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-registryctl tag: v2.13.0 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] # Secret is used to secure the upload state from client # and registry storage backend. # See: https://github.com/distribution/distribution/blob/release/2.8/docs/configuration.md#http # If a secret key is not specified, Helm will generate one. # Must be a string of 16 chars. 
secret: "" # Use an existing secret resource existingSecret: "" # Key within the existing secret for the registry service secret existingSecretKey: REGISTRY_HTTP_SECRET # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. relativeurls: false credentials: username: "harbor_registry_user" password: "harbor_registry_password" # If using existingSecret, the key must be REGISTRY_PASSWD and REGISTRY_HTPASSWD existingSecret: "" # Login and password in htpasswd string format. Excludes `registry.credentials.username` and `registry.credentials.password`. May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt. # htpasswdString: $apr1$XLefHzeG$Xl4.s00sMSCCcMyJljSZb0 # example string htpasswdString: "" middleware: enabled: false type: cloudFront cloudFront: baseurl: example.cloudfront.net keypairid: KEYPAIRID duration: 3000s ipfilteredby: none # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key # that allows access to CloudFront privateKeySecret: "my-secret" # enable purge _upload directories upload_purging: enabled: true # remove files in _upload directories which exist for a period of time, default is one week. age: 168h # the interval of the purge operations interval: 24h dryrun: false trivy: # enabled the flag to enable Trivy scanner enabled: true image: # repository the repository for Trivy adapter image repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/trivy-adapter-photon # tag the tag for Trivy adapter image tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false # replicas the number of Pod replicas replicas: 1 resources: requests: cpu: 200m memory: 512Mi limits: cpu: 1 memory: 1Gi extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] # debugMode the flag to enable Trivy debug mode with more verbose scanning log debugMode: false # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`. vulnType: "os,library" # severity a comma-separated list of severities to be checked severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL" # ignoreUnfixed the flag to display only fixed vulnerabilities ignoreUnfixed: false # insecure the flag to skip verifying registry certificate insecure: false # gitHubToken the GitHub access token to download Trivy DB # # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases. # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). 
In addition, the database contains the update # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one. # Currently, the database is updated every 12 hours and published as a new release to GitHub. # # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000 # requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult # https://developer.github.com/v3/#rate-limiting # # You can create a GitHub token by following the instructions in # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line gitHubToken: "" # skipUpdate the flag to disable Trivy DB downloads from GitHub # # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues. # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the # `/home/scanner/.cache/trivy/db/trivy.db` path. skipUpdate: false # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path # skipJavaDBUpdate: false # The offlineScan option prevents Trivy from sending API requests to identify dependencies. # # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it. # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode. # It would work if all the dependencies are in local. # This option doesn’t affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment. offlineScan: false # Comma-separated list of what security issues to detect. Defaults to `vuln`. securityCheck: "vuln" # The duration to wait for scan completion timeout: 5m0s database: # if external database is used, set "type" to "external" # and fill the connection information in "external" section type: internal internal: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-db tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false # resources: # requests: # memory: 256Mi # cpu: 100m # The timeout used in livenessProbe; 1 to 5 seconds livenessProbe: timeoutSeconds: 1 # The timeout used in readinessProbe; 1 to 5 seconds readinessProbe: timeoutSeconds: 1 extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. 
extrInitContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] # The initial superuser password for internal database password: "changeit" # The size limit for Shared memory, pgSQL use it for shared_buffer # More details see: # https://github.com/goharbor/harbor/issues/15034 shmSizeLimit: 512Mi initContainer: migrator: {} # resources: # requests: # memory: 128Mi # cpu: 100m permissions: {} # resources: # requests: # memory: 128Mi # cpu: 100m external: host: "192.168.0.1" port: "5432" username: "user" password: "password" coreDatabase: "registry" # if using existing secret, the key must be "password" existingSecret: "" # "disable" - No SSL # "require" - Always SSL (skip verification) # "verify-ca" - Always SSL (verify that the certificate presented by the # server was signed by a trusted CA) # "verify-full" - Always SSL (verify that the certification presented by the # server was signed by a trusted CA and the server host name matches the one # in the certificate) sslmode: "disable" # The maximum number of connections in the idle connection pool per pod (core+exporter). # If it <=0, no idle connections are retained. maxIdleConns: 100 # The maximum number of open connections to the database per pod (core+exporter). # If it <= 0, then there is no limit on the number of open connections. # Note: the default number of connections is 1024 for harbor's postgres. maxOpenConns: 900 ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} redis: # if external Redis is used, set "type" to "external" # and fill the connection information in "external" section type: internal internal: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/redis-photon tag: v2.13.0 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] # # jobserviceDatabaseIndex defaults to "1" # # registryDatabaseIndex defaults to "2" # # trivyAdapterIndex defaults to "5" # # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional # # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optional jobserviceDatabaseIndex: "1" registryDatabaseIndex: "2" trivyAdapterIndex: "5" # harborDatabaseIndex: "6" # cacheLayerDatabaseIndex: "7" external: # support redis, redis+sentinel # addr for redis: <host_redis>:<port_redis> # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3> addr: "192.168.0.2:6379" # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel sentinelMasterSet: "" # TLS configuration for redis connection # only server-authentication is supported, mTLS for redis connection is not supported # tls connection will be disable by default # Once `tlsOptions.enable` set as true, tls/ssl connection will be used for redis # Please set the `caBundleSecretName` in this configuration file which conatins redis server rootCA if it is self-signed. 
# The secret must contain keys named "ca.crt" which will be injected into the trust store tlsOptions: enable: false # The "coreDatabaseIndex" must be "0" as the library Harbor # used doesn't support configuring it # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optional coreDatabaseIndex: "0" jobserviceDatabaseIndex: "1" registryDatabaseIndex: "2" trivyAdapterIndex: "5" # harborDatabaseIndex: "6" # cacheLayerDatabaseIndex: "7" # username field can be an empty string, and it will be authenticated against the default user username: "" password: "" # If using existingSecret, the key must be REDIS_PASSWORD existingSecret: "" ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} exporter: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-exporter tag: v2.13.0 serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] podAnnotations: {} ## Additional deployment labels podLabels: {} nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] ## The priority class to run the pod as priorityClassName: # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule cacheDuration: 23 cacheCleanInterval: 14400
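With the values above saved to a file, the release can be upgraded in place. A minimal sketch, assuming the chart is unpacked at `./harbor` and both the release and the namespace are called `harbor` (adjust to your environment):

```bash
# Apply the customised values to the existing release (or install it if absent)
helm upgrade --install harbor ./harbor \
  -n harbor --create-namespace \
  -f values.yaml

# Watch the components roll out
kubectl -n harbor get pods -w
```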
2025-07-26
Deploying the latest Harbor on k8s
一、创建资源root@k8s-master-01:~# mkdir harbor root@k8s-master-01:~# cd harbor/ root@k8s-master-01:~/harbor# ls root@k8s-master-01:~/harbor# helm repo add harbor https://helm.goharbor.io "harbor" has been added to your repositories root@k8s-master-01:~/harbor# helm repo list NAME URL nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner harbor https://helm.goharbor.io root@k8s-master-01:~/harbor# helm pull harbor/harbor root@k8s-master-01:~/harbor# ls harbor-1.17.1.tgz root@k8s-master-01:~/harbor# tar -zxvf harbor-1.17.1.tgz root@k8s-master-01:~/harbor# ls harbor harbor-1.17.1.tgzkubectl create ns harbor二、渲染&修改yaml文件helm template my-harbor ./test.yaml--- # Source: harbor/templates/core/core-secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: secretKey: "bm90LWEtc2VjdXJlLWtleQ==" secret: "Z3NncnBQWURsQ01hUjZlWg==" tls.key: "LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMWdRMlJMbHJ6bUQzM0ZkT2RvanNHU1hUaVNhSXhkam1maW9VWmJVNDIzSFB2bnRwCjBuRi9Vcm40Vlk4SWtSZU4vbGFjQVBCVm1BQys0czBCYkxXSytrbC9wVUxlMFZHTUI0UElCMFF5YW5hNW9wQm0KY25sb3pEcVVOTTVpMlBVMnFXUEFHWG1BenBhTmpuZlBycnlRS0ZCY0ZkOTM4NW9GSmJCdlhmZ00ycS9pU09MOQp1dzdBUDd1UUFhQ0xjMzYvSlFtb1N0ZE9tNWZtYU9idDZ5ZW9JaU50ZXNUaXF1eVN3Y0k5RFZqdXk4SDhuamp5CnNOWTdWZlBwaXhSemw0K1EzZVgxNU5Lb0NOWm4yL0lNb2x4aUk3Q3FnQTkzSU1aSkFvVy9zMnhqSVp4aDFVZ3oKRW93YkNsOXdjT1ZRR2g4eUV5a0MzRTlKQjZrTjhkY2pKT3N2VlFJREFRQUJBb0lCQVFDWEN2LzEvdHNjRzRteQo0NWRIeHhqQ0l0VXBqWjJuN0kyMzZ5RGNLMHRHYlF1T1J2R0hpWHl2dVBxUC85T3UrdTNHMi85Y0ZrS0NkYnhDCnV5YlBQMDBubWFuUnkrRVAzN3F4THd1RVBWaExsU0VzbnpiK2diczVyL29iVHJHcXAxMTlyUjNObk5nUWRXYlEKYnJTUGdSdElxSFpsSllNMTFMVGZSYWREcmFYOHpDeG1ZSkwxc3JtMHBGNXk4Q1E0SG1Ka0ZtMG0rVlMxSFk4NAptazR3OGE3SGpoZWZJSE15Z3dUREluREZ0VVFlaDNyTkRYMzI3cmw5c0NBV1NMZnpaZDBuUGc2K2lZVTE0amxmCnA1Mi8yd1E2Y3lMSFY4am1QTDNoRUVIWXJlNVp6cFEzM0sybVFWVVpTR2c2SFRIOGszdWxQS2c1UXRJTmc1K3MKR3A0aS9EREJBb0dCQVBWMVVNSjFWZ3RKTENYM1pRQVArZTBVbEhLa2dWNnZhS2R3d2tpU0tjbTBWa1g0NUlIUwpFQ3ZhaTEzWHpsNkQ0S0tlbHFRUHY2VHc0TXBRcE4xSGwwd25PVnBDTW01dGc5bmF1WDhEdlNpcGJQaVE4anFMCnhSTzhaa2NJaW15ZXdXYVUxRFVqQ281Qk5DMWd4bUV4RnFjdjVVMlVQM1RzWllmQVFTMm1NWWR4QW9HQkFOODEKTmRsRnFDWnZZT0NYMm1ud3pxZzlyQ1Y4YWFHMnQ1WjVKUkZTL2gvL2dyZFZNd25VWHZ6cXVsRmM4dHM1Q0Y2eAoxSDJwVTJFTnpJYjBCM0dKNTBCZnJBUUxwdjdXRThjaFBzUTZpRW0yeTlmLzVIeThTdVJWblRCU295NktpRk13Ci9ycTE1blBXVVJzZkF6aFYvUVgvMCtvSFFxdFVkNnY4bFVtMWRsd2xBb0dBQlhmYm1MbHNkVXZvQStDREM0RlAKbkF4OVVpQ0FFVS92RU92ZUtDZTVicGpwNHgwc1dnZ0gvREllTUxVQ0QvRDRMQ2RFUzl0ZDlacTRKMG1zb3BGWgp1WVNXTG9DVEJ3ckJpVFRxTlA0c1ZKK1JvZWY0dlgwbm9zenJxbUZ5VkFFbFpkZWk4cHdaUEJvUHc0TUlhRm5qCm0wM2gyZHlYblU4MjQ5TlFvR2UzYXNFQ2dZQjdrdDcwSWg5YzRBN1hhTnJRQ2pTdmFpMXpOM1RYeGV2UUQ5UFkKeW9UTXZFM25KL0V3d1BXeHVsWmFrMFlVM25kbXpiY2h0dXZsY0psS0liSTVScXJUdGVQcS9YUi80NDloa0dOSwppa2xIM2o3dW44b2swSzM1eWZoVGQzekdXSVh1NE5JMkZseTJ4dkZ5UFhJdjcxTTh6Z3pKcFNsZzUwdTEyUW5oCm0rZ2lUUUtCZ0R3dHBnMjNkWEh2WFBtOEt4ZmhseElGOVB3SkF4TnF2SDNNT0NRMnJCYmE2cWhseWdSa2UrSVMKeXFUWFlmSjVISFdtYTZDZ1ZMOG9nSnA2emZiS2FvOFg5VSsyOUNuaUpvK2RsM1VtTVR6VDFXcFdlRS9xWTk2LwoySVFVQVR0U1EyU2Z6YUFaQm5JL29EK1FlcTBlOEszUXZnZlc3S2Y0bzdsSnlJUGRKS2NCCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==" tls.crt: 
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJRENDQWdpZ0F3SUJBZ0lSQUpMU3p5cTVFc05aQzV6TDZURmhpWFF3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQXhNUGFHRnlZbTl5TFhSdmEyVnVMV05oTUI0WERUSTFNRGN5TmpFME1UUXhNbG9YRFRJMgpNRGN5TmpFME1UUXhNbG93R2pFWU1CWUdBMVVFQXhNUGFHRnlZbTl5TFhSdmEyVnVMV05oTUlJQklqQU5CZ2txCmhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBMWdRMlJMbHJ6bUQzM0ZkT2RvanNHU1hUaVNhSXhkam0KZmlvVVpiVTQyM0hQdm50cDBuRi9Vcm40Vlk4SWtSZU4vbGFjQVBCVm1BQys0czBCYkxXSytrbC9wVUxlMFZHTQpCNFBJQjBReWFuYTVvcEJtY25sb3pEcVVOTTVpMlBVMnFXUEFHWG1BenBhTmpuZlBycnlRS0ZCY0ZkOTM4NW9GCkpiQnZYZmdNMnEvaVNPTDl1dzdBUDd1UUFhQ0xjMzYvSlFtb1N0ZE9tNWZtYU9idDZ5ZW9JaU50ZXNUaXF1eVMKd2NJOURWanV5OEg4bmpqeXNOWTdWZlBwaXhSemw0K1EzZVgxNU5Lb0NOWm4yL0lNb2x4aUk3Q3FnQTkzSU1aSgpBb1cvczJ4aklaeGgxVWd6RW93YkNsOXdjT1ZRR2g4eUV5a0MzRTlKQjZrTjhkY2pKT3N2VlFJREFRQUJvMkV3Clh6QU9CZ05WSFE4QkFmOEVCQU1DQXFRd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRkJ3TUMKTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkhJcFJZZklNZkd5NkV1cU9iVSs0bGdhYVJOSApNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUJDK3plNi9yUjQ3WmJZUGN4dU00L2FXdUtkeWZpMUhhLzlGNitGCk4rMmxKT1JOTjRYeS9KT2VQWmE0NlQxYzd2OFFNNFpVMHhBbnlqdXdRN1E5WE1GcmlucEF6NVRYV1ZZcW44dWIKNlNBd2YrT01qVTVhWGM3VVZtdzJoMzdBc0svTFRYOGo3NmtMUXVyVGVNOHdFTjBDbXI5NlF2Rnk5d2d2ZThlTgpiZldac1A3c0FwYklBeDdOVmc3RUl2czQxdDgvY2dxQWVvaGpYa1UyQVZkdFNLbzh0TFU5ZzNVdTdUNGtWWXpLCm9IQzJiQ3lpbkRPRFZIOHVtcHBCbVRubEg0TDRTR0lnSGFWcVZoOUhoOGFjTmgvbE1ST0dtU1RpVC9DNDRkUS8KcXlZWWxrcXQ3Ymk0dkhjaXZJZXBzckIvdkZ1ZXlQSzZGbGNTMjEvN3BVS0FUOU1YCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K" HARBOR_ADMIN_PASSWORD: "SGFyYm9yMTIzNDU=" POSTGRESQL_PASSWORD: "Y2hhbmdlaXQ=" REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk" CSRF_KEY: "dXRzODI2MDNCZGJyVWFBRlhudFNqaWpSbUZPVUR5Mmg=" --- # Source: harbor/templates/database/database-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-database" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: POSTGRES_PASSWORD: "Y2hhbmdlaXQ=" --- # Source: harbor/templates/jobservice/jobservice-secrets.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: JOBSERVICE_SECRET: "Z3M3T2k4dTBTUk1GRzVlTg==" REGISTRY_CREDENTIAL_PASSWORD: "aGFyYm9yX3JlZ2lzdHJ5X3Bhc3N3b3Jk" --- # Source: harbor/templates/registry/registry-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: REGISTRY_HTTP_SECRET: "U0FnY21vVFNQVjlIdnlTRw==" REGISTRY_REDIS_PASSWORD: "" --- # Source: harbor/templates/registry/registry-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-registry-htpasswd" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: REGISTRY_HTPASSWD: "aGFyYm9yX3JlZ2lzdHJ5X3VzZXI6JDJhJDEwJGNJZTk1ek9aQnp4RG1aMTQzem1HUmVWUkFmc0VMazZ0azNObWZXZmZjYkZtOU1ja1lrbnYy" --- # Source: harbor/templates/registry/registryctl-secret.yaml apiVersion: v1 kind: Secret metadata: name: "release-name-harbor-registryctl" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: --- # Source: harbor/templates/trivy/trivy-secret.yaml apiVersion: v1 kind: Secret metadata: name: release-name-harbor-trivy namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" type: Opaque data: redisURL: cmVkaXM6Ly9yZWxlYXNlLW5hbWUtaGFyYm9yLXJlZGlzOjYzNzkvNT9pZGxlX3RpbWVvdXRfc2Vjb25kcz0zMA== gitHubToken: "" --- # Source: harbor/templates/core/core-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: app.conf: |+ appname = Harbor runmode = prod enablegzip = true [prod] httpport = 8080 PORT: "8080" DATABASE_TYPE: "postgresql" POSTGRESQL_HOST: "release-name-harbor-database" POSTGRESQL_PORT: "5432" POSTGRESQL_USERNAME: "postgres" POSTGRESQL_DATABASE: "registry" POSTGRESQL_SSLMODE: "disable" POSTGRESQL_MAX_IDLE_CONNS: "100" POSTGRESQL_MAX_OPEN_CONNS: "900" EXT_ENDPOINT: "http://192.168.3.160:30002" CORE_URL: "http://release-name-harbor-core:80" JOBSERVICE_URL: "http://release-name-harbor-jobservice" REGISTRY_URL: "http://release-name-harbor-registry:5000" TOKEN_SERVICE_URL: "http://release-name-harbor-core:80/service/token" CORE_LOCAL_URL: "http://127.0.0.1:8080" WITH_TRIVY: "true" TRIVY_ADAPTER_URL: "http://release-name-harbor-trivy:8080" REGISTRY_STORAGE_PROVIDER_NAME: "filesystem" LOG_LEVEL: "info" CONFIG_PATH: "/etc/core/app.conf" CHART_CACHE_DRIVER: "redis" _REDIS_URL_CORE: "redis://release-name-harbor-redis:6379/0?idle_timeout_seconds=30" _REDIS_URL_REG: "redis://release-name-harbor-redis:6379/2?idle_timeout_seconds=30" PORTAL_URL: "http://release-name-harbor-portal" REGISTRY_CONTROLLER_URL: "http://release-name-harbor-registry:8080" REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user" HTTP_PROXY: "" HTTPS_PROXY: "" NO_PROXY: "release-name-harbor-core,release-name-harbor-jobservice,release-name-harbor-database,release-name-harbor-registry,release-name-harbor-portal,release-name-harbor-trivy,release-name-harbor-exporter,127.0.0.1,localhost,.local,.internal" PERMITTED_REGISTRY_TYPES_FOR_PROXY_CACHE: "docker-hub,harbor,azure-acr,aws-ecr,google-gcr,quay,docker-registry,github-ghcr,jfrog-artifactory" QUOTA_UPDATE_PROVIDER: "db" --- # Source: harbor/templates/jobservice/jobservice-cm-env.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-jobservice-env" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name 
app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: CORE_URL: "http://release-name-harbor-core:80" TOKEN_SERVICE_URL: "http://release-name-harbor-core:80/service/token" REGISTRY_URL: "http://release-name-harbor-registry:5000" REGISTRY_CONTROLLER_URL: "http://release-name-harbor-registry:8080" REGISTRY_CREDENTIAL_USERNAME: "harbor_registry_user" JOBSERVICE_WEBHOOK_JOB_MAX_RETRY: "3" JOBSERVICE_WEBHOOK_JOB_HTTP_CLIENT_TIMEOUT: "3" LOG_LEVEL: "info" HTTP_PROXY: "" HTTPS_PROXY: "" NO_PROXY: "release-name-harbor-core,release-name-harbor-jobservice,release-name-harbor-database,release-name-harbor-registry,release-name-harbor-portal,release-name-harbor-trivy,release-name-harbor-exporter,127.0.0.1,localhost,.local,.internal" --- # Source: harbor/templates/jobservice/jobservice-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: config.yml: |+ #Server listening port protocol: "http" port: 8080 worker_pool: workers: 10 backend: "redis" redis_pool: redis_url: "redis://release-name-harbor-redis:6379/1" namespace: "harbor_job_service_namespace" idle_timeout_second: 3600 job_loggers: - name: "FILE" level: INFO settings: # Customized settings of logger base_dir: "/var/log/jobs" sweeper: duration: 14 #days settings: # Customized settings of sweeper work_dir: "/var/log/jobs" metric: enabled: false path: /metrics port: 8001 #Loggers for the job service loggers: - name: "STD_OUTPUT" level: INFO reaper: # the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24 max_update_hours: 24 # the max time for execution in running state without new task created max_dangling_hours: 168 --- # Source: harbor/templates/nginx/configmap-http.yaml apiVersion: v1 kind: ConfigMap metadata: name: release-name-harbor-nginx namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: nginx.conf: |+ worker_processes auto; pid /tmp/nginx.pid; events { worker_connections 3096; use epoll; multi_accept on; } http { client_body_temp_path /tmp/client_body_temp; proxy_temp_path /tmp/proxy_temp; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; tcp_nodelay on; # this is necessary for us to be able to disable request buffering in all cases proxy_http_version 1.1; upstream core { server "release-name-harbor-core:80"; } upstream portal { server release-name-harbor-portal:80; } log_format timed_combined '[$time_local]:$remote_addr - ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent" ' '$request_time $upstream_response_time $pipe'; access_log /dev/stdout timed_combined; map $http_x_forwarded_proto $x_forwarded_proto { default $http_x_forwarded_proto; "" $scheme; } server { listen 8080; listen [::]:8080; server_tokens off; # disable any limits to avoid HTTP 413 for large image uploads client_max_body_size 0; # Add extra headers add_header X-Frame-Options 
DENY; add_header Content-Security-Policy "frame-ancestors 'none'"; location / { proxy_pass http://portal/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /api/ { proxy_pass http://core/api/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /c/ { proxy_pass http://core/c/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /v1/ { return 404; } location /v2/ { proxy_pass http://core/v2/; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; proxy_send_timeout 900; proxy_read_timeout 900; } location /service/ { proxy_pass http://core/service/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $x_forwarded_proto; proxy_buffering off; proxy_request_buffering off; } location /service/notifications { return 404; } } } --- # Source: harbor/templates/portal/configmap.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-portal" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: nginx.conf: |+ worker_processes auto; pid /tmp/nginx.pid; events { worker_connections 1024; } http { client_body_temp_path /tmp/client_body_temp; proxy_temp_path /tmp/proxy_temp; fastcgi_temp_path /tmp/fastcgi_temp; uwsgi_temp_path /tmp/uwsgi_temp; scgi_temp_path /tmp/scgi_temp; server { listen 8080; listen [::]:8080; server_name localhost; root /usr/share/nginx/html; index index.html index.htm; include /etc/nginx/mime.types; gzip on; gzip_min_length 1000; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript; location /devcenter-api-2.0 { try_files $uri $uri/ /swagger-ui-index.html; } location / { try_files $uri $uri/ /index.html; } location = /index.html { add_header Cache-Control "no-store, no-cache, must-revalidate"; } } } --- # Source: harbor/templates/registry/registry-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: config.yml: |+ version: 0.1 log: level: info fields: service: registry storage: filesystem: rootdirectory: /storage cache: layerinfo: redis maintenance: uploadpurging: enabled: true age: 168h interval: 24h dryrun: false delete: 
enabled: true redirect: disable: false redis: addr: release-name-harbor-redis:6379 db: 2 readtimeout: 10s writetimeout: 10s dialtimeout: 10s enableTLS: false pool: maxidle: 100 maxactive: 500 idletimeout: 60s http: addr: :5000 relativeurls: false # set via environment variable # secret: placeholder debug: addr: localhost:5001 auth: htpasswd: realm: harbor-registry-basic-realm path: /etc/registry/passwd validation: disabled: true compatibility: schema1: enabled: true ctl-config.yml: |+ --- protocol: "http" port: 8080 log_level: info registry_config: "/etc/registry/config.yml" --- # Source: harbor/templates/registry/registryctl-cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: "release-name-harbor-registryctl" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" data: --- # Source: harbor/templates/jobservice/jobservice-pvc.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: release-name-harbor-jobservice namespace: "harbor" annotations: helm.sh/resource-policy: keep labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: jobservice app.kubernetes.io/component: jobservice spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi storageClassName: nfs-sc --- # Source: harbor/templates/registry/registry-pvc.yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: release-name-harbor-registry namespace: "harbor" annotations: helm.sh/resource-policy: keep labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: registry app.kubernetes.io/component: registry spec: accessModes: - ReadWriteMany resources: requests: storage: 5Gi storageClassName: nfs-sc --- # Source: harbor/templates/core/core-svc.yaml apiVersion: v1 kind: Service metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-web port: 80 targetPort: 8080 selector: release: release-name app: "harbor" component: core --- # Source: harbor/templates/database/database-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-database" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - port: 5432 selector: release: release-name app: "harbor" component: database --- # Source: harbor/templates/jobservice/jobservice-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor 
app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-jobservice port: 80 targetPort: 8080 selector: release: release-name app: "harbor" component: jobservice --- # Source: harbor/templates/nginx/service.yaml apiVersion: v1 kind: Service metadata: name: harbor labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: type: NodePort ports: - name: http port: 80 targetPort: 8080 nodePort: 30002 selector: release: release-name app: "harbor" component: nginx --- # Source: harbor/templates/portal/service.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-portal" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - port: 80 targetPort: 8080 selector: release: release-name app: "harbor" component: portal --- # Source: harbor/templates/redis/service.yaml apiVersion: v1 kind: Service metadata: name: release-name-harbor-redis namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - port: 6379 selector: release: release-name app: "harbor" component: redis --- # Source: harbor/templates/registry/registry-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-registry port: 5000 - name: http-controller port: 8080 selector: release: release-name app: "harbor" component: registry --- # Source: harbor/templates/trivy/trivy-svc.yaml apiVersion: v1 kind: Service metadata: name: "release-name-harbor-trivy" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" spec: ports: - name: http-trivy protocol: TCP port: 8080 selector: release: release-name app: "harbor" component: trivy --- # Source: harbor/templates/core/core-dpl.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-harbor-core namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: core app.kubernetes.io/component: core spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: release: release-name app: "harbor" component: core template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm 
app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: core app.kubernetes.io/component: core annotations: checksum/configmap: bf9940a91f31ccd8db1c6b0aa6a6cdbd27483d0ef0400b0478b766dba1e8778f checksum/secret: 2d768cea8bf3f359707036a5230c42ab503ee35988f494ed8e36cf09fbd7f04b checksum/secret-jobservice: c3bcac00f13ee5b6d0346fbcbabe8a495318a0b4f860f1d0d594652e0c3cfcdf spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: core image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-core:v2.13.0 imagePullPolicy: IfNotPresent startupProbe: httpGet: path: /api/v2.0/ping scheme: HTTP port: 8080 failureThreshold: 360 initialDelaySeconds: 10 periodSeconds: 10 livenessProbe: httpGet: path: /api/v2.0/ping scheme: HTTP port: 8080 failureThreshold: 2 periodSeconds: 10 readinessProbe: httpGet: path: /api/v2.0/ping scheme: HTTP port: 8080 failureThreshold: 2 periodSeconds: 10 envFrom: - configMapRef: name: "release-name-harbor-core" - secretRef: name: "release-name-harbor-core" env: - name: CORE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-core key: secret - name: JOBSERVICE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-jobservice key: JOBSERVICE_SECRET securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault ports: - containerPort: 8080 volumeMounts: - name: config mountPath: /etc/core/app.conf subPath: app.conf - name: secret-key mountPath: /etc/core/key subPath: key - name: token-service-private-key mountPath: /etc/core/private_key.pem subPath: tls.key - name: psc mountPath: /etc/core/token volumes: - name: config configMap: name: release-name-harbor-core items: - key: app.conf path: app.conf - name: secret-key secret: secretName: release-name-harbor-core items: - key: secretKey path: key - name: token-service-private-key secret: secretName: release-name-harbor-core - name: psc emptyDir: {} --- # Source: harbor/templates/jobservice/jobservice-dpl.yaml apiVersion: apps/v1 kind: Deployment metadata: name: "release-name-harbor-jobservice" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: jobservice app.kubernetes.io/component: jobservice spec: replicas: 1 revisionHistoryLimit: 10 strategy: type: RollingUpdate selector: matchLabels: release: release-name app: "harbor" component: jobservice template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: jobservice app.kubernetes.io/component: jobservice annotations: checksum/configmap: 13935e266ee9ce33b7c4d1c769e78a85da9b0519505029a9b6098a497d9a1220 checksum/configmap-env: 9773f3ab781f37f25b82e63ed4e8cad53fbd5b6103a82128e4d793265eed1a1d checksum/secret: dc0d310c734ec96a18f24f108a124db429517b46f48067f10f45b653925286ea checksum/secret-core: 515f10ad6f291e46fa3c7e570e5fc27dd32089185e1306e0943e667ad083d334 spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: jobservice image: 
registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-jobservice:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /api/v1/stats scheme: HTTP port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: /api/v1/stats scheme: HTTP port: 8080 initialDelaySeconds: 20 periodSeconds: 10 env: - name: CORE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-core key: secret securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault envFrom: - configMapRef: name: "release-name-harbor-jobservice-env" - secretRef: name: "release-name-harbor-jobservice" ports: - containerPort: 8080 volumeMounts: - name: jobservice-config mountPath: /etc/jobservice/config.yml subPath: config.yml - name: job-logs mountPath: /var/log/jobs subPath: volumes: - name: jobservice-config configMap: name: "release-name-harbor-jobservice" - name: job-logs persistentVolumeClaim: claimName: release-name-harbor-jobservice --- # Source: harbor/templates/nginx/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: release-name-harbor-nginx namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: nginx app.kubernetes.io/component: nginx spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: release: release-name app: "harbor" component: nginx template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: nginx app.kubernetes.io/component: nginx annotations: checksum/configmap: a9da1570c68479a856aa8cba7fa5ca3cc7f57eb28fb7180ea8630e1e96fbbcb0 spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false containers: - name: nginx image: "registry.cn-guangzhou.aliyuncs.com/xingcangku/nginx-photon:v2.13.0" imagePullPolicy: "IfNotPresent" livenessProbe: httpGet: scheme: HTTP path: / port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: scheme: HTTP path: / port: 8080 initialDelaySeconds: 1 periodSeconds: 10 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault ports: - containerPort: 8080 volumeMounts: - name: config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf volumes: - name: config configMap: name: release-name-harbor-nginx --- # Source: harbor/templates/portal/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: "release-name-harbor-portal" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: portal app.kubernetes.io/component: portal spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: release: release-name app: "harbor" component: portal template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm 
app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: portal app.kubernetes.io/component: portal annotations: checksum/configmap: 92a534063aacac0294c6aefc269663fd4b65e2f3aabd23e05a7485cbb28cdc72 spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false containers: - name: portal image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-portal:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault livenessProbe: httpGet: path: / scheme: HTTP port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: / scheme: HTTP port: 8080 initialDelaySeconds: 1 periodSeconds: 10 ports: - containerPort: 8080 volumeMounts: - name: portal-config mountPath: /etc/nginx/nginx.conf subPath: nginx.conf volumes: - name: portal-config configMap: name: "release-name-harbor-portal" --- # Source: harbor/templates/registry/registry-dpl.yaml apiVersion: apps/v1 kind: Deployment metadata: name: "release-name-harbor-registry" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: registry app.kubernetes.io/component: registry spec: replicas: 1 revisionHistoryLimit: 10 strategy: type: RollingUpdate selector: matchLabels: release: release-name app: "harbor" component: registry template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: registry app.kubernetes.io/component: registry annotations: checksum/configmap: e7cdc1d01e8e65c3e6a380a112730c8030cc0634a401cc2eda63a84d2098d2d0 checksum/secret: 0c29e3fdc2300f19ecf0d055e2d25e75bd856adf5dfb4eda338c647d90bdfca0 checksum/secret-jobservice: 4dd5bbe6b81b66aa7b11a43a7470bf3f04cbc1650f51e2b5ace05c9dc2c81151 checksum/secret-core: 8d2c730b5b3fa7401c1f5e78b4eac13f3b02a968843ce8cb652d795e5e810692 spec: securityContext: runAsUser: 10000 fsGroup: 10000 fsGroupChangePolicy: OnRootMismatch automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: registry image: registry.cn-guangzhou.aliyuncs.com/xingcangku/registry-photon:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: / scheme: HTTP port: 5000 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: / scheme: HTTP port: 5000 initialDelaySeconds: 1 periodSeconds: 10 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault envFrom: - secretRef: name: "release-name-harbor-registry" env: ports: - containerPort: 5000 - containerPort: 5001 volumeMounts: - name: registry-data mountPath: /storage subPath: - name: registry-htpasswd mountPath: /etc/registry/passwd subPath: passwd - name: registry-config mountPath: /etc/registry/config.yml subPath: config.yml - name: registryctl image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-registryctl:v2.13.0 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /api/health scheme: HTTP port: 8080 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: httpGet: path: 
/api/health scheme: HTTP port: 8080 initialDelaySeconds: 1 periodSeconds: 10 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault envFrom: - configMapRef: name: "release-name-harbor-registryctl" - secretRef: name: "release-name-harbor-registry" - secretRef: name: "release-name-harbor-registryctl" env: - name: CORE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-core key: secret - name: JOBSERVICE_SECRET valueFrom: secretKeyRef: name: release-name-harbor-jobservice key: JOBSERVICE_SECRET ports: - containerPort: 8080 volumeMounts: - name: registry-data mountPath: /storage subPath: - name: registry-config mountPath: /etc/registry/config.yml subPath: config.yml - name: registry-config mountPath: /etc/registryctl/config.yml subPath: ctl-config.yml volumes: - name: registry-htpasswd secret: secretName: release-name-harbor-registry-htpasswd items: - key: REGISTRY_HTPASSWD path: passwd - name: registry-config configMap: name: "release-name-harbor-registry" - name: registry-data persistentVolumeClaim: claimName: release-name-harbor-registry --- # Source: harbor/templates/database/database-ss.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: "release-name-harbor-database" namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: database app.kubernetes.io/component: database spec: replicas: 1 serviceName: "release-name-harbor-database" selector: matchLabels: release: release-name app: "harbor" component: database template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: database app.kubernetes.io/component: database annotations: checksum/secret: 4ae67a99eb6eba38dcf86bbd000a763abf20cbb3cd0e2c11d2780167980b7c08 spec: securityContext: runAsUser: 999 fsGroup: 999 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 initContainers: # with "fsGroup" set, each time a volume is mounted, Kubernetes must recursively chown() and chmod() all the files and directories inside the volume # this causes the postgresql reports the "data directory /var/lib/postgresql/data/pgdata has group or world access" issue when using some CSIs e.g. 
Ceph # use this init container to correct the permission # as "fsGroup" applied before the init container running, the container has enough permission to execute the command - name: "data-permissions-ensurer" image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-db:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault command: ["/bin/sh"] args: ["-c", "chmod -R 700 /var/lib/postgresql/data/pgdata || true"] volumeMounts: - name: database-data mountPath: /var/lib/postgresql/data subPath: containers: - name: database image: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-db:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault livenessProbe: exec: command: - /docker-healthcheck.sh initialDelaySeconds: 300 periodSeconds: 10 timeoutSeconds: 1 readinessProbe: exec: command: - /docker-healthcheck.sh initialDelaySeconds: 1 periodSeconds: 10 timeoutSeconds: 1 envFrom: - secretRef: name: "release-name-harbor-database" env: # put the data into a sub directory to avoid the permission issue in k8s with restricted psp enabled # more detail refer to https://github.com/goharbor/harbor-helm/issues/756 - name: PGDATA value: "/var/lib/postgresql/data/pgdata" volumeMounts: - name: database-data mountPath: /var/lib/postgresql/data subPath: - name: shm-volume mountPath: /dev/shm volumes: - name: shm-volume emptyDir: medium: Memory sizeLimit: 512Mi volumeClaimTemplates: - metadata: name: "database-data" labels: heritage: Helm release: release-name chart: harbor app: "harbor" annotations: spec: accessModes: ["ReadWriteMany"] storageClassName: "nfs-sc" resources: requests: storage: "1Gi" --- # Source: harbor/templates/redis/statefulset.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: release-name-harbor-redis namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: redis app.kubernetes.io/component: redis spec: replicas: 1 serviceName: release-name-harbor-redis selector: matchLabels: release: release-name app: "harbor" component: redis template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: redis app.kubernetes.io/component: redis spec: securityContext: runAsUser: 999 fsGroup: 999 automountServiceAccountToken: false terminationGracePeriodSeconds: 120 containers: - name: redis image: registry.cn-guangzhou.aliyuncs.com/xingcangku/redis-photon:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault livenessProbe: tcpSocket: port: 6379 initialDelaySeconds: 300 periodSeconds: 10 readinessProbe: tcpSocket: port: 6379 initialDelaySeconds: 1 periodSeconds: 10 volumeMounts: - name: data mountPath: /var/lib/redis subPath: volumeClaimTemplates: - metadata: name: data labels: heritage: Helm release: release-name chart: harbor app: "harbor" annotations: spec: accessModes: 
["ReadWriteMany"] storageClassName: "nfs-sc" resources: requests: storage: "1Gi" --- # Source: harbor/templates/trivy/trivy-sts.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: release-name-harbor-trivy namespace: "harbor" labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: trivy app.kubernetes.io/component: trivy spec: replicas: 1 serviceName: release-name-harbor-trivy selector: matchLabels: release: release-name app: "harbor" component: trivy template: metadata: labels: heritage: Helm release: release-name chart: harbor app: "harbor" app.kubernetes.io/instance: release-name app.kubernetes.io/name: harbor app.kubernetes.io/managed-by: Helm app.kubernetes.io/part-of: harbor app.kubernetes.io/version: "2.13.0" component: trivy app.kubernetes.io/component: trivy annotations: checksum/secret: 44be12495ce86a4d9182302ace8a923cf60e791c072dddc10aab3dc17a54309f spec: securityContext: runAsUser: 10000 fsGroup: 10000 automountServiceAccountToken: false containers: - name: trivy image: registry.cn-guangzhou.aliyuncs.com/xingcangku/trivy-adapter-photon:v2.13.0 imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL privileged: false runAsNonRoot: true seccompProfile: type: RuntimeDefault env: - name: HTTP_PROXY value: "" - name: HTTPS_PROXY value: "" - name: NO_PROXY value: "release-name-harbor-core,release-name-harbor-jobservice,release-name-harbor-database,release-name-harbor-registry,release-name-harbor-portal,release-name-harbor-trivy,release-name-harbor-exporter,127.0.0.1,localhost,.local,.internal" - name: "SCANNER_LOG_LEVEL" value: "info" - name: "SCANNER_TRIVY_CACHE_DIR" value: "/home/scanner/.cache/trivy" - name: "SCANNER_TRIVY_REPORTS_DIR" value: "/home/scanner/.cache/reports" - name: "SCANNER_TRIVY_DEBUG_MODE" value: "false" - name: "SCANNER_TRIVY_VULN_TYPE" value: "os,library" - name: "SCANNER_TRIVY_TIMEOUT" value: "5m0s" - name: "SCANNER_TRIVY_GITHUB_TOKEN" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: gitHubToken - name: "SCANNER_TRIVY_SEVERITY" value: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL" - name: "SCANNER_TRIVY_IGNORE_UNFIXED" value: "false" - name: "SCANNER_TRIVY_SKIP_UPDATE" value: "false" - name: "SCANNER_TRIVY_SKIP_JAVA_DB_UPDATE" value: "false" - name: "SCANNER_TRIVY_OFFLINE_SCAN" value: "false" - name: "SCANNER_TRIVY_SECURITY_CHECKS" value: "vuln" - name: "SCANNER_TRIVY_INSECURE" value: "false" - name: SCANNER_API_SERVER_ADDR value: ":8080" - name: "SCANNER_REDIS_URL" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: redisURL - name: "SCANNER_STORE_REDIS_URL" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: redisURL - name: "SCANNER_JOB_QUEUE_REDIS_URL" valueFrom: secretKeyRef: name: release-name-harbor-trivy key: redisURL ports: - name: api-server containerPort: 8080 volumeMounts: - name: data mountPath: /home/scanner/.cache subPath: readOnly: false livenessProbe: httpGet: scheme: HTTP path: /probe/healthy port: api-server initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 10 readinessProbe: httpGet: scheme: HTTP path: /probe/ready port: api-server initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 resources: limits: cpu: 1 memory: 1Gi requests: cpu: 200m memory: 512Mi volumeClaimTemplates: - metadata: name: data labels: 
heritage: Helm release: release-name chart: harbor app: "harbor" annotations: spec: accessModes: ["ReadWriteMany"] storageClassName: "nfs-sc" resources: requests: storage: "5Gi"root@k8s-master-01:~/harbor# kubectl apply -f test.yaml -n harbor secret/release-name-harbor-core created secret/release-name-harbor-database created secret/release-name-harbor-jobservice created secret/release-name-harbor-registry created secret/release-name-harbor-registry-htpasswd created secret/release-name-harbor-registryctl created secret/release-name-harbor-trivy created configmap/release-name-harbor-core created configmap/release-name-harbor-jobservice-env created configmap/release-name-harbor-jobservice created configmap/release-name-harbor-nginx created configmap/release-name-harbor-portal created configmap/release-name-harbor-registry created configmap/release-name-harbor-registryctl created persistentvolumeclaim/release-name-harbor-jobservice created persistentvolumeclaim/release-name-harbor-registry created service/release-name-harbor-core created service/release-name-harbor-database created service/release-name-harbor-jobservice created service/harbor created service/release-name-harbor-portal created service/release-name-harbor-redis created service/release-name-harbor-registry created service/release-name-harbor-trivy created deployment.apps/release-name-harbor-core created deployment.apps/release-name-harbor-jobservice created deployment.apps/release-name-harbor-nginx created deployment.apps/release-name-harbor-portal created deployment.apps/release-name-harbor-registry created statefulset.apps/release-name-harbor-database created statefulset.apps/release-name-harbor-redis created statefulset.apps/release-name-harbor-trivy created root@k8s-master-01:~/harbor# kubectl get all -n harbor NAME READY STATUS RESTARTS AGE pod/release-name-harbor-core-849974d76-f4wqp 1/1 Running 0 83s pod/release-name-harbor-database-0 1/1 Running 0 83s pod/release-name-harbor-jobservice-75f59fcb64-29sp8 1/1 Running 3 (57s ago) 83s pod/release-name-harbor-nginx-b67dcbfc6-wlvtv 1/1 Running 0 83s pod/release-name-harbor-portal-59b9cfd58c-l4wpj 1/1 Running 0 83s pod/release-name-harbor-redis-0 1/1 Running 0 83s pod/release-name-harbor-registry-659f59fcb5-wj8zm 2/2 Running 0 83s pod/release-name-harbor-trivy-0 1/1 Running 0 83s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/harbor NodePort 10.102.128.215 <none> 80:30002/TCP 83s service/release-name-harbor-core ClusterIP 10.108.245.194 <none> 80/TCP 83s service/release-name-harbor-database ClusterIP 10.97.69.58 <none> 5432/TCP 83s service/release-name-harbor-jobservice ClusterIP 10.106.125.16 <none> 80/TCP 83s service/release-name-harbor-portal ClusterIP 10.101.153.231 <none> 80/TCP 83s service/release-name-harbor-redis ClusterIP 10.101.144.182 <none> 6379/TCP 83s service/release-name-harbor-registry ClusterIP 10.97.84.52 <none> 5000/TCP,8080/TCP 83s service/release-name-harbor-trivy ClusterIP 10.106.250.151 <none> 8080/TCP 83s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/release-name-harbor-core 1/1 1 1 83s deployment.apps/release-name-harbor-jobservice 1/1 1 1 83s deployment.apps/release-name-harbor-nginx 1/1 1 1 83s deployment.apps/release-name-harbor-portal 1/1 1 1 83s deployment.apps/release-name-harbor-registry 1/1 1 1 83s NAME DESIRED CURRENT READY AGE replicaset.apps/release-name-harbor-core-849974d76 1 1 1 83s replicaset.apps/release-name-harbor-jobservice-75f59fcb64 1 1 1 83s replicaset.apps/release-name-harbor-nginx-b67dcbfc6 1 1 1 83s 
replicaset.apps/release-name-harbor-portal-59b9cfd58c 1 1 1 83s replicaset.apps/release-name-harbor-registry-659f59fcb5 1 1 1 83s NAME READY AGE statefulset.apps/release-name-harbor-database 1/1 83s statefulset.apps/release-name-harbor-redis 1/1 83s statefulset.apps/release-name-harbor-trivy 1/1 83s
Account: admin Password: Harbor1234
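Once all pods are Running, the registry can be checked from a Docker client. The sketch below is not part of the original deployment and makes a few assumptions: the node IP 192.168.3.160 and NodePort 30002 are taken from the EXT_ENDPOINT rendered above, the credentials are the ones listed above, and because the endpoint is plain HTTP the Docker daemon must treat it as an insecure registry.
# Minimal verification sketch (adjust IP/port/credentials to your environment;
# merge the key into any existing /etc/docker/daemon.json instead of overwriting it)
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.3.160:30002"]
}
EOF
systemctl restart docker
# Log in and push a test image to Harbor's default "library" project
docker login 192.168.3.160:30002 -u admin -p Harbor1234
docker pull nginx:alpine
docker tag nginx:alpine 192.168.3.160:30002/library/nginx:test
docker push 192.168.3.160:30002/library/nginx:test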
2025-07-26
2025-01-23
Harbor Registry Installation
1. Install Harbor (docker-compose)
https://github.com/docker/compose/releases/download/v2.32.2/docker-compose-linux-x86_64
# rename docker-compose-linux-x86_64 to docker-compose and place it in /usr/local/bin/
2. Install Docker
sudo yum update -y
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Docker repository
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install Docker CE (Community Edition)
# List the available versions
yum list docker-ce --showduplicates | sort -r
# Install a specific version, e.g. 20.10.10
sudo yum install -y docker-ce-20.10.10 docker-ce-cli-20.10.10 containerd.io
# Or install the latest version
sudo yum install -y docker-ce docker-ce-cli containerd.io
# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker
Then run sudo ./install.sh directly from the extracted Harbor installer directory.
HTTP support: by default, pulling from and pushing to the registry requires HTTPS. Since HTTPS is not configured here, HTTP is used instead, so run the following on the deploy-server.com host:
$ echo '{"insecure-registries":["192.168.3.20:8077"] }' >> /etc/docker/daemon.json
3. Keys and certificates
mkdir /opt/cert && cd /opt/cert
# Create admin-csr.json (kubernetes)
cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# Download the cfssl tools and make them executable
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
chmod +x cfssl_linux-amd64
# Rename and move them to /usr/local/bin
mv cfssljson_linux-amd64 cfssljson
mv cfssl_linux-amd64 cfssl
mv cfssljson cfssl /usr/local/bin
# Generate the certificate and private key
cfssl gencert -ca=/etc/kubernetes/pki/ca.crt -ca-key=/etc/kubernetes/pki/ca.key --profile=kubernetes admin-csr.json | cfssljson -bare admin
# Package the certificate (kubernetes)
openssl pkcs12 -export -out ./jenkins-admin.pfx -inkey ./admin-key.pem -in ./admin.pem -passout pass:123456
[root@master01 cert]# kubectl create secret generic kubeconfig --from-file=/root/.kube/config
secret/kubeconfig created
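Two quick checks are worth doing after the steps above. This is a sketch under a few assumptions: 192.168.3.20:8077 is the Harbor address configured above, the cfssl output files (admin.pem, admin-key.pem) are in /opt/cert, and the API server address placeholder must be filled in for your cluster. Note that appending to /etc/docker/daemon.json only produces valid JSON if the file did not exist beforehand; otherwise merge the key into the existing file, then restart Docker.
# Confirm the insecure-registry setting is active after restarting Docker
systemctl restart docker
docker info | grep -A2 "Insecure Registries"
# Inspect the signed admin certificate and verify it chains to the cluster CA
openssl x509 -in admin.pem -noout -subject -dates
openssl verify -CAfile /etc/kubernetes/pki/ca.crt admin.pem
# Optional: confirm the key pair authenticates against the API server as system:masters
kubectl --client-certificate=admin.pem --client-key=admin-key.pem \
  --certificate-authority=/etc/kubernetes/pki/ca.crt \
  --server=https://<apiserver-ip>:6443 get nodes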
2025-01-23
2023-09-10
Installing Harbor
Harbor is a mainstream container image registry. Starting with v1.6, Harbor added Helm chart management and can store chart files. In Harbor 2.8+, however, Helm chart support has moved to the OCI (Open Container Initiative) format, which means you upload and manage Helm charts in OCI form (there is no need, as many older guides online suggest, to enable a separate chart repository in Harbor).
1. Install an NFS provisioner to provide a default StorageClass
# 1. Install
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm upgrade --install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.110.101 --set nfs.path=/data/nfs --set storageClass.defaultClass=true -n kube-system
# 2. Check the release
helm -n kube-system list
# 3. Check the NFS provisioner pod
kubectl -n kube-system get pods | grep nfs
nfs-subdir-external-provisioner-797c875548-rt4dh 1/1 Running 2 (58m ago) 23h
# 4. Check the StorageClass (it is already set as the default)
kubectl -n kube-system get sc nfs-client
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 23h
2. Add the chart repository
helm repo add harbor https://helm.goharbor.io
helm repo list
3. Download the chart package locally
Because quite a few parameters need to be changed, running helm install directly on the command line is cumbersome. Downloading the chart locally and editing the configuration there is more transparent and closer to how it is done in a real environment.
helm pull harbor/harbor # download the chart
tar zxvf harbor-1.14.2.tgz # unpack it
4. Modify values.yaml
expose: # Set how to expose the service. Set the type as "ingress", "clusterIP", "nodePort" or "loadBalancer" # and fill the information in the corresponding section type: nodePort tls: # Enable TLS or not. # Delete the "ssl-redirect" annotations in "expose.ingress.annotations" when TLS is disabled and "expose.type" is "ingress" # Note: if the "expose.type" is "ingress" and TLS is disabled, # the port must be included in the command when pulling/pushing images. # Refer to https://github.com/goharbor/harbor/issues/5291 for details. enabled: false # The source of the tls certificate. Set as "auto", "secret" # or "none" and fill the information in the corresponding section # 1) auto: generate the tls certificate automatically # 2) secret: read the tls certificate from the specified secret. # The tls certificate can be generated manually or by cert manager # 3) none: configure no tls certificate for the ingress. If the default # tls certificate is configured in the ingress controller, choose this option certSource: auto auto: # The common name used to generate the certificate, it's necessary # when the type isn't "ingress" commonName: "" secret: # The name of secret which contains keys named: # "tls.crt" - the certificate # "tls.key" - the private key secretName: "" ingress: hosts: core: core.harbor.domain # set to the type of ingress controller if it has specific requirements. # leave as `default` for most ingress controllers.
# set to `gce` if using the GCE ingress controller # set to `ncp` if using the NCP (NSX-T Container Plugin) ingress controller # set to `alb` if using the ALB ingress controller # set to `f5-bigip` if using the F5 BIG-IP ingress controller controller: default ## Allow .Capabilities.KubeVersion.Version to be overridden while creating ingress kubeVersionOverride: "" className: "" annotations: # note different ingress controllers may require a different ssl-redirect annotation # for Envoy, use ingress.kubernetes.io/force-ssl-redirect: "true" and remove the nginx lines below ingress.kubernetes.io/ssl-redirect: "true" ingress.kubernetes.io/proxy-body-size: "0" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/proxy-body-size: "0" # ingress-specific labels labels: {} clusterIP: # The name of ClusterIP service name: harbor # The ip address of the ClusterIP service (leave empty for acquiring dynamic ip) staticClusterIP: "" ports: # The service port Harbor listens on when serving HTTP httpPort: 80 # The service port Harbor listens on when serving HTTPS httpsPort: 443 # Annotations on the ClusterIP service annotations: {} # ClusterIP-specific labels labels: {} nodePort: # The name of NodePort service name: harbor ports: http: # The service port Harbor listens on when serving HTTP port: 80 # The node port Harbor listens on when serving HTTP nodePort: 30002 https: # The service port Harbor listens on when serving HTTPS port: 443 # The node port Harbor listens on when serving HTTPS nodePort: 30003 # Annotations on the nodePort service annotations: {} # nodePort-specific labels labels: {} loadBalancer: # The name of LoadBalancer service name: harbor # Set the IP if the LoadBalancer supports assigning IP IP: "" ports: # The service port Harbor listens on when serving HTTP httpPort: 80 # The service port Harbor listens on when serving HTTPS httpsPort: 443 # Annotations on the loadBalancer service annotations: {} # loadBalancer-specific labels labels: {} sourceRanges: [] # The external URL for Harbor core service. It is used to # 1) populate the docker/helm commands showed on portal # 2) populate the token service URL returned to docker client # # Format: protocol://domain[:port]. Usually: # 1) if "expose.type" is "ingress", the "domain" should be # the value of "expose.ingress.hosts.core" # 2) if "expose.type" is "clusterIP", the "domain" should be # the value of "expose.clusterIP.name" # 3) if "expose.type" is "nodePort", the "domain" should be # the IP address of k8s node # # If Harbor is deployed behind the proxy, set it as the URL of proxy externalURL: http://192.168.110.101:30002 # The persistence is enabled by default and a default StorageClass # is needed in the k8s cluster to provision volumes dynamically. # Specify another StorageClass in the "storageClass" or set "existingClaim" # if you already have existing persistent volumes to use # # For storing images and charts, you can also use "azure", "gcs", "s3", # "swift" or "oss". Set it in the "imageChartStorage" section persistence: enabled: true # Setting it to "keep" to avoid removing PVCs during a helm delete # operation. Leaving it empty will delete PVCs after the chart deleted # (this does not apply for PVCs that are created for internal database # and redis components, i.e. 
they are never deleted automatically) resourcePolicy: "keep" persistentVolumeClaim: registry: # Use the existing PVC which must be created manually before bound, # and specify the "subPath" if the PVC is shared with other components existingClaim: "" # Specify the "storageClass" used to provision the volume. Or the default # StorageClass will be used (the default). # Set it to "-" to disable dynamic provisioning storageClass: "nfs-client" subPath: "" accessMode: ReadWriteMany size: 5Gi annotations: {} jobservice: jobLog: existingClaim: "" storageClass: "nfs-client" subPath: "" accessMode: ReadWriteMany size: 1Gi annotations: {} # If external database is used, the following settings for database will # be ignored database: existingClaim: "" storageClass: "nfs-client" subPath: "" accessMode: ReadWriteMany size: 1Gi annotations: {} # If external Redis is used, the following settings for Redis will # be ignored redis: existingClaim: "" storageClass: "nfs-client" subPath: "" accessMode: ReadWriteMany size: 1Gi annotations: {} trivy: existingClaim: "" storageClass: "" subPath: "" accessMode: ReadWriteMany size: 5Gi annotations: {} # Define which storage backend is used for registry to store # images and charts. Refer to # https://github.com/distribution/distribution/blob/main/docs/content/about/configuration.md#storage # for the detail. imageChartStorage: # Specify whether to disable `redirect` for images and chart storage, for # backends which not supported it (such as using minio for `s3` storage type), please disable # it. To disable redirects, simply set `disableredirect` to `true` instead. # Refer to # https://github.com/distribution/distribution/blob/main/docs/configuration.md#redirect # for the detail. disableredirect: false # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate. # The secret must contain keys named "ca.crt" which will be injected into the trust store # of registry's containers. # caBundleSecretName: # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift", # "oss" and fill the information needed in the corresponding section. 
The type # must be "filesystem" if you want to use persistent volumes for registry type: filesystem filesystem: rootdirectory: /storage #maxthreads: 100 azure: accountname: accountname accountkey: base64encodedaccountkey container: containername #realm: core.windows.net # To use existing secret, the key must be AZURE_STORAGE_ACCESS_KEY existingSecret: "" gcs: bucket: bucketname # The base64 encoded json file which contains the key encodedkey: base64-encoded-json-key-file #rootdirectory: /gcs/object/name/prefix #chunksize: "5242880" # To use existing secret, the key must be GCS_KEY_DATA existingSecret: "" useWorkloadIdentity: false s3: # Set an existing secret for S3 accesskey and secretkey # keys in the secret should be REGISTRY_STORAGE_S3_ACCESSKEY and REGISTRY_STORAGE_S3_SECRETKEY for registry #existingSecret: "" region: us-west-1 bucket: bucketname #accesskey: awsaccesskey #secretkey: awssecretkey #regionendpoint: http://myobjects.local #encrypt: false #keyid: mykeyid #secure: true #skipverify: false #v4auth: true #chunksize: "5242880" #rootdirectory: /s3/object/name/prefix #storageclass: STANDARD #multipartcopychunksize: "33554432" #multipartcopymaxconcurrency: 100 #multipartcopythresholdsize: "33554432" swift: authurl: https://storage.myprovider.com/v3/auth username: username password: password container: containername # keys in existing secret must be REGISTRY_STORAGE_SWIFT_PASSWORD, REGISTRY_STORAGE_SWIFT_SECRETKEY, REGISTRY_STORAGE_SWIFT_ACCESSKEY existingSecret: "" #region: fr #tenant: tenantname #tenantid: tenantid #domain: domainname #domainid: domainid #trustid: trustid #insecureskipverify: false #chunksize: 5M #prefix: #secretkey: secretkey #accesskey: accesskey #authversion: 3 #endpointtype: public #tempurlcontainerkey: false #tempurlmethods: oss: accesskeyid: accesskeyid accesskeysecret: accesskeysecret region: regionname bucket: bucketname # key in existingSecret must be REGISTRY_STORAGE_OSS_ACCESSKEYSECRET existingSecret: "" #endpoint: endpoint #internal: false #encrypt: false #secure: true #chunksize: 10M #rootdirectory: rootdirectory # The initial password of Harbor admin. Change it from portal after launching Harbor # or give an existing secret for it # key in secret is given via (default to HARBOR_ADMIN_PASSWORD) # existingSecretAdminPassword: existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD harborAdminPassword: "Harbor12345" # The internal TLS used for harbor components secure communicating. In order to enable https # in each component tls cert files need to provided in advance. 
internalTLS: # If internal TLS enabled enabled: false # enable strong ssl ciphers (default: false) strong_ssl_ciphers: false # There are three ways to provide tls # 1) "auto" will generate cert automatically # 2) "manual" need provide cert file manually in following value # 3) "secret" internal certificates from secret certSource: "auto" # The content of trust ca, only available when `certSource` is "manual" trustCa: "" # core related cert configuration core: # secret name for core's tls certs secretName: "" # Content of core's TLS cert file, only available when `certSource` is "manual" crt: "" # Content of core's TLS key file, only available when `certSource` is "manual" key: "" # jobservice related cert configuration jobservice: # secret name for jobservice's tls certs secretName: "" # Content of jobservice's TLS key file, only available when `certSource` is "manual" crt: "" # Content of jobservice's TLS key file, only available when `certSource` is "manual" key: "" # registry related cert configuration registry: # secret name for registry's tls certs secretName: "" # Content of registry's TLS key file, only available when `certSource` is "manual" crt: "" # Content of registry's TLS key file, only available when `certSource` is "manual" key: "" # portal related cert configuration portal: # secret name for portal's tls certs secretName: "" # Content of portal's TLS key file, only available when `certSource` is "manual" crt: "" # Content of portal's TLS key file, only available when `certSource` is "manual" key: "" # trivy related cert configuration trivy: # secret name for trivy's tls certs secretName: "" # Content of trivy's TLS key file, only available when `certSource` is "manual" crt: "" # Content of trivy's TLS key file, only available when `certSource` is "manual" key: "" ipFamily: # ipv6Enabled set to true if ipv6 is enabled in cluster, currently it affected the nginx related component ipv6: enabled: true # ipv4Enabled set to true if ipv4 is enabled in cluster, currently it affected the nginx related component ipv4: enabled: true imagePullPolicy: IfNotPresent # Use this set to assign a list of default pullSecrets imagePullSecrets: # - name: docker-registry-secret # - name: internal-registry-secret # The update strategy for deployments with persistent volumes(jobservice, registry): "RollingUpdate" or "Recreate" # Set it as "Recreate" when "RWM" for volumes isn't supported updateStrategy: type: RollingUpdate # debug, info, warning, error or fatal logLevel: info # The name of the secret which contains key named "ca.crt". Setting this enables the # download link on portal to download the CA certificate when the certificate isn't # generated automatically caSecretName: "" # The secret key used for encryption. Must be a string of 16 chars. 
secretKey: "not-a-secure-key" # If using existingSecretSecretKey, the key must be secretKey existingSecretSecretKey: "" # The proxy settings for updating trivy vulnerabilities from the Internet and replicating # artifacts from/to the registries that cannot be reached directly proxy: httpProxy: httpsProxy: noProxy: 127.0.0.1,localhost,.local,.internal components: - core - jobservice - trivy # Run the migration job via helm hook enableMigrateHelmHook: false # The custom ca bundle secret, the secret must contain key named "ca.crt" # which will be injected into the trust store for core, jobservice, registry, trivy components # caBundleSecretName: "" ## UAA Authentication Options # If you're using UAA for authentication behind a self-signed # certificate you will need to provide the CA Cert. # Set uaaSecretName below to provide a pre-created secret that # contains a base64 encoded CA Certificate named `ca.crt`. # uaaSecretName: metrics: enabled: true core: path: /metrics port: 8001 registry: path: /metrics port: 8001 jobservice: path: /metrics port: 8001 exporter: path: /metrics port: 8001 ## Create prometheus serviceMonitor to scrape harbor metrics. ## This requires the monitoring.coreos.com/v1 CRD. Please see ## https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md ## serviceMonitor: enabled: false additionalLabels: {} # Scrape interval. If not set, the Prometheus default scrape interval is used. interval: "" # Metric relabel configs to apply to samples before ingestion. metricRelabelings: [] # - action: keep # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+' # sourceLabels: [__name__] # Relabel configs to apply to samples before ingestion. relabelings: [] # - sourceLabels: [__meta_kubernetes_pod_node_name] # separator: ; # regex: ^(.*)$ # targetLabel: nodename # replacement: $1 # action: replace trace: enabled: false # trace provider: jaeger or otel # jaeger should be 1.26+ provider: jaeger # set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forth sample_rate: 1 # namespace used to differentiate different harbor services # namespace: # attributes is a key value dict contains user defined attributes used to initialize trace provider # attributes: # application: harbor jaeger: # jaeger supports two modes: # collector mode(uncomment endpoint and uncomment username, password if needed) # agent mode(uncomment agent_host and agent_port) endpoint: http://hostname:14268/api/traces # username: # password: # agent_host: hostname # export trace data by jaeger.thrift in compact mode # agent_port: 6831 otel: endpoint: hostname:4318 url_path: /v1/traces compression: false insecure: true # timeout is in seconds timeout: 10 # cache layer configurations # if this feature enabled, harbor will cache the resource # `project/project_metadata/repository/artifact/manifest` in the redis # which help to improve the performance of high concurrent pulling manifest. cache: # default is not enabled. enabled: false # default keep cache for one day. 
expireHours: 24 ## set Container Security Context to comply with PSP restricted policy if necessary ## each of the conatiner will apply the same security context ## containerSecurityContext:{} is initially an empty yaml that you could edit it on demand, we just filled with a common template for convenience containerSecurityContext: privileged: false allowPrivilegeEscalation: false seccompProfile: type: RuntimeDefault runAsNonRoot: true capabilities: drop: - ALL # If service exposed via "ingress", the Nginx will not be used nginx: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/nginx-photon tag: v2.11.1 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## The priority class to run the pod as priorityClassName: portal: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-portal tag: v2.11.1 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## Additional service annotations serviceAnnotations: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] core: image: repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-core tag: v2.11.1 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 ## Startup probe values startupProbe: enabled: true initialDelaySeconds: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: [] # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## Additional service annotations serviceAnnotations: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. 
initContainers: [] # Example: # # - name: wait # image: busybox # command: [ 'sh', '-c', "sleep 20" ] ## User settings configuration json string configureUserSettings: # The provider for updating project quota(usage), there are 2 options, redis or db. # By default it is implemented by db but you can configure it to redis which # can improve the performance of high concurrent pushing to the same project, # and reduce the database connections spike and occupies. # Using redis will bring up some delay for quota usage updation for display, so only # suggest switch provider to redis if you were ran into the db connections spike around # the scenario of high concurrent pushing to same project, no improvment for other scenes. quotaUpdateProvider: db # Or redis # Secret is used when core server communicates with other components. # If a secret key is not specified, Helm will generate one. Alternatively set existingSecret to use an existing secret # Must be a string of 16 chars. secret: "" # Fill in the name of a kubernetes secret if you want to use your own # If using existingSecret, the key must be secret existingSecret: "" # Fill the name of a kubernetes secret if you want to use your own # TLS certificate and private key for token encryption/decryption. # The secret must contain keys named: # "tls.key" - the private key # "tls.crt" - the certificate secretName: "" # If not specifying a preexisting secret, a secret can be created from tokenKey and tokenCert and used instead. # If none of secretName, tokenKey, and tokenCert are specified, an ephemeral key and certificate will be autogenerated. # tokenKey and tokenCert must BOTH be set or BOTH unset. # The tokenKey value is formatted as a multiline string containing a PEM-encoded RSA key, indented one more than tokenKey on the following line. tokenKey: | # If tokenKey is set, the value of tokenCert must be set as a PEM-encoded certificate signed by tokenKey, and supplied as a multiline string, indented one more than tokenCert on the following line. tokenCert: | # The XSRF key. Will be generated automatically if it isn't specified xsrfKey: "" # If using existingSecret, the key is defined by core.existingXsrfSecretKey existingXsrfSecret: "" # If using existingSecret, the key existingXsrfSecretKey: CSRF_KEY # The time duration for async update artifact pull_time and repository # pull_count, the unit is second. Will be 10 seconds if it isn't set. # eg. artifactPullAsyncFlushDuration: 10 artifactPullAsyncFlushDuration: gdpr: deleteUser: false auditLogsCompliant: false jobservice: image: repository: goharbor/harbor-jobservice tag: v2.11.1 # set the service account to be used, default if left empty serviceAccountName: "" # mount the service account token automountServiceAccountToken: false replicas: 1 revisionHistoryLimit: 10 # resources: # requests: # memory: 256Mi # cpu: 100m extraEnvVars: [] nodeSelector: {} tolerations: [] affinity: {} # Spread Pods across failure-domains like regions, availability zones or nodes topologySpreadConstraints: # - maxSkew: 1 # topologyKey: topology.kubernetes.io/zone # nodeTaintsPolicy: Honor # whenUnsatisfiable: DoNotSchedule ## Additional deployment annotations podAnnotations: {} ## Additional deployment labels podLabels: {} ## The priority class to run the pod as priorityClassName: # containers to be run before the controller's container starts. 
  initContainers: []
  # Example:
  #
  # - name: wait
  #   image: busybox
  #   command: [ 'sh', '-c', "sleep 20" ]
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLoggers:
    - file
    # - database
    # - stdout
  # The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)
  loggerSweeperDuration: 14 #days
  notification:
    webhook_job_max_retry: 3
    webhook_job_http_client_timeout: 3 # in seconds
  reaper:
    # the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24
    max_update_hours: 24
    # the max time for execution in running state without new task created
    max_dangling_hours: 168
  # Secret is used when job service communicates with other components.
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Use an existing secret resource
  existingSecret: ""
  # Key within the existing secret for the job service secret
  existingSecretKey: JOBSERVICE_SECRET

registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.11.1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    extraEnvVars: []
  controller:
    image:
      repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/harbor-registryctl
      tag: v2.11.1
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    extraEnvVars: []
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## The priority class to run the pod as
  priorityClassName:
  # containers to be run before the controller's container starts.
  initContainers: []
  # Example:
  #
  # - name: wait
  #   image: busybox
  #   command: [ 'sh', '-c', "sleep 20" ]
  # Secret is used to secure the upload state from client
  # and registry storage backend.
  # See: https://github.com/distribution/distribution/blob/main/docs/configuration.md#http
  # If a secret key is not specified, Helm will generate one.
  # Must be a string of 16 chars.
  secret: ""
  # Use an existing secret resource
  existingSecret: ""
  # Key within the existing secret for the registry service secret
  existingSecretKey: REGISTRY_HTTP_SECRET
  # If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.
  relativeurls: false
  credentials:
    username: "harbor_registry_user"
    password: "harbor_registry_password"
    # If using existingSecret, the key must be REGISTRY_PASSWD and REGISTRY_HTPASSWD
    existingSecret: ""
    # Login and password in htpasswd string format. Excludes `registry.credentials.username` and `registry.credentials.password`.
    # May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time
    # the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt.
    # htpasswdString: $apr1$XLefHzeG$Xl4.s00sMSCCcMyJljSZb0 # example string
    htpasswdString: ""
  middleware:
    enabled: false
    type: cloudFront
    cloudFront:
      baseurl: example.cloudfront.net
      keypairid: KEYPAIRID
      duration: 3000s
      ipfilteredby: none
      # The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key
      # that allows access to CloudFront
      privateKeySecret: "my-secret"
  # enable purge _upload directories
  upload_purging:
    enabled: true
    # remove files in _upload directories which exist for a period of time, default is one week.
    age: 168h
    # the interval of the purge operations
    interval: 24h
    dryrun: false

trivy:
  # enabled the flag to enable Trivy scanner
  enabled: true
  image:
    # repository the repository for Trivy adapter image
    repository: registry.cn-guangzhou.aliyuncs.com/xingcangku/adapter-photon
    # tag the tag for Trivy adapter image
    tag: v2.11.1
  # set the service account to be used, default if left empty
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  # replicas the number of Pod replicas
  replicas: 1
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi
  extraEnvVars: []
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  ## The priority class to run the pod as
  priorityClassName:
  # containers to be run before the controller's container starts.
  initContainers: []
  # Example:
  #
  # - name: wait
  #   image: busybox
  #   command: [ 'sh', '-c', "sleep 20" ]
  # debugMode the flag to enable Trivy debug mode with more verbose scanning log
  debugMode: false
  # vulnType a comma-separated list of vulnerability types. Possible values are `os` and `library`.
  vulnType: "os,library"
  # severity a comma-separated list of severities to be checked
  severity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"
  # ignoreUnfixed the flag to display only fixed vulnerabilities
  ignoreUnfixed: false
  # insecure the flag to skip verifying registry certificate
  insecure: false
  # gitHubToken the GitHub access token to download Trivy DB
  #
  # Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.
  # It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached
  # in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update
  # timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.
  # Currently, the database is updated every 12 hours and published as a new release to GitHub.
  #
  # Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough
  # for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000
  # requests per hour by specifying the GitHub access token.
  # For more details on GitHub rate limiting please consult
  # https://developer.github.com/v3/#rate-limiting
  #
  # You can create a GitHub token by following the instructions in
  # https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line
  gitHubToken: ""
  # skipUpdate the flag to disable Trivy DB downloads from GitHub
  #
  # You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  # If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  # `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
  # skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the
  # `/home/scanner/.cache/trivy/java-db/trivy-java.db` path
  # skipJavaDBUpdate: false
  # The offlineScan option prevents Trivy from sending API requests to identify dependencies.
  #
  # Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.
  # For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't
  # exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.
  # It would work if all the dependencies are in local.
  # This option doesn't affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment.
  offlineScan: false
  # Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. Defaults to `vuln`.
  securityCheck: "vuln"
  # The duration to wait for scan completion
  timeout: 5m0s

database:
  # if external database is used, set "type" to "external"
  # and fill the connection information in "external" section
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v2.11.1
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    # The timeout used in livenessProbe; 1 to 5 seconds
    livenessProbe:
      timeoutSeconds: 1
    # The timeout used in readinessProbe; 1 to 5 seconds
    readinessProbe:
      timeoutSeconds: 1
    extraEnvVars: []
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
    # containers to be run before the controller's container starts.
    extrInitContainers: []
    # Example:
    #
    # - name: wait
    #   image: busybox
    #   command: [ 'sh', '-c', "sleep 20" ]
    # The initial superuser password for internal database
    password: "changeit"
    # The size limit for Shared memory, pgSQL use it for shared_buffer
    # More details see:
    # https://github.com/goharbor/harbor/issues/15034
    shmSizeLimit: 512Mi
    initContainer:
      migrator: {}
      # resources:
      #  requests:
      #    memory: 128Mi
      #    cpu: 100m
      permissions: {}
      # resources:
      #  requests:
      #    memory: 128Mi
      #    cpu: 100m
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    # if using existing secret, the key must be "password"
    existingSecret: ""
    # "disable" - No SSL
    # "require" - Always SSL (skip verification)
    # "verify-ca" - Always SSL (verify that the certificate presented by the
    # server was signed by a trusted CA)
    # "verify-full" - Always SSL (verify that the certification presented by the
    # server was signed by a trusted CA and the server host name matches the one
    # in the certificate)
    sslmode: "disable"
  # The maximum number of connections in the idle connection pool per pod (core+exporter).
  # If it <=0, no idle connections are retained.
  maxIdleConns: 100
  # The maximum number of open connections to the database per pod (core+exporter).
  # If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for harbor's postgres.
  maxOpenConns: 900
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection information in "external" section
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v2.11.1
    # set the service account to be used, default if left empty
    serviceAccountName: ""
    # mount the service account token
    automountServiceAccountToken: false
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    extraEnvVars: []
    nodeSelector: {}
    tolerations: []
    affinity: {}
    ## The priority class to run the pod as
    priorityClassName:
    # containers to be run before the controller's container starts.
    initContainers: []
    # Example:
    #
    # - name: wait
    #   image: busybox
    #   command: [ 'sh', '-c', "sleep 20" ]
    # # jobserviceDatabaseIndex defaults to "1"
    # # registryDatabaseIndex defaults to "2"
    # # trivyAdapterIndex defaults to "5"
    # # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional
    # # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optional
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    trivyAdapterIndex: "5"
    # harborDatabaseIndex: "6"
    # cacheLayerDatabaseIndex: "7"
  external:
    # support redis, redis+sentinel
    # addr for redis: <host_redis>:<port_redis>
    # addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>
    addr: "192.168.0.2:6379"
    # The name of the set of Redis instances to monitor, it must be set to support redis+sentinel
    sentinelMasterSet: ""
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # used doesn't support configuring it
    # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional
    # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optional
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    trivyAdapterIndex: "5"
    # harborDatabaseIndex: "6"
    # cacheLayerDatabaseIndex: "7"
    # username field can be an empty string, and it will be authenticated against the default user
    username: ""
    password: ""
    # If using existingSecret, the key must be REDIS_PASSWORD
    existingSecret: ""
  ## Additional deployment annotations
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}

exporter:
  image:
    repository: goharbor/harbor-exporter
    tag: v2.11.1
  serviceAccountName: ""
  # mount the service account token
  automountServiceAccountToken: false
  replicas: 1
  revisionHistoryLimit: 10
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  extraEnvVars: []
  podAnnotations: {}
  ## Additional deployment labels
  podLabels: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  # Spread Pods across failure-domains like regions, availability zones or nodes
  topologySpreadConstraints: []
  ## The priority class to run the pod as
  priorityClassName:
  # - maxSkew: 1
  #   topologyKey: topology.kubernetes.io/zone
  #   nodeTaintsPolicy: Honor
  #   whenUnsatisfiable: DoNotSchedule
  cacheDuration: 23
  cacheCleanInterval: 14400
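Before installing, it can help to render the chart locally and confirm that the overridden image repositories and the 30002/30003 NodePorts actually end up in the manifests. A minimal sketch, assuming it is run from the chart directory that contains the values.yaml above; helm lint and helm template are standard Helm commands, and the grep filter is just a convenience for spot-checking:

helm lint .                                                # catches indentation and typo errors in values.yaml
helm template harbor . | grep -E 'image:|nodePort:'        # verify mirrored images and the NodePorts before installing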
五、Installation
kubectl create namespace harbor
helm install harbor . -n harbor    # install the release into the harbor namespace

# Notes
# 1. Image pulls can be slow, so redis may not be ready yet and other pods may briefly fail to start; wait a while and they will recover on their own.
# 2. If downloads are too slow, build the images yourself, or download them elsewhere, upload them to the server and import them:
#    nerdctl -n k8s.io load -i xxxxxxxxxxx.tar

六、Verify the pods
harbor-jobservice stayed in CrashLoopBackOff while the other components were still starting, so the pod was deleted and the replacement came up healthy (see the troubleshooting sketch after section 七):

[root@master01 harbor]# kubectl -n harbor get pods -w
NAME                                 READY   STATUS             RESTARTS      AGE
harbor-core-586f48cb4c-4r7gz         0/1     Running            2 (66s ago)   3m21s
harbor-database-0                    1/1     Running            0             3m21s
harbor-exporter-74ff648dfc-k6pb2     1/1     Running            2 (79s ago)   3m21s
harbor-jobservice-864b5bc9b9-8wb26   0/1     CrashLoopBackOff   5 (6s ago)    3m21s
harbor-nginx-6c5fc7c744-5m9lz        1/1     Running            0             3m21s
harbor-portal-74484f87f5-lh8m6       1/1     Running            0             3m21s
harbor-redis-0                       1/1     Running            0             3m21s
harbor-registry-b7f8d77d6-ltpw7      2/2     Running            0             3m21s
harbor-trivy-0                       1/1     Running            0             3m21s
harbor-core-586f48cb4c-4r7gz         0/1     Running            2 (77s ago)   3m32s
harbor-core-586f48cb4c-4r7gz         1/1     Running            2 (78s ago)   3m33s
^C[root@master01 harbor]# ^C
[root@master01 harbor]# ^C
[root@master01 harbor]# kubectl -n harbor delete pod harbor-jobservice-864b5bc9b9-8wb26 &
[1] 103883
[root@master01 harbor]# pod "harbor-jobservice-864b5bc9b9-8wb26" deleted
[1]+  完成                  kubectl -n harbor delete pod harbor-jobservice-864b5bc9b9-8wb26
[root@master01 harbor]#
[root@master01 harbor]# kubectl -n harbor get pods -w
NAME                                 READY   STATUS    RESTARTS        AGE
harbor-core-586f48cb4c-4r7gz         1/1     Running   2 (2m13s ago)   4m28s
harbor-database-0                    1/1     Running   0               4m28s
harbor-exporter-74ff648dfc-k6pb2     1/1     Running   2 (2m26s ago)   4m28s
harbor-jobservice-864b5bc9b9-vkr6w   0/1     Running   0               6s
harbor-nginx-6c5fc7c744-5m9lz        1/1     Running   0               4m28s
harbor-portal-74484f87f5-lh8m6       1/1     Running   0               4m28s
harbor-redis-0                       1/1     Running   0               4m28s
harbor-registry-b7f8d77d6-ltpw7      2/2     Running   0               4m28s
harbor-trivy-0                       1/1     Running   0               4m28s
^C[root@master01 harbor]# kubectl -n harbor get pods -w
NAME                                 READY   STATUS    RESTARTS        AGE
harbor-core-586f48cb4c-4r7gz         1/1     Running   2 (2m26s ago)   4m41s
harbor-database-0                    1/1     Running   0               4m41s
harbor-exporter-74ff648dfc-k6pb2     1/1     Running   2 (2m39s ago)   4m41s
harbor-jobservice-864b5bc9b9-vkr6w   0/1     Running   0               19s
harbor-nginx-6c5fc7c744-5m9lz        1/1     Running   0               4m41s
harbor-portal-74484f87f5-lh8m6       1/1     Running   0               4m41s
harbor-redis-0                       1/1     Running   0               4m41s
harbor-registry-b7f8d77d6-ltpw7      2/2     Running   0               4m41s
harbor-trivy-0                       1/1     Running   0               4m41s
harbor-jobservice-864b5bc9b9-vkr6w   1/1     Running   0               21s
^C[root@master01 harbor]# ^C
[root@master01 harbor]# kubectl -n harbor get pods -w
NAME                                 READY   STATUS    RESTARTS        AGE
harbor-core-586f48cb4c-4r7gz         1/1     Running   2 (2m31s ago)   4m46s
harbor-database-0                    1/1     Running   0               4m46s
harbor-exporter-74ff648dfc-k6pb2     1/1     Running   2 (2m44s ago)   4m46s
harbor-jobservice-864b5bc9b9-vkr6w   1/1     Running   0               24s
harbor-nginx-6c5fc7c744-5m9lz        1/1     Running   0               4m46s
harbor-portal-74484f87f5-lh8m6       1/1     Running   0               4m46s
harbor-redis-0                       1/1     Running   0               4m46s
harbor-registry-b7f8d77d6-ltpw7      2/2     Running   0               4m46s
harbor-trivy-0                       1/1     Running   0               4m46s

七、Log in
Open http://192.168.110.101:30002 and sign in with account admin and password Harbor12345.
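If a pod keeps crash-looping during the rollout, as harbor-jobservice did in section 六, it is worth a quick look before deleting it, if only to confirm it is just waiting for redis or the database. A minimal sketch, reusing the pod name from the output above:

kubectl -n harbor describe pod harbor-jobservice-864b5bc9b9-8wb26      # events: failed probes, image pull status
kubectl -n harbor logs harbor-jobservice-864b5bc9b9-8wb26 --previous   # logs from the last crashed container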
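Once everything is Running, the NodePort exposure and the auto-generated certificate can also be checked from the command line before opening the UI. A minimal sketch, assuming the node IP 192.168.110.101 used above and the 30003 HTTPS NodePort from values.yaml; -k is needed because the certificate comes from the chart's self-generated CA:

kubectl -n harbor get svc harbor                               # shows the http/https NodePorts (30002/30003)
curl -ks https://192.168.110.101:30003/api/v2.0/systeminfo     # Harbor answers over TLS with its system info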
2023-09-10