
Deploying a Nacos cluster on Kubernetes: why does the persistence path seem to "not take effect"?

    cookgo · 2022-11-23 11:08:28 +08:00 · 1476 views

    What I'm confused about

    • When deploying a Nacos cluster in Kubernetes, the cluster came up fine and Nacos is reachable, but when I went to check where the Nacos logs are stored, the files don't exist.
    • Inside the Nacos container the logs and other files are all there, yet under the NFS export path there are no files at all. Why is that? Nacos status / in-container logs / NFS listing: https://imgur.com/a/4PtjUvK

    YAML

    StorageClass (NFS-backed), with its provisioner Deployment, PV and PVC

    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        k8s.kuboard.cn/storageType: nfs_client_provisioner
      name: nfs-with-deleted
      resourceVersion: '351788'
    parameters:
      archiveOnDelete: 'false'
    provisioner: nfs-nfs-with-deleted
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations: {}
      labels:
        app: eip-nfs-nfs-with-deleted
      name: eip-nfs-nfs-with-deleted
      namespace: kube-system
      resourceVersion: '352584'
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: eip-nfs-nfs-with-deleted
      strategy:
        type: Recreate
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: eip-nfs-nfs-with-deleted
        spec:
          containers:
            - env:
                - name: PROVISIONER_NAME
                  value: nfs-nfs-with-deleted
                - name: NFS_SERVER
                  value: 10.2.0.108
                - name: NFS_PATH
                  value: /volume1/nfs-k8s-unretain
              image: 'eipwork/nfs-subdir-external-provisioner:v4.0.2'
              imagePullPolicy: IfNotPresent
              name: nfs-client-provisioner
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /persistentvolumes
                  name: nfs-client-root
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          serviceAccount: eip-nfs-client-provisioner
          serviceAccountName: eip-nfs-client-provisioner
          terminationGracePeriodSeconds: 30
          volumes:
            - name: nfs-client-root
              persistentVolumeClaim:
                claimName: nfs-pvc-nfs-with-deleted
    
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      annotations:
        pv.kubernetes.io/bound-by-controller: 'yes'
      finalizers:
        - kubernetes.io/pv-protection
      name: nfs-pv-nfs-with-deleted
      resourceVersion: '351782'
    spec:
      accessModes:
        - ReadWriteMany
      capacity:
        storage: '100'
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        name: nfs-pvc-nfs-with-deleted
        namespace: kube-system
        resourceVersion: '351779'
        uid: 2d84dd98-1e45-42e8-b343-21b21e003ca2
      nfs:
        path: /volume1/nfs-k8s-unretain
        server: 10.2.0.108
      persistentVolumeReclaimPolicy: Retain
      storageClassName: nfs-storageclass-provisioner
      volumeMode: Filesystem
    
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        pv.kubernetes.io/bind-completed: 'yes'
      finalizers:
        - kubernetes.io/pvc-protection
      name: nfs-pvc-nfs-with-deleted
      namespace: kube-system
      resourceVersion: '351814'
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: '100'
      storageClassName: nfs-storageclass-provisioner
      volumeMode: Filesystem
      volumeName: nfs-pv-nfs-with-deleted
    
    
    
    

    PVC

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        pv.kubernetes.io/bind-completed: 'yes'
        pv.kubernetes.io/bound-by-controller: 'yes'
        volume.beta.kubernetes.io/storage-provisioner: nfs-nfs-with-deleted
        volume.kubernetes.io/storage-provisioner: nfs-nfs-with-deleted
      finalizers:
        - kubernetes.io/pvc-protection
      name: nacos-pvc
      namespace: default
      resourceVersion: '353666'
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
      storageClassName: nfs-with-deleted
      volumeMode: Filesystem
      volumeName: pvc-b5cba474-47cb-44ee-b310-9722af14b2e1
    
    
    

    StatefulSet

    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app: nacos
      name: nacos
      namespace: default
      resourceVersion: '355133'
    spec:
      podManagementPolicy: OrderedReady
      replicas: 3
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: nacos
      serviceName: nacos-headless
      template:
        metadata:
          annotations:
            kubectl.kubernetes.io/restartedAt: '2022-11-23T10:53:19+08:00'
            pod.alpha.kubernetes.io/initialized: 'true'
          creationTimestamp: null
          labels:
            app: nacos
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchExpressions:
                      - key: app
                        operator: In
                        values:
                          - nacos
                  topologyKey: kubernetes.io/hostname
          containers:
            - env:
                - name: NACOS_REPLICAS
                  value: '3'
                - name: SERVICE_NAME
                  value: nacos-headless
                - name: DOMAIN_NAME
                  value: cluster.local
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: metadata.namespace
                - name: MYSQL_SERVICE_DB_NAME
                  valueFrom:
                    configMapKeyRef:
                      key: mysql.db.name
                      name: nacos-configmap
                - name: MYSQL_SERVICE_HOST
                  valueFrom:
                    configMapKeyRef:
                      key: mysql.host
                      name: nacos-configmap
                - name: MYSQL_SERVICE_PORT
                  valueFrom:
                    configMapKeyRef:
                      key: mysql.port
                      name: nacos-configmap
                - name: MYSQL_SERVICE_USER
                  valueFrom:
                    configMapKeyRef:
                      key: mysql.user
                      name: nacos-configmap
                - name: MYSQL_SERVICE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      key: password
                      name: mysql-secret
                - name: NACOS_SERVER_PORT
                  value: '8848'
                - name: NACOS_APPLICATION_PORT
                  value: '8848'
                - name: PREFER_HOST_MODE
                  value: hostname
              image: 'nacos/nacos-server:latest'
              imagePullPolicy: IfNotPresent
              name: nacos
              ports:
                - containerPort: 8848
                  name: client-port
                  protocol: TCP
                - containerPort: 9848
                  name: client-rpc
                  protocol: TCP
                - containerPort: 9849
                  name: raft-rpc
                  protocol: TCP
              resources:
                requests:
                  cpu: '2'
                  memory: 2Gi
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /home/nacos/plugins/peer-finder
                  name: data
                  subPath: peer-finder
                - mountPath: /home/nacos/data
                  name: data
                  subPath: data
                - mountPath: /home/nacos/logs
                  name: data
                  subPath: logs
          dnsPolicy: ClusterFirst
          initContainers:
            - image: 'nacos/nacos-peer-finder-plugin:1.1'
              imagePullPolicy: IfNotPresent
              name: peer-finder-plugin-install
              resources: {}
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /home/nacos/plugins/peer-finder
                  name: data
                  subPath: peer-finder
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
          volumes:
            - emptyDir: {}
              name: data
      updateStrategy:
        rollingUpdate:
          partition: 0
        type: RollingUpdate
    
    
    

    Service

    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations: {}
      labels:
        app: nacos
      name: nacos-headless
      namespace: default
      resourceVersion: '352618'
    spec:
      clusterIP: 10.233.251.189
      clusterIPs:
        - 10.233.251.189
      externalTrafficPolicy: Cluster
      internalTrafficPolicy: Cluster
      ipFamilies:
        - IPv4
      ipFamilyPolicy: SingleStack
      ports:
        - name: server
          nodePort: 30038
          port: 8848
          protocol: TCP
          targetPort: 8848
        - name: client-rpc
          nodePort: 31038
          port: 9848
          protocol: TCP
          targetPort: 9848
        - name: raft-rpc
          nodePort: 31039
          port: 9849
          protocol: TCP
          targetPort: 9849
      selector:
        app: nacos
      sessionAffinity: None
      type: NodePort
    
    
    
    8 replies · last reply 2022-11-24 16:31:59 +08:00
    NoahNye · #1 · 2022-11-23 11:26:01 +08:00

    volumes:
      - emptyDir: {}
        name: data

    That's not mounted to your NFS at all.
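
    (A minimal sketch of the fix #1 is pointing at, assuming the nacos-pvc claim from the question is the intended backing store: swap the emptyDir volume for a persistentVolumeClaim reference.)

    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nacos-pvc   # the ReadWriteMany claim bound to the nfs-with-deleted StorageClass above

    (With that in place, whatever Nacos writes under /home/nacos/data and /home/nacos/logs ends up on the NFS export rather than in a node-local emptyDir.)
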
    idblife · #2 · 2022-11-23 11:48:36 +08:00

    Off topic, but something like Nacos doesn't need a PV mounted at all.
    CheckMySoul · #3 · 2022-11-23 15:33:00 +08:00

    Same as #1: the StatefulSet isn't using the NFS volume at all.
    b1ghawk · #4 · 2022-11-23 16:25:13 +08:00

    Off topic, but something like Nacos doesn't need a PV mounted.
    cookgo (OP) · #5 · 2022-11-24 10:08:00 +08:00

    If Nacos doesn't mount a PV, then how should its logs be persisted?
    cookgo (OP) · #6 · 2022-11-24 10:22:51 +08:00

    It's indeed as #1 said. My original StatefulSet YAML had:

    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nacos-pvc
      - name: nacos-configmap
        configMap:
          name: nacos-configmap

    But the version created through Kuboard lost it. I really hadn't noticed that detail.
    tudou1514 · #7 · 2022-11-24 16:30:52 +08:00

    For example, an sts should use this form:

    volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: data
          resources:
            requests:
              storage: 200Gi
    tudou1514 · #8 · 2022-11-24 16:31:59 +08:00

    @tudou1514 The indentation got mangled when I posted that; see the official example for the proper layout.
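
    (For reference, a sketch of how the volumeClaimTemplates idea from #7 could be wired into the Nacos StatefulSet in the question. The access mode, storageClassName and size below are assumptions copied from the nfs-with-deleted class and the 5Gi nacos-pvc shown above; env, ports, resources and the peer-finder init container are left out for brevity.)

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nacos
      namespace: default
    spec:
      serviceName: nacos-headless
      replicas: 3
      selector:
        matchLabels:
          app: nacos
      template:
        metadata:
          labels:
            app: nacos
        spec:
          containers:
            - name: nacos
              image: 'nacos/nacos-server:latest'
              volumeMounts:
                - mountPath: /home/nacos/data
                  name: data
                  subPath: data
                - mountPath: /home/nacos/logs
                  name: data
                  subPath: logs
          # note: no "volumes:" entry named data here; the claim template below
          # provides the data volume, so the emptyDir from the question goes away
      volumeClaimTemplates:
        - metadata:
            name: data
          spec:
            accessModes: [ 'ReadWriteMany' ]
            storageClassName: nfs-with-deleted
            resources:
              requests:
                storage: 5Gi

    (Each replica then gets its own claim, data-nacos-0 / data-nacos-1 / data-nacos-2, provisioned from the nfs-with-deleted class, instead of all three pods sharing a single PVC.)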