This repository has been archived by the owner on Oct 20, 2022. It is now read-only.

nodegroup configuration #165

Open
yossisht9876 opened this issue Nov 29, 2021 · 4 comments

Comments

@yossisht9876

hey all,

I have a problem configuring tolerations for the nifi-cluster pods:

"error: error validating "nifi-cluster.yaml": error validating data: ValidationError(NifiCluster.spec): unknown field "tolerations" in com.orange.nifi.v1alpha1.NifiCluster.spec; if you choose to ignore these errors, turn validation off with --validate=false"

It works for the Zookeeper setup and for the nifikop Helm chart, but I couldn't find a way to set tolerations or a nodeSelector in nifi-cluster.yaml.

Here is my config:

kind: NifiCluster
metadata:
  name: nificluster
spec:
  service:
    headlessEnabled: true
  zkAddress: "zookeeper:2181"
  zkPath: "/nifi"
  clusterImage: "apache/nifi:1.13.2"
  oneNifiNodePerNode: false
  managedAdminUsers:
    -  identity: "nidddha.com"
       name: "ndd.s"
    -  identity: "yodddusddha.dd"
       name: "yddi.s"
  propagateLabels: true
  nifiClusterTaskSpec:
    retryDurationMinutes: 10
  readOnlyConfig:
    logbackConfig:
      # logback.xml configuration that will replace the one produced based on template
      replaceConfigMap:
        # The key, within the ConfigMap data, whose value we want to use.
        data: logback.xml
        # Name of the ConfigMap we want to refer to.
        name: nifi-configs
        # Namespace where the ConfigMap is located.
        namespace: nifi
    nifiProperties:
      webProxyHosts:
        - nifi.dddv.ludddsha.co
      # Additional nifi.properties configuration that will override the one produced based
      # on the template and configurations.
      overrideConfigs: |
        xxxxxxxxxxxx
  nodeConfigGroups:
    default_group:
      serviceAccountName: "default"
      runAsUser: 1000
      fsGroup: 1000
      isNode: true
      storageConfigs:
        - mountPath: "/opt/nifi/nifi-current/logs"
          name: logs
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 20Gi
        - mountPath: "/opt/nifi/data"
          name: data
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 50Gi
        - mountPath: "/opt/nifi/flowfile_repository"
          name: flowfile-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 50Gi
        - mountPath: "/opt/nifi/nifi-current/conf"
          name: conf
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 10Gi
        - mountPath: "/opt/nifi/content_repository"
          name: content-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 50Gi
        - mountPath: "/opt/nifi/provenance_repository"
          name: provenance-repository
          pvcSpec:
            accessModes:
              - ReadWriteOnce
            storageClassName: "gp2"
            resources:
              requests:
                storage: 50Gi
      resourcesRequirements:
        limits:
          cpu: "2"
          memory: 2Gi
        requests:
          cpu: "1"
          memory: 1Gi
  nodes:
    - id: 0
      nodeConfigGroup: "default_group"
    - id: 1
      nodeConfigGroup: "default_group"
    - id: 2
      nodeConfigGroup: "default_group"
  listenersConfig:
    internalListeners:
      - type: "https"
        name: "https"
        containerPort: 8443
      - type: "cluster"
        name: "cluster"
        containerPort: 6007
      - type: "s2s"
        name: "s2s"
        containerPort: 10000
      - type: "prometheus"
        name: "prometheus"
        containerPort: 9090
    sslSecrets:
      tlsSecretName: "nifi-sdds"
      create: true
  externalServices:
    - name: "nifi-clds"
      spec:
        type: ClusterIP
        portConfigs:
          - port: 8443
            internalListenerName: "https"



I am trying to add one of these two:

#  nodeSelector:
#    nodegroup: nifi

#  tolerations:
#    -   key: "nifi-dedicated"
#        operator: "Equal"
#        value: "on_demand"
#        effect: "NoSchedule"


Any help? Thanks @erdrix

mh013370 commented Nov 29, 2021

tolerations go under a specific nodeConfigGroup. So if you're using default_group for each of your nodes, it'd look like this:

spec:
  nodeConfigGroups:
    default_group:
      tolerations:
        - key: taint.key
          operator: Equal
          value: "true"
          effect: "NoSchedule"

nodeSelector goes in the same spot.
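For illustration, a sketch of a nodeSelector under the same node config group (the `nodegroup: nifi` label is taken from the commented-out snippet in the question; substitute your own node label):

```yaml
spec:
  nodeConfigGroups:
    default_group:
      # Example node label; pods in this group will only be
      # scheduled onto nodes carrying this label.
      nodeSelector:
        nodegroup: nifi
```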

docs: https://orange-opensource.github.io/nifikop/docs/5_references/1_nifi_cluster/3_node_config

@yossisht9876
Author

yossisht9876 commented Nov 29, 2021

Thanks, I figured it out, but I have another problem:
for some reason only the master pod is scheduled onto the dedicated node group; the two other worker pods aren't scheduled there, even though they have the toleration configured.

Is it supposed to be like that? @michael81877


mh013370 commented Nov 29, 2021

If you want to pin the NiFi pods to dedicated nodes which you've tainted, you must specify both a nodeSelector/nodeAffinity and the toleration. Specifying only the tolerations just tells k8s that it is allowed to schedule the pods on nodes which have a particular taint, but it won't necessarily do so.
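Putting the two together for the cluster in the question, a sketch might look like this (assuming the `nifi-dedicated=on_demand:NoSchedule` taint and the `nodegroup: nifi` node label from the commented-out snippet in the question):

```yaml
spec:
  nodeConfigGroups:
    default_group:
      # Allow scheduling onto the tainted, dedicated nodes...
      tolerations:
        - key: "nifi-dedicated"
          operator: "Equal"
          value: "on_demand"
          effect: "NoSchedule"
      # ...and require it, so all three NiFi pods land there.
      nodeSelector:
        nodegroup: nifi
```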

@yossisht9876
Author

Thanks @michael81877, works like a charm.
