I've reimplemented this plugin from scratch and noticed discrepancies between my implementation and yours.
Output from `kubectl blame`:

```
spec:
  containers:        kubectl-client-side-apply (Update 2 weeks ago)
  - env:             kubectl-client-side-apply (Update 2 weeks ago)
    - name: barx     kubectl-client-side-apply (Update 2 weeks ago)
      value: bar
```
ManagedFields input to `kubectl blame`:

```yaml
- apiVersion: apps/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:spec:
      f:template:
        f:spec:
          f:containers:
            k:{"name":"nginx"}:
              f:env:
                .: {}
                k:{"name":"barx"}:
                  .: {}
                  f:name: {}
                  f:value: {}
  manager: envpatcher
  operation: Update
  time: "2024-04-10T00:34:50Z"
```
It should show `envpatcher` as the owner.
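For reference, here is a minimal sketch (not the plugin's actual code, and simplified `managedFields` fixtures rather than the full object above) of how ownership can be resolved from `fieldsV1` trees. The key point is that a manager owns a field only if that field appears in its own tree; owning an ancestor key like `k:{"name":"nginx"}` must not shadow a child field claimed by another manager:

```python
# Hypothetical ownership resolver for managedFields / fieldsV1 trees.
# "f:" keys name fields, "k:" keys select list items, and "." marks
# ownership of the node itself.

def collect_owned_paths(tree, prefix=()):
    """Yield the path of every field this fieldsV1 tree claims."""
    for key, sub in tree.items():
        if key == ".":          # "." means the entry owns this node itself
            yield prefix
            continue
        path = prefix + (key,)
        if sub == {}:           # empty dict marks a leaf field, e.g. f:value: {}
            yield path
        else:                   # non-empty dict: descend without claiming it
            yield from collect_owned_paths(sub, path)

def build_owner_map(managed_fields):
    """Map each claimed path to the manager that claims it."""
    owners = {}
    for entry in managed_fields:
        for path in collect_owned_paths(entry["fieldsV1"]):
            owners[path] = entry["manager"]
    return owners

# Trimmed-down stand-ins for the two relevant managedFields entries.
managed_fields = [
    {   # kubectl-client-side-apply owns the container entry and name/image
        "manager": "kubectl-client-side-apply",
        "fieldsV1": {"f:spec": {"f:template": {"f:spec": {"f:containers": {
            'k:{"name":"nginx"}': {".": {}, "f:image": {}, "f:name": {}}}}}}},
    },
    {   # envpatcher owns only the env entry it patched in
        "manager": "envpatcher",
        "fieldsV1": {"f:spec": {"f:template": {"f:spec": {"f:containers": {
            'k:{"name":"nginx"}': {"f:env": {
                ".": {},
                'k:{"name":"barx"}': {".": {}, "f:name": {}, "f:value": {}},
            }}}}}}},
    },
]

owners = build_owner_map(managed_fields)
env_value = ("f:spec", "f:template", "f:spec", "f:containers",
             'k:{"name":"nginx"}', "f:env", 'k:{"name":"barx"}', "f:value")
print(owners[env_value])  # -> "envpatcher"
```

Under this scheme `env` and everything below it resolve to `envpatcher`, while the container entry itself still resolves to `kubectl-client-side-apply`, which is the expected blame output.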
Full input:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx","ports":[{"containerPort":80}]}]}}}}
  creationTimestamp: "2024-04-10T00:34:50Z"
  finalizers:
  - example.com/foo
  generation: 2
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":80,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2024-04-10T00:44:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:template:
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                f:env:
                  .: {}
                  k:{"name":"barx"}:
                    .: {}
                    f:name: {}
                    f:value: {}
    manager: envpatcher
    operation: Update
    time: "2024-04-10T00:34:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    subresource: status
    time: "2024-04-10T00:34:50Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"example.com/foo": {}
    manager: finalizerpatcher
    operation: Update
    time: "2024-04-10T00:35:29Z"
  name: nginx-deployment
  namespace: default
  resourceVersion: "7792385"
  uid: 2e77f9dd-e8da-47b0-be11-75b04f1b4460
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - env:
        - name: barx
          value: bar
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: "2024-04-10T00:34:50Z"
    lastUpdateTime: "2024-04-10T00:34:50Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-04-10T00:34:49Z"
    lastUpdateTime: "2024-04-10T00:35:14Z"
    message: ReplicaSet "nginx-deployment-779d59bcb" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
```
Thank you! I will take a deeper look later