
[bitnami/mysql] Issue with scaling the secondary statefulset #29777

Open

RustyBridge opened this issue Oct 4, 2024 · 0 comments
Labels: tech-issues (The user has a technical issue about an application), triage (Triage is needed)


RustyBridge commented Oct 4, 2024

Name and Version

bitnami/mysql 9.7.0

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Install the chart with (a helm CLI sketch of these steps follows the list):
     • replication architecture
     • in auth, the creation of a database koko and a user kokouser
     • 1 replica for each statefulset
  2. Populate the DB and verify that replication is working
  3. Scale the secondary statefulset to 2 replicas
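
For reference, a hypothetical helm CLI equivalent of these steps (a sketch only; the real deployment goes through Flux with the HelmRelease shown below, from which the release and namespace names are taken):

# Sketch of step 1, assuming the bitnami repo is already added
# (helm repo add bitnami https://charts.bitnami.com/bitnami):
helm install lala-mysql bitnami/mysql --version 9.7.0 \
  --namespace lala \
  --set architecture=replication \
  --set auth.database=koko \
  --set auth.username=kokouser \
  --set secondary.replicaCount=1

# Sketch of step 3, scaling the secondary statefulset to 2 replicas:
helm upgrade lala-mysql bitnami/mysql --version 9.7.0 \
  --namespace lala \
  --reuse-values \
  --set secondary.replicaCount=2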

Are you using any custom parameters or values?

We deploy via Flux. The committed HelmRelease object that scales the secondaries from 1 replica to 2 looks like this:

---
apiVersion: "helm.fluxcd.io/v1"
kind: "HelmRelease"
metadata:
  name: "lala-mysql"
  namespace: "lala"
  labels:
    gitops: "true"
spec:
  releaseName: "lala-mysql"
  targetNamespace: "lala"
  timeout: 300
  chart:
    repository: "https://charts.bitnami.com/bitnami"
    name: "mysql"
    version: "9.7.0"
  valuesFrom:
    - secretKeyRef:
        name: "mysql-secrets"
        key: "secretvalues.yaml"
  values:
    global:
      storageClass: "default"
    architecture: "replication"
    primary:
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
    secondary:
      replicaCount: 2
      pdb:
        create: true
        minAvailable: 1
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
    auth:
      database: "koko"
      username: "kokouser"
    metrics:
      enabled: true
      resources:
        requests:
          cpu: 30m
          memory: 50Mi
      serviceMonitor:
        enabled: true
        labels:
          release: 'prometheus-operator'
      extraArgs:
        primary:
          - '--collect.info_schema.processlist'
        secondary:
          - '--collect.info_schema.processlist'
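
Once Flux reconciles this, the scale-out itself can be confirmed with something like the following sketch (the statefulset name assumes the chart's usual <release>-secondary naming, which matches the pod names in the logs further down):

# Sketch: verify the secondary statefulset picked up the new replica count.
kubectl get statefulset lala-mysql-secondary -n lala
kubectl get pods -n lala -l app.kubernetes.io/instance=lala-mysql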

What is the expected behavior?

  1. The database koko and the user kokouser are created on the new replica
  2. The new replica starts replicating from the primary DB (verifiable as sketched below)
  3. Both containers in the pod become ready
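
Item 2 can be checked on the new replica with something like the following sketch (pod and container names are assumptions based on the chart's naming; SHOW REPLICA STATUS is the MySQL 8.0.22+ spelling of SHOW SLAVE STATUS):

# Sketch: on a healthy new secondary, both replication threads should be
# running with an empty Last_SQL_Error.
kubectl exec -n lala lala-mysql-secondary-1 -c mysql -- bash -c \
  'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SHOW REPLICA STATUS\G"'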

What do you see instead?

  1. The exporter container becomes ready, the mysql container does not
  2. The logs show:
2024-10-04T10:57:57.983050Z 5 [System] [MY-010562] [Repl] Slave I/O thread for channel '': connected to master 'replicator@lala-mysql-primary:3306',replication started in log 'FIRST' at position 4
2024-10-04T10:57:57.983129Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /tmp/mysqlx.sock
2024-10-04T10:57:57.983214Z 0 [System] [MY-010931] [Server] /opt/bitnami/mysql/bin/mysqld: ready for connections. Version: '8.0.32'  socket: '/opt/bitnami/mysql/tmp/mysql.sock'  port: 3306  Source distribution.
2024-10-04T10:57:58.185967Z 7 [ERROR] [MY-010584] [Repl] Slave SQL for channel '': Worker 1 failed executing transaction 'ANONYMOUS' at master log mysql-bin.000117, end_log_pos 1128; Error executing row event: 'Unknown database 'koko'', Error_code: MY-001049
2024-10-04T10:57:58.186219Z 6 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000117' position 157
  3. The LivenessProbe restarts the container after a while, eventually reaching CrashLoopBackOff
  4. From the container we can see all the required env variables (root / rootpass, koko / kokopass, replicator / replicator pass, etc.)
  5. Attempts to use mysql to gather information about the state fail (regardless of user) with:
I have no name!@lala-mysql-secondary-1:/$ mysql -u root -p
Enter password: 
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
  6. In the DB data dir (/bitnami/mysql/data/) we can see shipped binlogs but not the koko db directory
  7. From the primary DB we can see the two secondaries:
mysql> SHOW SLAVE HOSTS;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID                           |
+-----------+------+------+-----------+--------------------------------------+
|       489 |      | 3306 |       314 | 047b5538-0b40-11ee-85d2-22592ca943bd |
|       612 |      | 3306 |       314 | 1c90733a-817e-11ef-9001-3e1bd10df9cd |
+-----------+------+------+-----------+--------------------------------------+
  8. The primary DB and the first secondary replica continue to work as expected (a diagnostics sketch for the failing replica follows this list)
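
For triage, a minimal sketch of how the failing replica's state could be collected (pod and container names are assumptions based on the chart's naming; the MYSQL_ prefix is an assumption about the bitnami image's env variable naming, consistent with item 4):

# Last lines of the previously crashed mysql container:
kubectl logs -n lala lala-mysql-secondary-1 -c mysql --previous | tail -n 50

# Confirm the koko database directory really is absent from the data dir:
kubectl exec -n lala lala-mysql-secondary-1 -c mysql -- ls /bitnami/mysql/data/

# Cross-check the credentials the container was started with:
kubectl exec -n lala lala-mysql-secondary-1 -c mysql -- bash -c 'env | grep MYSQL_'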