try apply t.Parallel() #723

Merged · 8 commits · Aug 12, 2023
9 changes: 7 additions & 2 deletions ChangeLog.md
@@ -6,11 +6,16 @@ IMPROVEMENTS
- Backup/restore RBAC-related objects from ZooKeeper via direct connection to zookeeper/keeper, fix [604](https://github.com/Altinity/clickhouse-backup/issues/604)
- Add `SHARDED_OPERATION_MODE` option to make backup creation easier for sharded clusters; available values: `none` (no sharding), `table` (table granularity), `database` (database granularity), `first-replica` (on the lexicographically sorted first active replica), thanks @mskwon, fix [639](https://github.com/Altinity/clickhouse-backup/issues/639), fix [648](https://github.com/Altinity/clickhouse-backup/pull/648)
- Add support for `compression_format: none` for uploading and downloading backups created with `--rbac` / `--rbac-only` or `--configs` / `--configs-only` options, fix [713](https://github.com/Altinity/clickhouse-backup/issues/713)
- Add support for s3 `GLACIER` storage class; when GET returns an error, restore requires 5 minutes per key and can be slow. Use `GLACIER_IR`, which looks more robust, fix [614](https://github.com/Altinity/clickhouse-backup/issues/614)
- Try to make `./tests/integration/` tests parallel, fix [721](https://github.com/Altinity/clickhouse-backup/issues/721)

BUG FIXES
- fix possible `create` backup failures during UNFREEZE of non-existent tables, affects versions 2.2.7+, fix [704](https://github.com/Altinity/clickhouse-backup/issues/704)
- fix too-strict `system.parts_columns` check during backup create; exclude `Enum`, `Tuple (JSON)` and `Nullable(Type)` vs `Type` corner cases, fix [685](https://github.com/Altinity/clickhouse-backup/issues/685), fix [699](https://github.com/Altinity/clickhouse-backup/issues/699)
- fix `--rbac` behavior when `/var/lib/clickhouse/access` does not exist
- restore functions via `CREATE OR REPLACE`
- fix `skip_databases` behavior for corner case `--tables="*pattern.*"`
- fix `skip_database_engines` behavior

# v2.3.2
BUG FIXES
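The "make integration tests parallel" entry above is the heart of this PR: Go's `t.Parallel()`. A minimal sketch of the pattern, with illustrative test and backend names that are not taken from this PR:

```go
package integration_test

import "testing"

// A subtest that calls t.Parallel() is paused, then resumed concurrently
// with every other parallel subtest once the parent returns, so each
// subtest must own its resources (unique backup names, ports, paths)
// instead of sharing mutable state.
func TestBackupScenarios(t *testing.T) {
	scenarios := []string{"s3", "gcs", "azblob"} // illustrative backends
	for _, name := range scenarios {
		name := name // capture the loop variable (pre-Go 1.22 semantics)
		t.Run(name, func(t *testing.T) {
			t.Parallel() // mark this subtest as safe to run concurrently
			_ = name     // ... exercise backup/restore against `name` ...
		})
	}
}
```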
2 changes: 1 addition & 1 deletion Dockerfile
@@ -6,7 +6,7 @@ FROM ${CLICKHOUSE_IMAGE}:${CLICKHOUSE_VERSION} AS builder-base
USER root
# TODO remove ugly workaround for musl, https://www.perplexity.ai/search/2ead4c04-060a-4d78-a75f-f26835238438
RUN rm -fv /etc/apt/sources.list.d/clickhouse.list && \
find /etc/apt/ -type f -exec sed -i 's/ru.archive.ubuntu.com/archive.ubuntu.com/g' {} + && \
find /etc/apt/ -type f -name '*.list' -exec sed -i 's/ru.archive.ubuntu.com/archive.ubuntu.com/g' {} + && \
( apt-get update || true ) && \
apt-get install -y --no-install-recommends gnupg ca-certificates wget && apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 52B59B1571A79DBC054901C0F6BC817356A3D45E && \
DISTRIB_CODENAME=$(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d "=" -f 2) && \
13 changes: 10 additions & 3 deletions pkg/backup/create.go
@@ -450,11 +450,14 @@ func (b *Backuper) createBackupRBAC(ctx context.Context, backupPath string, disk
if err != nil {
return 0, err
}
var fInfo os.FileInfo
if fInfo, err = os.Stat(accessPath); err != nil && !os.IsNotExist(err) {
accessPathInfo, err := os.Stat(accessPath)
if err != nil && !os.IsNotExist(err) {
return 0, err
}
if fInfo.IsDir() {
if err == nil && !accessPathInfo.IsDir() {
return 0, fmt.Errorf("%s is not directory", accessPath)
}
if err == nil {
log.Debugf("copy %s -> %s", accessPath, rbacBackup)
copyErr := recursiveCopy.Copy(accessPath, rbacBackup, recursiveCopy.Options{
Skip: func(srcinfo os.FileInfo, src, dest string) (bool, error) {
@@ -465,6 +468,10 @@
if copyErr != nil {
return 0, copyErr
}
} else {
if err = os.MkdirAll(rbacBackup, 0755); err != nil {
return 0, err
}
}
replicatedRBACDataSize, err := b.createBackupRBACReplicated(ctx, rbacBackup)
if err != nil {
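The rewritten `createBackupRBAC` distinguishes three `os.Stat` outcomes, where the old code dereferenced `fInfo` even when the path was absent. A condensed sketch of the new control flow; the helper name, signature, and paths are mine, not the project's:

```go
package main

import (
	"fmt"
	"os"
)

// ensureAccessDir mirrors the three-way os.Stat handling in the diff above:
// a real stat error aborts, an existing non-directory path is rejected, and
// a missing path skips the copy but still creates the backup directory.
func ensureAccessDir(accessPath, rbacBackup string) error {
	info, err := os.Stat(accessPath)
	if err != nil && !os.IsNotExist(err) {
		return err // unexpected failure, e.g. permission denied
	}
	if err == nil && !info.IsDir() {
		return fmt.Errorf("%s is not a directory", accessPath)
	}
	if err == nil {
		return nil // exists and is a directory: caller copies it
	}
	return os.MkdirAll(rbacBackup, 0755) // absent: create target anyway
}

func main() {
	if err := ensureAccessDir("/var/lib/clickhouse/access", "/tmp/backup/access"); err != nil {
		fmt.Println("error:", err)
	}
}
```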
15 changes: 13 additions & 2 deletions pkg/backup/table_pattern.go
@@ -126,7 +126,7 @@ func (b *Backuper) getTableListByPatternLocal(ctx context.Context, metadataPath
return nil, nil, err
}
result.Sort(dropTable)
for i := 1; i < len(result); i++ {
for i := 0; i < len(result); i++ {
if b.shouldSkipByTableEngine(result[i]) {
t := result[i]
delete(resultPartitionNames, metadata.TableTitle{Database: t.Database, Table: t.Table})
@@ -138,9 +138,20 @@

func (b *Backuper) shouldSkipByTableEngine(t metadata.TableMetadata) bool {
for _, engine := range b.cfg.ClickHouse.SkipTableEngines {
if strings.Contains(strings.ToLower(t.Query), fmt.Sprintf("engine=%s(", engine)) {
if engine == "MaterializedView" && (strings.HasPrefix(t.Query, "ATTACH MATERIALIZED VIEW") || strings.HasPrefix(t.Query, "CREATE MATERIALIZED VIEW")) {
b.log.Warnf("shouldSkipByTableEngine engine=%s found in : %s", engine, t.Query)
return true
}
if engine == "View" && strings.HasPrefix(t.Query, "CREATE VIEW") {
b.log.Warnf("shouldSkipByTableEngine engine=%s found in : %s", engine, t.Query)
return true
}
if shouldSkip, err := regexp.MatchString(fmt.Sprintf("(?mi)ENGINE\\s*=\\s*%s\\(", engine), t.Query); err == nil && shouldSkip {
b.log.Warnf("shouldSkipByTableEngine engine=%s found in : %s", engine, t.Query)
return true
} else if err != nil {
b.log.Warnf("shouldSkipByTableEngine engine=%s return error: %v", engine, err)
}
}
return false
}
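The old substring check required the literal spelling `engine=Name(` inside a lowercased query, so it missed the `ENGINE = Name(...)` form that queries actually contain; the new `(?mi)ENGINE\s*=\s*%s\(` regex tolerates case and whitespace. A quick illustration of the difference, with a made-up sample query:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// ClickHouse prints "ENGINE = Name(...)" with spaces, which the old
	// lowercase-substring check never matched.
	query := "CREATE TABLE t (id UInt64) ENGINE = MySQL('host:3306', 'db', 't', 'u', 'p')"
	engine := "MySQL"

	oldCheck := strings.Contains(strings.ToLower(query), fmt.Sprintf("engine=%s(", engine))
	newCheck, _ := regexp.MatchString(fmt.Sprintf(`(?mi)ENGINE\s*=\s*%s\(`, engine), query)

	fmt.Println(oldCheck, newCheck) // false true
}
```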
14 changes: 9 additions & 5 deletions pkg/clickhouse/clickhouse.go
@@ -436,7 +436,7 @@ func (ch *ClickHouse) prepareGetTablesSQL(tablePattern string, skipDatabases, sk
allTablesSQL += fmt.Sprintf(" AND database NOT IN ('%s')", strings.Join(skipDatabases, "','"))
}
if len(skipTableEngines) > 0 {
allTablesSQL += fmt.Sprintf("AND engine NOT IN ('%s')", strings.Join(skipTableEngines, "','"))
allTablesSQL += fmt.Sprintf(" AND engine NOT IN ('%s')", strings.Join(skipTableEngines, "','"))
}
// try to upload big tables first
if len(isSystemTablesFieldPresent) > 0 && isSystemTablesFieldPresent[0].IsTotalBytesPresent > 0 {
@@ -508,18 +508,19 @@ func (ch *ClickHouse) GetDatabases(ctx context.Context, cfg *config.Config, tabl
case <-ctx.Done():
return nil, ctx.Err()
default:
fileMatchToRE := strings.NewReplacer("*", ".*", "?", ".", "(", "\\(", ")", "\\)", "[", "\\[", "]", "\\]", "$", "\\$", "^", "\\^")
if len(bypassDatabases) > 0 {
allDatabasesSQL := fmt.Sprintf(
"SELECT name, engine FROM system.databases WHERE name NOT IN ('%s') AND name IN ('%s')",
strings.Join(skipDatabases, "','"), strings.Join(bypassDatabases, "','"),
"SELECT name, engine FROM system.databases WHERE NOT match(name,'^(%s)$') AND match(name,'^(%s)$')",
fileMatchToRE.Replace(strings.Join(skipDatabases, "|")), fileMatchToRE.Replace(strings.Join(bypassDatabases, "|")),
)
if err := ch.StructSelect(&allDatabases, allDatabasesSQL); err != nil {
return nil, err
}
} else {
allDatabasesSQL := fmt.Sprintf(
"SELECT name, engine FROM system.databases WHERE name NOT IN ('%s')",
strings.Join(skipDatabases, "','"),
"SELECT name, engine FROM system.databases WHERE NOT match(name,'^(%s)$')",
fileMatchToRE.Replace(strings.Join(skipDatabases, "|")),
)
if err := ch.StructSelect(&allDatabases, allDatabasesSQL); err != nil {
return nil, err
@@ -1055,6 +1056,9 @@ func (ch *ClickHouse) GetUserDefinedFunctions(ctx context.Context) ([]Function,
if err := ch.SelectContext(ctx, &allFunctions, allFunctionsSQL); err != nil {
return nil, err
}
for i := range allFunctions {
allFunctions[i].CreateQuery = strings.Replace(allFunctions[i].CreateQuery, "CREATE FUNCTION", "CREATE OR REPLACE FUNCTION", 1)
}
return allFunctions, nil
}

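The `GetDatabases` change swaps `NOT IN (...)` for ClickHouse's `match()`, so `skip_databases` entries can be shell-style patterns; the `fileMatchToRE` replacer does the glob-to-regex translation. A sketch of what the generated SQL looks like, with illustrative database names:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Same replacer as in the diff above: shell-style globs become a regex
	// usable inside ClickHouse's match(); note '.' stays unescaped, so it
	// still means "any character".
	fileMatchToRE := strings.NewReplacer(
		"*", ".*", "?", ".",
		"(", `\(`, ")", `\)`,
		"[", `\[`, "]", `\]`,
		"$", `\$`, "^", `\^`,
	)
	skipDatabases := []string{"system", "INFORMATION_SCHEMA", "*pattern.*"} // illustrative
	re := fileMatchToRE.Replace(strings.Join(skipDatabases, "|"))
	fmt.Printf("SELECT name, engine FROM system.databases WHERE NOT match(name,'^(%s)$')\n", re)
	// -> ... NOT match(name,'^(system|INFORMATION_SCHEMA|.*pattern..*)$')
}
```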
2 changes: 1 addition & 1 deletion test/integration/config-custom-kopia.yml
@@ -15,7 +15,7 @@ clickhouse:
username: backup
password: meow=& 123?*%# МЯУ
sync_replicated_tables: true
timeout: 2s
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
custom:
# all `kopia` uploads are incremental
2 changes: 1 addition & 1 deletion test/integration/config-custom-restic.yml
@@ -15,7 +15,7 @@ clickhouse:
username: backup
password: meow=& 123?*%# МЯУ
sync_replicated_tables: true
timeout: 2s
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
custom:
upload_command: /custom/restic/upload.sh {{ .backupName }} {{ .diffFromRemote }}
2 changes: 1 addition & 1 deletion test/integration/config-custom-rsync.yml
@@ -15,7 +15,7 @@ clickhouse:
username: backup
password: meow=& 123?*%# МЯУ
sync_replicated_tables: true
timeout: 2s
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
custom:
upload_command: /custom/rsync/upload.sh {{ .backupName }} {{ .diffFromRemote }}
2 changes: 1 addition & 1 deletion test/integration/config-s3-fips.yml
@@ -36,7 +36,7 @@ s3:
allow_multipart_download: true
concurrency: 3
api:
listen: :7171
listen: :7172
create_integration_tables: true
integration_tables_host: "localhost"
allow_parallel: false
2 changes: 1 addition & 1 deletion test/integration/config-s3.yml
@@ -20,7 +20,7 @@ clickhouse:
secure: true
skip_verify: true
sync_replicated_tables: true
timeout: 2s
timeout: 5s
restart_command: "sql:SYSTEM RELOAD USERS; sql:SYSTEM RELOAD CONFIG; sql:SYSTEM SHUTDOWN"
backup_mutations: true
s3:
15 changes: 15 additions & 0 deletions test/integration/docker-compose.yml
@@ -183,6 +183,21 @@ services:
- ${CLICKHOUSE_BACKUP_BIN:-../../clickhouse-backup/clickhouse-backup-race}:/usr/bin/clickhouse-backup
- ${CLICKHOUSE_BACKUP_BIN_FIPS:-../../clickhouse-backup/clickhouse-backup-race-fips}:/usr/bin/clickhouse-backup-fips
- ./credentials.json:/etc/clickhouse-backup/credentials.json
- ./config-azblob.yml:/etc/clickhouse-backup/config-azblob.yml
- ./config-azblob-embedded.yml:/etc/clickhouse-backup/config-azblob-embedded.yml
- ./config-custom-kopia.yml:/etc/clickhouse-backup/config-custom-kopia.yml
- ./config-custom-restic.yml:/etc/clickhouse-backup/config-custom-restic.yml
- ./config-custom-rsync.yml:/etc/clickhouse-backup/config-custom-rsync.yml
- ./config-database-mapping.yml:/etc/clickhouse-backup/config-database-mapping.yml
- ./config-ftp.yaml:/etc/clickhouse-backup/config-ftp.yaml
- ./config-gcs.yml:/etc/clickhouse-backup/config-gcs.yml
- ./config-s3.yml:/etc/clickhouse-backup/config-s3.yml
- ./config-s3-embedded.yml:/etc/clickhouse-backup/config-s3-embedded.yml
- ./config-s3-fips.yml:/etc/clickhouse-backup/config-s3-fips.yml.template
- ./config-s3-nodelete.yml:/etc/clickhouse-backup/config-s3-nodelete.yml
- ./config-s3-plain-embedded.yml:/etc/clickhouse-backup/config-s3-plain-embedded.yml
- ./config-sftp-auth-key.yaml:/etc/clickhouse-backup/config-sftp-auth-key.yaml
- ./config-sftp-auth-password.yaml:/etc/clickhouse-backup/config-sftp-auth-password.yaml
- ./_coverage_/:/tmp/_coverage_/
# for local debug
- ./install_delve.sh:/tmp/install_delve.sh
15 changes: 15 additions & 0 deletions test/integration/docker-compose_advanced.yml
@@ -230,6 +230,21 @@ services:
- ${CLICKHOUSE_BACKUP_BIN:-../../clickhouse-backup/clickhouse-backup-race}:/usr/bin/clickhouse-backup
- ${CLICKHOUSE_BACKUP_BIN_FIPS:-../../clickhouse-backup/clickhouse-backup-race-fips}:/usr/bin/clickhouse-backup-fips
- ./credentials.json:/etc/clickhouse-backup/credentials.json
- ./config-azblob.yml:/etc/clickhouse-backup/config-azblob.yml
- ./config-azblob-embedded.yml:/etc/clickhouse-backup/config-azblob-embedded.yml
- ./config-custom-kopia.yml:/etc/clickhouse-backup/config-custom-kopia.yml
- ./config-custom-restic.yml:/etc/clickhouse-backup/config-custom-restic.yml
- ./config-custom-rsync.yml:/etc/clickhouse-backup/config-custom-rsync.yml
- ./config-database-mapping.yml:/etc/clickhouse-backup/config-database-mapping.yml
- ./config-ftp.yaml:/etc/clickhouse-backup/config-ftp.yaml
- ./config-gcs.yml:/etc/clickhouse-backup/config-gcs.yml
- ./config-s3.yml:/etc/clickhouse-backup/config-s3.yml
- ./config-s3-embedded.yml:/etc/clickhouse-backup/config-s3-embedded.yml
- ./config-s3-fips.yml:/etc/clickhouse-backup/config-s3-fips.yml.template
- ./config-s3-nodelete.yml:/etc/clickhouse-backup/config-s3-nodelete.yml
- ./config-s3-plain-embedded.yml:/etc/clickhouse-backup/config-s3-plain-embedded.yml
- ./config-sftp-auth-key.yaml:/etc/clickhouse-backup/config-sftp-auth-key.yaml
- ./config-sftp-auth-password.yaml:/etc/clickhouse-backup/config-sftp-auth-password.yaml
- ./_coverage_/:/tmp/_coverage_/
# for local debug
- ./install_delve.sh:/tmp/install_delve.sh
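Mounting every config file into the container lets each test point at its own file instead of the suite mutating one shared `/etc/clickhouse-backup/config.yml` in place, which would race once tests run in parallel. A hypothetical helper along those lines; `CLICKHOUSE_BACKUP_CONFIG` is clickhouse-backup's config-path environment variable, while the helper itself and the `clickhouse` container name are illustrative:

```go
package integration_test

import (
	"os/exec"
	"testing"
)

// runBackupCmd runs clickhouse-backup inside the container with a
// per-scenario config, so parallel tests never share mutable config state.
func runBackupCmd(t *testing.T, configName string, args ...string) {
	t.Helper()
	dockerArgs := append([]string{
		"exec", "-e", "CLICKHOUSE_BACKUP_CONFIG=/etc/clickhouse-backup/" + configName,
		"clickhouse", "clickhouse-backup",
	}, args...)
	if out, err := exec.Command("docker", dockerArgs...).CombinedOutput(); err != nil {
		t.Fatalf("clickhouse-backup %v failed: %v\n%s", args, err, out)
	}
}
```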