It's not allowed to switch the replication factor for several nodes simultaneously #8813

Open
xtrey opened this issue Sep 23, 2024 · 1 comment
xtrey commented Sep 23, 2024

Packages

Scylla version: 6.2.0~rc1-20240919.a71d4bc49cc8 with build-id 2a79c005ca22208ec14a7708a4f423e96b5d861f

Kernel Version: 6.8.0-1016-aws

Issue description

Scylla does not allow changing the replication factor of a DC by more than 1 at a time. This surfaced in several tests during the disrupt_mgmt_restore nemesis. The test should be made aware of this limitation.
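Given this limit, the change has to be applied in single-step increments per ALTER KEYSPACE statement. A minimal sketch of what the test helper could generate instead of one big jump (the function name and shape are illustrative, not SCT's actual API; it assumes the target dict lists every DC being changed):

```python
def rf_change_steps(keyspace, current_rf, target_rf):
    """Yield ALTER KEYSPACE statements that move each DC's replication
    factor toward its target by at most 1 per statement, as Scylla requires.

    current_rf / target_rf: dicts mapping DC name -> replication factor.
    Hypothetical helper for illustration only.
    """
    rf = dict(current_rf)
    while rf != target_rf:
        # Step every DC that is not yet at its target by +/-1.
        for dc, target in target_rf.items():
            cur = rf.get(dc, 0)
            if cur < target:
                rf[dc] = cur + 1
            elif cur > target:
                rf[dc] = cur - 1
        opts = ", ".join(f"'{dc}': {n}" for dc, n in sorted(rf.items()))
        yield (f"ALTER KEYSPACE {keyspace} WITH replication = "
               f"{{'class': 'NetworkTopologyStrategy', {opts}}}")
```

For example, going from RF=1 to RF=3 in one DC yields two statements (RF 2, then RF 3), each of which Scylla accepts.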

Installation details

Cluster size: 6 nodes (i4i.4xlarge)

Scylla Nodes used in this run:

  • longevity-100gb-4h-6-2-db-node-67327e5d-9 (54.196.185.211 | 10.12.10.83) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-8 (54.197.33.229 | 10.12.9.252) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-7 (54.174.231.30 | 10.12.8.220) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-6 (34.227.229.226 | 10.12.10.185) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-5 (3.92.138.201 | 10.12.8.12) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-4 (54.166.139.247 | 10.12.9.115) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-3 (3.80.81.187 | 10.12.9.224) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-2 (18.234.238.233 | 10.12.9.91) (shards: 14)
  • longevity-100gb-4h-6-2-db-node-67327e5d-1 (54.225.43.118 | 10.12.10.219) (shards: 14)

OS / Image: ami-059b505168db98ed8 (aws: undefined_region)

Test: longevity-100gb-4h-test
Test id: 67327e5d-0f18-40fa-b295-1ca132662408
Test name: scylla-6.2/longevity/longevity-100gb-4h-test
Test method: longevity_test.LongevityTest.test_custom_time
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 67327e5d-0f18-40fa-b295-1ca132662408
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 67327e5d-0f18-40fa-b295-1ca132662408

Logs:

Jenkins job URL
Argus

soyacz (Contributor) commented Sep 23, 2024

Next time, please also post the related errors, for better/faster understanding of the issue, e.g.:

2024-09-22 18:42:42.380: (DisruptionEvent Severity.ERROR) period_type=end event_id=48db5054-5a2a-4a85-a202-6360187d8a0a duration=2m35s: nemesis_name=MgmtRestore target_node=Node longevity-100gb-4h-6-2-db-node-67327e5d-7 [54.174.231.30 | 10.12.8.220] errors=Error from server: code=2200 [Invalid query] message="Cannot modify replication factor of any DC by more than 1 at a time."
Traceback (most recent call last):
  File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 5222, in wrapper
    result = method(*args[1:], **kwargs)
  File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 2972, in disrupt_mgmt_restore
    self.tester.set_ks_strategy_to_network_and_rf_according_to_cluster(
  File "/home/ubuntu/scylla-cluster-tests/sdcm/tester.py", line 1107, in set_ks_strategy_to_network_and_rf_according_to_cluster
    NetworkTopologyReplicationStrategy(**datacenters).apply(node, keyspace)
  File "/home/ubuntu/scylla-cluster-tests/sdcm/utils/replication_strategy_utils.py", line 47, in apply
    session.execute(cql)
  File "/home/ubuntu/scylla-cluster-tests/sdcm/utils/common.py", line 1313, in execute_verbose
    return execute_orig(*args, **kwargs)
  File "cassandra/cluster.py", line 2729, in cassandra.cluster.Session.execute
  File "cassandra/cluster.py", line 5120, in cassandra.cluster.ResponseFuture.result
cassandra.InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot modify replication factor of any DC by more than 1 at a time."
