{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":6443435,"defaultBranch":"main","name":"nats-server","ownerLogin":"nats-io","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2012-10-29T16:12:24.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/10203055?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1727471266.0","currentOid":""},"activityList":{"items":[{"before":"c2f6c34f4864daec50f591bef20d70e117ee0683","after":"a65f6b19e5a5d6d4a5c4f654405a6e04fd712536","ref":"refs/heads/fix_ipqueue","pushedAt":"2024-09-27T21:29:05.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"kozlovic","name":"Ivan Kozlovic","path":"/kozlovic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13736691?s=80&v=4"},"commit":{"message":"(2.11) Internal: Small udpates to ipQueue\n\nSmall updates to ipQueue:\n- Document that caller must not reuse the slice after call to recycle().\n- Reset size to 0 on pop() or last popOne() without need to call q.caclc().\n- Added missing recycle() calls in some places.\n- Added benchmarks to check perf impact on future changes.\n\nSigned-off-by: Ivan Kozlovic ","shortMessageHtmlLink":"(2.11) Internal: Small udpates to ipQueue"}},{"before":null,"after":"c2f6c34f4864daec50f591bef20d70e117ee0683","ref":"refs/heads/fix_ipqueue","pushedAt":"2024-09-27T21:07:46.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"kozlovic","name":"Ivan Kozlovic","path":"/kozlovic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13736691?s=80&v=4"},"commit":{"message":"(2.11) Internal: Udpates to ipQueue\n\nWe use a `sync.Pool` and store `*[]T` due to SA6002. Change `q.elts`\nto be `*[]T` instead of `[]T`, make pop() return `*[]T` and `recycle`\nwill set the caller's pointer value to `nil` to prevent misused after\nrecycle.\n\nAdded benchmarks to check perf impact on future changes.\n\nSigned-off-by: Ivan Kozlovic ","shortMessageHtmlLink":"(2.11) Internal: Udpates to ipQueue"}},{"before":null,"after":"34e2b0e230248fb76e6ef725ed825f6f0ac8ddd5","ref":"refs/heads/raft/desync-after-catchup-too-many-retries","pushedAt":"2024-09-27T18:42:08.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Fix desync after errCatchupTooManyRetries\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Fix desync after errCatchupTooManyRetries"}},{"before":"781c561016364768267c814327694c4a25e99330","after":"8d27731c459d8b81247202f43607b3215545444c","ref":"refs/heads/maurice/raft-wal-reset","pushedAt":"2024-09-27T12:59:11.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Revert previous step & don't resetWAL when in sendSnapshotToFollower if NeedSnapshot\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Revert previous step & don't resetWAL when in sendSnapshotToFollower …"}},{"before":"a0589e3ed40b1a94dd72b2b25e3f02e9f893babb","after":"781c561016364768267c814327694c4a25e99330","ref":"refs/heads/maurice/raft-wal-reset","pushedAt":"2024-09-27T09:21:33.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Don't reset WAL on outdated ae.pterm/ae.pindex\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Don't reset WAL on outdated ae.pterm/ae.pindex"}},{"before":"aa75eaf982e640a8e0e344cdd65f6ba80b5bc972","after":"a0589e3ed40b1a94dd72b2b25e3f02e9f893babb","ref":"refs/heads/maurice/raft-wal-reset","pushedAt":"2024-09-27T07:41:59.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Revert previous step & assert.Unreachable on reset n.pindex\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Revert previous step & assert.Unreachable on reset n.pindex"}},{"before":null,"after":"aa75eaf982e640a8e0e344cdd65f6ba80b5bc972","ref":"refs/heads/maurice/raft-wal-reset","pushedAt":"2024-09-26T20:48:36.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Add to debug logging & introduce WAL truncate instead of reset\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Add to debug logging & introduce WAL truncate instead of reset"}},{"before":"41e6029b34151d89d935fc4c0db5cb478e8c90bb","after":"ed6aadbca254b8a5425887aac39deb17b0d87cb0","ref":"refs/heads/ipqueues_fixes_and_improvements","pushedAt":"2024-09-26T18:48:48.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"kozlovic","name":"Ivan Kozlovic","path":"/kozlovic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13736691?s=80&v=4"},"commit":{"message":"Small updates\n\nSigned-off-by: Ivan Kozlovic ","shortMessageHtmlLink":"Small updates"}},{"before":null,"after":"a260dad33252360c856beb1cf6f4d6c67c91f1f2","ref":"refs/heads/cluster-traffic-canary","pushedAt":"2024-09-26T18:17:50.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"aricart","name":"Alberto Ricart","path":"/aricart","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1032976?s=80&v=4"},"commit":{"message":"canary for cluster-traffic-as-string jwt changes","shortMessageHtmlLink":"canary for cluster-traffic-as-string jwt changes"}},{"before":null,"after":"149a83f27d45b1cc7a817c5fe2b6e891fff06d61","ref":"refs/heads/retract-tags","pushedAt":"2024-09-26T18:02:05.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"aricart","name":"Alberto Ricart","path":"/aricart","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1032976?s=80&v=4"},"commit":{"message":"canary for retract-tags jwt changes","shortMessageHtmlLink":"canary for retract-tags jwt changes"}},{"before":"cbbef5a8200456f6395ce61f5da9ea92d450244a","after":"d3a8868e9ddadb2556cddcd99f02d6ed1df11d23","ref":"refs/heads/release/v2.10.21","pushedAt":"2024-09-26T14:33:58.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"neilalexander","name":"Neil","path":"/neilalexander","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/310854?s=80&v=4"},"commit":{"message":"Release v2.10.21\n\nSigned-off-by: Neil Twigg ","shortMessageHtmlLink":"Release v2.10.21"}},{"before":"b39694de7ef9e0d60cdf84060b9598424ffaef34","after":"cbbef5a8200456f6395ce61f5da9ea92d450244a","ref":"refs/heads/release/v2.10.21","pushedAt":"2024-09-26T13:17:49.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"neilalexander","name":"Neil","path":"/neilalexander","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/310854?s=80&v=4"},"commit":{"message":"Reuse pre-existing sys account reference\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Reuse pre-existing sys account reference"}},{"before":"4bd700438edc059bf99637978e1857642ee3981d","after":"5447e1db3250a7a7790abf43d92826ddf138bd2d","ref":"refs/heads/maurice/warn-if-tmp-storage","pushedAt":"2024-09-26T13:08:05.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Warn if using temp storage for JetStream\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Warn if using temp storage for JetStream"}},{"before":null,"after":"4bd700438edc059bf99637978e1857642ee3981d","ref":"refs/heads/maurice/warn-if-tmp-storage","pushedAt":"2024-09-26T13:04:06.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Warn if using temp storage for JetStream\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Warn if using temp storage for JetStream"}},{"before":"70fc18ddefc38f2b6e37959e0099888e5931c92b","after":null,"ref":"refs/heads/panic-on-nil-sys-account","pushedAt":"2024-09-26T12:53:00.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"}},{"before":"74ef475025ce4baaaf8724412ae08b20e54be8d2","after":"ee0f7766f1d52b40f28861b60c06fe37de9975e2","ref":"refs/heads/main","pushedAt":"2024-09-26T12:52:58.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"},"commit":{"message":"Reuse pre-existing sys account reference (#5934)\n\nFixes the following panic:\r\n```\r\n[inf] panic: runtime error: invalid memory address or nil pointer dereference\r\n[inf] [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xc0a90d]\r\n[inf]\r\n[inf] goroutine 1 [running]:\r\n[inf] github.com/nats-io/nats-server/v2/server.(*jetStream).setupMetaGroup(0xc0002c2200)\r\n[inf] server/jetstream_cluster.go:877 +0x12ed\r\n[inf] github.com/nats-io/nats-server/v2/server.(*Server).enableJetStreamClustering(0xc00023e008)\r\n[inf] server/jetstream_cluster.go:759 +0x1af\r\n[inf] github.com/nats-io/nats-server/v2/server.(*Server).enableJetStream(0xc00023e008, {0x40000000, 0x280000000, {0xc00002a3f0, 0x15}, 0x1bf08eb000, 0x0, {0x0, 0x0}, 0x1, ...})\r\n[inf] server/jetstream.go:508 +0x11fc\r\n[inf] github.com/nats-io/nats-server/v2/server.(*Server).EnableJetStream(0xc00023e008, 0xc000033d18)\r\n[inf] server/jetstream.go:230 +0x945\r\n[inf] github.com/nats-io/nats-server/v2/server.(*Server).Start(0xc00023e008)\r\n[inf] server/server.go:2347 +0x2005\r\n[inf] github.com/nats-io/nats-server/v2/server.Run(0xc00023e008)\r\n[inf] server/service.go:22 +0x32\r\n[inf] main.main()\r\n[inf] main.go:127 +0x62c\r\n```\r\n\r\n\r\nThe panic would occur if we were starting up and were in\r\n`setupMetaGroup`, but while we're there we get shutdown.\r\n```\r\n[inf] [1] 2024/09/24 23:18:56.478586 [INF] Starting JetStream cluster\r\n[inf] [1] 2024/09/24 23:18:56.478739 [DBG] JetStream cluster checking for stable cluster name and peers\r\n[inf] [1] 2024/09/24 23:18:56.480463 [INF] Creating JetStream metadata controller\r\n[inf] [1] 2024/09/24 23:18:56.501280 [INF] JetStream cluster recovering state\r\n[inf] [1] 2024/09/24 23:18:56.554002 [DBG] Trapped \"terminated\" signal\r\n[inf] [1] 2024/09/24 23:18:56.565929 [INF] Initiating Shutdown...\r\n[inf] [1] 2024/09/24 23:18:56.568012 [INF] Initiating JetStream Shutdown...\r\n[inf] [1] 2024/09/24 23:18:56.585020 [DBG] JETSTREAM - JetStream connection closed: Client Closed\r\n[inf] [1] 2024/09/24 23:18:56.585908 [DBG] JETSTREAM - JetStream connection closed: Client Closed\r\n[inf] [1] 2024/09/24 23:18:56.595023 [DBG] JETSTREAM - JetStream connection closed: Client Closed\r\n[inf] [1] 2024/09/24 23:18:56.598251 [DBG] RAFT [yrzKKRBu - _meta_] Started\r\n[inf] [1] 2024/09/24 23:18:56.601805 [INF] Server is ready\r\n```\r\n\r\nUpon shutting down `s.sys` gets reset:\r\n\r\nhttps://github.com/nats-io/nats-server/blob/74ef475025ce4baaaf8724412ae08b20e54be8d2/server/events.go#L1793\r\n\r\nSo if we would then request the system account here, it could be `nil`:\r\n\r\nhttps://github.com/nats-io/nats-server/blob/74ef475025ce4baaaf8724412ae08b20e54be8d2/server/jetstream_cluster.go#L857\r\n\r\nThen panic/nil dereference happening here:\r\n\r\nhttps://github.com/nats-io/nats-server/blob/74ef475025ce4baaaf8724412ae08b20e54be8d2/server/jetstream_cluster.go#L877\r\n\r\nThis PR proposes to just reuse the `sysAcc` variable that was already\r\navailable:\r\n\r\nhttps://github.com/nats-io/nats-server/blob/74ef475025ce4baaaf8724412ae08b20e54be8d2/server/jetstream_cluster.go#L782\r\n\r\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Reuse pre-existing sys account reference (#5934)"}},{"before":null,"after":"70fc18ddefc38f2b6e37959e0099888e5931c92b","ref":"refs/heads/panic-on-nil-sys-account","pushedAt":"2024-09-26T08:28:13.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"MauriceVanVeen","name":"Maurice van Veen","path":"/MauriceVanVeen","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/23521895?s=80&v=4"},"commit":{"message":"Reuse pre-existing sys account reference\n\nSigned-off-by: Maurice van Veen ","shortMessageHtmlLink":"Reuse pre-existing sys account reference"}},{"before":"87ee275d9593c294bc887536615b80478f9db59f","after":null,"ref":"refs/heads/fix_5913","pushedAt":"2024-09-26T03:43:47.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"}},{"before":"4c21aa3ac686a7f071c5790ace9e690ce8101627","after":"74ef475025ce4baaaf8724412ae08b20e54be8d2","ref":"refs/heads/main","pushedAt":"2024-09-26T03:43:46.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"},"commit":{"message":"[FIXED] LeafNode: Don't report cluster name in varz/banner when none defined (#5931)\n\nIn situation where a Leaf server has no cluster specified, we internally\r\nuse the server name as the cluster name. We may need it in the protocol\r\nso that the hub can suppress some messages to avoid duplicates.\r\n\r\nWe were however still reporting a cluster name in `/varz` and in the\r\nbanner on startup. This PR fixes both.\r\n\r\nResolves #5913\r\n\r\nSigned-off-by: Ivan Kozlovic ivan@synadia.com","shortMessageHtmlLink":"[FIXED] LeafNode: Don't report cluster name in varz/banner when none …"}},{"before":null,"after":"87ee275d9593c294bc887536615b80478f9db59f","ref":"refs/heads/fix_5913","pushedAt":"2024-09-25T23:47:02.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"kozlovic","name":"Ivan Kozlovic","path":"/kozlovic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13736691?s=80&v=4"},"commit":{"message":"[FIXED] LeafNode: Don't report cluster name in varz/banner when none defined.\n\nIn situation where a Leaf server has no cluster specified, we internally\nuse the server name as the cluster name. We may need it in the protocol\nso that the hub can suppress some messages to avoid duplicates.\n\nWe were however still reporting a cluster name in `/varz` and in the\nbanner on startup. This PR fixes both.\n\nResolves #5913\n\nSigned-off-by: Ivan Kozlovic ","shortMessageHtmlLink":"[FIXED] LeafNode: Don't report cluster name in varz/banner when none …"}},{"before":"4ae75513ff732023862ed2167d2ea07ecabde97f","after":"41e6029b34151d89d935fc4c0db5cb478e8c90bb","ref":"refs/heads/ipqueues_fixes_and_improvements","pushedAt":"2024-09-25T16:57:22.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"kozlovic","name":"Ivan Kozlovic","path":"/kozlovic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13736691?s=80&v=4"},"commit":{"message":"(2.11) Internal: Intra-Process Queue fixes/improvements.\n\nGo says that when using pool we should use pointer to a slice, not\nthe slice itself to save on copy when getting/putting things back.\n\nThat's what I tried to do, but I believe that the gain was defeated\nby the fact that I was storing it as a []T in the queue object, and\nmore importantly, was returning the address of the local variable\nwhen putting back in the pool. The way it was used, worked, but was\ndangerous if queues were to be used differently. For instance with\nimplementation before this change, this loop would fail:\n```\nvar elts []int\nfor i:=0;i<1000;i++{\n q.push(i+1)\n expected += i+1\n elts = q.pop()\n for _, v := range elts {\n sum += v\n }\n q.recycle(&elts)\n\n q.push(i+2)\n expected += i+2\n elts = q.pop()\n\n q.push(i+3)\n expected += i+3\n\n for _, v := range elts {\n sum += v\n }\n q.recycle(&elts)\n\n elts = q.pop()\n for _, v := range elts {\n sum += v\n }\n q.recycle(&elts)\n}\nif sum != expected {\n // ERROR!\n}\n```\nIf we use different variables, such as `elts1 := q.pop()`, etc.. then\nit works. And again, the way it was used before did not cause issues.\n\nIn this PR, I am using []T and no sync.Pool. Instead, we will store up\nto 5 slices in queue's specific \"pool\". In the server, I believe that\nalmost all use-cases are 1P-1C or NP-1C, so even a single pooled slice\nmay be enough, but since this is a generic IPQueue, we use this small\npool.\n\nThe other changes are to use the \"in progress\" count (and now size) when\nchecking for the limits, which we were not. So after a `pop()`, since the\nqueue is empty, the `push()` side would be able to store up to the limits\nwhile the receiver was processing the popped elements.\n\nI have added APIs to indicate progress when processing elements in the\n\"for-loop\" that goes over the `pop()` result. I have modified code that\nuses a queue with limit (only 2 so far) so that they use the new API.\n\nI have added benchmarks so we can evaluate future changes. Aside from\nthe modifications to queue with limits, running the benchmark from\noriginal code to new shows a slight improvement. Of course, updating\nprogress for each element popped() is slower than doing as a bulk, but\nit allows for fine-grained control on the queue limits. And when using\nwith processing of JetStream messages, it is likely that the effect is\nnot relevant.\n\nSigned-off-by: Ivan Kozlovic ","shortMessageHtmlLink":"(2.11) Internal: Intra-Process Queue fixes/improvements."}},{"before":"2625e324117c575f0926d375f54d95a963f99891","after":"4ae75513ff732023862ed2167d2ea07ecabde97f","ref":"refs/heads/ipqueues_fixes_and_improvements","pushedAt":"2024-09-25T16:49:08.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"kozlovic","name":"Ivan Kozlovic","path":"/kozlovic","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/13736691?s=80&v=4"},"commit":{"message":"(2.11) Internal: Intra-Process Queue fixes/improvements.\n\nGo says that when using pool we should use pointer to a slice, not\nthe slice itself to save on copy when getting/putting things back.\n\nThat's what I tried to do, but I believe that the gain was defeated\nby the fact that I was storing it as a []T in the queue object, and\nmore importantly, was returning the address of the local variable\nwhen putting back in the pool. The way it was used, worked, but was\ndangerous if queues were to be used differently. For instance with\nimplementation before this change, this loop would fail:\n```\nvar elts []int\nfor i:=0;i<1000;i++{\n q.push(i+1)\n expected += i+1\n elts = q.pop()\n for _, v := range elts {\n sum += v\n }\n q.recycle(&elts)\n\n q.push(i+2)\n expected += i+2\n elts = q.pop()\n\n q.push(i+3)\n expected += i+3\n\n for _, v := range elts {\n sum += v\n }\n q.recycle(&elts)\n\n elts = q.pop()\n for _, v := range elts {\n sum += v\n }\n q.recycle(&elts)\n}\nif sum != expected {\n // ERROR!\n}\n```\nIf we use different variables, such as `elts1 := q.pop()`, etc.. then\nit works. And again, the way it was used before did not cause issues.\n\nIn this PR, I am using []T and no sync.Pool. Instead, we will store up\nto 5 slices in queue's specific \"pool\". In the server, I believe that\nalmost all use-cases are 1P-1C or NP-1C, so even a single pooled slice\nmay be enough, but since this is a generic IPQueue, we use this small\npool.\n\nThe other changes are to use the \"in progress\" count (and now size) when\nchecking for the limits, which we were not. So after a `pop()`, since the\nqueue is empty, the `push()` side would be able to store up to the limits\nwhile the receiver was processing the popped elements.\n\nI have added APIs to indicate progress when processing elements in the\n\"for-loop\" that goes over the `pop()` result. I have modified code that\nuses a queue with limit (only 2 so far) so that they use the new API.\n\nI have added benchmarks so we can evaluate future changes. Aside from\nthe modifications to queue with limits, running the benchmark from\noriginal code to new shows a slight improvement. Of course, updating\nprogress for each element popped() is slower than doing as a bulk, but\nit allows for fine-grained control on the queue limits. And when using\nwith processing of JetStream messages, it is likely that the effect is\nnot relevant.\n\nSigned-off-by: Ivan Kozlovic ","shortMessageHtmlLink":"(2.11) Internal: Intra-Process Queue fixes/improvements."}},{"before":"098b4f8bd470948f5467e19e6c74cf10c7bacc1b","after":null,"ref":"refs/heads/neil/21021rc4","pushedAt":"2024-09-25T16:48:35.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"neilalexander","name":"Neil","path":"/neilalexander","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/310854?s=80&v=4"}},{"before":"58fba0076e26f0ed4ea1eb7a2c25b28f27ccb99a","after":"b39694de7ef9e0d60cdf84060b9598424ffaef34","ref":"refs/heads/release/v2.10.21","pushedAt":"2024-09-25T16:48:33.000Z","pushType":"pr_merge","commitsCount":4,"pusher":{"login":"neilalexander","name":"Neil","path":"/neilalexander","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/310854?s=80&v=4"},"commit":{"message":"Cherry-picks for 2.10.21-RC.4 (#5928)\n\nIncludes:\r\n\r\n- #5925\r\n- #5926\r\n- #5927\r\n\r\nSigned-off-by: Neil Twigg ","shortMessageHtmlLink":"Cherry-picks for 2.10.21-RC.4 (#5928)"}},{"before":null,"after":"098b4f8bd470948f5467e19e6c74cf10c7bacc1b","ref":"refs/heads/neil/21021rc4","pushedAt":"2024-09-25T16:15:42.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"neilalexander","name":"Neil","path":"/neilalexander","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/310854?s=80&v=4"},"commit":{"message":"When scaling down a stream make sure replica count is correct if adjusted and also make sure we do not have orphan consumers.\n\nWhen we scale down a replicated stream, say R5, if it has consumers that are a lower replica count, say R1, they could be placed on the peers that may go away. We need to make sure we properly assign peers and transfer state as needed.\n\nNote the consumer state transfer expects the state to be stable, so should be paused.\n\nSigned-off-by: Derek Collison ","shortMessageHtmlLink":"When scaling down a stream make sure replica count is correct if adju…"}},{"before":"596b71c072ddf8069f7896809f7a7280e4e83935","after":null,"ref":"refs/heads/scale-down-orphans","pushedAt":"2024-09-25T16:11:12.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"}},{"before":"678eed7e97184fa885ac8dc6441818f0ffa351ee","after":"4c21aa3ac686a7f071c5790ace9e690ce8101627","ref":"refs/heads/main","pushedAt":"2024-09-25T16:11:09.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"},"commit":{"message":"[FIXED] When scaling down a stream make sure consumers are adjusted properly. (#5927)\n\nWhen scaling down a stream make sure replica count is correct if\r\nadjusted and also make sure we do not have orphan consumers.\r\n\r\nWhen we scale down a replicated stream, say R5, if it has consumers that\r\nare a lower replica count, say R1, they could be placed on the peers\r\nthat may go away. We need to make sure we properly assign peers and\r\ntransfer state as needed.\r\n\r\nNote the consumer state transfer expects the state to be stable, so\r\nshould be paused.\r\n\r\nSigned-off-by: Derek Collison ","shortMessageHtmlLink":"[FIXED] When scaling down a stream make sure consumers are adjusted p…"}},{"before":null,"after":"596b71c072ddf8069f7896809f7a7280e4e83935","ref":"refs/heads/scale-down-orphans","pushedAt":"2024-09-25T15:44:04.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"},"commit":{"message":"When scaling down a stream make sure replica count is correct if adjusted and also make sure we do not have orphan consumers.\n\nWhen we scale down a replicated stream, say R5, if it has consumers that are a lower replica count, say R1, they could be placed on the peers that may go away. We need to make sure we properly assign peers and transfer state as needed.\n\nNote the consumer state transfer expects the state to be stable, so should be paused.\n\nSigned-off-by: Derek Collison ","shortMessageHtmlLink":"When scaling down a stream make sure replica count is correct if adju…"}},{"before":"c8ad011bd91ffc6b2d7c17bdeea6a20cfd286f16","after":null,"ref":"refs/heads/neil/pendingstatsz","pushedAt":"2024-09-25T14:50:19.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"}},{"before":"eab19e5e71ca7fa92ff43d863c823132448a5a2d","after":"678eed7e97184fa885ac8dc6441818f0ffa351ee","ref":"refs/heads/main","pushedAt":"2024-09-25T14:50:17.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"derekcollison","name":"Derek Collison","path":"/derekcollison","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/90097?s=80&v=4"},"commit":{"message":"Ensure `pending` sent correctly in regular `statsz` messages (#5926)\n\nSigned-off-by: Neil Twigg ","shortMessageHtmlLink":"Ensure pending sent correctly in regular statsz messages (#5926)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"startCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yN1QyMToyOTowNS4wMDAwMDBazwAAAATDCO9f","endCursor":"Y3Vyc29yOnYyOpK7MjAyNC0wOS0yNVQxNDo1MDoxNy4wMDAwMDBazwAAAATAdfvy"}},"title":"Activity · nats-io/nats-server"}