presence: Fix database purge of activewatchers (clustering, fallback2db=0) #2520

Open

wdoekes wants to merge 1 commit into master from fix-database-purging-of-activewatchers-with-no-fallback2db

Conversation

wdoekes (Contributor) commented May 11, 2021

When clustering sharing_tags were added to presence, they were added to the fallback2db "on" case only.

There are a couple of dimensions with differing behaviours:

                +------------+------------+
                | fallback2- | fallback2- |
                | -db = on   | -db = off  |
  +-clustering:-+------------+------------+
  | - no        | OK         | OK         |
  | - tagless   | PR-2519    | PR-2519    |
  | - active    | OK         | this       |
  +-------------+------------+------------+

The non-OK behaviour above refers to the activewatcher table getting filled up with stale/expired items.

fallback2db on or off:

modparam("presence", "fallback2db", 0)  # or 1=on

The no-clustering case:

```
handle_subscribe();
```

The tagless case:

modparam("presence", "cluster_id", 1)
modparam("clusterer", "my_node_id", 2)

handle_subscribe();

The active case:

modparam("presence", "cluster_id", 1)
modparam("clusterer", "my_node_id", 2)
modparam("clusterer", "sharing_tag", "node2/1=active")

handle_subscribe("0", "node2");

While PR #2519 fixes the tagless case, this PR fixes the fallback2db=0 case by writing the sharing_tag to the database, so the records can be found and cleaned up.
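For clarity, the configuration this PR targets is simply the combination of the snippets above: the "active" clustering case with fallback2db disabled (the bottom-right cell of the matrix). A minimal sketch, using only the illustrative parameter values from this description:

```
# Illustrative combination: clustering with an active sharing tag
# plus fallback2db turned off -- the case this PR addresses.
modparam("presence", "fallback2db", 0)
modparam("presence", "cluster_id", 1)
modparam("clusterer", "my_node_id", 2)
modparam("clusterer", "sharing_tag", "node2/1=active")

# in the route handling SUBSCRIBE requests:
handle_subscribe("0", "node2");
```

As described above, without this fix such a setup stores the activewatcher rows without their sharing_tag, so the periodic cleanup cannot find the expired entries.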

(Sidenote: subscriptions which ended with a timeout or 481 would get cleaned up. This makes sense in all cases: if they have an error before their expiry, it makes sense to purge them from the DB immediately. And if the periodic cleanup had cleaned those records already, it would not be an issue.)

wdoekes (Contributor, Author) commented May 12, 2021

(This one might be for you, @bogdan-iancu 😉. See also PR #2519. Both affect both the 2.4.x and 3.1.x branches.)

bogdan-iancu self-assigned this Jun 4, 2021
stale bot commented Jul 21, 2021

Any updates here? No progress has been made in the last 15 days, marking as stale. Will close this issue if no further updates are made in the next 30 days.

stale bot added the stale label Jul 21, 2021
wdoekes force-pushed the fix-database-purging-of-activewatchers-with-no-fallback2db branch from 5120dac to b01bb18 on July 26, 2021
stale bot removed the stale label Jul 26, 2021
wdoekes (Contributor, Author) commented Jul 26, 2021

Rebased, just in case

stale bot commented Jan 9, 2022

Any updates here? No progress has been made in the last 15 days, marking as stale. Will close this issue if no further updates are made in the next 30 days.

stale bot added the stale label Jan 9, 2022
stale bot commented Apr 17, 2022

Marking as closed due to lack of progress for more than 30 days. If this issue is still relevant, please re-open it with additional details.

stale bot closed this Apr 17, 2022
bogdan-iancu reopened this May 5, 2022
stale bot removed the stale label May 5, 2022