
Fixes #36329: Read UI notification cache as JSON #9792

Merged 1 commit into theforeman:develop on Aug 8, 2023

Conversation

ehelms
Member

@ehelms ehelms commented Aug 4, 2023

The UI notification data is cached as JSON; when retrieved, it must be read back as raw JSON, or else a marshalling error is thrown.
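For illustration, here is a minimal stdlib-only sketch of the mismatch (not Foreman's actual cache code): the entry was written as a raw JSON string, so a non-raw read effectively runs `Marshal.load` on plain JSON text and raises.

```ruby
require 'json'

# Hypothetical illustration of the bug, using only the standard library
# (not Foreman's actual cache implementation): the cached value is a raw
# JSON string, so reading it back without raw: true makes the cache
# attempt Marshal.load on plain JSON text.
payload = { notifications: [{ id: 1, text: 'hello' }] }.to_json

begin
  Marshal.load(payload)          # what a non-raw read effectively does
rescue TypeError => e
  puts "marshalling error: #{e.class}"
end

parsed = JSON.parse(payload)     # what reading with raw: true enables
puts parsed['notifications'].first['text']
```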

@theforeman-bot
Member

Issues: #36329

@ehelms
Member Author

ehelms commented Aug 4, 2023

@ianballou This failure seems unrelated, and is something I would expect to show up in the Katello PR tests?

@ianballou
Contributor

@ehelms is that failing locally? I haven't seen that on Katello tests yet.

@@ -7,7 +7,7 @@ def initialize(user_id = User.current.try(:id))

   # JSON Payload
   def payload
-    result = cache.read(cache_key)
+    result = cache.read(cache_key, raw: true)
Member

Reading https://guides.rubyonrails.org/caching_with_rails.html#low-level-caching makes me wonder whether the following code wouldn't be equivalent:

cache.fetch(cache_key, raw: true, expires_in: cache_expiry) do
  { notifications: notifications }.to_json
end

You don't have the debug info about cache hits and misses, but I wonder how many people care about that. I'd just trust the fetch API to do the right thing.

https://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html#method-i-fetch is the exact API for that.

Member

And just after I post it I think I realize the reason: expiry does a DB call.

Now that I think about it more: replacing this with Action Cable has been on the agenda: https://edgeguides.rubyonrails.org/action_cable_overview.html. It can use websockets to keep the connection open, which means you no longer need to poll, reducing the need to cache. We still do a lot of full page loads, so caching isn't completely gone, but it would help a lot. Probably an interesting area for optimization.

Member Author

Good find. I'm not sure all that debug info is necessarily worth it either, versus leaning into the conventions outlined by the Rails guide itself.

-    if result
-      logger.debug("Cache Hit: notification, reading cache for #{cache_key}")
-      return result
+    cache.fetch(cache_key, raw: true, expires_in: cache_expiry) do
Member

Just to clarify what I said in #9792 (comment): cache_expiry does a DB call. The way it's called here means it's always done, even if there's a cache hit. I don't see a good way to avoid that using the fetch API.

Member Author

We could cache the cache_expiry? I was trying to determine how the expiry is calculated, and it appears not to be a user-defined value but rather defined per notification and hard-coded? Assuming I got that right, we would not need to expire it, as any change would require a server restart, which would rebuild the caches anyway.

Perhaps that is more complex than just keeping the previous read / write and fixing the simple bug.
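The memoization idea could be sketched like this (a hedged, self-contained sketch: `compute_expiry_from_db` is a hypothetical stand-in for the real DB lookup, not a Foreman helper):

```ruby
# Hypothetical sketch of caching cache_expiry on the object: the
# (simulated) DB lookup runs once, and later reads reuse the memo.
class NotificationCacheSketch
  attr_reader :db_calls

  def initialize
    @db_calls = 0
  end

  # Stand-in for the real DB query that derives the expiry.
  def compute_expiry_from_db
    @db_calls += 1
    3600 # seconds
  end

  def cache_expiry
    @cache_expiry ||= compute_expiry_from_db
  end
end

sketch = NotificationCacheSketch.new
3.times { sketch.cache_expiry }
puts sketch.db_calls  # => 1
```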

Member

I also had a hard time understanding that. I got the impression that if you have a notification that expires in a month, you then don't receive new notifications for a month. At first I thought I missed something, but then I realized something else may clear the cache. And there is this:

after_commit :delete_user_cache

So if a new notification is created, it already clears the user's cache anyway. It's slightly inefficient that it's not a bulk transaction, so for many users it needs to do many clears. That's not a great design.

I was thinking about just setting a very long expiry date, without looking in the DB. Then you need to filter out expired notifications at read time. It would allow you to use stale caches. That would increase complexity though.

In short: I think the current design is correct but it's a bit complicated.
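The long-expiry idea mentioned above could look roughly like this (a hedged sketch; the `expires_at` field name is an assumption, not the actual schema):

```ruby
require 'time'

# Hypothetical sketch of the long-lived-cache idea: instead of expiring
# the whole cache entry, filter out individual expired notifications at
# read time (the 'expires_at' field name is an assumption).
notifications = [
  { 'text' => 'old', 'expires_at' => (Time.now - 60).iso8601 },
  { 'text' => 'new', 'expires_at' => (Time.now + 60).iso8601 },
]

fresh = notifications.reject { |n| Time.parse(n['expires_at']) < Time.now }
puts fresh.map { |n| n['text'] }.join(', ')  # => new
```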

Member

Random after thought: does this design work if you have multiple cache servers? What if the Cache server is down when you want to delete?

Member Author

From what I can tell, multiple cache servers still work, as this clears the cache and knows where to clear it from.

If a cache server is down, then the delete will silently fail (unless we implement an exception handler ourselves) and return nil, but the cache data will still be there when Redis comes back up. So the worst-case scenario is that users would see notifications they dismissed a second time and have to clear them again, I think.
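The exception handler mentioned above might be sketched like this (all names here are hypothetical; a real implementation would wrap the actual cache store's delete):

```ruby
# Hypothetical sketch of an exception handler around the cache delete:
# if the cache backend is unreachable, record the failure and return nil
# instead of raising, so dismissing a notification never errors out.
class DownCache
  def delete(_key)
    raise IOError, 'cache backend unreachable'
  end
end

def safe_cache_delete(cache, key, log: [])
  cache.delete(key)
rescue StandardError => e
  log << "cache delete failed for #{key}: #{e.class}"
  nil
end

log = []
result = safe_cache_delete(DownCache.new, 'notifications/42', log: log)
puts result.inspect  # => nil
puts log.first
```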

@ehelms
Member Author

ehelms commented Aug 7, 2023

I have reverted this back to the simplest fix of the bug without heavily altering the code structure or way it works.

@ekohl ekohl merged commit eff7aa2 into theforeman:develop Aug 8, 2023
4 checks passed