
[Production readiness] measure accuracy of metering in the context of "real" traffic #3776

Closed

MonsieurNicolas opened this issue Jun 13, 2023 · 3 comments

@MonsieurNicolas
Contributor

This complements #3759, which tracks production readiness for dapp developers and helps validators pick "market" prices for different resources.

This issue tracks the need to have the right data streams/processes in place so that we can monitor calibration accuracy: we already have ways to perform calibration based on synthetic data (i.e. "fancy tests") in the host crate.

We need something to help us determine how close we are to our target models when we compute cpuInstruction.

We need to be able to detect when we're "very wrong", both where we overestimate (in which case we're missing out on capacity) and where we underestimate (a possible DoS attack vector).

There are a couple of ways we can try to measure this (a sketch of the first ratio follows the list):

  • cpuInstructionCount/executionTime
    • pros: easiest to measure, "true" (in that cpuInstruction is a proxy for execution time)
    • cons: metric depends on runtime environment
  • cpuInstructionCount/actualCpuInstructionCount
    • pros: gives direct feedback on models
    • cons: architecture-dependent (x86), probably requires high privileges, and does not catch issues related to memory access (such as the impact of CPU caches)
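
For illustration, a minimal sketch of the first ratio in Rust; the function name and signature here are illustrative, not actual stellar-core code:

```rust
use std::time::Duration;

// Instructions metered by the budget divided by measured wall-clock time.
// A value drifting far from the calibrated baseline suggests the cost
// model is over- or under-estimating.
fn insns_per_nsec(cpu_instruction_count: u64, execution_time: Duration) -> f64 {
    // `.max(1)` guards against a zero-duration division.
    cpu_instruction_count as f64 / execution_time.as_nanos().max(1) as f64
}
```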

I think that we could do a bit of both (in order of impact, descending):

  • track cpuInstructionCount/executionTime at the transaction and ledger level, exported as a medida metric (this would allow us to catch larger trends)
  • hook up Tracy to track cpuInstructionCount/executionTime as close to individual components as possible (this would allow us to quickly identify model issues)
  • hook up Tracy to track cpuInstructionCount/actualCpuInstructionCount (maybe to be used with a special build flavor?), which we can use when running under controlled environments (to match the calibration environment); a sketch of reading a hardware instruction counter follows this list
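
One way to approximate actualCpuInstructionCount on Linux is a hardware perf counter. A minimal sketch assuming the third-party perf-event crate (an assumption for illustration, not what stellar-core uses); as noted above, this is architecture-dependent and typically needs elevated permissions (perf_event_paranoid):

```rust
use perf_event::{events::Hardware, Builder};

fn main() -> std::io::Result<()> {
    // Count retired instructions (PERF_COUNT_HW_INSTRUCTIONS) for this thread.
    let mut counter = Builder::new().kind(Hardware::INSTRUCTIONS).build()?;

    counter.enable()?;
    // ... run the metered workload here, e.g. a host-function invocation ...
    let workload: u64 = (0..1_000_000u64).sum();
    counter.disable()?;

    let actual = counter.read()?;
    println!("workload={} actual instructions retired={}", workload, actual);
    // Comparing `actual` against the budget's cpuInstructionCount gives
    // direct feedback on the calibration model.
    Ok(())
}
```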

We can then use this instrumentation both as part of tracking node health and when replaying historical data (catchup); the latter can also be used as part of acceptance criteria when validating builds.

@MonsieurNicolas
Contributor Author

@anupsdf @jayz22 @graydon -- we should probably get this done as part of the testnet push

@graydon
Contributor

graydon commented Jul 6, 2023

In discussion with @jayz22 today, he noted that the instructions-to-real-time ratio we observe on the dashboard drops when there's a lot of data moving through the system. This might be caused by the fact that the XDR serialization and deserialization happening on the Rust side of the Rust/C++ bridge (in contract.rs) isn't accounted for in the block of code that has a budget active, but it is currently accounted for by the real-time clock (a medida TimeScope object on the C++ side).

This is relatively easy to fix (see the sketch after this list):

  1. use Rust's std::time::Instant::now() to track the narrower time scope on the Rust side
  2. plumb the resulting difference-of-times, as a u64 nanosecond duration, back to C++
  3. feed that duration into the medida::Timer directly, rather than using a TimeScope
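
A minimal sketch of the Rust side of that fix; the extern hook name report_host_invoke_duration_nsecs is a hypothetical placeholder for whatever the bridge actually exposes:

```rust
use std::time::Instant;

extern "C" {
    // Hypothetical C++ hook: on the C++ side it would feed the duration
    // into the medida::Timer directly, instead of relying on a TimeScope.
    fn report_host_invoke_duration_nsecs(nsecs: u64);
}

fn invoke_host_function_timed(do_invoke: impl FnOnce()) {
    // Step 1: track the narrower time scope on the Rust side.
    let start = Instant::now();
    do_invoke();
    let elapsed = start.elapsed();

    // Step 2: plumb the difference-of-times back to C++ as u64 nanoseconds.
    let nsecs = u64::try_from(elapsed.as_nanos()).unwrap_or(u64::MAX);
    unsafe { report_host_invoke_duration_nsecs(nsecs) };
}
```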

This should improve the accuracy of the time-to-instructions ratio.

@jayz22
Contributor

jayz22 commented Aug 1, 2023

We have metrics and a dashboard tracking cpuInstructionCount/executionTime. #3847 also adds an "invoke time" metric (which is more directly related to cpuInstructionCount than the "operation time"), which should fix the divergence mentioned above. The dashboard needs to be updated to reflect it once the change goes live.
Closing this for now as there are no actionable items. Feel free to reopen it in the future if more "advanced" measurements become necessary.
