Autometrics 📈✨

Easily add metrics to your system -- and actually understand them using automatically customized Prometheus queries.


Metrics are powerful and relatively inexpensive, but they are still hard to use. Developers need to:

  • Think about what metrics to track and which metric type to use (counter, gauge... 😕)
  • Figure out how to write PromQL or another query language to get some data 😖
  • Verify that the data returned actually answers the right question 😫

Autometrics makes it easy to add metrics to any function in your codebase. Then, it automatically generates common Prometheus queries for each function to help you easily understand the data. Explore your production metrics directly from your editor/IDE.

1️⃣ Add #[autometrics] to your code

#[autometrics]
async fn create_user(Json(payload): Json<CreateUser>) -> Result<Json<User>, ApiError> {
  // ...
}

2️⃣ Hover over the function name to see the generated queries

VS Code Hover Example

3️⃣ Click a query link to go directly to the Prometheus chart for that function

Prometheus Chart

4️⃣ Go back to shipping features 🚀

See it in action

  1. Install Prometheus locally
  2. Run the axum example:
cargo run --features="prometheus-exporter" --example axum
  3. Hover over the function names to see the generated query links (like in the image above) and try clicking on them to go straight to that Prometheus chart.

How it works

The autometrics macro rewrites your functions to include a variety of useful metrics. It adds a counter for tracking function calls and errors (for functions that return Results), a histogram for latency, and a gauge for concurrent requests.
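Conceptually, the macro wraps each annotated function so that every call is timed and its outcome recorded. The sketch below is purely illustrative (it is not the macro's actual expansion, and `println!` stands in for the real metric recorders), but it shows the shape of the instrumentation:

```rust
use std::time::Instant;

// Illustrative sketch of what the instrumentation does conceptually:
// time the call, then record the latency and the ok/error outcome.
// (A real expansion would also increment a concurrent-calls gauge
// before the call and decrement it after.)
fn timed_call<T, E>(f: impl FnOnce() -> Result<T, E>) -> Result<T, E> {
    let start = Instant::now();
    let result = f();
    let latency = start.elapsed();
    let label = if result.is_ok() { "ok" } else { "error" };
    // Stand-in for incrementing a counter and observing a histogram:
    println!("result={label} latency_us={}", latency.as_micros());
    result
}

fn main() {
    let out: Result<i32, ()> = timed_call(|| Ok(42));
    assert_eq!(out, Ok(42));
}
```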

We currently use the opentelemetry crate for producing metrics in the OpenTelemetry format. This can be converted to the Prometheus export format, as well as others, using different exporters.

Autometrics can generate the PromQL queries and Prometheus links for each function because it is creating the metrics using specific names and labeling conventions.
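To make this concrete, here is the kind of query those conventions enable. The metric and label names below are illustrative assumptions (they may differ between versions); the point is that because every instrumented function shares one counter with `function`, `module`, and `result` labels, a generic error-ratio query works for any function:

```promql
# Illustrative only: error ratio for one function over 5 minutes,
# assuming a counter exported as `function_calls_count` with
# `function` and `result` labels.
sum(rate(function_calls_count{function="create_user", result="error"}[5m]))
/
sum(rate(function_calls_count{function="create_user"}[5m]))
```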

API

#[autometrics] Macro

For most use cases, you can simply add the #[autometrics] attribute to any function you want to collect metrics for. We recommend using it for any important function in your code base (HTTP handlers, database calls, etc), possibly excluding simple utilities that are infallible or have negligible execution time.

Result Type Labels

By default, the metrics generated will have labels for the function, module, and result (where the value is ok or error if the function returns a Result).

The concrete result type(s) (the T and E in Result<T, E>) can also be included as labels if the types implement Into<&'static str>.

For example, if you have an Error enum to define specific error types, you can have the enum variant names included as labels:

use strum::IntoStaticStr;

#[derive(IntoStaticStr)]
#[strum(serialize_all = "snake_case")]
pub enum MyError {
  SomethingBad(String),
  Unknown,
  ComplexType { message: String },
}

In the above example, functions that return Result<_, MyError> would have an additional label error added with the values something_bad, unknown, or complex_type.
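If you prefer not to pull in strum, the derive above corresponds roughly to a hand-written conversion like the following sketch (no external crates needed; the exact impls strum generates may differ slightly):

```rust
// Sketch: a hand-rolled version of what #[derive(IntoStaticStr)] with
// serialize_all = "snake_case" produces for this enum.
pub enum MyError {
    SomethingBad(String),
    Unknown,
    ComplexType { message: String },
}

impl From<&MyError> for &'static str {
    fn from(err: &MyError) -> Self {
        match err {
            MyError::SomethingBad(_) => "something_bad",
            MyError::Unknown => "unknown",
            MyError::ComplexType { .. } => "complex_type",
        }
    }
}

fn main() {
    // &MyError now implements Into<&'static str>:
    let label: &'static str = (&MyError::Unknown).into();
    println!("{label}");
}
```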

Why no dynamic labels?

Autometrics only supports &'static strs as labels to avoid the footgun of attaching labels with too many possible values. The Prometheus docs explain why this is important in the following warning:

CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

Exporting Prometheus Metrics

Autometrics includes optional functions to help collect and expose metrics in a format Prometheus can scrape.

In your Cargo.toml file, enable the optional prometheus-exporter feature:

autometrics = { version = "*", features = ["prometheus-exporter"] }

Then, call the global_metrics_exporter function in your main function:

pub fn main() {
  let _exporter = autometrics::global_metrics_exporter();
  // ...
}

And create a route on your API (probably mounted under /metrics) that returns the following:

pub fn get_metrics() -> (StatusCode, String) {
  match autometrics::encode_global_metrics() {
    Ok(metrics) => (StatusCode::OK, metrics),
    Err(err) => (StatusCode::INTERNAL_SERVER_ERROR, format!("{:?}", err))
  }
}

Configuring

Custom Prometheus URL

By default, Autometrics creates Prometheus query links that point to http://localhost:9090.

You can configure a custom Prometheus URL using a build-time environment variable set in your build.rs file:

// build.rs

fn main() {
  let prometheus_url = "https://your-prometheus-url.example";
  println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
}
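At the call site, a variable set via `cargo:rustc-env` is available at compile time through the standard `env!`/`option_env!` macros. This toy example (unrelated to autometrics internals) just demonstrates the mechanism, using the same hypothetical fallback URL as the default:

```rust
// Demonstrates reading a compile-time env var such as the
// PROMETHEUS_URL set by the build script above. option_env! yields
// None when the variable was absent at compile time.
fn prometheus_url() -> &'static str {
    option_env!("PROMETHEUS_URL").unwrap_or("http://localhost:9090")
}

fn main() {
    println!("{}", prometheus_url());
}
```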

Note that when using Rust Analyzer, you may need to reload the workspace in order for URL changes to take effect.
