Expose metrics and other improvements
actualwitch committed Aug 16, 2023
1 parent 44ba726 commit 8b9c05e
Showing 18 changed files with 387 additions and 294 deletions.
4 changes: 3 additions & 1 deletion .github/workflows/main.yml
@@ -13,6 +13,8 @@ jobs:
strategy:
matrix:
python-version: ["3.8", "3.11", "pypy3.10"]
+    env:
+      FORCE_COLOR: 1
steps:
- uses: actions/checkout@v3
- name: Install poetry
@@ -28,4 +30,4 @@ jobs:
- name: Lint code
run: poetry run pyright
- name: Run tests
-        run: poetry run pytest
+        run: poetry run pytest -n auto
12 changes: 7 additions & 5 deletions CHANGELOG.md
@@ -12,20 +12,22 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Added

--
+- Added the `start_http_server`, which starts a separate HTTP server to expose
+  the metrics instead of using a separate endpoint in the existing server. (#77)
+- Added the `init` function that you can use to configure autometrics. (#77)

### Changed

- Renamed the `function.calls.count` metric to `function.calls` (which is exported
to Prometheus as `function_calls_total`) to be in line with OpenTelemetry and
-  OpenMetrics naming conventions. **Dashboards and alerting rules must be updated.**
+  OpenMetrics naming conventions. **Dashboards and alerting rules must be updated.** (#74)
- When the `function.calls.duration` histogram is exported to Prometheus, it now
includes the units (`function_calls_duration_seconds`) to be in line with
-  Prometheus/OpenMetrics naming conventions. **Dashboards and alerting rules must be updated.**
+  Prometheus/OpenMetrics naming conventions. **Dashboards and alerting rules must be updated.** (#74)
- The `caller` label on the `function.calls` metric was replaced with `caller.function`
-  and `caller.module`
+  and `caller.module` (#75)
- All metrics now have a `service.name` label attached. This is set via runtime environment
-  variable (`AUTOMETRICS_SERVICE_NAME` or `OTEL_SERVICE_NAME`), or falls back to the package name.
+  variable (`AUTOMETRICS_SERVICE_NAME` or `OTEL_SERVICE_NAME`), or falls back to the package name. (#76)

### Deprecated

22 changes: 15 additions & 7 deletions README.md
@@ -22,10 +22,10 @@ See [Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for m
- 💡 Writes Prometheus queries so you can understand the data generated without
knowing PromQL
- 🔗 Create links to live Prometheus charts directly into each function's docstring
-- [🔍 Identify commits](#identifying-commits-that-introduced-problems) that introduced errors or increased latency
+- [🔍 Identify commits](#build-info) that introduced errors or increased latency
- [🚨 Define alerts](#alerts--slos) using SLO best practices directly in your source code
- [📊 Grafana dashboards](#dashboards) work out of the box to visualize the performance of instrumented functions & SLOs
-- [⚙️ Configurable](#metrics-libraries) metric collection library (`opentelemetry` or `prometheus`)
+- [⚙️ Configurable](#settings) metric collection library (`opentelemetry` or `prometheus`)
- [📍 Attach exemplars](#exemplars) to connect metrics with traces
- ⚡ Minimal runtime overhead

@@ -89,14 +89,17 @@ def api_handler():

Autometrics keeps track of instrumented functions calling each other. If you have a function that calls another instrumented function, metrics for the latter will include a `caller` label set to the name of the instrumented function that called it.
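Conceptually, this caller tracking can be done with a context variable that each wrapped function sets while it runs. The sketch below is illustrative only (it is not autometrics' actual implementation; the decorator name and the `calls` counter are made up for the demo):

```python
import contextvars
from collections import Counter

# Name of the instrumented function currently executing, if any.
_current = contextvars.ContextVar("current_function", default=None)
calls = Counter()  # (function, caller) -> call count

def autometrics_sketch(fn):
    """Toy stand-in for the @autometrics decorator."""
    def wrapper(*args, **kwargs):
        caller = _current.get()           # whoever was active when we were entered
        calls[(fn.__name__, caller)] += 1
        token = _current.set(fn.__name__)  # we become the active function
        try:
            return fn(*args, **kwargs)
        finally:
            _current.reset(token)
    return wrapper

@autometrics_sketch
def inner():
    return "ok"

@autometrics_sketch
def outer():
    return inner()

outer()
print(calls)
```

Calling `outer()` records one call of `outer` with no caller, and one call of `inner` with caller `outer`.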

-## Metrics Libraries
+## Settings

-Configure the package that autometrics will use to produce metrics with the `AUTOMETRICS_TRACKER` environment variable.
+Autometrics makes use of a number of environment variables to configure its behavior. All of them are also configurable with keyword arguments to the `init` function.

-- `opentelemetry` - Enabled by default, can also be explicitly set using the env var `AUTOMETRICS_TRACKER="OPEN_TELEMETERY"`. Look in `pyproject.toml` for the versions of the OpenTelemetry packages that will be used.
-- `prometheus` - Can be set using the env var `AUTOMETRICS_TRACKER="PROMETHEUS"`. Look in `pyproject.toml` for the version of the `prometheus-client` package that will be used.
+- `tracker` - Configure the package that autometrics will use to produce metrics. Default is `opentelemetry`, but you can also use `prometheus`. Look in `pyproject.toml` for the corresponding versions of packages that will be used.
+- `histogram_buckets` - Configure the buckets used for latency histograms. Default is `[0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1.0, 2.5, 5.0, 7.5, 10.0]`.
+- `enable_exemplars` - Enable [exemplar collection](#exemplars). Default is `False`.
+- `service_name` - Configure the [service name](#service-name).
+- `version`, `commit`, `branch` - Used to configure [build_info](#build-info).
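Taken together, a configuration call covering these settings might look like the sketch below. The keyword names come from the list above; the values are purely illustrative:

```python
from autometrics import init

# Illustrative configuration; every keyword mirrors one of the
# settings documented above.
init(
    tracker="prometheus",                 # or "opentelemetry" (the default)
    histogram_buckets=[0.01, 0.05, 0.1, 0.5, 1.0, 5.0],
    enable_exemplars=True,
    service_name="my-service",
    version="1.0.0",
    commit="8b9c05e",
    branch="main",
)
```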

-## Identifying commits that introduced problems
+## Identifying commits that introduced problems <span name="build-info" />

> **NOTE** - As of writing, `build_info` will not work correctly when using the default tracker (`AUTOMETRICS_TRACKER=OPEN_TELEMETRY`).
> This will be fixed once the following PR is merged on the opentelemetry-python project: https://github.com/open-telemetry/opentelemetry-python/pull/3306
@@ -126,6 +129,7 @@ The service name is loaded from the following environment variables, in this order:

1. `AUTOMETRICS_SERVICE_NAME` (at runtime)
2. `OTEL_SERVICE_NAME` (at runtime)
+3. First part of `__package__` (at runtime)
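The resolution order above amounts to a small fallback chain. A hypothetical helper mirroring it (the real package's internals may differ):

```python
import os

def resolve_service_name(package=None):
    """Mimic the documented lookup order for the service name."""
    for var in ("AUTOMETRICS_SERVICE_NAME", "OTEL_SERVICE_NAME"):
        value = os.environ.get(var)
        if value:
            return value
    if package:
        return package.split(".")[0]  # first part of __package__
    return None

os.environ.pop("AUTOMETRICS_SERVICE_NAME", None)
os.environ["OTEL_SERVICE_NAME"] = "checkout"
name = resolve_service_name("myapp.api")
print(name)  # the env var wins over the package name
```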

## Exemplars

@@ -137,6 +141,10 @@ Exemplars are a way to associate a metric sample to a trace by attaching `trace_
To use exemplars, you need to first switch to a tracker that supports them by setting `AUTOMETRICS_TRACKER=prometheus` and enable
exemplar collection by setting `AUTOMETRICS_EXEMPLARS=true`. You also need to enable exemplars in Prometheus by launching Prometheus with the `--enable-feature=exemplar-storage` flag.

+## Exporting metrics
+
+After collecting metrics with Autometrics, you need to export them to Prometheus. You can either add a separate route to your server and use the `generate_latest` function from the `prometheus_client` package, or you can use the `start_http_server` function from the same package to start a separate server that will expose the metrics. Autometrics also re-exports the `start_http_server` function with a preselected port 9464 for compatibility with other Autometrics packages.
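To illustrate what the separate-server option does, here is a stdlib-only sketch of a metrics endpoint that Prometheus could scrape. The real implementation is `prometheus_client.start_http_server` (re-exported by autometrics with default port 9464); the metric text below is fake sample data standing in for `generate_latest()` output:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Fake scrape payload standing in for what generate_latest() would
# produce for an instrumented function.
FAKE_METRICS = 'function_calls_total{function="api_handler"} 42\n'

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = FAKE_METRICS.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep the demo quiet
        pass

# Bind port 0 so the OS picks a free port (the real helper defaults to 9464).
server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

scraped = urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
server.shutdown()
print(scraped)
```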

## Development of the package

This package uses [poetry](https://python-poetry.org) as a package manager, with all dependencies separated into three groups:
4 changes: 1 addition & 3 deletions examples/README.md
@@ -69,8 +69,6 @@ This is a default Django project with autometrics configured. You can find examp

## `starlette-otel-exemplars.py`

-This app shows how to use the OpenTelemetry integration to add exemplars to your metrics. In a distributed system, it allows you to track a request as it flows through your system by adding trace/span ids to it. We can catch these ids from OpenTelemetry and expose them to Prometheus as exemplars. Do note that exemplars are an experimental feature and you need to enable it in Prometheus with a `--enable-feature=exemplar-storage` flag. Run the example with a command:
-
-`AUTOMETRICS_TRACKER=prometheus AUTOMETRICS_EXEMPLARS=true uvicorn starlette-otel-exemplars:app --port 8080`
+This app shows how to use the OpenTelemetry integration to add exemplars to your metrics. In a distributed system, it allows you to track a request as it flows through your system by adding trace/span ids to it. We can catch these ids from OpenTelemetry and expose them to Prometheus as exemplars. Do note that exemplars are an experimental feature and you need to enable it in Prometheus with a `--enable-feature=exemplar-storage` flag.

> Don't forget to configure Prometheus itself to scrape the metrics endpoint. Refer to the example `prometheus.yaml` file in the root of this project on how to set this up.
3 changes: 2 additions & 1 deletion examples/fastapi-example.py
@@ -1,7 +1,8 @@
import asyncio
-from fastapi import FastAPI, Response
+import uvicorn

from autometrics import autometrics
+from fastapi import FastAPI, Response
from prometheus_client import generate_latest

app = FastAPI()
28 changes: 17 additions & 11 deletions examples/starlette-otel-exemplars.py
@@ -1,16 +1,17 @@
-from opentelemetry import trace
-from autometrics import autometrics
-from prometheus_client import REGISTRY
-from prometheus_client.openmetrics.exposition import generate_latest
-from starlette import applications
-from starlette.responses import PlainTextResponse
-from starlette.routing import Route
+import uvicorn

+from autometrics import autometrics, init
+from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
BatchSpanProcessor,
ConsoleSpanExporter,
)
+from prometheus_client import REGISTRY
+from prometheus_client.openmetrics.exposition import generate_latest
+from starlette import applications
+from starlette.responses import PlainTextResponse
+from starlette.routing import Route

# Let's start by setting up the OpenTelemetry SDK with some defaults
provider = TracerProvider()
@@ -21,6 +22,10 @@
# Now we can instrument our Starlette application
tracer = trace.get_tracer(__name__)

+# Exemplars support requires some additional configuration on autometrics,
+# so we need to initialize it with the proper settings
+init(tracker="prometheus", enable_exemplars=True)


# We need to add tracer decorator before autometrics so that we see the spans
@tracer.start_as_current_span("request")
@@ -39,7 +44,7 @@ def inner_function():

def metrics(request):
# Exemplars are not supported by default prometheus format, so we specifically
-    # make an endpoint that uses the OpenMetrics format that supoorts exemplars.
+    # make an endpoint that uses the OpenMetrics format that supports exemplars.
body = generate_latest(REGISTRY)
return PlainTextResponse(body, media_type="application/openmetrics-text")

@@ -48,9 +53,10 @@ def metrics(request):
routes=[Route("/", outer_function), Route("/metrics", metrics)]
)

-# Now, start the app (env variables are required to enable exemplars):
-# AUTOMETRICS_TRACKER=prometheus AUTOMETRICS_EXEMPLARS=true uvicorn starlette-otel-exemplars:app --port 8080
-# And make some requests to /. You should see the spans in the console.
+if __name__ == "__main__":
+    uvicorn.run(app, port=8080)

+# Start the app and make some requests to http://127.0.0.1:8080/, you should see the spans in the console.
# With autometrics extension installed, you can now hover over the hello handler
# and see the charts and queries associated with them. Open one of the queries
# in Prometheus and you should see exemplars added to the metrics. Don't forget