# Commit 7e42201

**Add quickstart to README and spruce up a few sections (#86)**

* Update readme with clearer quickstart, as well as references to am cli
* Add a copy improvement
* More copy tweaks
* Add more improvements to the README
* Improve am start comment in README
* Fix typo
* Add more readme tweaks
* Add example code of calling the `init` function
* Make copy clearer
* Just use the intro from the rust repo
* Add back explainer on PROMETHEUS_URL in .env

Parent: 87eb542

1 file changed: README.md (+126 −39)

> A Python port of the Rust
> [autometrics-rs](https://github.com/fiberplane/autometrics-rs) library

Metrics are a powerful and cost-efficient tool for understanding the health and performance of your code in production. But it's hard to decide what metrics to track and even harder to write queries to understand the data.

Autometrics provides a decorator that makes it trivial to instrument any function with the most useful metrics: request rate, error rate, and latency. It standardizes these metrics and then generates powerful Prometheus queries based on your function details to help you quickly identify and debug issues in production.

See [Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for more details on the ideas behind autometrics.

## Features

- `@autometrics` decorator instruments any function or class method to track the
  most useful metrics
- 💡 Writes Prometheus queries so you can understand the data generated without
  knowing PromQL
- [📍 Attach exemplars](#exemplars) to connect metrics with traces
- ⚡ Minimal runtime overhead

## Quickstart

1. Add `autometrics` to your project's dependencies:

   ```shell
   pip install autometrics
   ```

2. Instrument your functions with the `@autometrics` decorator:

   ```python
   from autometrics import autometrics

   @autometrics
   def my_function():
       ...
   ```

3. Export the metrics for Prometheus:

   ```python
   # This example uses FastAPI, but you can use any web framework
   from fastapi import FastAPI, Response
   from prometheus_client import generate_latest

   app = FastAPI()

   # Set up a metrics endpoint for Prometheus to scrape
   # `generate_latest` returns metrics data in the Prometheus text format
   @app.get("/metrics")
   def metrics():
       return Response(generate_latest())
   ```

4. Run Prometheus locally with the [Autometrics CLI](https://docs.autometrics.dev/local-development#getting-started-with-am) or [configure it manually](https://github.com/autometrics-dev#5-configuring-prometheus) to scrape your metrics endpoint:

   ```sh
   # Replace `8080` with the port that your app runs on
   am start :8080
   ```

5. (Optional) If you have Grafana, import the [Autometrics dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) for an overview and detailed view of all the function metrics you've collected.

## Using `autometrics-py`

- You can import the library in your code and use the decorator for any function:

  ```py
  from autometrics import autometrics

  @autometrics
  def say_hello():
      return "hello"
  ```

- To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing [the VSCode extension](https://marketplace.visualstudio.com/items?itemName=Fiberplane.autometrics).

  > **Note**: We cannot support tooltips without a VSCode extension due to behavior of the [static analyzer](https://github.com/davidhalter/jedi/issues/1921) used in VSCode.

- You can also track the number of concurrent calls to a function by using the `track_concurrency` argument: `@autometrics(track_concurrency=True)`.

  > **Note**: Concurrency tracking is only supported when you set the environment variable `AUTOMETRICS_TRACKER=prometheus`.

- To access the PromQL queries for your decorated functions, run `help(yourfunction)` or `print(yourfunction.__doc__)`, as shown in the sketch after this list.

  > For these queries to work, include a `.env` file in your project with your Prometheus endpoint, e.g. `PROMETHEUS_URL=<your-endpoint>`. If this is not defined, the default endpoint will be `http://localhost:9090/`.
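
Here is a minimal sketch covering the last two items above; `my_function` is just a placeholder name:

```python
from autometrics import autometrics

# Track concurrent calls in addition to the default metrics
# (requires AUTOMETRICS_TRACKER=prometheus)
@autometrics(track_concurrency=True)
def my_function():
    ...

# The decorator extends the function's docstring with the
# Prometheus queries generated for it, so printing __doc__ shows them
print(my_function.__doc__)
```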
## Dashboards

Autometrics provides [Grafana dashboards](https://github.com/autometrics-dev/autometrics-shared#dashboards) that will work for any project instrumented with the library.

## Alerts / SLOs

Autometrics makes it easy to add intelligent alerting to your code, in order to catch increases in the error rate or latency across multiple functions.

```python
from autometrics import autometrics
from autometrics.objectives import Objective, ObjectivePercentile

# Create an objective for a high success rate
# Here, we want our API to have a success rate of 99.9%
API_SLO_HIGH_SUCCESS = Objective(
    "My API SLO for High Success Rate (99.9%)",
    success_rate=ObjectivePercentile.P99_9,
)

@autometrics(objective=API_SLO_HIGH_SUCCESS)
def api_handler():
    ...
```

The library uses the concept of Service-Level Objectives (SLOs) to define the acceptable error rate and latency for groups of functions. Alerts will fire depending on the SLOs you set.

> Not sure what SLOs are? [Check out our docs](https://docs.autometrics.dev/slo) for an introduction.

In order to receive alerts, **you need to add a special set of rules to your Prometheus setup**. These are configured automatically when you use the [Autometrics CLI](https://docs.autometrics.dev/local-development#getting-started-with-am) to run Prometheus.

> Already running Prometheus yourself? [Read about how to load the autometrics alerting rules into Prometheus here](https://github.com/autometrics-dev/autometrics-shared#prometheus-recording--alerting-rules).

Once the alerting rules are in Prometheus, you're ready to go.

To use autometrics SLOs and alerts, create one or multiple `Objective`s based on the function(s) success rate and/or latency, as shown above.

The `Objective` can be passed as an argument to the `autometrics` decorator, which will include the given function in that objective.
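
Because an objective describes a group of functions, you can share one `Objective` instance across several decorated functions; a minimal sketch (the function names are placeholders):

```python
from autometrics import autometrics

# Reusing API_SLO_HIGH_SUCCESS from the example above:
# functions that share an objective are grouped under one SLO,
# so alerting considers their combined error rate
@autometrics(objective=API_SLO_HIGH_SUCCESS)
def create_user():
    ...

@autometrics(objective=API_SLO_HIGH_SUCCESS)
def delete_user():
    ...
```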

The first example above used a success rate objective, i.e., we wanted to be alerted when the error rate started to increase.

You can also create an objective for the latency of your functions like so:

```python
from autometrics import autometrics
from autometrics.objectives import Objective, ObjectiveLatency, ObjectivePercentile

# Create an objective for low latency
# - Functions with this objective should have a 99th percentile latency of less than 250ms
API_SLO_LOW_LATENCY = Objective(
    "My API SLO for Low Latency (99th percentile < 250ms)",
    latency=(ObjectiveLatency.Ms250, ObjectivePercentile.P99),
)

@autometrics(objective=API_SLO_LOW_LATENCY)
def api_handler():
    ...
```

## The `caller` Label

Autometrics keeps track of instrumented functions that call each other. So, if you have a function `get_users` that calls another function `db.query`, then the metrics for the latter will include a label `caller="get_users"`.

This allows you to drill down into the metrics for functions that are _called by_ your instrumented functions, provided both of those functions are decorated with `@autometrics`.

In the example above, this means that you could investigate the latency of the database queries that `get_users` makes, which is rather useful.
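
A minimal sketch of that scenario (the names `get_users` and `query` are illustrative, not part of the library):

```python
from autometrics import autometrics

@autometrics
def query(sql):
    # Stand-in for a real database call; metrics recorded for this
    # function get the label `caller="get_users"` when called below
    return []

@autometrics
def get_users():
    return query("SELECT * FROM users")
```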

## Settings and Configuration

Autometrics makes use of a number of environment variables to configure its behavior. All of them are also configurable with keyword arguments to the `init` function.

- `service_name` - Configure the [service name](#service-name).
- `version`, `commit`, `branch` - Used to configure [build_info](#build-info).

Below is an example of initializing autometrics with build information, as well as the `prometheus` tracker. (Note that you can also accomplish the same configuration with environment variables.)

```python
from autometrics import autometrics, init
from git_utils import get_git_commit, get_git_branch

VERSION = "0.0.1"

init(
    tracker="prometheus",
    version=VERSION,
    commit=get_git_commit(),
    branch=get_git_branch(),
)
```

## Identifying commits that introduced problems <span name="build-info" />

Autometrics makes it easy to identify if a specific version or commit introduced errors or increased latencies.

> **NOTE** - As of writing, `build_info` will not work correctly when using the default setting of `AUTOMETRICS_TRACKER=opentelemetry`. If you wish to use `build_info`, you must use the `prometheus` tracker instead (`AUTOMETRICS_TRACKER=prometheus`).
>
> The issue will be fixed once the following PR is merged and released on the opentelemetry-python project: https://github.com/open-telemetry/opentelemetry-python/pull/3306
>
> autometrics-py will track support for build_info using the OpenTelemetry tracker via [this issue](https://github.com/autometrics-dev/autometrics-py/issues/38)

The library uses a separate metric (`build_info`) to track the version and, optionally, the git commit of your service.

It then writes queries that group metrics by the `version`, `commit` and `branch` labels so you can spot correlations between code changes and potential issues.

Configure these labels by setting the following environment variables:

| Label | Run-Time Environment Variables | Default value |
| --------- | ------------------------------------- | ------------- |

This follows the method outlined in "Exposing the software version to Prometheus".

## Service name

All metrics produced by Autometrics have a label called `service.name` (or `service_name` when exported to Prometheus) attached, in order to identify the logical service they are part of.

You may want to override the default service name, for example if you are running multiple instances of the same code base as separate services, and you want to differentiate between the metrics produced by each one.
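
For example, a minimal sketch that overrides it via the `service_name` keyword argument to `init` (documented in the settings above); the name itself is just a placeholder:

```python
from autometrics import init

# Two deployments of the same code base can report as
# distinct logical services by overriding the service name
init(service_name="billing-api")
```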

The service name is loaded from the following environment variables, in this order:

## Exemplars

> **NOTE** - As of writing, exemplars aren't supported by the default tracker (`AUTOMETRICS_TRACKER=opentelemetry`).
> You can track the progress of this feature here: https://github.com/autometrics-dev/autometrics-py/issues/41

Exemplars are a way to associate a metric sample to a trace by attaching `trace_id` and `span_id` to it. You can then use this information to jump from a metric to a trace in your tracing system (for example Jaeger). If you have an OpenTelemetry tracer configured, autometrics will automatically pick up the current span from it.
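
A minimal sketch of that setup (assuming a tracker that supports exemplars, e.g. `AUTOMETRICS_TRACKER=prometheus`, per the note above):

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

from autometrics import autometrics

# Configure a global OpenTelemetry tracer; autometrics picks up
# the current span to attach trace_id/span_id as exemplars
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

@autometrics
def handle_request():
    ...

# Metric samples recorded inside this span can carry its IDs
with tracer.start_as_current_span("handle-request"):
    handle_request()
```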
