Update documentation (README and examples) #13

Merged (7 commits, Apr 6, 2023)
22 changes: 14 additions & 8 deletions README.md
@@ -2,9 +2,12 @@

A Python decorator that makes it easy to understand the error rate, response time, and production usage of any function in your code. Jump straight from your IDE to live Prometheus charts for each HTTP/RPC handler, database method, or other piece of application logic.

Autometrics for Python provides a decorator that can create [Prometheus](https://prometheus.io/) metrics for your functions and class methods throughout your code base, as well as a function that will write corresponding Prometheus queries for you in a Markdown file.
Autometrics for Python provides:

[See Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for more details on the ideas behind autometrics
1. A decorator that can create [Prometheus](https://prometheus.io/) metrics for your functions and class methods throughout your code base.
2. A helper function that will write corresponding Prometheus queries for you in a Markdown file.
> **Review comment (Collaborator, Author):** i'm actually not sure if the "Markdown file" bit is accurate? if so, i should add an example
See [Why Autometrics?](https://github.com/autometrics-dev#why-autometrics) for more details on the ideas behind autometrics.

## Features

@@ -21,23 +24,26 @@

## Using autometrics-py

- Requirement: a running [prometheus instance](https://prometheus.io/download/)
- include a .env file with your prometheus endpoint `PROMETHEUS_URL = your endpoint`, if not defined the default endpoint will be `http://localhost:9090/`
- Set up a [Prometheus instance](https://prometheus.io/download/)
- Configure prometheus to scrape your application ([check our instructions if you need help](https://github.com/autometrics-dev#5-configuring-prometheus))
- Include a .env file with your prometheus endpoint `PROMETHEUS_URL=your endpoint`. If this is not defined, the default endpoint will be `http://localhost:9090/`
- `pip install autometrics`
- Import the library in your code and use the decorator for any function:

```py
from autometrics.autometrics import autometrics

@autometrics
def sayHello():
    return "hello"
```

> **Review comment (Collaborator, Author):** seems like we need to use `from autometrics.autometrics` - wondering if there's a way to make the import cleaner
>
> **Reply (Contributor):** i'm thinking we might actually do something like:
>
> ```py
> import autometrics
>
> @autometrics
> ...
> ```
>
> couldn't we? half as a joke but i think it's doable
>
> **Reply (Contributor, @actualwitch, Apr 6, 2023):** never mind, i forgot how python package resolution works. here's a pr that enables usage like `from autometrics import autometrics`

- If you like to access the queries for your decoraded functions you can run `help(yourfunction)` or `print(yourfunction.__doc__)`
- To access the PromQL queries for your decorated functions, run `help(yourfunction)` or `print(yourfunction.__doc__)`.

- To show tooltips over decorated functions in VSCode, with links to Prometheus queries, try installing [the VSCode extension](https://marketplace.visualstudio.com/items?itemName=Fiberplane.autometrics).

- Unfortunately it is not possible to have the queries in the tooltips due to the [static Analyzer](https://github.com/davidhalter/jedi/issues/1921). We are currently figuring out to build a VS Code PlugIn to make it work.
> Note that we cannot support tooltips without a VSCode extension due to behavior of the [static analyzer](https://github.com/davidhalter/jedi/issues/1921) used in VSCode.
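To make the `help(yourfunction)` behavior concrete, here is a stdlib-only sketch of how a decorator can append a Prometheus query link to a function's docstring. The name `fake_autometrics`, the query text, and the endpoint handling are illustrative assumptions, not the library's actual implementation.

```python
# Minimal sketch of a docstring-augmenting decorator, in the spirit of autometrics.
# Everything here (names, query text, URL scheme) is illustrative, not the real library.
PROMETHEUS_URL = "http://localhost:9090/"  # assumed default endpoint

def fake_autometrics(func):
    # Build a request-rate query keyed on the function's name
    query = f'sum(rate(function_calls_count_total{{function="{func.__name__}"}}[5m]))'
    link = f"{PROMETHEUS_URL}graph?g0.expr={query}"
    # Append the query link to the existing docstring
    func.__doc__ = (func.__doc__ or "") + f"\n\nRequest rate query:\n{link}"
    return func

@fake_autometrics
def say_hello():
    """Say hello."""
    return "hello"

print(say_hello.__doc__)  # docstring now ends with a Prometheus query link
```

Running `help(say_hello)` would show the same augmented docstring, which is the behavior the bullets above describe.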

## Development of the package

55 changes: 55 additions & 0 deletions examples/README.md
@@ -0,0 +1,55 @@
# autometrics-py examples

You should be able to run each example by executing `python examples/<example>.py` from the root of the repo.

You can change the base url for Prometheus links via the `PROMETHEUS_URL` environment variable. So, if your local Prometheus were on a non-default port, like 9091, you would run:

```sh
PROMETHEUS_URL=http://localhost:9091/ python examples/example.py
```

Read more below about each example, and what kind of features they demonstrate.

Also, for the examples that expose a `/metrics` endpoint, you will need to configure Prometheus to scrape that endpoint. There is an example `prometheus.yaml` file in the root of this project, but here is the relevant part:

```yaml
# Example prometheus.yaml
scrape_configs:
  - job_name: "python-autometrics-example"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080"]
    # For a real deployment, you would want the scrape interval to be
    # longer but for testing, you want the data to show up quickly
    scrape_interval: 500ms
```

## `docs-example.py`

This script shows how the autometrics decorator augments the docstring for a python function.

We simply decorate a function, then print its docstring to the console using the built-in `help` function.

## `example.py`

This script demonstrates the basic usage of the `autometrics` decorator. When you run `python examples/example.py`, it will output links to metrics in your configured prometheus instance.

You can read the script for comments on how it works, but the basic idea is that we have a division function (`div_unhandled`) that occasionally divides by zero and does not catch its errors. We can see its error rate in prometheus via the links in its doc string.

Note that the script starts an HTTP server on port 8080 using the Prometheus client library, which exposes metrics to prometheus (via a `/metrics` endpoint).

Then, it enters into an infinite loop (with a 2 second sleep period), calling methods repeatedly with different input parameters. This should start generating data that you can explore in Prometheus. Just follow the links that are printed to the console!

> Don't forget to configure Prometheus itself to scrape the metrics endpoint. Refer to the example `prometheus.yaml` file in the root of this project on how to set this up.
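To see what Prometheus actually scrapes from a `/metrics` endpoint without installing the Prometheus client library, here is a stdlib-only sketch that serves a counter in the text exposition format. The metric name and the in-memory counter store are assumptions for illustration only.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory counter standing in for the Prometheus client registry
CALL_COUNT = {"div_unhandled": 3}

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/metrics":
            self.send_error(404)
            return
        # Render each counter in the Prometheus text exposition format
        body = "".join(
            f'function_calls_count_total{{function="{name}"}} {count}\n'
            for name, count in CALL_COUNT.items()
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to an ephemeral port so the sketch never collides with a real service
server = HTTPServer(("localhost", 0), MetricsHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://localhost:{port}/metrics") as resp:
    payload = resp.read().decode()
server.shutdown()
print(payload)
```

The real example uses `start_http_server(8080)` from `prometheus_client`, which does this wiring for you and renders every registered metric.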

## `caller-example.py`

Autometrics also tracks a label, `caller`, which is the name of the function that called the decorated function. The `caller-example.py` script shows how to use that label. It uses the same structure as the `example.py` script, but it prints a PromQL query that you can use to explore the caller data yourself.

> Don't forget to configure Prometheus itself to scrape the metrics endpoint. Refer to the example `prometheus.yaml` file in the root of this project on how to set this up.
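The mechanics behind a `caller` label can be sketched with the stdlib: a decorator inspects the call stack to find the name of the calling function. This is a toy illustration only; it is not how autometrics itself implements the label.

```python
import inspect
from collections import Counter
from functools import wraps

# Toy metric store; real autometrics records this as a Prometheus label
calls_by_caller = Counter()

def track_caller(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # One frame up from this wrapper is the function that called us
        caller = inspect.stack()[1].function
        calls_by_caller[(func.__name__, caller)] += 1
        return func(*args, **kwargs)
    return wrapper

@track_caller
def moana():
    return "surf's up!"

def destiny():
    return moana()

destiny()
destiny()
print(calls_by_caller[("moana", "destiny")])  # → 2
```

In PromQL terms, this counter corresponds to filtering `function_calls_count_total` by `caller="destiny"`, as the query printed by the example does.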

## `fastapi-example.py`

This is an example that shows you how to use autometrics to get metrics on http handlers with FastAPI. In this case, we're setting up the API ourselves, which means we need to expose a `/metrics` endpoint manually.

> Don't forget to configure Prometheus itself to scrape the metrics endpoint. Refer to the example `prometheus.yaml` file in the root of this project on how to set this up.
51 changes: 43 additions & 8 deletions examples/caller-example.py
@@ -1,19 +1,54 @@
print("hello")
from prometheus_client import start_http_server
from autometrics.autometrics import autometrics
import time
import random


# This is moana, who would rather explore the ocean than prometheus metrics
@autometrics
def message():
    return "hello"
def moana():
    return "surf's up!"


# This is neo, the one (that we'll end up calling)
@autometrics
def greet(name):
    m = message()
    greeting = f"hello {name}, {m}"
    return greeting
def neo():
    return "i know kung fu"


# This is simba. Rawr.
@autometrics
def simba():
    return "rawr"


# Define a function that randomly calls `moana`, `neo`, or `simba`
@autometrics
def destiny():
    random_int = random.randint(0, 2)
    if random_int == 0:
        return f"Destiny is calling moana. moana says: {moana()}"
    elif random_int == 1:
        return f"Destiny is calling neo. neo says: {neo()}"
    else:
        return f"Destiny is calling simba. simba says: {simba()}"


# Start an HTTP server on port 8080 using the Prometheus client library, which exposes our metrics to prometheus
start_http_server(8080)

print("Try this PromQL query in your Prometheus dashboard:\n")
print(
    "# Rate of calls to the `destiny` function per second, averaged over 5 minute windows\n"
)
print(
    'sum by (function, module) (rate(function_calls_count_total{caller="destiny"}[5m]))'
)

# Enter an infinite loop (with a 0.3 second sleep period), calling the `destiny` method
while True:
    greet("john")
    destiny()
    time.sleep(0.3)


# NOTE - You will want to open Prometheus to explore the metrics this example generates
5 changes: 4 additions & 1 deletion examples/docs-example.py
@@ -1,4 +1,3 @@
import time
from autometrics.autometrics import autometrics


@@ -8,4 +7,8 @@ def hello():
    print("Hello")


# Use the built-in `help` function to print the docstring for `hello`
#
# In your console, you'll see links to prometheus metrics for the `hello` function,
# which were added by the `autometrics` decorator.
help(hello)
23 changes: 16 additions & 7 deletions examples/example.py
@@ -1,8 +1,13 @@
from prometheus_client import start_http_server
from autometrics.autometrics import autometrics
import time
import random


# Defines a class called `Operations` that has two methods:
# 1. `add` - Perform addition
# 2. `div_handled` - Perform division and handle errors
#
class Operations:
    def __init__(self, **args):
        self.args = args
@@ -24,29 +29,33 @@ def div_handled(self, num1, num2):
        return result


# Perform division without handling errors
@autometrics
def div_unhandled(num1, num2):
    result = num1 / num2
    return result


@autometrics
def text_print():
    return "hello"


ops = Operations()

# Show the docstring (with links to prometheus metrics) for the `add` method
print(ops.add.__doc__)

# Show the docstring (with links to prometheus metrics) for the `div_unhandled` method
print(div_unhandled.__doc__)

# Start an HTTP server on port 8080 using the Prometheus client library, which exposes our metrics to prometheus
start_http_server(8080)

# Enter an infinite loop (with a 2 second sleep period), calling the "div_handled", "add", and "div_unhandled" methods,
# in order to generate metrics.
while True:
    ops.div_handled(2, 0)
    ops.add(1, 2)
    ops.div_handled(2, 1)
    div_unhandled(2, 0)
    text_print()
    # Randomly call `div_unhandled` with a 50/50 chance of raising an error
    div_unhandled(2, random.randint(0, 1))
    ops.add(1, 2)
    time.sleep(2)
    # Call `div_unhandled` such that it raises an error
    div_unhandled(2, 0)
3 changes: 3 additions & 0 deletions examples/fastapi-example.py
@@ -6,11 +6,14 @@
app = FastAPI()


# Set up a metrics endpoint for Prometheus to scrape
# `generate_latest` returns the latest metrics data in the Prometheus text format
@app.get("/metrics")
def metrics():
    return Response(generate_latest())


# Set up the root endpoint of the API
@app.get("/")
@autometrics
def read_root():
13 changes: 8 additions & 5 deletions prometheus.yaml
@@ -3,12 +3,15 @@ global:
evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
  # Use prometheus to scrape prometheus :)
  - job_name: "prometheus"
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
      - targets: ["localhost:9090"]

  - job_name: 'myservice'
    scrape_interval: 5s
  - job_name: "python-autometrics-example"
    # For a real deployment, you would want the scrape interval to be
    # longer but for testing, you want the data to show up quickly
    scrape_interval: 500ms
    static_configs:
      - targets: ['localhost:8080']
      - targets: ["localhost:8080"]