chore: Local loadtest with k6 #342
base: main
Conversation
@@ -0,0 +1,48 @@
# Outpost Loadtesting Documentation
Loadtest instructions here
- **k6 Testing Scripts**: Two main testing flows:
  - **Events Throughput**: Publishes events to the Outpost service to test throughput capacity.
  - **Events Verify**: Checks the mock webhook destination to verify successful delivery of events.

These scripts are coordinated using Redis to maintain test state and correlate published events with verifications.
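A minimal sketch of the coordination idea described above: the throughput flow records each published event, and the verify flow later looks that record up to correlate a webhook delivery with its publish. The key naming and the dict standing in for Redis are illustrative assumptions, not the actual schema used by the loadtest scripts.

```python
import json
import time

# Stand-in for the shared Redis instance; in the real setup this would be a
# Redis client. Key names below are hypothetical.
state = {}

def record_published(event_id):
    """Throughput flow: store the publish timestamp for later correlation."""
    state[f"loadtest:published:{event_id}"] = json.dumps({"published_at": time.time()})

def verify_delivery(event_id, received_at):
    """Verify flow: correlate a webhook delivery back to its publish record.

    Returns the end-to-end latency in seconds, or None if the event has no
    publish record (never published, or the key expired).
    """
    raw = state.get(f"loadtest:published:{event_id}")
    if raw is None:
        return None
    published_at = json.loads(raw)["published_at"]
    return received_at - published_at

record_published("evt_1")
latency = verify_delivery("evt_1", time.time() + 0.25)
```

With a real Redis client the dict operations would become `SET` (with a TTL so stale keys expire) and `GET`, but the correlation logic is the same.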
Is there something that outputs:
- Throughput in messages/second: totalMessages / toSeconds(lastMessageTimestamp - firstMessageTimestamp)
- Latency:
  - P50 (median): half of the messages were faster than this value.
  - P95: 95% of messages were faster; 5% were slower.
  - P99: useful to catch tail latency (rare but potentially critical delays).
?
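The metrics asked about above can be sketched directly from per-message timestamps. This is an illustrative computation (nearest-rank percentiles), not code from the loadtest scripts; the sample data is made up.

```python
import math

def throughput_msgs_per_sec(timestamps):
    # totalMessages / (lastMessageTimestamp - firstMessageTimestamp)
    duration = max(timestamps) - min(timestamps)
    return len(timestamps) / duration if duration > 0 else float("inf")

def percentile(values, pct):
    # Nearest-rank percentile: smallest value with at least pct% of
    # samples at or below it.
    ordered = sorted(values)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Hypothetical sample data
timestamps = [0.0, 0.5, 1.0, 1.5, 2.0]                  # publish times, seconds
latencies = [12, 15, 18, 22, 30, 45, 60, 80, 120, 250]  # per-message latency, ms

tp = throughput_msgs_per_sec(timestamps)  # 5 messages over 2 s -> 2.5 msg/s
p50 = percentile(latencies, 50)
p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
```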
Yes, the events-throughput script simulates load. You can configure the rate (rps; one request = one publish) via the config. It outputs the number of requests sent along with the average rps.
events-verify randomly selects a few events (the count is configurable via the MAX_ITERATIONS env variable) to verify that they were delivered. This script outputs the event latency metrics (max, avg, p90, p95). Keep in mind these metrics are end-to-end latency: received timestamp - publish timestamp. We could also measure the latency from when Outpost receives the event until it's delivered, but that's not implemented yet.
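The sampling-and-report behavior described above can be sketched as follows. This assumes we have publish/receive timestamp pairs available; MAX_ITERATIONS mirrors the env variable mentioned in the comment, while the data and helper names are hypothetical.

```python
import math
import os
import random
import statistics

# Count of events to sample, mirroring the MAX_ITERATIONS env variable.
max_iterations = int(os.environ.get("MAX_ITERATIONS", "5"))

# Hypothetical timestamps: 100 published events and their delivery times.
published = {f"evt_{i}": i * 0.1 for i in range(100)}
received = {k: v + random.uniform(0.05, 0.3) for k, v in published.items()}

# Randomly select a few events and compute end-to-end latency
# (received timestamp - publish timestamp) for each.
sample = random.sample(list(published), max_iterations)
latencies = sorted(received[e] - published[e] for e in sample)

def pctl(values, pct):
    # Nearest-rank percentile on a sorted list.
    return values[max(math.ceil(pct / 100 * len(values)) - 1, 0)]

report = {
    "max": max(latencies),
    "avg": statistics.mean(latencies),
    "p90": pctl(latencies, 90),
    "p95": pctl(latencies, 95),
}
```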
implements #311