
Impose limit on batch request size #879

Merged
merged 1 commit into ethereum:master from batch-limit on Sep 4, 2023

Conversation

@njgheorghita (Collaborator) commented Aug 31, 2023

What was wrong?

Fixes #830

Batch requests of more than 100 items sent to a panda-ops provider were failing, since those clients are configured to handle a maximum of 100 requests per batch.

This bug was the reason our bridge had not been pumping out receipts. I can confirm that the bridge now gossips receipts as expected.

How was it fixed?

Add a limit on how many pandaops requests can be batched into a single request.
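
Roughly, the fix splits an oversized batch into chunks of at most BATCH_LIMIT (100) requests, sends each chunk concurrently, and flattens the responses. A minimal, self-contained sketch of that pattern follows; the send_chunk helper is a hypothetical stand-in for the real pandaops provider call, not code from this PR:

use serde_json::Value;

const BATCH_LIMIT: usize = 100;

// Hypothetical stand-in for the real provider call; echoes the chunk back so
// the sketch stays self-contained.
async fn send_chunk(chunk: Vec<Value>) -> anyhow::Result<Vec<Value>> {
    Ok(chunk)
}

// Split the incoming batch into chunks of at most BATCH_LIMIT, issue one
// request per chunk concurrently, and flatten the responses.
async fn batch_requests(requests: Vec<Value>) -> anyhow::Result<Vec<Value>> {
    let chunked_futures = requests
        .chunks(BATCH_LIMIT)
        .map(|chunk| send_chunk(chunk.to_vec()))
        .collect::<Vec<_>>();
    futures::future::join_all(chunked_futures)
        .await
        .into_iter()
        .try_fold(Vec::new(), |mut responses, result| {
            responses.extend(result?);
            anyhow::Ok(responses)
        })
}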

To-Do

  • Add entry to the release notes (may forgo for trivial changes)
  • Clean up commit history

@njgheorghita marked this pull request as ready for review on August 31, 2023 at 22:53
@@ -45,6 +47,23 @@ impl PandaOpsMiddleware {
/// multiple async requests. Using "ureq" consistently resulted in errors as soon as the number of
/// concurrent tasks increased significantly.
pub async fn batch_request(&self, obj: Vec<JsonRequest>) -> anyhow::Result<String> {
let futs = obj
Collaborator:

Suggested change:
- let futs = obj
+ let batched_request_futures = obj

.chunks(BATCH_LIMIT)
.map(|chunk| self.batch_request_100(chunk.to_vec()))
.collect::<Vec<_>>();
match futures::future::join_all(futs).await.into_iter().try_fold(
Collaborator:

I'd use futures::future::join_all instead of inlining

Collaborator:

Suggested change:
- pub async fn batch_request(&self, obj: Vec<JsonRequest>) -> anyhow::Result<String> {
+ pub async fn batch_requests(&self, obj: Vec<JsonRequest>) -> anyhow::Result<String> {

nit: I think that this name makes more sense given the new behavior

}
}

async fn batch_request_100(&self, obj: Vec<JsonRequest>) -> anyhow::Result<Vec<Value>> {
Collaborator:

Suggested change:
- async fn batch_request_100(&self, obj: Vec<JsonRequest>) -> anyhow::Result<Vec<Value>> {
+ async fn send_requests(&self, requests: Vec<JsonRequest>) -> anyhow::Result<Vec<Value>> {
+     if requests.len() > BATCH_LIMIT { warn!("Attempting to send requests outnumbering pandaops limit") }

- use serde_json::json;
+ use serde_json::{json, Value};

+ const BATCH_LIMIT: usize = 100;
Member:

A short docstring will be useful here to describe this const.
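
For example, something along these lines (hypothetical wording, not the PR's actual docstring) would make the intent clear:

/// Maximum number of requests the pandaops provider accepts in a single
/// batch; larger batches are split into chunks of this size.
const BATCH_LIMIT: usize = 100;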

async fn batch_request_100(&self, obj: Vec<JsonRequest>) -> anyhow::Result<Vec<Value>> {
Member:

If we change the BATCH_LIMIT const, this function name will not be relevant anymore.

@njgheorghita merged commit 00ea4e1 into ethereum:master on Sep 4, 2023
@njgheorghita deleted the batch-limit branch on September 4, 2023 at 18:17
Development

Successfully merging this pull request may close these issues.

Inspect bridge error when batching more than 100 requests.