
Add exponential backoff to queries using RetryClient #60

Merged
merged 10 commits on Sep 16, 2023

Conversation

@ipatka (Contributor) commented Sep 7, 2023

Similar to #59 but using ethers' default `RetryClient` (looks a lot cleaner)

Acquisition Options:
  -l, --requests-per-second <limit>  Ratelimit on requests per second
      --max-retries <R>              Specify max retries on provider errors [default: 10]
      --initial-backoff <B>          Specify initial backoff for retry strategy (ms) [default: 500]
      --max-concurrent-requests <M>  Global number of concurrent requests
      --max-concurrent-chunks <M>    Number of chunks processed concurrently
  -d, --dry                          Dry run, collect no data

New provider

    let provider =
        Provider::<RetryClient<Http>>::new_client(&rpc_url, args.max_retries, args.initial_backoff)
            .map_err(|_e| ParseError::ParseError("could not connect to provider".to_string()))?;

@ipatka ipatka marked this pull request as draft September 7, 2023 01:46
@sslivkoff (Member)

looking great

wdyt the defaults should be? max-retries=10 could be fine, unless that means that a broken node won't be detected for something like 2**10 * 0.5 = 512 seconds. but maybe the RetryClient is smart enough to stop early if the node is messed up. for reference these are the defaults used by the alchemy sdk (initial=1000ms max_retries=5)
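The worst-case wait implied by the defaults above can be sketched with a few lines of Rust. `total_backoff_ms` is a hypothetical helper (not part of cryo or ethers) that just sums a doubling backoff schedule, assuming every retry sleeps for its full delay:

```rust
// Worst-case cumulative sleep for `max_retries` attempts with a
// doubling (exponential) backoff starting at `initial_backoff_ms`.
// Hypothetical helper for illustration only.
fn total_backoff_ms(max_retries: u32, initial_backoff_ms: u64) -> u64 {
    (0..max_retries).map(|i| initial_backoff_ms << i).sum()
}

fn main() {
    // 10 retries at 500 ms: 500 * (2^10 - 1) = 511,500 ms ≈ 512 s
    println!("{}", total_backoff_ms(10, 500));
    // alchemy-sdk style defaults, 5 retries at 1000 ms: 31,000 ms = 31 s
    println!("{}", total_backoff_ms(5, 1000));
}
```

This is the back-of-the-envelope math behind the "2**10 * 0.5 = 512 seconds" figure: the geometric sum 500 * (2^10 - 1) ms is about 511.5 s, versus only 31 s for the Alchemy-style defaults.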

@ipatka (Contributor, Author) commented Sep 7, 2023

> looking great
>
> wdyt the defaults should be? max-retries=10 could be fine, unless that means that a broken node won't be detected for something like 2**10 * 0.5 = 512 seconds. but maybe the RetryClient is smart enough to stop early if the node is messed up. for reference these are the defaults used by the alchemy sdk (initial=1000ms max_retries=5)

5 & 1000 (or maybe 500) seems reasonable. 10 & 500 came from the default values in the `RetryClient` docs. Will update the defaults!

@ipatka ipatka marked this pull request as ready for review September 11, 2023 22:44
@ipatka (Contributor, Author) commented Sep 11, 2023

@sslivkoff defaults updated and marking this one ready for review. will close the other one

@sslivkoff (Member)

looks great

@sslivkoff sslivkoff merged commit a2941ce into paradigmxyz:main Sep 16, 2023
3 checks passed
sslivkoff added a commit that referenced this pull request Oct 10, 2023
sslivkoff pushed a commit that referenced this pull request Oct 10, 2023
* wip trying to get error type

* Add retry option to args

* format

* adapt to new fetcher pattern

* remove debug

* fix test

* Try using retry client

* fix

* update defaults