
feat: L1 state manager (reduce bridge latency milestone) #1018

Draft

wants to merge 18 commits into base: syncUpstream/active

Conversation

NazariiDenha

1. Purpose or design rationale of this PR

  • task page
  • milestone link

2. PR title

Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:

  • build: Changes that affect the build system or external dependencies (example scopes: yarn, eslint, typescript)
  • ci: Changes to our CI configuration files and scripts (example scopes: vercel, github, cypress)
  • docs: Documentation-only changes
  • feat: A new feature
  • fix: A bug fix
  • perf: A code change that improves performance
  • refactor: A code change that neither fixes a bug, adds a feature, nor improves performance
  • style: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc)
  • test: Adding missing tests or correcting existing tests

3. Deployment tag versioning

Has the version in params/version.go been updated?

  • This PR doesn't involve a new deployment, git tag, docker image tag, and it doesn't affect traces
  • Yes

4. Breaking change label

Does this PR have the breaking-change label?

  • This PR is not a breaking change
  • Yes

jonastheis and others added 6 commits August 29, 2024 12:34
 Conflicts:
	cmd/geth/main.go
	core/state_processor_test.go
	core/txpool/legacypool/legacypool.go
	eth/backend.go
	eth/ethconfig/config.go
	eth/gasprice/gasprice_test.go
	eth/handler.go
	eth/protocols/eth/broadcast.go
	eth/protocols/eth/handlers.go
	go.mod
	go.sum
	miner/miner.go
	miner/miner_test.go
	miner/scroll_worker.go
	miner/scroll_worker_test.go
	params/config.go
	params/version.go
	rollup/rollup_sync_service/rollup_sync_service_test.go

semgrep-app bot commented Sep 1, 2024

Semgrep found 6 findings (ssc-46663897-ab0c-04dc-126b-07fe2ce42fb2):

Risk: Affected versions of golang.org/x/net, golang.org/x/net/http2, and net/http are vulnerable to Uncontrolled Resource Consumption. An attacker may cause an HTTP/2 endpoint to read arbitrary amounts of header data by sending an excessive number of CONTINUATION frames.

Fix: Upgrade this library to at least version 0.23.0 at go-ethereum/go.mod:144.

Reference(s): GHSA-4v7x-pqxf-cx7m, CVE-2023-45288


rollup/l1_state_tracker/l1_reader.go (outdated)
}

// FetchRollupEventsInRange retrieves and parses commit/revert/finalize rollup events between block numbers: [from, to].
func (r *L1Reader) FetchRollupEventsInRange(from, to uint64) ([]types.Log, error) {


This is probably going to be the main function of the L1Reader.

  1. It should perform queries in batches (like the rollup verifier does, iirc).
  2. L1Reader should be as easy to use as possible. That means callers shouldn't handle types.Log at all; it would be much nicer if all of that is done within this method. Specifically, I think we should parse the logs here and return instances of a RollupEvent interface (or something similar). Each event should be identifiable by its type and then be usable as such. There are already events here which we can repurpose and extend (see the sketch after this snippet):
    type L1CommitBatchEvent struct {
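A minimal sketch, under assumptions, of what the suggested interface and batching could look like. Only FetchRollupEventsInRange and L1CommitBatchEvent come from the code under review; RollupEvent's methods, the event-type constants, the 500-block batch size, and the fetchRollupEventLogs / parseRollupEvents helpers are hypothetical (imports assumed: fmt, math/big):

```go
// RollupEventType distinguishes commit/revert/finalize events (sketch only).
type RollupEventType int

const (
	CommitEventType RollupEventType = iota
	RevertEventType
	FinalizeEventType
)

// RollupEvent hides types.Log from callers; L1CommitBatchEvent and friends would implement it.
type RollupEvent interface {
	Type() RollupEventType
	BatchIndex() *big.Int
}

const fetchLogsBatchSize = 500 // hypothetical number of blocks per eth_getLogs query

// FetchRollupEventsInRange queries logs in fixed-size block batches and returns parsed, typed events.
func (r *L1Reader) FetchRollupEventsInRange(from, to uint64) ([]RollupEvent, error) {
	var events []RollupEvent
	for start := from; start <= to; start += fetchLogsBatchSize {
		end := start + fetchLogsBatchSize - 1
		if end > to {
			end = to
		}
		logs, err := r.fetchRollupEventLogs(start, end) // hypothetical raw eth_getLogs helper
		if err != nil {
			return nil, fmt.Errorf("failed to fetch rollup event logs in range [%d, %d]: %w", start, end, err)
		}
		parsed, err := r.parseRollupEvents(logs) // hypothetical decoder from types.Log into RollupEvent values
		if err != nil {
			return nil, err
		}
		events = append(events, parsed...)
	}
	return events, nil
}
```

Callers could then type-switch on the returned events instead of decoding raw logs themselves.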

rollup/l1_state_tracker/l1_reader.go (outdated)
	return
case <-syncTicker.C:
	err := t.syncLatestHead()
	if err != nil {


Here we should probably retry with exponential backoff or something similar (see the syncing pipeline) instead of just skipping and waiting for the next tick; see the sketch below.
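A minimal sketch of that retry, assuming a hypothetical t.ctx shutdown context on the tracker, a 100 ms base delay, and a cap of 5 attempts; it is not the syncing pipeline's actual backoff code:

```go
case <-syncTicker.C:
	// Retry the sync a few times with exponential backoff before giving up until the next tick.
	backoff := 100 * time.Millisecond // hypothetical base delay
	for attempt := 0; attempt < 5; attempt++ {
		err := t.syncLatestHead()
		if err == nil {
			break
		}
		log.Warn("failed to sync latest L1 head, retrying", "attempt", attempt, "err", err)
		select {
		case <-t.ctx.Done(): // hypothetical shutdown signal
			return
		case <-time.After(backoff):
			backoff *= 2
		}
	}
```

Keeping the total retry budget below the ticker interval avoids overlapping sync attempts.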

rollup/l1_state_tracker/l1_tracker.go (four outdated review threads, resolved)
if reorg && sub.lastSentHeader != nil {
	// The subscriber is subscribed to a deeper ConfirmationRule than the reorg depth -> this reorg doesn't affect the subscriber.
	// Since the subscribers are sorted by ConfirmationRule, we can return here.
	if sub.lastSentHeader.Number.Uint64() < headerToNotify.Number.Uint64() {
NazariiDenha (Author):

The reorg doesn't affect the subscriber, but I think the new header should still be sent.

jonastheis:

I don't understand what you mean here.

Let's assume the reorg depth is 3, the subscriber is only interested in 5 confirmations, and the tip of L1 is at 103. Then this reorg at block 100 doesn't affect the subscriber, nor do we need to notify again for block 95, as we already did that when we first processed block 100 (the one that got reorged).

The test should cover all of these scenarios in detail, as it is indeed quite tricky.

NazariiDenha (Author), Sep 5, 2024:

Let's assume we called notifyLatest with newHeader at height 103 and reorg set to true. Then for a subscriber with depth 5, headerToNotifyNumber will be 98. Even if the last sent number for that subscriber is less than 98 and reorg is true, we can still inform the subscriber that there is a new block at depth 5: the reorg doesn't affect this subscriber, but there is still a new block for them. See the sketch below.
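One possible shape of that logic, sketched under assumptions: sub, lastSentHeader, headerToNotify, and reorg come from the snippet above, while sub.notify and its boolean reorg flag are hypothetical:

```go
// headerToNotify is the header at this subscriber's confirmation depth,
// e.g. block 98 when the new tip is 103 and the depth is 5.
if sub.lastSentHeader == nil || sub.lastSentHeader.Number.Uint64() < headerToNotify.Number.Uint64() {
	// Even during a reorg that is shallower than this subscriber's confirmation
	// depth, there is still a new confirmed block at that depth: deliver it as a
	// normal update.
	sub.notify(headerToNotify, false) // hypothetical delivery helper; false = no reorg for this subscriber
} else if reorg {
	// The previously sent header is at or above the new notification height, so
	// the reorg may invalidate it: re-send with the reorg flag set.
	sub.notify(headerToNotify, true)
}
```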

jonastheis, Sep 5, 2024:

You are right. I'll need to implement the reorg handling differently.


// NewReader initializes a new Reader instance
func NewReader(ctx context.Context, config Config, l1Client Client) (*Reader, error) {
	if config.ScrollChainAddress == (common.Address{}) {


Should also check for L1MessageQueueAddress; a sketch follows below.
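A minimal sketch of the suggested validation, assuming Config carries an L1MessageQueueAddress field alongside ScrollChainAddress; the error wording is illustrative only (imports assumed: errors, common):

```go
if config.ScrollChainAddress == (common.Address{}) {
	return nil, errors.New("must pass a non-zero ScrollChainAddress to the L1 reader")
}
if config.L1MessageQueueAddress == (common.Address{}) {
	return nil, errors.New("must pass a non-zero L1MessageQueueAddress to the L1 reader")
}
```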

Base automatically changed from feat/sync-directly-from-da-rebased to syncUpstream/active October 16, 2024 02:11
Labels: none yet
Projects: none yet

2 participants