
Refactor vm calling convention to allow locals #3496

Merged
5 commits merged into main from refactor/vm-calling-convention on Dec 20, 2023

Conversation

@HalidOdat (Member) commented Dec 3, 2023

This PR makes storing locals after fp much easier by moving fp to the end of the part of the stack that the caller sets up, so the frame pointer now sits in the middle of the frame: locals start at fp, and the caller-provided values are accessed by offsetting backwards from it instead.

This adds a local (/register 😉) count field to CodeBlock.

Check out call_frame.rs for the new calling convention.

This makes #3194 easier and allows us to move data out of the call frame that is only needed for specific kinds of functions (like the async promise capability or generator state), reducing the size of the call frame and only paying for the things that are actually used.
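
A minimal sketch of the addressing this enables (illustrative only; the struct and field names here are assumptions, not the actual Boa definitions — see call_frame.rs for the real layout):

// Hedged sketch: with the caller-set-up values (function, `this`, arguments)
// placed *before* fp, locals start directly at fp and the caller's values are
// reached by offsetting backwards.
struct SketchFrame {
    fp: u32,             // points at the first local slot
    argument_count: u32, // used to offset back to the caller-provided values
}

impl SketchFrame {
    // Locals are addressed directly from fp.
    fn local_index(&self, local_offset: u32) -> u32 {
        self.fp + local_offset
    }

    // Arguments are reached by offsetting back from fp.
    fn argument_index(&self, arg: u32) -> u32 {
        self.fp - self.argument_count + arg
    }
}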

It also moves the async generator object out of CallFrame and onto the stack, reducing the memory usage of a CallFrame for non-async-generator functions.

@HalidOdat added the execution (Issues or PRs related to code execution) and vm (Issues and PRs related to the Boa Virtual Machine) labels Dec 3, 2023

github-actions bot commented Dec 3, 2023

Test262 conformance changes

Test result  | main count | PR count | difference
Total        | 95,960     | 95,960   | 0
Passed       | 76,534     | 76,534   | 0
Ignored      | 18,477     | 18,477   | 0
Failed       | 949        | 949      | 0
Panics       | 0          | 0        | 0
Conformance  | 79.76%     | 79.76%   | 0.00%


codecov bot commented Dec 3, 2023

Codecov Report

Attention: 19 lines in your changes are missing coverage. Please review.

Comparison is base (6ce34c7) 48.88% compared to head (8d41a22) 48.87%.

Files                                       | Patch % | Lines
core/engine/src/vm/opcode/generator/mod.rs  | 53.33%  | 7 Missing ⚠️
core/engine/src/vm/mod.rs                   | 73.68%  | 5 Missing ⚠️
core/engine/src/vm/call_frame/mod.rs        | 86.20%  | 4 Missing ⚠️
core/engine/src/vm/opcode/await/mod.rs      | 50.00%  | 2 Missing ⚠️
core/engine/src/module/synthetic.rs         | 0.00%   | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3496      +/-   ##
==========================================
- Coverage   48.88%   48.87%   -0.01%     
==========================================
  Files         471      471              
  Lines       48492    48499       +7     
==========================================
+ Hits        23705    23706       +1     
- Misses      24787    24793       +6     


@HalidOdat force-pushed the refactor/vm-calling-convention branch from 93d7a48 to a7aea59 on December 9, 2023 21:56
@HalidOdat added this to the v0.18.0 milestone Dec 10, 2023
@HalidOdat requested a review from a team December 10, 2023 16:16
@HalidOdat marked this pull request as ready for review December 10, 2023 16:17
@HalidOdat force-pushed the refactor/vm-calling-convention branch from 4f33a15 to 1bd91eb on December 10, 2023 16:19
@nekevss (Member) left a comment


This looks great! Always glad to see the size of CallFrame go down! 😄 Had a couple of small doc nits and one general thought that came up while reviewing.

core/engine/src/vm/call_frame/mod.rs (review comment outdated; resolved)
/// The position of the elements are relative to the [`CallFrame::fp`].
/// The position of the elements are relative to the [`CallFrame::fp`] (frame pointer).
///
/// ```text
Member

General thought on this: would it be better to store this fp as an sp (although I'm not sure it's a true stack pointer; maybe lp for local pointer), keep the fp, and make up for the extra memory by computing the argument count via self.sp - self.fp - Self::FUNCTION_PROLOGUE? Or would computing the argument count be more inefficient?

@HalidOdat (Member, Author) commented Dec 12, 2023

Would it be better to store this fp as a sp

I don't think calling it sp would be correct; the sp is implicit in the stack Vec itself, being its length (pointing to the position where the next value will be pushed).

In native calling conventions the passed arguments are not part of the callee's call frame. But it doesn't really matter, since we can make our own convention. 😄

The biggest reason for the fp change is to reduce the computation needed to access locals: before, we would have to do fp + FUNCTION_PROLOGUE + arg_count + local_offset; with this change we only do fp + local_offset.
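
Roughly (a hedged sketch of the arithmetic, not the actual Boa code; FUNCTION_PROLOGUE is just a stand-in constant for the fixed prologue slots, and its value here is assumed):

// Assumed stand-in for CallFrame::FUNCTION_PROLOGUE (illustrative value).
const FUNCTION_PROLOGUE: u32 = 2;

// Old convention: fp points before the prologue and arguments,
// so reaching a local needs the argument count as well.
fn local_slot_old(fp: u32, arg_count: u32, local_offset: u32) -> u32 {
    fp + FUNCTION_PROLOGUE + arg_count + local_offset
}

// New convention: fp points directly at the first local.
fn local_slot_new(fp: u32, local_offset: u32) -> u32 {
    fp + local_offset
}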

I kind of hinted at the next change: switching to a register-based VM 😄 Registers will be used extensively, so we want fast access. I think it's a good time to switch before we are too dependent on the stack version. While working on #3037 I found that modeling the data flow of a stack-based VM is much harder, and I was conflicted about whether to continue that work, since staying on the stack version would make switching much harder and the work would have to be redone.

Member

Yepp! I noticed the register based shift!

Just for clarity, I meant keeping fp and adding a local pointer/sp (whatever we call it) in exchange for argument_count. It would then trade restore_fp for calculating argument_count. I think we still compute argument_start anyway, which could then just be fp + FUNCTION_PROLOGUE instead of fp - argument_count.

// Current dumbed down version of `CallFrame`
struct DummyFrame {
    fp: u32,
    argument_count: u32
}

// New version
struct NewFrame {
    fp: u32,
    lp: u32
}

// argument_count
impl NewFrame {
    // Trade off would be that we compute arg_count over compute fp
    fn arg_count(&self) -> u32 {
        self.lp - self.arg_start()
    }
    
    // Throwing this in as a utility
    fn arg_start(&self) -> u32 {
        self.fp + FUNCTION_PROLOGUE
    }
}

The idea was that maybe we save a bit of compute by not having to use restore_fp, in exchange for calculating the arg_count. At least that was my thought (I could be off on that), but I figured I'd run it by you nonetheless. Of course, if that would somehow undercut the register shift you had in mind, then completely disregard this.

@HalidOdat (Member, Author) commented Dec 20, 2023

Sorry for the delay. I decided to leave this for after we implement the register VM; I added a TODO so I don't forget 😅.

I renamed fp to rp (register pointer), choosing "register pointer" over "local pointer" for consistency with the registers that will be added in the future.

core/engine/src/builtins/generator/mod.rs (review comment outdated; resolved)
@HalidOdat force-pushed the refactor/vm-calling-convention branch from 1bd91eb to 887a3f4 on December 20, 2023 11:19
@nekevss (Member) left a comment

Great work on this!

@jedel1043 added this pull request to the merge queue Dec 20, 2023
Merged via the queue into main with commit af9d395 Dec 20, 2023
14 checks passed
@HalidOdat deleted the refactor/vm-calling-convention branch December 20, 2023 17:11