
Project Rewrite Tracking Issue #620

Closed
4 of 13 tasks
rj00a opened this issue Jun 3, 2024 · 8 comments
Labels
enhancement New feature or request

Comments


rj00a commented Jun 3, 2024

Here are the steps that need to be taken to rewrite the project and get it updated to the latest version of the game, since folks were asking.

Should be done roughly in this order, one or more PRs per check.

  • Clean up the build process and use a standard set of lints throughout the project. CI scripts probably need to be updated too. Use new [lints] table in Cargo.toml #586
  • Move from criterion to divan for benchmarking.
  • Update all dependencies to the latest versions. Dependabot PRs should be closed.
  • Merge some or all of the changes from https://github.com/valence-rs/valence_nbt back into the main repo.
  • (TODO: maybe hold off on this step for now) Replace bevy ECS with evenio. To keep this step approachable, it should alter the architecture and crate structure as little as possible. This will let us identify any issues with the new ECS before moving to the next steps. Keep targeting MC 1.20.1 at this point. Some issues with evenio will have to be resolved first.
  • Update the extractor to the latest Minecraft version. Some of this work was completed in Rewrite Project #599 but things broke again after another update.
  • Update to the latest version of the game. To keep this approachable, we will want to postpone updating all nonessential auxiliary crates until later PRs. We will want to rethink the architecture while we're doing this, which will likely include changes such as:
    • Don't generate block registries at compile time (see next point). The generated code from build scripts takes far too long to build and is likely to be insufficient in future Minecraft updates anyway.
    • Create a standard interface for registries. One idea I had is wrapping "columns" of registry data in Arcs for easy sharing and fast lookups. Registry modification is then possible if the refcount is 1.
    • Use a lighter completion-based networking library instead of tokio. "Use compio instead tokio_uring" #619 mentioned using compio.
    • Simplify the crate structure. There are too many small crates that could just be combined.
  • Update auxiliary crates like valence_anvil, valence_command.
  • Port over the examples where appropriate.
  • Update the packet inspector to work with the new valence_protocol. Might need a rewrite.
  • Close most of the old issues and PRs from before the rewrite.
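The registry interface idea above (columns of registry data wrapped in Arcs, mutable only while the refcount is 1) can be sketched in plain Rust. This is a hypothetical illustration, not actual Valence API: `Registry` and its methods are invented names. Reads are cheap indexed lookups into the shared column; writes go through `Arc::get_mut`, which only succeeds while no other handle shares the data.

```rust
use std::sync::Arc;

// Hypothetical sketch of the "registry columns in Arc" idea.
// `Registry` is an invented name for illustration only.
#[derive(Clone)]
struct Registry<T> {
    // A "column" of registry data, shared via Arc for cheap clones
    // and fast read access.
    column: Arc<Vec<T>>,
}

impl<T> Registry<T> {
    fn new(data: Vec<T>) -> Self {
        Self { column: Arc::new(data) }
    }

    // Read access is just an index into the shared column.
    fn get(&self, id: usize) -> Option<&T> {
        self.column.get(id)
    }

    // Mutation is only allowed while no other handle shares the column:
    // Arc::get_mut returns Some only when the refcount is exactly 1.
    fn try_push(&mut self, value: T) -> bool {
        match Arc::get_mut(&mut self.column) {
            Some(col) => {
                col.push(value);
                true
            }
            None => false, // column is shared; refuse the write
        }
    }
}

fn main() {
    let mut reg = Registry::new(vec!["stone", "dirt"]);
    assert!(reg.try_push("grass")); // sole owner, mutation succeeds

    let shared = reg.clone(); // refcount is now 2
    assert!(!reg.try_push("sand")); // shared, mutation refused
    drop(shared); // refcount back to 1
    assert!(reg.try_push("sand")); // mutation succeeds again

    assert_eq!(reg.get(3), Some(&"sand"));
}
```

A copy-on-write variant could use `Arc::make_mut` instead, which clones the column when it is shared rather than refusing the write.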
@rj00a added the enhancement (New feature or request) label Jun 3, 2024
@rj00a pinned this issue Jun 3, 2024
rj00a pushed a commit that referenced this issue Jun 19, 2024
rj00a pushed a commit that referenced this issue Jun 25, 2024
As discussed in the Discord, I will defer updating bevy to its own PR. This covers (1/2) of point 3 of #620. I am not sure how to close all the
dependabot PRs.
dyc3 pushed a commit that referenced this issue Jul 28, 2024
Update to bevy 0.14. Works towards #620 task 3.
dyc3 pushed a commit that referenced this issue Jul 28, 2024
Update egui. Ticks off #620 point 3


OvermindDL1 commented Sep 15, 2024

> Replace bevy ECS with evenio.

May I ask why? bevy_ecs outperforms evenio on evenio's own benchmarks everywhere except random access, where bevy takes twice as long (because it needs a dual lookup: entity -> archetype -> data), but the times involved there are so tiny they amount to little more than a pointer-indirection cost anyway.

Here are my benchmark times, using evenio's own benchmark suite with the latest version of both evenio and bevy_ecs (git for evenio and 0.14.2 for bevy; no code changes were needed for their benchmark code, it all just worked), for the benchmarks that include bevy:

add_components

(Honestly, I expected bevy to be slightly slower here, as archetype-style ECSs often are due to extra copies, so... weird?):

Timer precision: 70 ns
add_components                fastest       │ slowest       │ median        │ mean          │ samples │ iters
├─ spawn_many_comps_40_bevy   1.662 µs      │ 44.46 µs      │ 1.705 µs      │ 2.952 µs      │ 100     │ 1000
╰─ spawn_many_comps_n_evenio                │               │               │               │         │
   ├─ 1                       276.1 ns      │ 1.73 µs       │ 282.2 ns      │ 305.4 ns      │ 100     │ 1000
   ├─ 2                       463.1 ns      │ 2.065 µs      │ 471.1 ns      │ 506.9 ns      │ 100     │ 1000
   ├─ 3                       688.1 ns      │ 2.177 µs      │ 699.1 ns      │ 735.2 ns      │ 100     │ 1000
   ├─ 5                       1.189 µs      │ 3.053 µs      │ 1.212 µs      │ 1.253 µs      │ 100     │ 1000
   ├─ 10                      2.637 µs      │ 6.394 µs      │ 2.664 µs      │ 2.762 µs      │ 100     │ 1000
   ├─ 15                      2.67 µs       │ 11.34 µs      │ 4.419 µs      │ 4.437 µs      │ 100     │ 1000
   ├─ 20                      2.762 µs      │ 11.69 µs      │ 3.285 µs      │ 3.384 µs      │ 100     │ 1000
   ├─ 25                      3.339 µs      │ 11.59 µs      │ 3.916 µs      │ 4.176 µs      │ 100     │ 1000
   ├─ 30                      5.099 µs      │ 16.01 µs      │ 5.131 µs      │ 5.406 µs      │ 100     │ 1000
   ╰─ 40                      6.76 µs       │ 20.27 µs      │ 7.936 µs      │ 8.12 µs       │ 100     │ 1000

iter

(I expected bevy to be faster here, and it indeed was in much of this, but there seems to be a weird bit of noise somewhere; neither of them scales as it should on the mandelbrot tests. Also, I find it odd they only look up a single component...):

Timer precision: 70 ns
iter                       fastest       │ slowest       │ median        │ mean          │ samples │ iters
├─ iter_mandelbrot_bevy                  │               │               │               │         │
│  ╰─ true                               │               │               │               │         │
│     ├─ 1                 38.87 µs      │ 106.5 µs      │ 64.03 µs      │ 66.08 µs      │ 100     │ 100
│     ├─ 2                 48.01 µs      │ 130.8 µs      │ 71.97 µs      │ 72.85 µs      │ 100     │ 100
│     ├─ 3                 63.36 µs      │ 133.7 µs      │ 97.65 µs      │ 98.05 µs      │ 100     │ 100
│     ├─ 8                 127 µs        │ 213.1 µs      │ 154.8 µs      │ 156.5 µs      │ 100     │ 100
│     ├─ 16                199.2 µs      │ 598.1 µs      │ 239.2 µs      │ 277.9 µs      │ 100     │ 100
│     ├─ 32                220.5 µs      │ 828.7 µs      │ 337.8 µs      │ 349.6 µs      │ 100     │ 100
│     ├─ 64                431.3 µs      │ 673.7 µs      │ 540 µs        │ 538.6 µs      │ 100     │ 100
│     ├─ 128               454.5 µs      │ 1.673 ms      │ 759.5 µs      │ 773.4 µs      │ 100     │ 100
│     ├─ 256               834.4 µs      │ 2.127 ms      │ 1.437 ms      │ 1.459 ms      │ 100     │ 100
│     ├─ 512               2.077 ms      │ 3.302 ms      │ 2.748 ms      │ 2.764 ms      │ 100     │ 100
│     ╰─ 1024              2.737 ms      │ 9.691 ms      │ 5.098 ms      │ 4.412 ms      │ 100     │ 100
├─ iter_mandelbrot_evenio                │               │               │               │         │
│  ╰─ true                               │               │               │               │         │
│     ├─ 1                 56.97 µs      │ 1.506 ms      │ 111.4 µs      │ 156.6 µs      │ 100     │ 100
│     ├─ 2                 63.31 µs      │ 775.3 µs      │ 186.8 µs      │ 189.3 µs      │ 100     │ 100
│     ├─ 3                 62.99 µs      │ 503.5 µs      │ 78.32 µs      │ 128.9 µs      │ 100     │ 100
│     ├─ 8                 74.49 µs      │ 375.4 µs      │ 141.4 µs      │ 159.8 µs      │ 100     │ 100
│     ├─ 16                90.65 µs      │ 978.5 µs      │ 229.7 µs      │ 242.3 µs      │ 100     │ 100
│     ├─ 32                110.7 µs      │ 919.3 µs      │ 243.2 µs      │ 247 µs        │ 100     │ 100
│     ├─ 64                145 µs        │ 1.086 ms      │ 337.8 µs      │ 343.9 µs      │ 100     │ 100
│     ├─ 128               227.7 µs      │ 800.8 µs      │ 449.3 µs      │ 465.5 µs      │ 100     │ 100
│     ├─ 256               411.8 µs      │ 2.352 ms      │ 636.3 µs      │ 662.1 µs      │ 100     │ 100
│     ├─ 512               687.1 µs      │ 1.418 ms      │ 709 µs        │ 765.7 µs      │ 100     │ 100
│     ╰─ 1024              1.291 ms      │ 2.342 ms      │ 1.357 ms      │ 1.464 ms      │ 100     │ 100
├─ iter_simple_bevy                      │               │               │               │         │
│  ├─ false                              │               │               │               │         │
│  │  ├─ 1                 7.806 ns      │ 44.63 ns      │ 11.79 ns      │ 11.98 ns      │ 100     │ 25600
│  │  ├─ 10                18.97 ns      │ 51.59 ns      │ 18.99 ns      │ 19.31 ns      │ 100     │ 51200
│  │  ├─ 100               64.47 ns      │ 135.7 ns      │ 64.48 ns      │ 65.53 ns      │ 100     │ 12800
│  │  ├─ 1000              630.3 ns      │ 2.205 µs      │ 630.5 ns      │ 685.9 ns      │ 100     │ 800
│  │  ├─ 10000             6.298 µs      │ 25.85 µs      │ 12.55 µs      │ 10.02 µs      │ 100     │ 100
│  │  ├─ 100000            53.55 µs      │ 150.1 µs      │ 125.1 µs      │ 99 µs         │ 100     │ 100
│  │  ╰─ 1000000           594 µs        │ 1.416 ms      │ 1.115 ms      │ 1.034 ms      │ 100     │ 100
│  ╰─ true                               │               │               │               │         │
│     ├─ 1                 1.789 µs      │ 51.56 µs      │ 1.958 µs      │ 5.421 µs      │ 100     │ 100
│     ├─ 10                4.938 µs      │ 41.98 µs      │ 9.783 µs      │ 11.13 µs      │ 100     │ 100
│     ├─ 100               13.95 µs      │ 74.89 µs      │ 18.28 µs      │ 21.41 µs      │ 100     │ 100
│     ├─ 1000              20.77 µs      │ 90.14 µs      │ 30.4 µs       │ 33.6 µs       │ 100     │ 100
│     ├─ 10000             25.73 µs      │ 186 µs        │ 42.01 µs      │ 44.17 µs      │ 100     │ 100
│     ├─ 100000            67.79 µs      │ 240.5 µs      │ 111.2 µs      │ 121.2 µs      │ 100     │ 100
│     ╰─ 1000000           217.9 µs      │ 937.3 µs      │ 476 µs        │ 477.7 µs      │ 100     │ 100
╰─ iter_simple_evenio                    │               │               │               │         │
   ├─ false                              │               │               │               │         │
   │  ├─ 1                 30.02 ns      │ 658.6 ns      │ 30.38 ns      │ 131.9 ns      │ 100     │ 25600
   │  ├─ 10                37.88 ns      │ 86.93 ns      │ 40.53 ns      │ 41.42 ns      │ 100     │ 25600
   │  ├─ 100               151 ns        │ 271 ns        │ 151.4 ns      │ 153.3 ns      │ 100     │ 3200
   │  ├─ 1000              1.284 µs      │ 4.563 µs      │ 1.286 µs      │ 1.465 µs      │ 100     │ 400
   │  ├─ 10000             12.57 µs      │ 33.79 µs      │ 25.07 µs      │ 19.54 µs      │ 100     │ 100
   │  ├─ 100000            106.8 µs      │ 308.9 µs      │ 220.3 µs      │ 194.3 µs      │ 100     │ 100
   │  ╰─ 1000000           1.341 ms      │ 4.775 ms      │ 2.521 ms      │ 2.478 ms      │ 100     │ 100
   ╰─ true                               │               │               │               │         │
      ├─ 1                 59.16 ns      │ 115.6 ns      │ 59.24 ns      │ 64.57 ns      │ 100     │ 12800
      ├─ 10                12.66 µs      │ 260.4 µs      │ 34.37 µs      │ 48.77 µs      │ 100     │ 100
      ├─ 100               20.55 µs      │ 252.3 µs      │ 31.27 µs      │ 39.79 µs      │ 100     │ 100
      ├─ 1000              42.22 µs      │ 201.3 µs      │ 89.62 µs      │ 94.8 µs       │ 100     │ 100
      ├─ 10000             58.51 µs      │ 438.7 µs      │ 118.2 µs      │ 152.4 µs      │ 100     │ 100
      ├─ 100000            79.11 µs      │ 710.3 µs      │ 232.8 µs      │ 257.6 µs      │ 100     │ 100
      ╰─ 1000000           289 µs        │ 2.196 ms      │ 765 µs        │ 866.6 µs      │ 100     │ 100

random_access

(I expected bevy to cost twice as much here, since it does two pointer indirections instead of one for the secondary index/cache. But again, these accesses are so fast that the times are mostly noise: the test needs a million entities just to reach these numbers, which at that point is essentially pure memory-pressure timing, since the tests run no code beyond a black_box access. Doing any real work at all will flatten this difference.):

Timer precision: 29 ns
random_access            fastest       │ slowest       │ median        │ mean          │ samples │ iters
├─ random_access_bevy    1.842 ms      │ 3.721 ms      │ 2.02 ms       │ 2.18 ms       │ 100     │ 100
╰─ random_access_evenio  917.1 µs      │ 1.279 ms      │ 1.052 ms      │ 1.056 ms      │ 100     │ 100
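The dual-lookup cost described above can be sketched in a few lines of plain Rust: an archetype-style world resolves an entity through a location table before it can touch component data, while a flat layout indexes the data directly. The types here are invented for illustration and are not actual bevy_ecs or evenio internals.

```rust
// Where an entity's component data lives: which archetype table, which row.
struct Location {
    archetype: usize,
    row: usize,
}

// Archetype layout: random access takes two indirections.
struct ArchetypeWorld {
    locations: Vec<Location>,  // indexed by entity id (lookup 1)
    archetypes: Vec<Vec<u64>>, // per-archetype component columns (lookup 2)
}

impl ArchetypeWorld {
    fn get(&self, entity: usize) -> u64 {
        let loc = &self.locations[entity]; // first pointer chase
        self.archetypes[loc.archetype][loc.row] // second pointer chase
    }
}

// Flat layout: one lookup per access.
struct FlatWorld {
    data: Vec<u64>,
}

impl FlatWorld {
    fn get(&self, entity: usize) -> u64 {
        self.data[entity]
    }
}

fn main() {
    let arch = ArchetypeWorld {
        locations: vec![
            Location { archetype: 0, row: 1 },
            Location { archetype: 1, row: 0 },
        ],
        archetypes: vec![vec![10, 20], vec![30]],
    };
    let flat = FlatWorld { data: vec![20, 30] };

    // Both layouts return the same data; the archetype path just pays
    // one extra indirection, which is what the benchmark measures.
    assert_eq!(arch.get(0), 20);
    assert_eq!(arch.get(1), 30);
    assert_eq!(flat.get(0), 20);
}
```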


dyc3 commented Sep 15, 2024

I can't find the discussion right now, but this has been discussed before. We are looking to replace bevy primarily because of development ergonomics. A lot of the code we write relies very heavily on events that trigger logic.

@JackCrumpLeys (Contributor) commented:

I personally don't think any developers are motivated to continue this rewrite. At the moment @rj00a has mysteriously disappeared, and I seem to be the main person still working on it. My development effort is currently focused on 1.21, and after that I would rather add features than undertake a rewrite. If someone with a lot of spare time and motivation shows up, this might happen, but it seems unlikely.

@giantcow commented:
The evenio docs allude to some of the reasoning: https://docs.rs/evenio/latest/evenio/tutorial/#why-evenio

@OvermindDL1 commented:
> A lot of the code we write relies very heavily on events that trigger logic.

> The evenio docs allude to some of the reasoning: https://docs.rs/evenio/latest/evenio/tutorial/#why-evenio

bevy_ecs also handles events, though it does so in a more controllable way with less overhead: events are batched and processed at once in a system, with parallel support. From my limited experimentation, evenio appears to call event handlers immediately, which is equivalent to bevy_ecs's ability to run a system on demand (many times in a frame if you so wish), except bevy's version has less virtual-dispatch overhead because the archetypes can be known immediately, statically if so wished (or even dynamically dispatched if you wanted to go that way, à la how evenio does it).

I experimented with evenio, but I didn't see any feature it had that bevy_ecs lacked, while bevy_ecs has a great many features that evenio lacks, and there was no performance boost to offset that either. Indeed, you lose a great deal of performance in many areas with multi-threading: evenio can only multi-thread 'within' a system (if the user chooses to), not 'across' systems, and you often cannot multi-thread within a system at all; even when you can, it is usually not worth it due to the task-dispatch overhead and the low number of iterations within most systems.

But yeah, in short: if you want to run something to handle an 'event' immediately, bevy has run-on-demand systems for that. Honestly, though, you shouldn't except in specialized cases (which do exist); otherwise it is needless performance overhead, and it makes it harder to follow 'when' something happens.
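The two dispatch models being contrasted above can be sketched in plain std Rust (no ECS crate): immediate dispatch runs handlers synchronously inside `send`, while batched dispatch only enqueues events and a "system" drains them once per tick. All names here are invented for illustration; this is not bevy_ecs or evenio API.

```rust
use std::collections::VecDeque;

// Immediate dispatch: handlers run inside `send`, before it returns
// (roughly the evenio model described above).
struct ImmediateBus {
    handlers: Vec<Box<dyn Fn(&str, &mut Vec<String>)>>,
}

impl ImmediateBus {
    fn send(&self, event: &str, log: &mut Vec<String>) {
        for h in &self.handlers {
            h(event, log); // runs synchronously, right now
        }
    }
}

// Batched dispatch: `send` only enqueues; a system drains the whole
// batch once per tick (roughly the bevy_ecs Events model).
struct BatchedBus {
    queue: VecDeque<String>,
}

impl BatchedBus {
    fn send(&mut self, event: &str) {
        self.queue.push_back(event.to_string()); // nothing runs yet
    }

    fn drain_system(&mut self, log: &mut Vec<String>) {
        // The scheduler knows statically that handling happens here.
        while let Some(ev) = self.queue.pop_front() {
            log.push(format!("handled {ev}"));
        }
    }
}

fn main() {
    let mut log = Vec::new();

    let imm = ImmediateBus {
        handlers: vec![Box::new(|ev: &str, log: &mut Vec<String>| {
            log.push(format!("handled {ev}"));
        })],
    };
    imm.send("break_block", &mut log);
    assert_eq!(log.len(), 1); // handled inside `send`

    let mut batched = BatchedBus { queue: VecDeque::new() };
    batched.send("place_block");
    batched.send("chat");
    assert_eq!(log.len(), 1); // nothing handled yet
    batched.drain_system(&mut log);
    assert_eq!(log.len(), 3); // both handled in one batch
}
```

The trade-off in the comment above falls out of this shape: immediate dispatch makes "when" an event is handled implicit in every `send` call site, while batching concentrates the work in one schedulable place.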


dyc3 commented Oct 9, 2024

I'm going to close and unpin this issue since the main driver behind it (@rj00a) is MIA for now. We can always reopen it later.

IMO, the changes suggested in this issue (particularly the switch to evenio) are a non-starter because they require too much effort. We are better off continuing to iterate on the current version.

@dyc3 closed this as not planned Oct 9, 2024
@dyc3 unpinned this issue Oct 9, 2024
7 participants