
init: dynamically determine necessary size for ALLOCMAN_VIRTUAL_SIZE #115

Open · wants to merge 1 commit into base: master

Conversation

chrisguikema (Contributor)

The value of 72000 (used to scale guest_ram_mb to ALLOCMAN_VIRTUAL_SIZE bytes) was found empirically: ALLOCMAN_VIRTUAL_SIZE was increased in steps of 500000000 bytes and the difference in allocatable RAM measured. For each 500000000-byte increase, the guest was able to allocate an additional 6960 MiB.
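
For reference, the arithmetic behind the scale factor as a standalone check (a minimal sketch; the constants come from the description above, and the round-up to 72000 just adds a small safety margin):

    #include <stdio.h>

    int main(void)
    {
        /* Every 500000000 extra bytes of allocator virtual size let the
         * guest map an additional 6960 MiB of RAM. */
        const unsigned long bytes_per_step = 500000000UL;
        const unsigned long mib_per_step = 6960UL;

        /* Integer division: 500000000 / 6960 = 71839 */
        printf("bytes of virtual size per guest MiB: %lu\n",
               bytes_per_step / mib_per_step);
        return 0;
    }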

     * values gives a scale of 71839, which we round up to 72000
     */

    /*- set guest_ram_mb = configuration[me.name].get('guest_ram_mb', 4096) -*/
Member

I wonder if it might be better to just have a function that returns the config value guest_ram_mb and not pull in any assumptions about which allocator is used. Instead, the calculation would be made in pre_init(), because that is where the decision about the allocator is made.

chrisguikema (Contributor, Author)

Isn't that the point of the templates though? Also, what do you mean by "which allocator is used"?

This was inspired by the ARM version, which does something similar by pulling allocator_mempool_size from the template. Basically, with the fixed #define you can allocate ~4GB of RAM, which was enough for a 32-bit VM, but a 64-bit VM can have far more, so the #define wasn't enough. I'm not sure what benefit moving the calculation into pre_init would add, as this is exactly what the template is for.

https://github.com/seL4/camkes-vm/blob/master/templates/seL4AllocatorMempool.template.c#L9C111-L9C111
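
For readers who don't want to follow the link, the ARM pattern referenced there looks roughly like this (a hedged sketch: the getter names match the ARM bootstrap snippet quoted below and the allocator_mempool_size config key is the one mentioned above, but the default size and alignment here are purely illustrative):

    #include <stddef.h>

    /*- set allocator_mempool_size = configuration[me.name].get('allocator_mempool_size', '0x800000') -*/

    /* Static pool whose size is fixed at template-instantiation time */
    static char allocator_mempool[/*? allocator_mempool_size ?*/]
        __attribute__((aligned(0x1000)));

    size_t get_allocator_mempool_size(void)
    {
        return sizeof(allocator_mempool);
    }

    char *get_allocator_mempool(void)
    {
        return allocator_mempool;
    }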

Member

But there we have a config setting dedicated to the allocator pool. Maybe having that is not really a good idea at all, because it seems like an internal detail one does not really care about. All a user really wants is to set the VM's amount of memory and leave the rest to some internal automation.
Here we have a config setting for the VM's RAM. I have some doubt that this template gives much; it just adds another layer. Doing the calculation directly in pre_init() appears much more straightforward, as we would just replace the fixed define there. All that is left in the template is a getter for guest_ram_mb. But that function could then also be added in seL4VMParameters.template.c?

chrisguikema (Contributor, Author)

This is for x86 only, and not ARM. ARM and x86 use different methods.

ARM:

    /* Initialize allocator */
    allocman = bootstrap_use_current_1level(
                   simple_get_cnode(simple),
                   simple_get_cnode_size_bits(simple),
                   simple_last_valid_cap(simple) + 1 + num_extra_frame_caps,
                   BIT(simple_get_cnode_size_bits(simple)),
                   get_allocator_mempool_size(), get_allocator_mempool());

x86:

    error = sel4utils_reserve_range_no_alloc(&vspace, pool_reservation.res,
                                             allocman_virtual_size(), seL4_AllRights, 1, &vaddr);
    if (error) {
        ZF_LOGF("Failed to provide virtual memory allocator");
    }
    bootstrap_configure_virtual_pool(allocman, vaddr, allocman_virtual_size(),
                                     simple_get_init_cap(&camkes_simple, seL4_CapInitThreadPD));

I don't know why the methods are different, but x86 doesn't use the VMParameters template at all.
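
For comparison, the x86 template side of this PR could plausibly look like the following (a hypothetical sketch: allocman_virtual_size() is the function the pre_init() snippet above calls, the guest_ram_mb default of 4096 is taken from the template line quoted earlier, and 72000 is the empirically derived factor from the description):

    #include <stddef.h>

    /*- set guest_ram_mb = configuration[me.name].get('guest_ram_mb', 4096) -*/

    /* 72000 bytes of allocator virtual address space per MiB of guest RAM */
    size_t allocman_virtual_size(void)
    {
        return (size_t)/*? guest_ram_mb ?*/ * 72000UL;
    }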

The value of 72000 (used to scale guest_ram_mb to ALLOCMAN_VIRTUAL_SIZE
bytes) was found by increasing ALLOCMAN_VIRTUAL_SIZE in steps of
500000000 bytes and measuring the difference in allocatable RAM. For
each 500000000-byte increase, the guest was able to allocate an
additional 6960 MiB. This allows 64-bit x86 VMs to allocate large
amounts of memory.

Signed-off-by: Chris Guikema <[email protected]>
    @@ -174,12 +173,12 @@ void pre_init(void)
            ZF_LOGF("Failed to allocate reservation");
        }
        error = sel4utils_reserve_range_no_alloc(&vspace, pool_reservation.res,
axel-h (Member) · Oct 26, 2023

I think with the current parameters we should have this here. And we can't refer to ALLOCMAN_VIRTUAL_SIZE any longer because it got removed, so I adjusted the comment:

    /* For every 500,000,000 bytes in the allocator virtual size
     * the guest can map 6,960 MiB. Dividing the two
     * values gives a scale of 71839, which we round up to 72000
     */
    unsigned int allocman_virt_size = get_guest_ram_mb() * 72000;

My point is that the template does not really know what we do here, and it should not have to (at least for now). Your current goal seems to be making things more dynamic, but we should separate that from aiming towards a config option for the allocator pool (like we have on ARM). So I consider it simpler for now if we just provide a get_guest_ram_mb(). As a follow-up we could consider aligning x86 and ARM a bit more.
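
A minimal sketch of that alternative, assuming the getter lives in a template (for example the seL4VMParameters.template.c suggested earlier) and the allocator-specific scaling moves into pre_init():

    /* Template side: expose only the config value; the template makes
     * no assumptions about which allocator is used. */
    /*- set guest_ram_mb = configuration[me.name].get('guest_ram_mb', 4096) -*/

    int get_guest_ram_mb(void)
    {
        return /*? guest_ram_mb ?*/;
    }

    /* pre_init() side, where the allocator decision is made: */
    unsigned int allocman_virt_size = get_guest_ram_mb() * 72000;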

chrisguikema (Contributor, Author)

It's not really trying to make things more dynamic; it's just trying to allow for large RAM allocation, replacing a predefined macro with a value calculated from the VM's needs. ARM doesn't define the amount of RAM in the configuration; x86 does, and it's used in other places throughout the VMM.
