
[RC2 fixes] fix cloudinit disk slot and pm_parallel oddities #947

Open · wants to merge 6 commits into master

Conversation

mleone87
Collaborator

No description provided.

@@ -456,8 +456,8 @@ func resourceVmQemu() *schema.Resource {
Schema: map[string]*schema.Schema{
"ide0": schema_Ide("ide0"),
"ide1": schema_Ide("ide1"),
"ide2": schema_Ide("ide2"),
Collaborator

Why the change from ide3 to ide2?

Collaborator Author

Am I right with this change, since ide2 should not be used directly? Or maybe it can be used, but it would conflict with cloud-init if someone wants to deploy a VM with an ISO file via Terraform?

Collaborator

Well, ide2 used to be reserved for the iso property, but that was removed in #937. Currently only ide3 is reserved for cloud-init, but we could move that to the new disks schema as well and let people choose on which device they'll mount the cloud-init disk. For now I would leave it as ide3, as it's more intuitive that ide 0, 1, 2 are usable instead of 0, 1, 3.

Collaborator

Somewhere in the docs it should say that ide2 is reserved, because of the change with the cloud-init disk.

}

d.Set("reboot_required", rebootRequired)
log.Print("[DEBUG][QemuVmCreate] vm creation done!")
logger.Info().Str("vmid", d.Id()).Msgf("VM creation done!")
lock.unlock()
Collaborator

There is a deferred unlock on 802, 803.


// if vmState["status"] == "running" && d.Get("vm_state").(string) == "running" {
// diags = append(diags, initConnInfo(ctx, d, pconf, client, vmr, &config, lock)...)
// }
lock.unlock()
Collaborator

There is a deferred unlock on 1048, 1042.

@@ -1587,6 +1566,7 @@ func resourceVmQemuDelete(ctx context.Context, d *schema.ResourceData, meta inte
}

_, err = client.DeleteVm(vmr)
lock.unlock()
Collaborator

There is a deferred unlock on 1320, 1299.

Collaborator

There are some lock.unlock() calls in there that are unnecessary due to the deferred unlocks. I don't know for sure what happens when you try to unlock an already-unlocked mutex.
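For the record: calling Unlock on an already-unlocked sync.Mutex in Go is a fatal runtime error ("sync: unlock of unlocked mutex"), not a no-op, so redundant unlocks are only safe if the lock type tolerates them. A minimal sketch of an unlock made idempotent with sync.Once (illustrative type names, not the provider's actual lock):

```go
package main

import (
	"fmt"
	"sync"
)

// idempotentLock wraps a mutex so Unlock can safely be called more than
// once per Lock; duplicate calls become no-ops instead of fatal errors.
type idempotentLock struct {
	mu   sync.Mutex
	once *sync.Once
}

// Lock acquires the mutex and arms a fresh Once for the next unlock.
func (l *idempotentLock) Lock() {
	l.mu.Lock()
	l.once = new(sync.Once)
}

// Unlock releases the mutex at most once per Lock.
func (l *idempotentLock) Unlock() {
	l.once.Do(l.mu.Unlock)
}

func main() {
	var l idempotentLock
	l.Lock()
	l.Unlock()
	l.Unlock() // safe: a deferred duplicate unlock is a no-op
	fmt.Println("double unlock tolerated")
}
```

With a plain sync.Mutex the second Unlock above would crash the process, which is why the redundant calls flagged in this review matter.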

Collaborator

@Tinyblargon Tinyblargon left a comment

Confused about changing the cloud-init disk to ide2; the docs don't reflect this change. Also found some redundant unlocks, which could cause issues.

@mleone87
Collaborator Author

mleone87 commented Mar 5, 2024

So @Tinyblargon, regarding the unlock() thing: investigating the issue in #930, I found that if we use defer and return the create/update functions into read, we are not freeing the slot, because the unlock happens AFTER read ends... but the reads cannot complete because we have no free slots.

So I leave the defer in place for failures INSIDE the execution of create/update, and I explicitly unlock before read if all goes well. Reading the docs about defer something(), I understood it is acceptable for something() to be called redundantly.
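The pattern described above can be sketched with a toy one-slot lock (all names here are illustrative stand-ins for the provider's pm_parallel lock, not its actual code): the defer covers early error returns, and an explicit unlock frees the slot before the nested read tries to acquire one of its own.

```go
package main

import "fmt"

// slotLock is a toy stand-in for a pm_parallel-style slot lock:
// a buffered channel of size 1 acts as a single execution slot.
type slotLock struct{ ch chan struct{} }

func newSlotLock() *slotLock { return &slotLock{ch: make(chan struct{}, 1)} }

func (l *slotLock) lock() { l.ch <- struct{}{} }

// unlock tolerates redundant calls: draining an empty channel is a no-op.
func (l *slotLock) unlock() {
	select {
	case <-l.ch:
	default: // already unlocked; ignore the duplicate call
	}
}

func read(l *slotLock) string {
	l.lock() // read acquires its own slot
	defer l.unlock()
	return "read ok"
}

func create(l *slotLock) string {
	l.lock()
	defer l.unlock() // covers early returns on failure
	// ... create work ...
	l.unlock() // free the slot BEFORE calling read, or read would block
	return read(l)
}

func main() {
	fmt.Println(create(newSlotLock())) // read ok
}
```

If the explicit unlock in create were removed, read's lock() would block forever with a single slot, which is the deadlock described in the comment above.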

@Tinyblargon
Collaborator

@mleone87 I get what you mean about calling the read from inside the update and create. Instead of managing all the nested locks, wouldn't it be easier to create a resourceVmQemuReadNoLock() and call that from inside the create and update functions?

func resourceVmQemuRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  pconf := meta.(*providerConfiguration)
  lock := pmParallelBegin(pconf)
  defer lock.unlock()
  return resourceVmQemuReadNoLock(ctx, d, meta)
}

func resourceVmQemuReadNoLock(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
  // everything that is in the current version of resourceVmQemuRead()
}

@mleone87
Collaborator Author

mleone87 commented Mar 9, 2024

Thinking about your proposed solution @Tinyblargon, it will not work either, since there are no free slots when entering the read function.

I tested my code and it worked well in various scenarios
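The no-free-slot situation being debated can be reproduced with a toy one-slot semaphore (names and the pm_parallel = 1 setting are illustrative assumptions; a timeout is used so the demo terminates instead of deadlocking):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// One-slot semaphore, modelling pm_parallel = 1.
	slots := make(chan struct{}, 1)

	slots <- struct{}{} // create takes the only slot

	// A nested read now tries to take a slot of its own.
	select {
	case slots <- struct{}{}:
		fmt.Println("read acquired a slot")
	case <-time.After(100 * time.Millisecond):
		// In real code this branch is a deadlock: the acquire
		// blocks forever because create never released its slot.
		fmt.Println("read blocked: no free slots")
	}
}
```

This is why the fix releases the slot before read runs, rather than relying on the deferred unlock that only fires after read returns.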

@mleone87 mleone87 marked this pull request as ready for review March 13, 2024 08:36
@mleone87
Collaborator Author

This should be ready to merge.

@Tinyblargon
Collaborator

@mleone87 Still a bit confused about the change of the cloud-init disk, see #947 (comment). Everything else looks good.

@mleone87
Collaborator Author

@Tinyblargon in the end, I think the only part worth keeping is the pm_parallel fix, if it hasn't been fixed yet!

@Tinyblargon
Collaborator

@mleone87 Yes, everything else has been superseded by other pull requests.
