Recent additions have started specifying timeouts, applied either to a specific request or globally at the script entry point:
rf_logs.py specifies a timeout of 30 seconds when instantiating the Redfish object
All other scripts specify a timeout of 15 seconds when instantiating the Redfish object
Multipart push update applies 2 seconds per MB (determined from the file size)
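The file-size-based calculation for the multipart push update could be sketched roughly as below; the function name, the byte-to-MB conversion, and the 15-second floor are illustrative assumptions, not the script's actual code:

```python
def update_timeout(file_size_bytes, seconds_per_mb=2, minimum=15):
    """Estimate a request timeout from the size of an update image.

    Applies 2 seconds per MB, with a floor so small files still get a
    workable timeout.  Name and floor value are illustrative assumptions.
    """
    size_mb = file_size_bytes / (1024 * 1024)
    return max(minimum, int(size_mb * seconds_per_mb))

print(update_timeout(50 * 1024 * 1024))  # a 50 MB image yields 100 seconds
```

This matches the 2-seconds-per-MB rule above, so a 50 MB image gets a 100-second timeout.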
The original timeouts were stricter (5 seconds for all scripts, and approximately 1 second per 3 MB for a push update).
15 seconds is likely more than most usage needs, and we could probably bring this back down to 5 seconds. However, log entry reading could easily exceed that in some cases, so perhaps the 30-second timeout should be scoped to the log entry retrieval itself rather than the whole script.
I don't have a good sense of the "right" answer for the multipart update case; file sizes can be large, and should we penalize fast networks in order to accommodate slower ones? Is there a better solution? In Ansible, the user has to specify the timeout, but I'd prefer to avoid adding more options.
While testing a device update with rf_update.py on a small IoT device, I noticed that for large files (over 50 MB), the multipart request times out after only about 70% of the file has been uploaded. To work around this, I had to manually adjust the timeout in the script.
Could a timeout argument be added, or perhaps the timeout logic adjusted based on the total file size?
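For illustration, a user-supplied override layered over the existing file-size estimate might look like the following; the `--timeout` option name and the surrounding wiring are assumptions, not rf_update.py's actual interface:

```python
import argparse

# Hypothetical sketch: let a --timeout option override the file-size-based
# default.  Names here are illustrative, not the script's actual code.
parser = argparse.ArgumentParser(description="Perform an update with a Redfish service")
parser.add_argument("--timeout", type=int, default=None,
                    help="Request timeout in seconds; overrides the file-size estimate")
args = parser.parse_args(["--timeout", "300"])

file_size_mb = 50  # e.g. a 50 MB update image
timeout = args.timeout if args.timeout is not None else 2 * file_size_mb
print(timeout)  # 300 when overridden; would be 100 from the estimate otherwise
```

This keeps the automatic estimate as the default behavior, so only users on unusually slow links need to know the option exists.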
Sure, that's certainly something we can add as an option.
Out of curiosity, so I can better understand the scale: about how fast is the transfer to this device? We do try to make a "best guess" calculation specifically for the update script based on the file size (I think a 50 MB file should get a timeout of 100 seconds).