document the usage of PREFECT_PROFILE when developing locally and deploying in production #15597
Comments
hi @david-gang - thank you for the issue! are you using prefect 3.x or 2.x? it would be helpful to know how you're leveraging it
hi @david-gang, I would like to work on this issue and fix it. Would you please assign it to me?
@zzstoatzz I use Prefect 3.x. My trigger for this issue was a bug we had in a k8s pod: when calling prefect.get_client, a Prefect server was started. This caused higher memory usage and ultimately caused an OOM in my pod. The issue was due to the fact that I didn't set PREFECT_API_URL. Analyzing this issue and understanding the whole settings and profile mechanism, the SettingsContext and the root_settings_context, took me time. I want to make it easy to adopt Prefect as the workflow orchestration in multiple products at the company I work for. In our current code bases we have a mechanism to know whether we run in a local or a production/staging environment, so I want to write a Python wrapper which identifies the environment and sets PREFECT_PROFILE accordingly, so that instead of a new server being created I get an informative error message.
I am interested in hearing if you have additional ideas to prevent such a scenario.
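The environment-detecting wrapper described above could be sketched roughly like this. It is only a sketch: the `APP_ENV` variable and the `ensure_prefect_configured` name are hypothetical placeholders for whatever mechanism your code base uses to distinguish local from production, and the check would run before any call to `prefect.get_client`:

```python
import os

class PrefectConfigError(RuntimeError):
    """Raised when Prefect is misconfigured for the detected environment."""

def ensure_prefect_configured(environ=os.environ):
    """Fail fast with an informative message instead of letting Prefect
    silently fall back to an in-process ephemeral server.

    APP_ENV is a hypothetical discriminator; adapt the detection to
    your own local/production mechanism.
    """
    env = environ.get("APP_ENV", "local")
    if env != "local" and not environ.get("PREFECT_API_URL"):
        raise PrefectConfigError(
            f"PREFECT_API_URL is not set but APP_ENV={env!r}; "
            "Prefect would start an ephemeral server in this pod, "
            "which is almost certainly not what you want. "
            "Set PREFECT_API_URL to your Prefect server/Cloud API URL."
        )
```

Calling this once at process startup turns the silent ephemeral-server fallback into an immediate, self-explanatory failure.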
hi @adityakalburgi - unfortunately this is likely not a "good first issue", we're in the midst of making some intentional changes to how settings work - thank you for your interest! feel free to follow along here
hi @david-gang - thank you for the issue! This is something we are actively working on simplifying, so your feedback here is very helpful
I think we would fail as you expect if […]. So the question for me is "why wasn't […]?" Can you explain how things are happening in your case? i.e.
my expectation is that if you're using a kubernetes work pool to run a deployment, the […]
@zzstoatzz I know. This was a stupid bug on my side, not setting PREFECT_API_URL. But on the other hand, starting the Prefect server is unexpected behavior. Just think of a theoretical application where you use PostgreSQL, and because you forgot to set the database URL it starts a SQLite DB that is 100% compatible with PostgreSQL. Wouldn't that be surprising? My main issue is that I am writing library code internally at my company, and I want people who make such errors to get an informative error message which guides them to fix the issue.
@zzstoatzz I have a suggestion for an alternative solution. When you start the server in ephemeral mode, instead of logging "starting server", maybe log "starting server in ephemeral mode. For more information see http://"
Describe the current behavior
Currently the documented values here are PREFECT_API_KEY and PREFECT_API_URL. PREFECT_PROFILE is an important variable which is not documented there.
Describe the proposed behavior
PREFECT_PROFILE should be documented as explained in this GitHub issue.
It should be explained that when working with Prefect Cloud, PREFECT_PROFILE should be set to the cloud profile, and likewise for all the other variants.
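As a sketch of what such documentation could show: profiles live in `~/.prefect/profiles.toml`, and the active profile can be overridden per process with the PREFECT_PROFILE environment variable. The account/workspace IDs and key below are placeholders:

```toml
# ~/.prefect/profiles.toml (sketch; IDs and key are placeholders)
active = "local"

[profiles.local]
# no PREFECT_API_URL here: Prefect falls back to an ephemeral server

[profiles.cloud]
PREFECT_API_URL = "https://api.prefect.cloud/api/accounts/<account-id>/workspaces/<workspace-id>"
PREFECT_API_KEY = "<your-api-key>"
```

With a file like this, running `PREFECT_PROFILE=cloud python flow.py` would target Cloud while the default profile stays local.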
Example Use
No response
Additional context
The trigger of this issue was that a call to prefect.get_client in my k8s pod caused a Prefect server to be created, because I did not set the PREFECT_API_URL environment variable. Finding the root cause was very hard. It would have been better to get an exception saying that the URL was not set. I know that my request doesn't fully solve the issue, because I can forget to set both PREFECT_API_URL and PREFECT_PROFILE, but it is important to document this anyway.
But maybe you have an additional idea on how to make this more fail-safe.
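One fail-safe option, independent of profiles, is to refuse to start the container at all when the API URL is missing. A minimal sketch, assuming a hypothetical `check_prefect_url` helper sourced by the pod's entrypoint script:

```shell
# Sketch: fail a container at startup when PREFECT_API_URL is missing,
# so Prefect can never silently fall back to an ephemeral server later.
# Hypothetical helper; adapt to your own entrypoint.
check_prefect_url() {
  if [ -z "${PREFECT_API_URL:-}" ]; then
    echo "PREFECT_API_URL is not set; refusing to start" >&2
    return 1
  fi
}
```

Failing at pod startup surfaces the misconfiguration immediately, instead of as a hard-to-diagnose OOM hours later.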