pipeline: fully move to S3 #67
Conversation
(Still testing this.)
OK, dropping WIP, now tested! ✔️
@@ -102,18 +102,12 @@ podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultCon
        }

        stage('Fetch') {
            // XXX: drop `!prod && ` once we've uploaded prod builds there
            if (!prod && s3_builddir) {
so we ran this once already? or do we need coreos/coreos-assembler#545 first?
We did buildupload once already, so we have something to buildprep from. coreos/coreos-assembler#545 is for the case where there's nothing to buildprep from (which is mostly relevant for devel pipelines).
this mostly LGTM. only concern I have is if we (or anyone else) want to set up another pipeline and not touch S3. Would keeping the rsync functionality around be useful?
Now that we have a prod build in S3, we can fully switch over to it. Most of the changes are straightforward. One note of interest: the "pruning" stage is now really just about pruning the local cache. For now, we're not pruning from the bucket at all, pending a more defined policy & mechanism. Note we're still uploading the latest build to the artifact server to make it easier for folks to download. Though soon we should replace that with a frontend.
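To make the "pruning is now only about the local cache" point concrete, here's a minimal sketch of that kind of logic (a hypothetical helper, not the pipeline's actual Groovy code): keep the newest N build directories on the local PVC and delete the rest, without ever touching the S3 bucket.

```python
import shutil
from pathlib import Path


def prune_local_builds(builds_dir: str, keep: int = 3) -> list:
    """Prune the local build cache, keeping only the newest `keep` builds.

    Build directories are assumed to sort chronologically by name (e.g.
    version/timestamp IDs). This deliberately does NOT touch the bucket:
    remote pruning is deferred pending a defined policy & mechanism.
    """
    builds = sorted(p for p in Path(builds_dir).iterdir() if p.is_dir())
    for old in builds[:-keep]:
        shutil.rmtree(old)
    return [p.name for p in builds[-keep:]]
```

The same "sort by name, drop the oldest" assumption would need revisiting if build IDs ever stopped sorting chronologically.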
OK, I've reworked this for now so that we still upload the latest build to the artifact server. From the comment:

// XXX: For now, we keep uploading the latest build to the artifact
// server to make it easier for folks to access since we don't have
// a stream metadata frontend/website set up yet. The key part here
// is that it is *not* the canonical storage for builds.
So as per the previous comment, I've kept it for now for another reason. Though once we're ready to move on for good, I think I'd rather we drop it completely until someone actually shows up with that use case. Note for the no-S3 devel pipeline, we do store builds in the PVC. So one should be able to just bring up e.g. simple-httpd to access it over HTTP if they'd like.
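As a rough stand-in for what simple-httpd does here (the directory path and helper name are illustrative, not anything from the pipeline), serving a PVC-backed builds directory over HTTP is a few lines with Python's stdlib:

```python
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler


def serve_builds(directory: str, port: int = 0) -> HTTPServer:
    """Serve `directory` over plain HTTP; port=0 picks a free port.

    Runs in a daemon thread; the bound address is server.server_address.
    This is just a dev convenience, like simple-httpd against the PVC --
    it is not the canonical storage for builds.
    """
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With that running, fetching e.g. `http://<host>:<port>/meta.json` returns files straight off the volume.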
hmm, I'd prefer to just open up the bucket and make it browsable IMHO
I'm OK with letting
I thought consensus was to not have it browsable, right? Because we don't want users mucking around in there in the first place. See e.g. coreos/fedora-coreos-tracker#169 (comment).
yeah, should have clarified: make it browsable until we get a release browser. Either way, I guess we can still rsync the content out and just point people there for now.
AFAIK, S3 doesn't have a native "web browser" (like e.g. the Apache listing). Making the bucket "browsable" would mean indexing it with something like https://github.com/projectatomic/papr/blob/master/papr/utils/indexer.py which drops a bunch of
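In the same spirit as that indexer script, here's a minimal sketch of the idea (using the local filesystem as a stand-in for the bucket; names are illustrative): walk the tree and drop a bare-bones `index.html` listing into each directory.

```python
import html
from pathlib import Path


def write_indexes(root: str) -> int:
    """Write an index.html listing into `root` and every subdirectory.

    Each index links the directory's entries, mimicking an Apache-style
    autoindex. Returns the number of index files written.
    """
    dirs = [Path(root)] + [p for p in Path(root).rglob("*") if p.is_dir()]
    for d in dirs:
        entries = sorted(p for p in d.iterdir() if p.name != "index.html")
        links = "\n".join(
            '<li><a href="%s%s">%s</a></li>'
            % (html.escape(e.name), "/" if e.is_dir() else "", html.escape(e.name))
            for e in entries
        )
        (d / "index.html").write_text(
            "<html><body><ul>\n%s\n</ul></body></html>\n" % links
        )
    return len(dirs)
```

For a real bucket, the same logic would run over `ListObjects` results and re-upload the generated index files, which is exactly the kind of churn the indexer approach implies.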