Increase ingest/publish rate to 1K granules/min #340

Merged
merged 1 commit into main on Jan 26, 2024

Conversation

chuckwondo (Collaborator)

In conjunction, decrease the discover/queue rate so that it roughly matches the ingest/publish rate, ensuring messages are never in jeopardy of reaching the queue retention period of 4 days (AWS allows up to 14 days, but Cumulus does not allow this to be configured for the background job queue), no matter how large the collection is. If we can manage to keep the ingest/publish rate equal to the discover/queue rate, we can ingest a collection of any size without concern, because no message will ever sit in the queue for more than perhaps a few minutes.

However, since making both rates exactly identical is impossible, it is better to err in favor of a slightly greater ingest/publish rate, because then messages never remain on the queue for more than a few moments. If we were to err slightly the other way, with discover/queue slightly faster, the queue would grow ever so slowly. Given a large enough collection, even this slow growth would eventually lead to messages exceeding the retention period, but that would likely require a collection of several million granules, perhaps at least 10M (see the back-of-the-envelope sketch below).
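As a rough illustration of that last point (not part of the change itself), the following back-of-the-envelope Python model treats the queue as a simple FIFO where granule i is queued at time i / discover_rate and ingested at time i / ingest_rate. The discover rate of 100K granules/min is a hypothetical worst case, not a real configuration value:

```python
RETENTION_MIN = 4 * 24 * 60  # 4-day queue retention period, in minutes


def max_wait_minutes(n_granules: int, discover_rate: float, ingest_rate: float) -> float:
    """Approximate time the last granule spends on the queue, in minutes,
    assuming discovery and ingest run at constant rates and the queue
    never empties."""
    return n_granules / ingest_rate - n_granules / discover_rate


# Worst case: discovery far outpaces ingest at 1K granules/min, so only a
# collection smaller than roughly 1,000 * 5,760 = 5.76M granules fits
# entirely inside the retention window.
for n in (1_000_000, 5_000_000, 10_000_000):
    wait = max_wait_minutes(n, discover_rate=100_000, ingest_rate=1_000)
    status = "exceeds" if wait > RETENTION_MIN else "within"
    print(f"{n:>12,} granules -> max wait ~ {wait / (24 * 60):.1f} days ({status} 4-day retention)")
```

With only a slight discover/queue surplus, the mismatch term is much smaller, so the collection would have to be far larger still before any message aged out.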

Also, make error handling a bit more robust: do our utmost to retry, and if all else fails, make sure the error is recorded for Athena queries. There have been recent discrepancies between the number of errors we see in Athena and the number of granules with status "failed", where Athena appears to be missing failures. This may be due to the RecordFailure step not reliably capturing and writing failures to S3. A hedged sketch of the intended retry-then-record pattern follows.
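Below is a minimal sketch of that retry-then-record pattern, assuming a hypothetical ingest_and_publish() task and a hypothetical failures/ prefix in an internal bucket that Athena queries; the real workflow is a Cumulus Step Functions state machine, so this is illustrative only:

```python
import json
import time
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

FAILURE_BUCKET = "my-cumulus-internal"  # hypothetical bucket name
FAILURE_PREFIX = "failures/"            # hypothetical prefix scanned by Athena
MAX_ATTEMPTS = 3


def record_failure(granule_id: str, error: Exception) -> None:
    """Write a failure record to S3 so it shows up in Athena queries."""
    record = {
        "granuleId": granule_id,
        "error": repr(error),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    s3.put_object(
        Bucket=FAILURE_BUCKET,
        Key=f"{FAILURE_PREFIX}{granule_id}-{uuid.uuid4()}.json",
        Body=json.dumps(record).encode("utf-8"),
    )


def ingest_with_retries(granule_id: str) -> None:
    """Retry ingest/publish; if every attempt fails, record the failure
    before re-raising so the granule ends up with status "failed" and a
    matching error record in S3."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            ingest_and_publish(granule_id)  # hypothetical task
            return
        except Exception as error:
            if attempt == MAX_ATTEMPTS:
                record_failure(granule_id, error)
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```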

Fixes #337

@krisstanton (Collaborator) left a comment

Looks good

@chuckwondo merged commit 2dff693 into main on Jan 26, 2024
6 checks passed