
SPIKE: Investigate block time ingestion #1715

Closed
zone117x opened this issue Sep 19, 2023 · 1 comment · Fixed by #1741
zone117x commented Sep 19, 2023

Block ingestion is taking either ~3.5 seconds or ~30 seconds, with the two cases roughly evenly distributed over time. The slow 30-second ingestion time is going to be a problem with 5-second Nakamoto block times.

There isn't much deviation from these two durations, which suggests a relatively straightforward cause. The first suspect is that re-orgs may be triggering the 30-second ingestions.

More debugging is needed to confirm the cause and look into a fix.

@zone117x zone117x self-assigned this Sep 19, 2023
@smcclellan smcclellan changed the title Investigate block time ingestion SPIKE: Investigate block time ingestion Sep 19, 2023
@smcclellan smcclellan added this to the Q3-2023 milestone Sep 19, 2023

zone117x commented Oct 3, 2023

The spike is caused by the nft_custody materialized view refresh. The query runs concurrently, i.e. the /new_block event processing kicks off the view refresh in the background. The query can take a couple of minutes, and while it's running, all subsequent interactions with the Postgres instance slow down, including ingestion of the next block.

It doesn't look like the materialized view SQL can be optimized enough. We likely need to move to an approach that tracks NFT custody more explicitly.
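One shape such explicit tracking could take (a sketch only, not the fix that eventually landed in the linked PR; the types and function names below are hypothetical) is to apply each NFT event incrementally to a custody table keyed by (asset_identifier, value), keeping only the most recent event per key, instead of recomputing the whole view:

```typescript
// Hypothetical incremental custody tracking. Each key maps to the most
// recent NFT event for that (asset_identifier, value) pair, so a new block
// costs O(events in block) instead of a full-table view refresh.

interface NftEvent {
  asset_identifier: string;
  value: string;
  recipient: string;
  block_height: number;
  microblock_sequence: number;
  tx_index: number;
  event_index: number;
}

type CustodyKey = string; // `${asset_identifier}|${value}`
type Custody = Map<CustodyKey, NftEvent>;

// Mirrors the view's ORDER BY: later block, then microblock, tx, event index.
function isNewer(a: NftEvent, b: NftEvent): boolean {
  if (a.block_height !== b.block_height) return a.block_height > b.block_height;
  if (a.microblock_sequence !== b.microblock_sequence)
    return a.microblock_sequence > b.microblock_sequence;
  if (a.tx_index !== b.tx_index) return a.tx_index > b.tx_index;
  return a.event_index > b.event_index;
}

// Upsert each event, keeping only the most recent one per key.
function applyNftEvents(custody: Custody, events: NftEvent[]): Custody {
  for (const ev of events) {
    const key = `${ev.asset_identifier}|${ev.value}`;
    const current = custody.get(key);
    if (current === undefined || isNewer(ev, current)) custody.set(key, ev);
  }
  return custody;
}
```

In a real database this would be an `INSERT ... ON CONFLICT DO UPDATE` against a custody table; re-orgs would still need special handling, since a rolled-back event can't be undone from the latest row alone.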

Relevant code here:

```ts
async refreshNftCustody(txs: DataStoreTxEventData[], unanchored: boolean = false) {
  await this.sqlWriteTransaction(async sql => {
    const newNftEventCount = txs
      .map(tx => tx.nftEvents.length)
      .reduce((prev, cur) => prev + cur, 0);
    if (newNftEventCount > 0) {
      // Always refresh unanchored view since even if we're in a new anchored block we should update the
      // unanchored state to the current one.
      await this.refreshMaterializedView('nft_custody_unanchored', sql);
      if (!unanchored) {
        await this.refreshMaterializedView('nft_custody', sql);
      }
    } else if (!unanchored) {
      // Even if we didn't receive new NFT events in a new anchor block, we should check if we need to
      // update the anchored view to reflect any changes made by previous microblocks.
      const result = await sql<{ outdated: boolean }[]>`
        WITH anchored_height AS (SELECT MAX(block_height) AS anchored FROM nft_custody),
          unanchored_height AS (SELECT MAX(block_height) AS unanchored FROM nft_custody_unanchored)
        SELECT unanchored > anchored AS outdated
        FROM anchored_height CROSS JOIN unanchored_height
      `;
      if (result.length > 0 && result[0].outdated) {
        await this.refreshMaterializedView('nft_custody', sql);
      }
    }
  });
}
```

```ts
pgm.createMaterializedView('nft_custody', {}, `
  SELECT DISTINCT ON (asset_identifier, value)
    asset_identifier, value, recipient, tx_id, nft.block_height
  FROM nft_events AS nft
  INNER JOIN txs USING (tx_id)
  WHERE txs.canonical = true
    AND txs.microblock_canonical = true
    AND nft.canonical = true
    AND nft.microblock_canonical = true
  ORDER BY
    asset_identifier,
    value,
    txs.block_height DESC,
    txs.microblock_sequence DESC,
    txs.tx_index DESC,
    nft.event_index DESC
`);
```
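For reference, `DISTINCT ON (asset_identifier, value)` combined with that `ORDER BY` keeps exactly one row per (asset_identifier, value): the most recent canonical event. A minimal in-memory model of what the view computes (the row shape is assumed for illustration):

```typescript
// Illustrative model of the nft_custody view: filter to canonical rows,
// sort by recency (block, microblock, tx, event index, descending), then
// keep the first row seen per (asset_identifier, value) key, which is what
// DISTINCT ON does given the same ORDER BY.

interface CustodyRow {
  asset_identifier: string;
  value: string;
  recipient: string;
  canonical: boolean;
  microblock_canonical: boolean;
  block_height: number;
  microblock_sequence: number;
  tx_index: number;
  event_index: number;
}

function nftCustody(rows: CustodyRow[]): Map<string, CustodyRow> {
  const sorted = rows
    .filter(r => r.canonical && r.microblock_canonical)
    .sort(
      (a, b) =>
        b.block_height - a.block_height ||
        b.microblock_sequence - a.microblock_sequence ||
        b.tx_index - a.tx_index ||
        b.event_index - a.event_index
    );
  const result = new Map<string, CustodyRow>();
  for (const row of sorted) {
    const key = `${row.asset_identifier}|${row.value}`;
    // DISTINCT ON keeps only the first row per key in sort order.
    if (!result.has(key)) result.set(key, row);
  }
  return result;
}
```

This also makes the cost profile clear: the view scans and sorts every canonical NFT event on each refresh, which is why the refresh grows slower as the table grows.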

@zone117x zone117x linked a pull request Oct 25, 2023 that will close this issue