
Attribute-aware error metrics for simplification #158

Closed
fstrugar opened this issue Jun 23, 2020 · 18 comments

Comments

@fstrugar

Hi! I'm playing with the https://developer.nvidia.com/orca/amazon-lumberyard-bistro dataset and meshoptimizer, and I've noticed this particular failure case related to the way the corners were authored.

For example, here is the original chair mesh:
[image: original chair mesh with rounded corners]

Notice the rounded corners with shared vertices. They survive the first pass of meshopt_simplify down to half the number of triangles just fine:
[image: mesh after the first simplification pass]

However, once the triangles between two sides meeting at 90° get folded and the sides start sharing vertices, the vertex normals can no longer be correct:
[image: folded corner with incorrect vertex normals]

What would be a solution to (automatically) preventing this?

I was thinking of adding a custom skip check in the 'pickEdgeCollapses' loop when the angle between vertex normals is above a certain threshold, but I'm sure there's a better/simpler solution, perhaps one that's already there? :)

(instead of preventing collapse, could also allow it but duplicate verts so normals aren't shared?)
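
The skip idea could be sketched as a small predicate; `shouldSkipCollapse` is a hypothetical helper for illustration, not part of meshoptimizer's API:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical check: returns true when the angle between two unit vertex
// normals exceeds a threshold, i.e. the edge collapse should be skipped.
// Mirrors the custom check proposed for the pickEdgeCollapses loop.
bool shouldSkipCollapse(const float a[3], const float b[3], float maxAngleRadians)
{
    float dot = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    // Clamp for numerical safety before acos.
    if (dot > 1.f) dot = 1.f;
    if (dot < -1.f) dot = -1.f;

    return std::acos(dot) > maxAngleRadians;
}
```

For example, two normals 90° apart would be rejected by a ~57° (1 radian) threshold, while identical normals always pass.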

Thanks for the great library!!

@zeux
Owner

zeux commented Jun 25, 2020

Yeah, so there are a few ways to fix this.

One is to discard and recompute normals post-simplification, possibly splitting vertices where the crease angle is too sharp. This works around the problem in a way, but of course it's not very convenient.
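
That recompute-after-the-fact option can be sketched as a generic post-process, assuming area-weighted face normal accumulation (crease splitting omitted); this is illustrative code, not from meshoptimizer:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Recompute per-vertex normals from scratch after simplification.
// The cross product's length is proportional to triangle area, which
// provides area weighting for free.
void recomputeNormals(const std::vector<float>& positions,    // xyz per vertex
                      const std::vector<unsigned>& indices,
                      std::vector<float>& normals)            // xyz per vertex, output
{
    normals.assign(positions.size(), 0.f);

    for (size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        const float* pa = &positions[a * 3];
        const float* pb = &positions[b * 3];
        const float* pc = &positions[c * 3];

        float e1[3] = {pb[0] - pa[0], pb[1] - pa[1], pb[2] - pa[2]};
        float e2[3] = {pc[0] - pa[0], pc[1] - pa[1], pc[2] - pa[2]};

        float n[3] = {e1[1] * e2[2] - e1[2] * e2[1],
                      e1[2] * e2[0] - e1[0] * e2[2],
                      e1[0] * e2[1] - e1[1] * e2[0]};

        // Accumulate the (area-weighted) face normal on all three corners.
        for (unsigned v : {a, b, c})
            for (int k = 0; k < 3; ++k)
                normals[v * 3 + k] += n[k];
    }

    // Normalize the accumulated normals.
    for (size_t v = 0; v * 3 < normals.size(); ++v)
    {
        float* n = &normals[v * 3];
        float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.f)
            for (int k = 0; k < 3; ++k)
                n[k] /= len;
    }
}
```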

Another one is to factor the normal delta into the simplification as an extra error. This can be done by comparing the normals alongside the edge that's considered for collapse, or by introducing the normal into the quadric weight. It's on my list to experiment more with this; there's a simplify-attr branch in this repository from my last attempt, but when I worked on it at the time it became clear that this isn't very simple, so I decided to take a break and think about it more.
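
A minimal sketch of the first variant, adding a normal-delta penalty on top of the geometric metric; `combinedCollapseError` and the weight are hypothetical (not meshoptimizer API), and choosing the weight is exactly the tuning difficulty discussed in this thread:

```cpp
#include <cassert>
#include <cmath>

// Fold the normal delta along a candidate collapse edge into the error.
// "geometricError" stands in for whatever the quadric metric reports;
// the penalty ranges from 0 (identical unit normals) to 2*normalWeight
// (opposite normals).
float combinedCollapseError(float geometricError,
                            const float n0[3], const float n1[3],
                            float normalWeight)
{
    float dot = n0[0] * n1[0] + n0[1] * n1[1] + n0[2] * n1[2];
    return geometricError + normalWeight * (1.f - dot);
}
```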

This isn't implemented right now, though. It's definitely worth addressing, but I'm not sure what the best solution is, since ideally it's not just the normals that need to be taken into account, and balancing ease of use, performance, and quality here is tricky...

@fire

fire commented Dec 24, 2020

I also encountered this problem with the mesh in #206 (comment):

[image: affected mesh]

@fire

fire commented Dec 29, 2020

@zeux Would you be able to look at this?

Thanks for your amazing work on meshoptimizer.

@zeux
Owner

zeux commented Dec 29, 2020

I believe this part of the comment above accurately reflects the plan here:

Another one is to factor the normal delta into the simplification as an extra error. This can be done by comparing the normals alongside the edge that's considered for collapse, or by introducing the normal into the quadric weight. It's on my list to experiment more with this; there's a simplify-attr branch in this repository from my last attempt, but when I worked on it at the time it became clear that this isn't very simple, so I decided to take a break and think about it more.

Since this issue is still open, you can assume I'm going to look into this at some point in the future. When exactly that will be I can't say, as this requires further research on how to best integrate attribute metrics with geometry metrics in a way that is reasonably easy to tune once, instead of having to tweak weights per model.

@zeux changed the title from "Question on normals and/or custom error metrics" to "Attribute-aware error metrics for simplification" on Dec 30, 2020
@fire

fire commented Apr 8, 2021

Factor the normal delta into the simplification as an extra error by comparing the normals alongside the edge considered for collapse.

Alternatively, the metric can be built by introducing the normal into the quadric weight.

Since comparing attributes is not simple, would there be any other approaches?

I wanted to look into this, but I'm a bit lost.

Edited:

I tried using your attribute branch and didn't see any major problems.

https://github.com/fire/meshoptimizer/tree/simplify-normal-attribute

@zeux
Owner

zeux commented Apr 9, 2021

I tried using your attribute branch and didn't see any major problems.

Yeah, it needs more work to be production-ready with respect to the metric; I think the branch predates some geometric improvements, and it also needs some interface and optimization work. FWIW I plan to resume this in the next few weeks.

@fire

fire commented Apr 9, 2021

Is there a better way to define normals being "close enough"? It seems to block optimization of any curved surface; only flat planes get optimized.

Not sure how to allow the first pass of decimation in the chair example and then block the ones that fail.

I wish there was a way to optimize the indices with the normals on the second try.

My thought is to use quad remeshing or isotropic remeshing; that's a lot of work, but it gives the optimizer more room to work.


@zeux
Owner

zeux commented Apr 9, 2021

It seems to block optimizations of any curved surface.

That's because the metric needs work, I believe; the code in that branch right now is very challenging to tune properly, which is part of why this hasn't been integrated yet. I'm not aware of existing research that's more promising than the general approach used there, but since that code isn't production-ready it can have all sorts of issues, and it likely requires taking a path that hasn't been explicitly documented in academia. (At least that was the case for geometric error, where the approach meshoptimizer uses is inspired by prior research but doesn't follow it precisely.)

Remeshing is orthogonal to simplification: it can definitely make topology-aware simplification easier, but it doesn't solve the problem by itself; you still need attribute awareness within the simplifier to avoid the significant attribute distortion shown in this thread.

@fire

fire commented Apr 9, 2021

I'll do some literature searches for vertex normal merge, collapse, and flip metrics.

If you have any keywords I can search, that'll help too.

Edited:

Will list some promising papers:

https://dl.acm.org/doi/pdf/10.1145/2425836.2425911

Edited:

I'm going to use a 6-element truncated 3×3 orientation matrix to store the normal. This uses 6 attributes. It seems to work OK.
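
One plausible reading of that encoding, shown as an illustrative sketch only (the linked branch may encode it differently): store two orthonormal tangent vectors as six attribute floats and recover the normal as their cross product.

```cpp
#include <cassert>
#include <cmath>

// Encode a unit normal as two orthonormal tangent vectors (6 floats).
void encodeNormal(const float n[3], float out[6])
{
    // Pick a helper axis not parallel to n, then t = normalize(h x n).
    float h[3] = {1.f, 0.f, 0.f};
    if (std::fabs(n[0]) > 0.9f) { h[0] = 0.f; h[1] = 1.f; }

    float t[3] = {h[1]*n[2] - h[2]*n[1], h[2]*n[0] - h[0]*n[2], h[0]*n[1] - h[1]*n[0]};
    float tl = std::sqrt(t[0]*t[0] + t[1]*t[1] + t[2]*t[2]);
    for (int k = 0; k < 3; ++k) t[k] /= tl;

    // Bitangent b = n x t completes a right-handed orthonormal frame.
    float b[3] = {n[1]*t[2] - n[2]*t[1], n[2]*t[0] - n[0]*t[2], n[0]*t[1] - n[1]*t[0]};

    for (int k = 0; k < 3; ++k) { out[k] = t[k]; out[3 + k] = b[k]; }
}

// Decode: n = t x b recovers the original normal exactly
// (t x (n x t) = n for unit t orthogonal to n).
void decodeNormal(const float in[6], float n[3])
{
    const float* t = in;
    const float* b = in + 3;
    n[0] = t[1]*b[2] - t[2]*b[1];
    n[1] = t[2]*b[0] - t[0]*b[2];
    n[2] = t[0]*b[1] - t[1]*b[0];
}
```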

@fire

fire commented May 22, 2021

godotengine/godot#47764

@zeux

Can you take a moment to see if this is legitimate? The Godot Engine contributors had concerns about applying patches on top of meshoptimizer that aren't merged upstream.

I wanted some motion on this topic.

Thanks!

@Zylann

Zylann commented Jul 4, 2021

Considering the title of this issue:

I have voxel meshes which can contain encoded texture splatting parameters in extra attributes (repurposing color and UV) in additional vertex arrays.

Problem: removing vertices in that scenario directly reduces quality even if geometry is preserved. Simplification only seems to care about vertex positions, which means there should be a way to give meshoptimizer more information, or some way to customize the comparison between vertices.
Tangent problem: my meshes use multiple streams (structure of arrays), but the current API seems to only take one.

I'm wondering if simplification is actually suited to that situation; otherwise it doesn't sound, well... simple (the kind of data I'm storing is packed sets of indices and weights).

Does this match the current issue or should I open another?

@zeux
Owner

zeux commented Jul 4, 2021

Yes, that's the same problem as highlighted in this issue. Attribute-aware simplification will be exposed as a separate function with separate attribute stream inputs.

@Adi-Amazon

At the risk of stating the obvious, my suggestion would be to:

  1. Have attribute-aware simplification as suggested.
  2. On top of that, expose user-defined thresholds for each of the control parameters, for example normal crease angle, minimal distance for position welds, color and UV differentiation, etc.

@zeux
Owner

zeux commented Nov 2, 2023

Copying the comment from #524 on some future work involved here; the issue will stay open as the algorithm improves further:

  • The attribute metric is not perfect - it's functioning correctly and is numerically stable, but it misses certain obvious visual errors. I have some ideas on how to improve this but it requires significant math modeling work.
  • The attribute quadrics are not properly aggregated across discontinuities. This is the case for Godot's fork as well.
  • The resulting error, as well as error limit, include the attribute error. Godot's fork adjusts output error to only track distance, but keeps error limit as is. I might instead add a second output error parameter, we'll see. This also requires tracking both errors, which increases the collapse list structure if done naively.
  • The attribute and geometry errors are hard to balance. There are some ideas I'd like to try around this, but right now very careful weight tuning is required for good results, and the weights strongly depend on the type of attribute involved.
  • In presence of attributes, some automatic optimizations like vertex welding are possible that would significantly improve the quality for some topology-constrained meshes.

@JMS55

JMS55 commented Aug 16, 2024

Excited to see more progress in this area! After #737, how should we think about the resulting error metric for use with LOD selection? Position-based error is already kind of nebulous to begin with; I'm not sure what to do when using simplifyWithAttributes.

@zeux
Owner

zeux commented Aug 16, 2024

how should we think of the resulting error metric for use with LOD selection?

I hesitate to give any guidance on this for now as there's more refinement planned for the attribute metric as well as more experiments around LOD selection.

The eventual goal is to make the result usable as a distance-like metric for LOD selection, maybe with some use-specific fudge factors. Before the last update, the resulting value was decidedly useless, so much so that Godot currently uses a patch where the attribute metric is used internally but the reported result is positional only; this is not ideal in many respects either. FWIW, after this update I think the attribute evaluation is closer to what Nanite uses, and they use a combination of positional and attribute error pretty much directly to select LODs, treating it as a distance-like metric with a tuned scaler.
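
Treating a combined error as a distance-like metric might look like the following sketch, assuming the error is in object-space units; `projectedError`, the pixel budget, and the tuning scaler are illustrative assumptions, not meshoptimizer API:

```cpp
#include <cassert>
#include <cmath>

// Project an object-space LOD error to approximate screen-space pixels.
// errorScale is a hypothetical per-application tuning factor.
float projectedError(float objectSpaceError, float viewDistance,
                     float fovYRadians, float screenHeightPixels,
                     float errorScale = 1.f)
{
    // Pixels covered by one world unit at the given distance.
    float pixelsPerUnit = screenHeightPixels / (2.f * std::tan(fovYRadians * 0.5f));
    return objectSpaceError * errorScale * pixelsPerUnit / viewDistance;
}

// An LOD is acceptable when its projected error fits a pixel budget.
bool lodAcceptable(float objectSpaceError, float viewDistance,
                   float fovYRadians, float screenHeightPixels,
                   float pixelBudget)
{
    return projectedError(objectSpaceError, viewDistance,
                          fovYRadians, screenHeightPixels) <= pixelBudget;
}
```

For instance, with a 90° vertical FOV and a 1080-pixel-tall viewport, an error of 0.01 units viewed from 10 units away projects to roughly half a pixel, so it would pass a 1-pixel budget.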

@zeux
Owner

zeux commented Aug 27, 2024

Following up on #158 (comment), to summarize the work that has happened since then:

  • Attribute metric had issues with weighting that have been fixed. Current implementation seems to follow existing research on the subject faithfully and does not have shortcomings that have known solutions. This is not to say it's perfect...
  • I've spent some time investigating future improvements to this. For now I have concluded that this is close to what we can reasonably get out of quadrics. "Correctly" modeling attribute degradation requires at least 4-degree "quadrics" (which requires something like 50 floats vs current 12 excluding gradient data, and more to represent gradients, which is getting prohibitively expensive). I tried some ideas for how to make incremental improvements to the existing evaluation; at least one of them is viable and in theory should make things better but in practice the quality of the results with it is inconclusive. This change can be made separately/later - it does not strongly affect the resulting error or the weighting.
  • The discontinuities are now handled properly during aggregation and reasonably well during evaluation. This fixes some visual issues and overly high resulting errors.
  • The balancing of attribute error vs position is now dramatically easier due to distance based scaling; it still requires trial and error but it's fairly reasonable based on tests with colors and normals. I haven't tested incorporating UVs into the mix yet.

There is some remaining work left. Given that this issue has been open for years and a significant amount of work has gone into the current algorithm, I will close this but note the future work (to be developed separately as time and priorities permit):

  • The resulting error for simplifyWithAttributes still combines both positions and attributes. I suspect it is practical to use this value to derive LODs, but this will be separately investigated and documentation will be updated accordingly for the next release. If necessary, the interface can still be changed to output two errors separately and let the application combine them however it sees fit I suppose... (this change would be ABI-breaking but could be made source-compatible for C++ via default arguments, similarly to how options was added to meshopt_simplify)
  • Small further tweaks to attribute metric are possible, both as general improvements and as edge case fixes. At this point I would generally expect that these don't dramatically change the required tuning so there's no need to gate this issue on them.
  • Given attribute values, simplifyWithAttributes could be more lenient with respect to attribute seams and auto-weld vertices where the attribute delta is below the limit suggested by the error target and adjacent face area. Crucially, this will need to be opt-in: the simplifier currently doesn't assume that the values of all attributes are known, so in some cases this is unsafe to do; it will need a separate meshopt_Simplify* flag and as such doesn't need to be part of this issue.

In general, I've exhausted significant further improvements in this area for now apart from the future work mentioned above; it is of course possible that new ideas surface in the future, and they will be incorporated, but it is time to mark this issue "complete" :) The notion being that the basic capability is now present, but future enhancements are possible in, well, the future.

@zeux
Owner

zeux commented Aug 27, 2024

In the interest of keeping future improvements to this area cross linked from one place I will keep referencing this issue in any future PRs in this area.

Note that regardless of any progress on the future work, at least the next library version will keep the interface/implementation "experimental", with the aim of eventually stabilizing it once the interface is known to be final. This doesn't mean it's not useful in production (in fact, the previous version has been used in Godot for a year or two now!); it just means the interface is not necessarily final.

@zeux zeux closed this as completed Aug 27, 2024