simplify: Improve attribute metric computation #737

Merged
merged 8 commits into master from simp-attr on Aug 15, 2024
Conversation

@zeux (Owner) commented Aug 15, 2024

The goal of this set of changes is to bring the attribute metric closer to its ideal state by validating and refining the math and improving the weighting logic to make the error more composable and explainable.

The main user-visible change here is the switch to unnormalized, area-weighted quadrics; detailed reasoning is provided in 9efadb0. The high-level summary is that this brings the error closer to the ideal "distance traveled by the vertex times the attribute deviation" (squared), which results in a better match between (normalized) position and attribute metrics and makes adding them more sensible, makes attribute weight tuning more intuitive and mesh scale-invariant, and makes the error limit and resulting error actually useful for LOD selection and tuning. This may not be the final form, as there are still issues with quadric accumulation under this (and the previous!) metric, but it should be a better option overall.

Curiously, this makes the attribute metric match the original Hoppe paper, although it is done here for reasons that go beyond the scope of that paper, and the positional component remains different; this is crucial, as that difference is what makes the combination more sensible.

Existing code that uses meshopt_simplifyWithAttributes with very small weights for unit-scaled attributes should be updated to use weights closer to 1; this change does that for a couple of demo programs for consistency. Hopefully no further significant weight tuning will be necessary even if more metric updates follow. Based on extensive testing, this produces simplified meshes of comparable or slightly better quality, and it should make the results more coherent under a static set of weights.
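To illustrate the migration, here is a minimal sketch (names and buffer layout are mine, not from the PR, and it assumes the meshopt_simplifyWithAttributes signature with a vertex_lock parameter):

```cpp
#include "meshoptimizer.h"

#include <stddef.h>

// Simplify a mesh with per-vertex RGB colors in [0..1]; with the new metric,
// unit-scale attributes use weights near 1 instead of tiny values like 1e-2.
size_t simplifyColored(unsigned int* destination,
    const unsigned int* indices, size_t index_count,
    const float* positions, size_t vertex_count, // xyz, tightly packed
    const float* colors,                         // rgb in [0..1], tightly packed
    size_t target_index_count, float target_error, float* result_error)
{
    const float attribute_weights[3] = {1.f, 1.f, 1.f}; // previously ~1e-2

    return meshopt_simplifyWithAttributes(destination, indices, index_count,
        positions, vertex_count, sizeof(float) * 3,
        colors, sizeof(float) * 3, attribute_weights, 3,
        /* vertex_lock= */ NULL, target_index_count, target_error,
        /* options= */ 0, result_error);
}
```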

As part of this change, the derivation and behavior of the attribute metric were also fully validated against the original Hoppe paper, using re-derivation as well as numeric validation with the help of Eigen's matrix solvers; that validation code is not included in the PR. Notably, this change also fixes the area computation and improves the precision of attribute error evaluation by reordering some computations. Other than the off-by-0.5 area and the weighting issues, no mistakes were found, but a couple of tweaks were made for improved precision. The analytical formulation was found to be ~10x more precise and significantly faster (~4x in terms of the overall simplification process, ~50x in terms of just the quadric computation), which is nice.

Finally, since with this change we should be closer to stabilizing meshopt_simplifyWithAttributes (although that likely will not happen in the next versioned release, to allow room for interface changes), this change also documents various advanced uses of the simplification algorithms.

Contributes to #158.

This contribution is sponsored by Valve.

In most places in meshoptimizer, area is a shorthand for "2x area" and is used exclusively as a normalization factor, so the extra 2x cancels out. In the attribute metric, it is used in a way that is user-visible and affects the results, as it changes the attribute weighting behavior.

We could probably compute the real area everywhere, but for simplicity let's just update it here, as this is the only place where the distinction matters.
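As a point of reference, a minimal sketch of the distinction (a hypothetical helper, not the PR diff):

```cpp
#include <math.h>

// The cross product magnitude |e1 x e2| is 2x the triangle area; as a pure
// normalization factor the 2x cancels out, but the user-visible attribute
// weighting needs the halved value, hence the 0.5 factor.
static float triangleArea(const float p0[3], const float p1[3], const float p2[3])
{
    float e1[3] = {p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]};
    float e2[3] = {p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2]};

    float nx = e1[1] * e2[2] - e1[2] * e2[1];
    float ny = e1[2] * e2[0] - e1[0] * e2[2];
    float nz = e1[0] * e2[1] - e1[1] * e2[0];

    return 0.5f * sqrtf(nx * nx + ny * ny + nz * nz);
}
```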
Slightly expand the comments on the derivation; also note a missing optimization wrt weight scaling - it is immaterial for performance (moving the scaling outside of the loop does not make this faster) or precision, so let's leave it as is.

Note: as part of this validation, in addition to revalidating the derivation (which can be error-prone), this code was also validated by computing the gradients (gx/gy/gz/gw) using Eigen as a numerical solver. To do this, we can follow the original paper by Hoppe and solve a 4x4 linear system, or solve a 3x3 subproblem that only includes gx/gy/gz and compute gw so that gx*v0x + gy*v0y + gz*v0z + gw = a0.
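For illustration, here is a sketch of the 4x4 variant (my reconstruction of the validation approach, assuming Eigen; the actual validation code is not part of the PR):

```cpp
#include <Eigen/Dense>

// Solve for the per-triangle linear attribute field a(p) = g . p + gw following
// Hoppe's formulation: a(p_i) = a_i at the three corners, plus g . n = 0 so the
// gradient lies in the triangle plane. The residual |Ax - b| is what is used to
// compare precision against the analytical formulation.
static Eigen::Vector4f solveAttributeGradient(
    const Eigen::Vector3f& p0, const Eigen::Vector3f& p1, const Eigen::Vector3f& p2,
    float a0, float a1, float a2)
{
    Eigen::Vector3f n = (p1 - p0).cross(p2 - p0);

    Eigen::Matrix4f A;
    A << p0.x(), p0.y(), p0.z(), 1.f,
        p1.x(), p1.y(), p1.z(), 1.f,
        p2.x(), p2.y(), p2.z(), 1.f,
        n.x(), n.y(), n.z(), 0.f;

    Eigen::Vector4f b(a0, a1, a2, 0.f);

    return A.fullPivLu().solve(b); // (gx, gy, gz, gw)
}
```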

This produces more or less the same results during simplification, which confirms that the derivation is correct; the analytical formulation this code uses is much faster, numerically stable, and precise (yielding ~5-10x smaller error as measured by the linear system residual |Ax - b|).

Since we probably won't need to adjust this in the future, this change
does not include the actual Eigen code used for validation.

The math for evaluating the error is the same up until the normalization / gradient accumulation differences, and since this code is sensitive wrt performance and precision, it's nice to share it. This should not have any observable performance impact if the compiler makes reasonable inlining decisions.

Instead of accumulating attribute gradients one sum component at a time, we can compute the attribute change independently. This allows us to remove one multiplication, which doesn't really save time in practice; more importantly, it improves the precision of accumulation by reducing the chance of cancellation. When tested on color+normal attributes, we get a ~1.5x reduction in floating point error, as measured by comparing the result of quadricError against the same code using double precision.
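Schematically, the reordering looks like this (a hedged sketch with hypothetical names, not the actual diff):

```cpp
#include <stddef.h>

// Evaluate the attribute part of the quadric error by computing each
// attribute's signed deviation in full before squaring; compared to folding
// gradient terms into shared running sums, this drops a multiplication and
// reduces the chance of cancellation between large intermediate values.
static float attributeError(const float* g /* attribute_count x (gx gy gz gw) */,
    const float* va /* attribute values at the vertex */, size_t attribute_count,
    float x, float y, float z)
{
    float r = 0.f;

    for (size_t k = 0; k < attribute_count; ++k)
    {
        // signed deviation of the linear attribute field from the actual value
        float d = g[k * 4 + 0] * x + g[k * 4 + 1] * y + g[k * 4 + 2] * z + g[k * 4 + 3] - va[k];

        r += d * d;
    }

    return r;
}
```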
Before this change, the attribute error used unnormalized, edge-length-weighted quadrics. The edge length carried over from the position error (where both edge length and area weighting work, but edge length was at some point judged to give slightly higher quality), and the 'unnormalized' part was considered a temporary limitation to be investigated.

With this change, we are officially removing normalization (with good
reason!) and switching to area weighting (with good reason!). In
aggregate this change alters the error and simplification behavior.

The reason weight normalization is valuable for position quadrics is that it gives the error a particular meaning (the square of the distance between vertices and surfaces) that is mostly preserved throughout quadric accumulation. This leads to the intermediate and final collapse errors being explainable and useful outside of collapse ordering; for example, the result can be used to select an optimal LOD switching distance.

If applied to attribute quadrics, however, the attribute error becomes the square of the attribute deviation; this value has limited utility in isolation, and does not work at all when added to the position error, since the spaces are incompatible. Even in isolation, measuring position deviation from the surface is a measure of silhouette deformation, whereas a significant attribute error on a very small triangle is invisible, and a small attribute error on a very large triangle can noticeably affect the appearance.

The correct/ideal way to compute attribute error seems to be to measure the attribute delta multiplied by the area/distance that this delta is spread over. Unfortunately, this does not seem to be representable as a quadric (further research on alternatives is pending), as it is a degree-4 polynomial if we assume we need squared error times squared distance.
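To make the degree argument concrete (my notation, not from the PR): for a collapse that moves a vertex to position v, the attribute deviation is affine in v and the squared travel distance is quadratic in v, so

```latex
E(v) = \underbrace{\Delta a(v)^2}_{\text{degree 2}} \cdot \underbrace{d(v)^2}_{\text{degree 2}}
\quad \text{(degree 4)}
\qquad \text{vs.} \qquad
Q(v) = v^\top A v + 2\,b^\top v + c \quad \text{(degree 2)}
```

and a quadric Q can therefore not carry the ideal error E exactly through accumulation.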

The reason our previous code worked to some extent is that it scaled the attribute error by a value that depended on the triangle size; while this does not carry correctly through quadric accumulation, it can otherwise reasonably approximate the error... except that we used a linear scale on top of the square of the attribute deviation, while positional error is internally stored as a quadratic quantity, making the two incompatible.

This in turn made it impossible to tune attribute weights in a scale-invariant way or to explain what they mean. With large weights, the combined error was also basically useless for LOD selection, which in some engines necessitated patching the simplification code to output just the distance-based error for LOD selection, which has its own issues.

These issues mostly go away once we accept that the unnormalized attribute metric is a feature and lock the weight to the triangle area (or really any other quadratic quantity; e.g. we could take the squared triangle edge length along the direction of largest attribute deviation, but for now we're using the area for simplicity).

With these changes, the attribute weights are easier to explain: a deviation of 1 in a weighted attribute along length L is perceptually equivalent to a positional shift of length L. If an attribute is in [0..1], a weight of 1 is now reasonable, because it suggests, for example, that a change from white to black color is equivalent to just removing the geometry. Of course, smaller or larger weights still make sense (for example, to accentuate changes in the attribute), and if the attribute is of non-unit scale, it should be counter-scaled via the attribute weight as before.
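A hedged example of this guidance (the values are illustrative, not prescribed by the PR):

```cpp
// Unit-scale attributes get weights around 1; non-unit attributes fold their
// scale into the weight so that a full-range change stays comparable to a
// full-extent positional shift.
const float attribute_weights[] = {
    1.0f, 1.0f, 1.0f, // RGB color in [0..1]
    0.5f, 0.5f, 0.5f, // unit normals, slightly de-emphasized
    1.0f / 255.0f,    // hypothetical attribute stored in [0..255]
};
```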

This also fixes the relation between positional and attribute errors and makes them meaningful to add, as attribute errors are now (squared) distances times attribute deviation. The combined error value is likely directly usable for LOD selection, whereas before it was not, depending on the selected weights.
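For example, a minimal sketch of the LOD-selection use (assumes the existing meshopt_simplifyScale helper; the switching threshold is up to the application):

```cpp
#include "meshoptimizer.h"

#include <stddef.h>

// result_error is relative to the mesh extents; scaling it by the mesh extent
// yields world-space units that can drive a distance-based LOD switch.
float worldSpaceError(float result_error, const float* positions, size_t vertex_count)
{
    return result_error * meshopt_simplifyScale(positions, vertex_count, sizeof(float) * 3);
}
```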

The one remaining issue is that the metric we use only approximates our ideal, and the approximation breaks down after a series of quadric accumulations during collapses, as we lose the sense of the aggregate vertex movement. This might be possible to improve in the future.

This change significantly alters the meaning of attribute weights, but mostly in a way that should be reasonably easy to adjust for: applications that used very small (e.g. 1e-2) weights for unit attributes should adjust them to be closer to 1; applications that already used weights around 1 for unit-scale attributes can likely stay as is (they will get more balanced position vs. attribute handling), but may choose to adjust the weights upwards.

Following the changes in the attribute metric, weights for normalized attributes should now be much closer to 1; for the demo, we default to 1 for color and 0.5 for normals, with a slightly wider slider tuning range. 0.2 is no longer enough since we switched to area-weighted quadrics.

Also update simplifyAttr to use weight 0.5; this is less critical as we don't save the results, but it is more consistent with the expected usage.

Add documentation for meshopt_simplify options, meshopt_simplifyWithAttributes, and vertex_lock, as well as guidance for multi-material simplification; this mostly covers the commonly asked questions around simplification.

For now, this avoids giving precise guidance on attribute weights as it remains use-case dependent, although we could probably recommend values in the [0.1, 10] range as a reasonable default for any unit-scale attribute.
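A minimal sketch of the multi-material guidance (onMaterialBoundary is an application-specific predicate, not a library function):

```cpp
#include <stddef.h>

#include <vector>

// Lock vertices shared between materials so that seams survive when each
// material's index range is simplified independently; the result is passed as
// the vertex_lock argument of meshopt_simplify / meshopt_simplifyWithAttributes.
std::vector<unsigned char> buildVertexLock(size_t vertex_count, bool (*onMaterialBoundary)(size_t))
{
    std::vector<unsigned char> lock(vertex_count);

    for (size_t i = 0; i < vertex_count; ++i)
        lock[i] = onMaterialBoundary(i) ? 1 : 0;

    return lock;
}
```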
@zeux merged commit a023e76 into master on Aug 15, 2024
12 checks passed
@zeux deleted the simp-attr branch on August 15, 2024 at 20:44