
Distant sensor plugin #143

Open · wants to merge 4 commits into master
Conversation

leroyvn
Contributor

@leroyvn leroyvn commented May 19, 2020

Co-authored-by: @schunkes

This PR adds a distant directional sensor plugin similar to the distant directional emitter. It is useful for recording radiance leaving the scene and, in practice, for computing the BRDF of scenes featuring complex geometry.

The plugin features an optional target parameter, useful to restrict target location sampling to a single point in the scene. This makes the sensor equivalent to a radiancemeter located on a bounding sphere centered at the target and pointing towards it. It is useful to emulate the one-dimensional geometries heavily used in Earth observation applications.
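The geometry of the target-point mode can be sketched in plain Python (the helper name is hypothetical; the corresponding C++ line from the plugin is quoted in the review thread below):

```python
def ray_origin_for_target(target, direction, bsphere_radius):
    """Place the ray origin behind the target along the (unit) ray direction,
    far enough back to sit outside the bounding sphere.

    Mirrors the C++ line discussed in this PR:
        ray.o = m_target - 2.f * ray.d * m_bsphere.radius;
    (illustrative sketch, not the plugin's actual Python API)."""
    return tuple(t - 2.0 * bsphere_radius * d for t, d in zip(target, direction))

# A ray starting at this origin and travelling along `direction`
# reaches the target after a distance of 2 * radius.
target = (1.0, 2.0, 3.0)
direction = (0.0, 0.0, 1.0)  # unit vector
o = ray_origin_for_target(target, direction, 10.0)
reached = tuple(oi + 2.0 * 10.0 * di for oi, di in zip(o, direction))
```

Placing the origin a full diameter behind the target guarantees it lies outside any bounding sphere of that radius centered at the target.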

@leroyvn leroyvn marked this pull request as ready for review May 19, 2020 16:54
sensor = scene.sensors()[0]
sampler = sensor.sampler()

n_rays = 10000
Member

Could we use a smaller number here? Trying to keep the unit tests running under 20 minutes ^^

Contributor Author

I reduced this loop to 1000 iterations, but the variance of results increased as a consequence.
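The variance increase is expected: the standard error of a Monte Carlo mean scales as 1/sqrt(N), so cutting the ray count from 10000 to 1000 widens the spread of repeated estimates by roughly sqrt(10) ≈ 3.16. A toy estimator (not the actual unit test) demonstrates this:

```python
import random
import statistics

def estimate(n_rays, rng):
    # Toy Monte Carlo estimator: sample mean of a uniform random integrand.
    return sum(rng.random() for _ in range(n_rays)) / n_rays

rng = random.Random(0)
trials = 200

# Spread of repeated estimates at two sample counts.
spread_10k = statistics.stdev(estimate(10_000, rng) for _ in range(trials))
spread_1k = statistics.stdev(estimate(1_000, rng) for _ in range(trials))

# Expect roughly sqrt(10000 / 1000) = sqrt(10) ~= 3.16.
ratio = spread_1k / spread_10k
```

This is the usual trade-off when trimming test runtimes: either accept looser assertion tolerances or pin the sampler's seed.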

@Speierers
Member

Great work! A few minor comments regarding the unit tests. The C++ looks good to me!

@leroyvn
Contributor Author

leroyvn commented May 25, 2020

Hi @Speierers, I pushed an update with a few important changes:

  • the tests were updated and now use load_dict;
  • I ended up requiring the 3rd sample (aperture plane) again and discarding the 2nd one (film plane): this is more consistent with how samples should be mapped to the film;
  • I set the sample weight scaling to 1 in every case as this sensor is meant to record an average outgoing radiance.

In addition, I had to add an object to the test_render test: the bounding sphere is otherwise invalid and sampled rays then don't have an origin, which makes the test always fail.

@Speierers
Member

Speierers commented May 26, 2020

the tests were updated and now use load_dict

Great!

I ended up requiring the 3rd sample ...

Ok

I set the sample weight scaling to 1 in every case as this sensor is meant to record an average outgoing radiance.

Don't we compute the radiant flux in this case?

@leroyvn
Contributor Author

leroyvn commented May 26, 2020

Don't we compute the radiant flux in this case?

You're right, it makes more sense to do that, although we'll get the spectral flux per unit solid angle. I'll change it again and update the docs.
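A back-of-the-envelope sketch of that relation (my reading of the discussion, not the plugin's actual code): if the sensor averages radiance over the bounding sphere's cross section, multiplying the mean radiance by the cross-section area pi * r^2 converts the estimate into a (spectral) flux per unit solid angle.

```python
import math

def flux_per_solid_angle(mean_radiance, bsphere_radius):
    """Convert a mean radiance estimate over the bounding sphere's cross
    section into (spectral) flux per unit solid angle, assuming the distant
    sensor samples that cross section uniformly. Illustrative only; the
    plugin's exact sample weighting lives in the C++ implementation."""
    cross_section = math.pi * bsphere_radius ** 2
    return mean_radiance * cross_section

# Example: uniform radiance 2.0 over a bounding sphere of radius 3.0.
phi = flux_per_solid_angle(2.0, 3.0)  # 2 * pi * 9
```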

@leroyvn leroyvn force-pushed the distant_sensor branch 2 times, most recently from 2e7ef08 to 82f1e36 on May 26, 2020 11:06
@leroyvn
Contributor Author

leroyvn commented May 26, 2020

I pushed the following changes:

  • Monte Carlo weight fix;
  • proper render test computing the flux in different illumination and observation conditions;
  • docs update to describe more accurately the recorded quantities.

Member

@Speierers Speierers left a comment

Hi @leroyvn,

Sorry, I am only now getting back to this PR.

It would be great to have this plugin available for the upcoming release of next. If you have the time, could you please address the few comments below and rebase this branch onto next?

Then we should be able to merge this PR, finally! 🚀

ray.o = m_target - 2.f * ray.d * m_bsphere.radius;
}

ray.update();
Member
This was deprecated on next

// have differentials
ray.has_differentials = false;

ray.update();
Member

ditto


MTS_IMPLEMENT_CLASS_VARIANT(DistantSensor, Sensor)
MTS_EXPORT_PLUGIN(DistantSensor, "DistantSensor");
NAMESPACE_END(mitsuba)
Member

Add empty line at the end of the file 😉

scene = load_dict(dict_scene)
sensor = scene.sensors()[0]
scene.integrator().render(scene, sensor)
img = np.array(sensor.film().bitmap()).squeeze()
Member

Would it be possible to avoid using numpy here, and use Bitmap.data() to get an enoki array instead?

sensor = scene.sensors()[0]
scene.integrator().render(scene, sensor)
img = np.array(sensor.film().bitmap()).squeeze()
assert np.allclose(np.array(img), expected)
Member

Use ek.allclose and add an empty line at the end of the file.

@Speierers Speierers added this to the Next milestone Apr 27, 2021
@leroyvn
Contributor Author

leroyvn commented Apr 27, 2021

Hi @Speierers, thanks for reviewing this PR. I completely rewrote this plugin a few months ago. It now has optional shape-based ray origin and target control, which makes it really usable to compute the reflectance of complex geometries (the reason why we needed this in the first place). I'd like to submit that version instead. What would be the deadline for submission?

@Speierers
Member

Hi @leroyvn ,

Sorry, but I am not sure I understand what you mean by "shape-based ray origin and target". Could you maybe give a concrete example? This sounds quite interesting!

There is no concrete deadline, and we could always merge this after the release, so no need to hurry.

@leroyvn
Contributor Author

leroyvn commented Apr 27, 2021

Sorry, but I am not sure I understand what you mean by "shape-based ray origin and target". Could you maybe give a concrete example? This sounds quite interesting!

By default, this distant sensor creates rays targeting points on the cross section of the scene's bounding sphere. If you're using it to compute the radiance leaving a scene in order to calculate its reflectance, it is very likely that you'll end up sampling radiance from "outside" the scene and this will produce wrong results. In other words, you'll shoot rays which will not intersect the scene.

Target control allows the user to specify where rays should be directed. If your scene has a square footprint, you can target exactly that square. The current implementation also allows targeting a single point.

Origin control is a similar mechanism used to control where rays start from. In practice, the plugin samples a target point, then projects it onto an origin shape. The default is the scene's bounding sphere, but you can specify any shape of your liking. I added this because for some reason, I had intersection accuracy problems with very large scenes when ray origins were located on the bounding sphere (see #171).
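The "project the target point onto an origin shape" step described above can be sketched with a ray-sphere intersection: march backwards from the target along the (unit) ray direction until the sphere is hit. The helper below is hypothetical, written to illustrate the default bounding-sphere case, not the plugin's actual code:

```python
import math

def project_to_sphere(target, direction, center, radius):
    """Project a target point onto a sphere by marching backwards along the
    (unit) ray direction. Solves |target - t*d - center|^2 = radius^2 and
    keeps the largest root, i.e. the intersection 'behind' the target.
    Returns None if the line misses the sphere."""
    oc = [t - c for t, c in zip(target, center)]
    b = sum(o * d for o, d in zip(oc, direction))  # oc . d
    c = sum(o * o for o in oc) - radius * radius   # |oc|^2 - r^2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = b + math.sqrt(disc)  # farthest intersection along -direction
    return tuple(ti - t * di for ti, di in zip(target, direction))

# Example: target at the sphere center, looking down +z; the origin lands
# on the sphere at z = -radius.
origin = project_to_sphere((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                           (0.0, 0.0, 0.0), 5.0)
```

Choosing the farthest root ensures the origin lies on the far side of the shape, so the ray traverses the whole scene before reaching the target.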

@Speierers
Member

I see, this sounds like a nice feature to have (especially the target control) to improve performance.

I would be happy to see this code if it isn't too complicated. These are very "primitive" plugins, so it is nice to keep them as simple as possible. Although if it isn't too difficult to implement, I would be in favor of adding this.

@merlinND
Member

@leroyvn Interesting, what you described sounds a lot like the problem of sampling rays starting from an infinite emitter in the context of light tracing (aka particle tracing).
This is the simple but inefficient solution I used (and I think Mitsuba 1 had the same):

std::pair<Ray3f, Spectrum> sample_ray(Float time, Float wavelength_sample,
                                      const Point2f &sample2, const Point2f &sample3,
                                      Mask active) const override {
    MTS_MASKED_FUNCTION(ProfilerPhase::EndpointSampleRay, active);

    // 1. Sample spatial component
    Vector3f v0 = warp::square_to_uniform_sphere(sample2);
    Point3f origin = m_bsphere.center + v0 * m_bsphere.radius;

    // 2. Sample directional component
    Vector3f v1 = warp::square_to_cosine_hemisphere(sample3);
    Vector3f direction = Frame3f(-v0).to_world(v1);

    // 3. Sample spectrum
    // TODO: how to best construct this `si`?
    SurfaceInteraction3f si;
    si.t    = 0.f;
    si.time = time;
    si.p    = origin;
    si.uv   = sample2;
    si.wi   = direction; // Points toward the scene

    auto [wavelengths, weight] =
        sample_wavelengths(si, wavelength_sample, active);

    /* Note: removed a 1/cos_theta term compared to `square_to_cosine_hemisphere`
     * because we are not sampling from a surface here. */
    ScalarFloat inv_pdf = m_surface_area * ek::Pi<ScalarFloat>;

    return std::make_pair(Ray3f(origin, direction, time, wavelengths),
                          unpolarized<Spectrum>(weight) * inv_pdf);
}
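The two warps used above (Mitsuba's warp::square_to_uniform_sphere and warp::square_to_cosine_hemisphere) can be reproduced in a few lines of plain Python for sanity checking; these are textbook formulations written for this thread, not Mitsuba's own implementation:

```python
import math

def square_to_uniform_sphere(u1, u2):
    """Map the unit square to the unit sphere, uniform in surface area."""
    z = 1.0 - 2.0 * u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def square_to_cosine_hemisphere(u1, u2):
    """Cosine-weighted sample on the hemisphere around +z (Malley's method):
    sample a disk uniformly, then project up onto the hemisphere."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    x, y = r * math.cos(phi), r * math.sin(phi)
    return (x, y, math.sqrt(max(0.0, 1.0 - x * x - y * y)))

v0 = square_to_uniform_sphere(0.3, 0.7)
v1 = square_to_cosine_hemisphere(0.3, 0.7)
```

In the C++ above, v0 picks a point on the bounding sphere and the cosine-weighted v1 is rotated into the frame looking back at the scene, which is what makes the 1/cos_theta term cancel in the pdf.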

@leroyvn
Contributor Author

leroyvn commented Apr 27, 2021

@merlinND Indeed, I'm actually following what you're doing as part of your light tracer PR and I think there might be common interface issues to solve. This also holds true for sensor spectral sampling, which is very similar to emitter spectral sampling.

@Speierers How about I first open a new PR with the current version (branching off from master) so that you can have a quick look at the implementation?

@leroyvn
Contributor Author

leroyvn commented Apr 27, 2021

I ended up pushing the code to this branch. The tests will require some cleanup and a few render tests would also be nice, I think.

Member

@Speierers Speierers left a comment

Hi @leroyvn ,

Here are a few comments (mostly style although I realized that you were going to do another pass over the code).

I am a little concerned about the numerous combinations of parameters this plugin might take, and I would be in favor of reducing that number a bit.

For instance, I am not sure of the utility of the RAY_ORIGIN feature, and IMO it doesn't really fit the distant nature of this plugin.

I understand the direction and orientation parameters, but I am wondering whether these could be defined by the to_world matrix (e.g. using a look_at, defining target and up)?

Also, when the resolution of the film is NxM, we should still be able to define the up vector, right?

Finally, regarding the target strategy, I am wondering whether things couldn't be simplified a bit.

Basically we have the following combinations:

  • target=none, dir=single: that's the original plugin ✔️
  • target=none, dir=width: looks at the radiance coming from a sweep of directions; is that useful? ❓
  • target=none, dir=all: looking at the scene from all around, I can see how this could be used ✔️
  • target=point, dir=single: this could be useful, although I feel like it could deserve its own plugin
  • target=point, dir=width: same as above ❓
  • target=point, dir=all: isn't this the same as target=none, dir=all?
  • target=shape, dir=single: like the original plugin, but more efficient for a single shape and no missed rays ✔️
  • target=shape, dir=width: same as above ❓
  • target=shape, dir=all: looking from all around a shape, why not ✔️

}

props.mark_queried("direction");
props.mark_queried("flip_directions");
Member
What would be a use case for the flip_directions parameter?

Contributor Author

When using the Dirac direction sampling strategy (1x1 film), the sensor looks, by default, in the direction opposite to the direction parameter. This is unintuitive, especially because the directional plugin uses a different convention. See a more general comment on this below.

Comment on lines +90 to +93
The positioning of the origin of those rays can also be controlled using the
``ray_origin`` parameter. This is particularly useful when the scene has one
dimension much smaller than the others and ray origins need not be located on
the scene's bounding sphere.
Member

I would argue that we should remove this feature, as a distant sensor shouldn't have an origin.

Contributor Author

See comments below: this is a workaround.

@leroyvn
Contributor Author

leroyvn commented Apr 28, 2021

Hi @Speierers, thanks a lot for checking out this code so quickly! I think the chosen parametrisation and behaviour will be better understood with additional context. The general idea is that this plugin is used to estimate the BRDF of surfaces with complex geometry and/or an atmospheric layer. The illumination for such scenes consists of a directional emitter. The emitter's direction and the reference surface's normal define the principal plane, of particular interest due to the presence of scattering lobes.

About ray direction sampling strategies

The direction sampling strategy is driven by the film size as you noticed. Here are the motivations for the 3 possible cases:

  • Dirac (1x1 film resolution): single direction, controlled by direction.
  • Plane (Nx1): effectively used to compute radiance in the principal plane, pointed at using orientation.
  • Hemisphere (NxM): effectively used to compute radiance in a hemisphere defined by direction; film orientation is controlled by orientation (I will include a sketch to explain how directions are mapped to the film in that case).

The Plane case is heavily used in practice because the principal plane contains many distinctive features and is much more quickly computed than the entire hemisphere.
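To make the Plane case concrete, here is one plausible film-to-direction mapping for an Nx1 film (uniform in zenith angle across the principal plane). This is purely an illustrative guess; the plugin's real mapping may differ:

```python
import math

def plane_direction(pixel_index, n_pixels):
    """Map a pixel of an Nx1 film to a direction in the principal plane.
    Assumed mapping (uniform in zenith angle over [-pi/2, pi/2]); the
    principal plane is spanned here by the x (horizontal) and z (up) axes."""
    u = (pixel_index + 0.5) / n_pixels  # pixel center in [0, 1]
    theta = (u - 0.5) * math.pi         # zenith angle
    return (math.sin(theta), 0.0, math.cos(theta))

# A 9-pixel film sweeps the plane from near-grazing on one side, through
# the zenith (middle pixel), to near-grazing on the other side.
dirs = [plane_direction(i, 9) for i in range(9)]
```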

About direction and orientation

At the time, direction seemed natural because I was reproducing the interface of the directional emitter; but the introduction of hemisphere coverage changed the way direction is used, and I introduced the flip_directions parameter as a means of restoring the original behaviour with a 1x1 film size. The orientation parameter is most useful with an Nx1 film size, since it then contains the target plane.

Arguably, the functionality provided by those parameters can be achieved using to_world, but the parametrisation will feel less natural, especially with an Nx1 film size. This is not really a problem for me, of course: I can transfer this functionality to a wrapper external to the plugin.

About origin control

This feature is actually a workaround for #171. I agree with your comments, of course: I'm theoretically all in favour of dropping this. However, if the original problem is still there (I haven't checked whether it persists on the next branch), the plugin won't include a way of addressing it anymore. In our field of applications it is commonly encountered, but I would understand that it is not a concern for Mitsuba's general audience.

About target control

The None strategy is not very useful in practice. The Point strategy is used to trace rays from the top of the atmosphere in pseudo-1D scenes (no complex geometry such as trees or topography at the surface). The Shape strategy is used for similar purposes, but when the surface also features complex geometry (a "proper" 3D scene).

Arguably, functionality very similar to the Point strategy could be achieved with the Shape strategy, e.g. by using a disk shape with a tiny radius. But why bother sampling a shape if it is not required? Also, user input is much simpler (a point vs a disk plugin dictionary).

Practical use case summary

Target  Film [dir]     Use case
None    1x1 [dirac]    Adjoint of directional
None    Nx1 [plane]    Unused
None    NxM [hsphere]  Unused
Point   1x1 [dirac]    Radiance for a single satellite configuration over a 1D* scene
Point   Nx1 [plane]    Radiance in the principal plane of a 1D* scene
Point   NxM [hsphere]  Radiance in the hemisphere of a 1D* scene
Shape   1x1 [dirac]    Radiance for a single satellite configuration over a 3D** scene
Shape   Nx1 [plane]    Radiance in the principal plane of a 3D** scene
Shape   NxM [hsphere]  Radiance in the hemisphere of a 3D** scene

* Pseudo-1D geometry (flat surface, optical properties of the participating medium vary only along the vertical direction).
** Surface can feature complex geometry such as trees or topography.

Wrap-up

  • direction, orientation and flip_directions can be dropped without loss of functionality in favour of to_world and additional documentation, at the cost of additional mental burden for the user in (likely) niche use-cases.
  • Direction sampling options cannot be simplified without loss of functionality. The Plane strategy is very useful in our use-case.
  • Origin control is a workaround for an issue whose relevance needs to be checked again. Dropping it will result in a loss of functionality, possibly for a niche use-case.
  • Target control options can be simplified to {None, Shape} without loss of functionality but the Point strategy is very easy to implement and very convenient.
