YTEP-0010: Refactoring for Volume Rendering and Movie Generation

Abstract

Created: March 3, 2013 Author: Cameron Hummels

This YTEP describes significant modifications of the camera infrastructure to enable more focus on scenes, camera paths, and movies, while still retaining functionality for individual images.

Status

Open to changes through pull requests.

Detailed Description

Background

Visualization of data is one of the primary reasons why people use yt. yt’s visualization capabilities are quite advanced, particularly for generating single images of a simulated volume. However, the camera objects are complicated, and it is difficult to use them to generate movies of anything beyond simple camera paths around single simulation outputs. This is understandable given how the camera code grew organically over the last few years, but a refactor could dramatically simplify the steps for generating complex movies.

Here is a rough algorithm of how to create a rendering under the current system:

  1. The user chooses a method for breaking up the region to be rendered: either a homogenized volume or a kd-tree. Homogenized volumes can be re-used from rendering to rendering, saving time when re-rendering the same volume from different perspectives, and they allow the user to define an arbitrary geometric object to act as the rendering volume. The kd-tree, on the other hand, is generally faster for individual renderings, but it cannot currently be re-used from rendering to rendering and does not allow the user to specify a subregion to render beyond an AMRRegion object (i.e. a box).
  2. The user must explicitly choose whether she wants to do a volume rendering or an off-axis projection, as each of these two options is a different camera class. There can be no mixing between these classes: once this choice is made, the user is locked in.
  3. The user must explicitly define the central focus point of the image to be rendered, the ‘width’ of the image (thereby defining the extent to be rendered), the normal vector along which the camera views the volume, and the resolution of the final image.
  4. The user must explicitly define a transfer function in the case of a volume rendering, and it is generally non-intuitive how to get this right a priori.

Here is a sample script for generating a volume rendering under the current system taken from the docs. Note how much has to be done prior to actually rendering an output image.

>>> from yt.mods import *
>>> pf = load("Enzo_64/DD0043/data0043")
>>> dd = pf.h.all_data()
>>> mi, ma = dd.quantities["Extrema"]("Density")[0]
>>> tf = ColorTransferFunction((np.log10(mi)+1, np.log10(ma)))
>>> tf.add_layers(5, w=0.02, colormap="spectral")
>>> c = [0.5, 0.5, 0.5]
>>> L = [0.5, 0.2, 0.7]
>>> W = 1.0
>>> Npixels = 512
>>> cam = pf.h.camera(c, L, W, Npixels, tf)
>>> cam.snapshot("%s_volume_rendered.png" % pf, clip_ratio=8.0)

Problem

Here we note some of the shortcomings of the current camera implementation:

  1. Too much overhead in generating a simple rendering for the end user. Helper functions with sensible defaults are needed so that the user only has to call one or two commands to generate a rendering.
  2. Too much complexity in the camera object constructor due to a large number of parameters and keyword options.
  3. Volume renderings and off-axis projections are too distinct from each other.
  4. Not enough focus on time series or on persistence of a camera from one rendering to another for generating movies.
  5. Cameras are defined in a somewhat counterintuitive manner, rather than being treated as objects at a physical location in the volume that can be moved around that volume.
  6. Not enough integration of particle data in camera renderings.
  7. Minimal ability to move a camera in a complex path through a volume beyond simple rotations, pans, and zooms.

Proposed Solution

We propose to break up the camera infrastructure into a few different classes to enable more transparency and usability of this important functionality.

  • Make cameras just cameras. They should be very lightweight, should be situated in a scene, and should not contain references to the volumes.
  • Add a “scene” object which then contains components like data sources (i.e. volumes, streamlines, particles), cameras, transfer functions, etc. The scene remains a structure for modifying the underlying components in that scene throughout the duration of the scene.
  • Make a camera a reusable object for a given movie which can be modified in virtually any way (location, transfer function, underlying pf) through a series of callback functions, or by modifying the scene object directly.
  • Remove the homogenized volume method for generating volume renderings and make the kd-tree method handle all functionality that homogenized volumes provided (e.g. reusability, usability on an arbitrary geometric object – see YTEP-0009).
  • Integrate all current camera classes into a single camera class, so we don’t have separate classes for volume renderings, projections, stereoscopic renderings, HEALpix renderings, etc.
  • Make the scene understand how to traverse from point A to point B in a complex way by designating keyframes where you constrain the exact rendered image (position/orientation of camera, state of transfer function, data source for rendering, etc.) and having the scene figure out a smooth transition between these keyframes.
  • Remove a ridiculous amount of complexity from the Camera and Volume objects by stripping out a large number of variables from the constructors.
  • KDTrees should be built for the volume active at any time for easy reusability in future frames (e.g. by moving the camera or changing the transfer function). If the underlying data source changes, then the old kd-tree is purged and a new one for that new data source is constructed. This will dramatically reduce overhead on rendering the same volume from different perspectives (see the sketch after the summary below).
  • By default, when one defines a Scene object from a single datadump, it sets the Timeline object to 1 output frame, whereas if one defines a Scene object from a TimeSeries, it adds keyframes for each pf in that TimeSeries uniformly across the Timeline object.

In short, we propose that by reducing the complexity of individual objects and splitting them into multiple objects, we can reduce the complexity of individual operations, at the cost of a slightly larger set of objects that are more flexible.
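
As a rough illustration of the kd-tree reuse described above, a volume source might cache its kd-tree and rebuild it only when its underlying data source changes. This is only a sketch: build_kdtree and VolumeSource are placeholder names, not part of the proposed API.

def build_kdtree(data_source):
    # Placeholder for whatever kd-tree construction the refactor settles on.
    raise NotImplementedError

class VolumeSource:
    def __init__(self, data_source):
        self._data_source = data_source
        self._kdtree = None  # built lazily, reused across renders

    @property
    def kdtree(self):
        # Reused for repeated renders of the same volume (camera moves,
        # transfer function changes) without rebuilding.
        if self._kdtree is None:
            self._kdtree = build_kdtree(self._data_source)
        return self._kdtree

    def set_data_source(self, data_source):
        # A new data source purges the cached tree; it is rebuilt lazily
        # on the next render.
        self._data_source = data_source
        self._kdtree = None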

New classes:
  • Scene
    Meant to be the main class for dealing with volumetric visualization. It is constructed using a static output instance, which it uses to set up a default camera based on domain extents. It also instantiates a list of objects to be rendered, which include RenderSource instances for volume rendering and streamlines.
  • RenderSource
    Base class for rendering types. This can be (minimally) volumetric data for volume rendering, path data for streamlines, point data for particles, and other yet to be determined data types.
  • Camera
    A lightweight camera representing the location and orientation of the camera. This can be specified in a number of ways, but to uniquely define it we need the position of the camera, a pointing vector, and an optional north vector (which specifies the image “up” direction).
  • Timeline
    The timeline object represents how the scene changes with time. It is valid from t=0 to t=1, but this can be mapped on to any number of output frames during the render. One can modify the Timeline object by setting events such as keyframes to change the underlying scene components at any point in the timeline.
  • CameraPath
    In dealing with movies, one can set key frames of where and in what orientation one wants the camera to be at certain times. A smoothing function (like a spline) can connect up these keyframes into a smooth camera path for application on the timeline.
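
A minimal sketch of how these classes might fit together is shown below. All method names, signatures, and defaults are illustrative only (the default camera placement follows the sample scripts later in this document); nothing here is a final API.

import numpy as np

class Camera:
    """Lightweight: position and orientation only, no reference to the volume."""
    def __init__(self, pos, focus, north=(0.0, 0.0, 1.0)):
        self.pos = np.asarray(pos, dtype="float64")
        self.focus = np.asarray(focus, dtype="float64")
        self.north = np.asarray(north, dtype="float64")

class Timeline:
    """Maps the abstract interval t in [0, 1] onto output frames."""
    def __init__(self, num_frames=1):
        self.num_frames = num_frames
        self.keyframes = {}  # t -> scene state (camera, transfer function, ...)

    def set_num_frames(self, num_frames):
        self.num_frames = num_frames

class RenderSource:
    """Base class for anything a scene can draw (volumes, streamlines, particles)."""
    def render(self, camera):
        raise NotImplementedError

class Scene:
    """Ties together a camera, a timeline, and a list of render sources."""
    def __init__(self, pf):
        self.pf = pf
        # Default camera: outside the domain, looking back at its center.
        self.camera = Camera(pos=1.5 * np.asarray(pf.domain_right_edge),
                             focus=np.asarray(pf.domain_center))
        self.timeline = Timeline(num_frames=1)
        self.sources = []

    def add_source(self, source):
        self.sources.append(source)
        return source

    def render(self):
        # Simplified: render every source once; a real implementation would
        # walk the timeline and composite the sources into a single image.
        for source in self.sources:
            source.render(self.camera)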

In each of the following derived classes, the constructed object is an instance of the Scene class, capable of adding additional sources. These are meant to provide shortcuts.

Derived Classes:
  • VolumeScene
    Inherits from Scene, sets up a scene with a volume rendering data source
  • StreamlineScene
    Inherits from Scene, sets up a scene with streamlines data source
  • ParticleScene
    Inherits from Scene, sets up a scene with particles data source
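
Building on the skeleton above, a derived class such as VolumeScene could be little more than a convenience wrapper that pre-populates the scene with the appropriate source. Again, this is illustrative only; VolumeRenderSource here is just a stub.

class VolumeRenderSource(RenderSource):
    """Volumetric data plus the transfer function used to render it."""
    def __init__(self, data_source, field):
        self.data_source = data_source
        self.field = field
        self.transfer_function = None  # filled with sensible defaults by the scene

    def render(self, camera):
        # Placeholder for the actual ray cast through the kd-tree.
        raise NotImplementedError

class VolumeScene(Scene):
    """Scene pre-populated with a single volume rendering source."""
    def __init__(self, data_source, field):
        # data_source may be a full parameter file or any geometric selection
        # (sphere, region, ...); the Scene base class sets up the defaults.
        super(VolumeScene, self).__init__(data_source)
        self.add_source(VolumeRenderSource(data_source, field))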

Sample Scripts for Proposed Infrastructure

Under the proposed changes, one could simply get a simple volume rendering by running this short script:

>>> from yt.mods import *
>>> pf = load("Enzo_64/DD0043/data0043")
>>> sc = VolumeScene(pf, 'Density')
>>> im = sc.render()

where the scene constructor uses helper functions to set up all of the default objects (volume, camera, timeline, transfer function): it uses the entire volume, places a camera at 1.5*domain_right_edge pointing at domain_center with north vector (nx,ny,nz)=(0,0,1), sets the timeline to number_of_frames=1, and sets the transfer function to use the min/max of the volume with 4 isodensity contours added.
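
As a sketch of what those defaults might amount to, assuming the skeleton classes above and yt's existing ColorTransferFunction (the helper name and exact values are placeholders for whatever the implementation chooses):

import numpy as np
from yt.mods import ColorTransferFunction  # existing yt transfer function class

def setup_default_scene(scene, pf, field):
    # Camera just outside the domain, looking back at its center.
    scene.camera = Camera(pos=1.5 * np.asarray(pf.domain_right_edge),
                          focus=np.asarray(pf.domain_center),
                          north=(0, 0, 1))
    # A single output frame for a single datadump.
    scene.timeline = Timeline(num_frames=1)
    # Transfer function spanning the field extrema, with a few layers added.
    mi, ma = pf.h.all_data().quantities["Extrema"](field)[0]
    tf = ColorTransferFunction((np.log10(mi), np.log10(ma)))
    tf.add_layers(4)
    scene.sources[0].transfer_function = tf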

The most recently rendered image can be retrieved at any time using:

>>> im = sc.current_image

If one wanted to modify this scene prior to rendering, the scene object would allow the end user to change things through a series of callbacks:

>>> from yt.mods import *
>>> pf = load("Enzo_64/DD0043/data0043")
>>> sp = pf.h.sphere([0.5,0.5,0.5],100/pf['kpccm'])
>>> sc = VolumeScene(sp, 'Density')
### Change the camera position and orientation
>>> sc.camera.move(pos=[0,(100,'kpccm'),0], focus=[0,0,0], north=[0,0,1])
>>> sc.render()

In order to create a short movie that rotates the camera about the center, from 100 kpc out on one side to 100 kpc out on the other side, while the simulation is evolving, one might run a script such as the following. It would automatically set the timeline to match the time series data with a framerate of 12 frames/sec.

>>> from yt.mods import *
>>> ts = TimeSeriesData.from_filenames("Enzo_64/DD????/data????")
>>> sc = Scene(ts)
>>> keyframe_start = Camera(pos=[0, 1, 0], focus=[0, 0, 0], north=[0, 0, 1])
>>> keyframe_mid = Camera(pos=[1, 0, 0], focus=[0, 0, 0], north=[0, 0, 1])
>>> keyframe_end = Camera(pos=[0, -1, 0], focus=[0, 0, 0], north=[0, 0, 1])
>>> sc.set_keyframe(time=0, camera=keyframe_start)
>>> sc.set_keyframe(time=0.5, camera=keyframe_mid)
>>> sc.set_keyframe(time=1, camera=keyframe_end)
>>> sc.timeline.set_num_frames(50)
>>> sc.render()
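
One plausible way for the scene to turn camera keyframes into per-frame positions is simple interpolation, sketched below. The proposal leaves the actual smoothing (e.g. splines in CameraPath) to the implementation; this is only an illustration.

import numpy as np

def camera_positions(keyframes, num_frames):
    """Interpolate a camera position for each output frame.

    keyframes: list of (t, position) pairs with t in [0, 1], e.g.
               [(0.0, [0, 1, 0]), (0.5, [1, 0, 0]), (1.0, [0, -1, 0])]
    """
    times = np.array([t for t, _ in keyframes])
    positions = np.array([p for _, p in keyframes], dtype="float64")
    frame_times = np.linspace(0.0, 1.0, num_frames)
    # Piecewise-linear interpolation per component; a spline (or great-circle
    # interpolation for orbits) could be substituted for smoother paths.
    return np.column_stack(
        [np.interp(frame_times, times, positions[:, i]) for i in range(3)])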

While all the prior examples are focused on handling a single data source at a time, a major goal of the refactor is to allow for the combination of data sources and data types, such as streamlines, particles, opaque planes, and annotations. We want to allow for the composition of a full scene containing many different sources.

For example,

>>> from yt.mods import *
>>> pf = load("Enzo_64/DD0043/data0043")
>>> sc = Scene(pf)
### Change the rendered volume to be a sphere of radius 100 kpc
>>> sp = pf.h.sphere([0.5,0.5,0.5],100/pf['kpccm'])
>>> vr_handle = sc.add_volume_rendering(sp)
### Here vr_handle is an instance of a VolumeRenderSource(RenderSource)
>>> vr_handle.transfer_function.clear()
>>> mi, ma = sp.quantities["Extrema"]("Density")[0]
>>> vr_handle.transfer_function.map_to_colormap(mi, ma, cmap='RdBu')
>>> streamlines = Streamlines(pf,...) # Create streamlines
>>> stream_handle = sc.add_streamlines(streamlines)
>>> stream_handle.set_opacity(0.1)
>>> stream_handle.set_radius((0.1,'kpc'))
>>> sc.add_particles(sp)
>>> particle_handler = sc.get_particle_handle()
>>> particle_handler.transfer_function.set_color_field('density')
>>> particle_handler.transfer_function.set_alpha(0.1)
>>> sc.render()
### Toggle off a piece of the scene
>>> sc.toggle(vr_handle)
... # Type                       Tag    Status
... VolumeRenderSource(density): vr_1   off
... Streamlines(velocity):       sl_1   on
... Particles(density):          pt_1   on
>>> sc.render()
>>> sc
... # Type                       Tag    Status
... VolumeRenderSource(density): vr_1   off
... Streamlines(velocity):       sl_1   on
... Particles(density):          pt_1   on
>>> sc.toggle('vr_1')
... # Type                       Tag    Status
... VolumeRenderSource(density): vr_1   on
... Streamlines(velocity):       sl_1   on
... Particles(density):          pt_1   on
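
Behind toggle(), the scene might keep a simple registry mapping tags to sources together with an enabled flag, so that either the handle or the tag string can switch a source on or off. The following is a sketch with hypothetical names, not a proposed interface.

class SourceRegistry:
    """Hypothetical bookkeeping for scene sources: tag -> (source, enabled)."""
    def __init__(self):
        self._entries = {}  # tag -> [source, enabled]

    def add(self, tag, source):
        self._entries[tag] = [source, True]
        return source

    def toggle(self, key):
        # Accept either the tag string (e.g. 'vr_1') or the source handle itself.
        for tag, entry in self._entries.items():
            if key == tag or key is entry[0]:
                entry[1] = not entry[1]

    def active(self):
        # Only enabled sources are handed to the renderer.
        return [source for source, enabled in self._entries.values() if enabled]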

What Needs to be Decided

  • What should the syntax be for annotations (lines, boxes, orientation vectors)?
  • How do we manipulate the Scene positions (positions of all non-spatial annotations)? For example, put the transfer function display over here.
  • How do we handle the API for running in different parallel regimes (image plane vs. domain vs. time series vs. …)?
  • Probably many more things.

Backwards Compatibility

This will break all backwards compatibility with the pf.h.camera interface. We will attempt to keep as many of the useful modifications (pitch, roll, yaw, etc.) as similar as possible to ease the pain.