YTEP-0031: Unstructured Mesh¶
Abstract¶
Created: December 18, 2014
Author: Matthew Turk
Supporting data generated by unstructured mesh codes can be implemented using a combination of existing data selection routines, vector fields, index fields, and pixelization routines. This YTEP also touches on decoupling the step of constructing an image buffer from data and the data representation itself, which applies to data types such as SPH as well as mixing different mesh types in a single Dataset object.
Status¶
Proposed
Project Management Links¶
Relevant external links:
- WIP pull request for hexahedral mesh: https://bitbucket.org/yt_analysis/yt/pull-request/1370/wip-generic-hexahedral-mesh-pixelizer/diff
Some brief discussion has also occurred on the mailing list, but not yet in much detail.
Detailed Description¶
Background¶
Data in yt has until now been assumed to be either discrete, or on a regular mesh that is defined by cells. This has been the implementation for smoothing kernels as well as particles and particle depositions. However, to support a broader range of codes (and to support more flexible visualizations of particle datasets, such as direct smoothing kernel application onto image buffers) we should develop methods for supporting unstructured meshes. This will require uncoupling the pixelization step not only from coordinates (as has been done) but also from the underlying data model. This YTEP proposes a strategy for implementing support for unstructured meshes. We will not restrict ourselves to specific element types; however, where particular geometries are required, we will first focus on hexahedral, tetrahedral, wedge, pyramid and Voronoi tessellation cells. The YTEP as a whole is designed to be generic with respect to element geometry.
We will not spend much time discussing Voronoi tessellation cells here and will instead reserve discussion of them for a future YTEP, as they are similar in some ways but quite different in others. Additionally, we restrict ourselves initially to convex elements.
In conjunction with these different cell types, data may be stored at vertices, centers, edges, faces, and so on. Developing the appropriate intra-cell interpolation methods will remove the need to explicitly identify these different data “centering” methods, but see below for discussion of how these centering methods will be denoted.
Typically within an element, the value at a given position is a function of a kernel that takes as input the quantities stored in the output (at vertices, centroids, and so on). So given the kernel and the data points, the value at any internal position in the element can be determined. Supporting these interpolation kernels is part of a longer-term roadmap. Initially, we will likely perform nearest-neighbor or inverse distance weighting (IDW) interpolation.
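As a rough illustration of the kind of intra-element interpolation intended here, the sketch below applies inverse distance weighting to vertex-defined values; the function name and signature are purely illustrative and not a proposed yt API.
import numpy as np

def idw_interpolate(position, vertices, vertex_values, power=2.0):
    # Inverse-distance-weighted estimate of a field at a point inside an
    # element, given the element's vertex coordinates and vertex-defined
    # values.  (Illustrative sketch only, not the eventual kernel API.)
    d = np.linalg.norm(vertices - position, axis=1)
    if np.any(d == 0.0):
        # The query point coincides with a vertex; return that value.
        return vertex_values[np.argmin(d)]
    w = d ** -power
    return np.sum(w * vertex_values) / np.sum(w)

# Example: a unit tetrahedron with a field value at each of its four vertices.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype="float64")
vals = np.array([1.0, 2.0, 3.0, 4.0])
print(idw_interpolate(np.array([0.25, 0.25, 0.25]), verts, vals))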
The difficulty in handling unstructured mesh data is ensuring high-fidelity visualizations as well as high-fidelity data representation. The latter is considerably more straightforward: if the system is viewed as a sequence of discrete points (“particles”) and the question of intra-element interpolation is ignored except when explicitly requested, the existing selection routines can be used directly.
We assume for the purposes of this document that the datasets are defined by connectivity and coordinates (i.e., an array of indices that define each element’s vertices and an array of common coordinates into which we index) and that the ordering of the vertices is such that we can identify which vectors span faces of the elements.
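For concreteness, a minimal (hypothetical) example of the connectivity/coordinates layout assumed throughout this document is sketched below: a single hexahedral element indexes into a shared coordinate array, and a neighboring element would reuse indices rather than duplicating coordinates.
import numpy as np

# Eight shared vertices defining a unit cube, plus the connectivity for one
# hexahedral element, expressed as indices into the coordinate array.
coordinates = np.array([
    [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
    [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],
], dtype="float64")
connectivity = np.array([[0, 1, 2, 3, 4, 5, 6, 7]], dtype="int64")

# Gathering per-element vertex positions is a single fancy-indexing step,
# producing an array of shape (N_elements, N_vertices_per_element, 3).
element_vertices = coordinates[connectivity]
print(element_vertices.shape)  # (1, 8, 3)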
Irregular Mesh Points¶
In YTEP-0001: IO Chunking a system for chunking was described, wherein attributes such as fcoords, icoords and so on are exposed. We propose to augment these, which work nicely for regular mesh data, with coordinates for vertices and edges. However, this will only be accessible for chunks which are homogeneous in element type. (Otherwise, the vector would need to be “ragged” with differing component length.)
What this YTEP proposes to do is to create a new set of fields and chunk attributes. The fields, much like fields such as x, y and z, will reflect the position of vertices and/or faces of an element and will be vector fields. The new chunk attributes will be the underlying data from which these fields are generated. The new chunk system will add on these attributes to YTDataChunk:
- fcoords_vertex - vertex positions, of shape (N, Nvertex, 3)
- fcoords_face - barycenter of the faces, of shape (N, Nface, 3)
- fcoords_edge - middle of edge, of shape (N, Nedge, 3)
We anticipate that for the existing data index types, these attributes can be created on the fly, but will not often be used.
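As a sketch of how these chunk attributes could be constructed for a chunk that is homogeneous in element type, the snippet below builds fcoords_vertex and fcoords_face directly from connectivity and coordinates; the face-ordering input is one possible convention, not a mandated one, and all names are illustrative.
import numpy as np

def chunk_fcoords(coordinates, connectivity, face_vertex_indices):
    # face_vertex_indices lists, for each face of the element type, the local
    # vertex indices that bound it (the ordering convention is assumed known
    # and frontend-dependent).
    # fcoords_vertex: (N, Nvertex, 3) positions of each element's vertices.
    fcoords_vertex = coordinates[connectivity]
    # fcoords_face: (N, Nface, 3) barycenter of each face, averaged over the
    # vertices that bound it.
    fcoords_face = np.stack(
        [fcoords_vertex[:, list(face), :].mean(axis=1)
         for face in face_vertex_indices], axis=1)
    return fcoords_vertex, fcoords_face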
The new fields will follow the naming convention of element_type, type_coordinateaxis. A few examples:
- ("hex", "vertex_x") - vector of shape (N, Nvertex)
- ("hex", "face_x") - vector of shape (N, Nface)
- ("tetra", "face_phi") - vector of shape (N, Nface)
- ("wedge", "edge_z") - vector of shape (N, Nedge)
- ("vertex", "x") - array of shape (N)
- ("edge", "x") - array of shape (N)
We will still retain the existing index fields, which will return centroids for coordinates and will simply treat requests for the path element and cell width fields as invalid, similar to when fields like dx are requested in non-Cartesian coordinates. This should work, although it is not entirely clear that it is the best bet. Namespacing could potentially fix this.
What this will provide:
- Uniform access to coordinates of all raw, unprocessed data points.
- Re-use of existing routines that query based on field type and field name.
- Avoidance of confusion based on re-use of names from other coordinate systems.
Likely this will also need a method for transforming values between definition systems; for instance, a method for converting vertex-centered values to cell-centered. This would be akin to the method used for depositing particles on a mesh and would mandate access to the mesh object via ValidateSpatial.
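A minimal sketch of the simplest such transformation is given below: a vertex-centered field is converted to a cell-centered one by averaging over each element's vertices. A real implementation would use the element's interpolation kernel, and the function name is illustrative.
def vertex_to_cell_centered(vertex_field, connectivity):
    # vertex_field holds one value per shared vertex; connectivity has shape
    # (N, Nvertex).  Averaging over axis 1 yields one value per element.
    return vertex_field[connectivity].mean(axis=1)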
Decoupling Pixelization from Mesh Values¶
The pixelization step is the point at which mesh values are transformed into an image. These mesh values are variable resolution, and so the operation essentially deposits (through NN interpolation with anti-aliasing) these variable mesh values into an image buffer.
In cases where the mesh values are accessible through the fields used currently (such as px and the like), the standard pixelization routines will be called.
For datasets that do not, or cannot, create px fields and the like, separate pixelization routines will be called. An example of this can be found in the (at time of writing) WIP PR for hexahedral mesh datasets. This will be implemented in the coordinate handler.
The generic pixelization routine will accept a set of vertices, an interpolation kernel (nearest-neighbor for starters) and the field (initially only fields defined at centroids will be supported for simplicity, with edge and face support added later). The ordering of vertices that provides face values will be specified at pixelization time, and will draw from one of a set of known orderings.
The pixelization routine will first apply coarse bounding box checks to the image plane and all supplied elements. Each pixel that passes the bounding box check for a given element will move on to the second step of selection. In this step, the sign of the dot product of the centroid with the normal vector defining each face will be stored (this removes the need to know the CW / CCW ordering of the vertices), and for each pixel in the image plane, the signs of the same dot products will be examined. If all the signs match, the point is internal to our (convex) element. The appropriate kernel will then be evaluated and the resulting value deposited in the image plane.
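A sketch of that second selection step follows: the signs of the centroid's dot products against the face plane normals are stored once per element, and a pixel is inside the (convex) element when its own signs match. The function name and the exact geometric inputs (face centers and normals) are illustrative.
import numpy as np

def point_in_convex_element(point, face_centers, face_normals, centroid):
    # The normals need not be consistently inward or outward facing (so the
    # CW / CCW ordering of the vertices does not matter); we only require
    # that the query point lie on the same side of every face plane as the
    # element's centroid.
    ref_signs = np.sign(np.einsum("ij,ij->i", centroid - face_centers, face_normals))
    signs = np.sign(np.einsum("ij,ij->i", point - face_centers, face_normals))
    return bool(np.all(signs == ref_signs))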
Because of the single-mesh-type requirement, the pixelization routines will iterate over each mesh type and deposit the fields in sequence. This will enable the interoperation of fields between mesh types without requiring that they be made uniform in size.
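In outline, that sequencing might look like the sketch below, where the per-element-type pixelizer is passed in as a callable and each mesh deposits into the same image buffer; all names here are placeholders.
import numpy as np

def pixelize_all_meshes(meshes, pixelizer, field_name, buff_shape):
    # Deposit each mesh type into the same image buffer in turn, so meshes
    # of different element types (and sizes) never need to be concatenated.
    buff = np.zeros(buff_shape, dtype="float64")
    for mesh in meshes:
        pixelizer(buff, mesh, field_name)
    return buff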
Note also that separating out based on the type of field and data represented means that we may now be able to implement slices of particle fields directly.
Multiple Meshes for Multiple Mesh Types¶
Each mesh type – hex, tet, wedge, etc. – will be isolated in a separate mesh object.
For a given data object, each mesh type must be queried separately if the vertices are to be examined, much as particles and mesh fields cannot interact without the mediation of a deposition step. If the field values are the only items of concern, they can be queried in concatenated form. For situations where fields persist across mesh types, we will be unable to supply vertex information and can only then supply x fields and the like.
At present, there is a semi-structured mesh object, and for datasets that expose it, it lives within the .meshes attribute of the index. Each mesh type will be in a separate element in that list.
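Under that arrangement, examining each mesh type separately might look like the following; the dataset name is hypothetical, and the attribute name echoes the existing semi-structured mesh object.
import yt

ds = yt.load("my_unstructured_dataset")  # hypothetical dataset
for mesh in ds.index.meshes:
    # Each entry holds a single element type, so its connectivity array has
    # a fixed number of vertices per element.
    print(mesh, mesh.connectivity_indices.shape)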
Example Use Cases¶
These example use cases should just work in a successful implementation. The dataset imagined in them contains tetrahedra (N_t), hexahedra (N_h), and wedges (N_w). The field field1 is defined at vertices and field2 is defined at the element centroids.
Querying all of the values of field1:
dd = ds.all_data()
print(dd["vertex", "x"].shape)
print(dd["index", "x"].shape)
print(dd["field1"].shape)
The first and third print statements will return the same shape, but the middle will return the total number of elements (centroids). Ultimately, much like with particle fields, the user will need to have some knowledge of the mesh (which yt can provide hints about) to know how to combine fields.
This should also work:
prof1d = yt.create_profile(dd, ("vertex", "x"), "field1")
Because our selection operators will operate on the field values as though they were discrete points, this must also work:
sp = ds.sphere([0.5, 1.0, 30.1], (1.0, "km"))
sp["field1"]
sp["field2"]
These fields will not be the same size, but will select from all the different mesh types. Querying the "x" field will return the centroids that pass the selector, which will be of a different size than "field1" but the same size as "field2". This also means that it will be impossible to bin "field1" against "x" without explicitly namespacing it as ("vertex", "x").
Volume Rendering¶
Initial support for volume rendering will use Embree, a fast ray-tracing library from Intel, to do the ray traversal. A set of Python bindings for Embree already exists. Later on, this may be replaced by our own ray-tracing code to remove the external dependency.
To use Embree, we must write code that generates a triangular polygon mesh from the unstructured mesh data yt reads in. This may involve breaking up faces into multiple triangles. Currently, this is implemented for hexahedral and tetrahedral mesh elements, and adding support for other mesh types should not be difficult. One then uses the functions Embree provides to cast rays at the resulting mesh.
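A minimal sketch of that triangulation step for hexahedra is shown below: each quadrilateral face is split into two triangles, expressed as indices into the shared coordinate array from which a triangle mesh can then be built for Embree. The face ordering used is one possible convention, and the function name is illustrative.
import numpy as np

# Local vertex indices of a hexahedron's six quadrilateral faces (one
# possible ordering; the actual convention depends on the frontend).
HEX_FACES = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
             (1, 2, 6, 5), (2, 3, 7, 6), (3, 0, 4, 7)]

def hex_to_triangles(connectivity):
    # Split each hexahedral element into 12 triangles (2 per quad face).
    tris = []
    for element in connectivity:
        for a, b, c, d in HEX_FACES:
            tris.append((element[a], element[b], element[c]))
            tris.append((element[a], element[c], element[d]))
    return np.array(tris, dtype="int64")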
There will be two basic “plot types” for volume renderings of unstructured mesh data. The first will be “surface plots”, where the value of the field at the intersection point with each ray will be calculated using hit data computed by Embree. The second will be more like the traditional yt volume renderings, in which values along each ray are accumulated for every element the rays intersect. For example, one could compute the maximum intensity along each ray instead of the value on the surface. Both of these types of renderings will need implementations of various intra-element interpolation functions to support meshes of various types and orders.
All of this will be integrated with the Volume Rendering refactor, so that we retain the flexibility provided there for creating movies and camera paths. This will involve (at least) defining a new type of RenderSource object for polygon meshes. This object will know how to create the Embree polygon mesh from the data_source that gets passed in, and how to make the appropriate ray-tracing calls. Once this source has been created, the Camera will be able to be changed at will, as defined in the YTEP for the scene refactor. Because multiple RenderSource objects can exist in the same scene, there is no reason why different meshes with different plot types cannot exist in the same scene.
Some examples of what the volume renderings will look like are here: https://www.dropbox.com/s/xx2it8p0ivk7s69/surface_render_0.png?dl=0 https://www.dropbox.com/s/m0b9wdp6uh6h4nm/surface_render_1.png?dl=0
Explicitly Not Implemented Functionality¶
These pieces of functionality will need considerable reworking before they will be suitable for use with unstructured mesh data, and they are outside of the scope of this document:
- “Spatial” fields, as connectivity between elements is not well-defined in general (although it may be for specific element types)
- Block and tile iterators, as they are not immediately relevant to unstructured meshes
These are difficult, and we will be holding off on implementing them until this YTEP and its implementation have shaken out.
Backwards Compatibility¶
This should introduce absolutely no backwards-incompatible changes; any backwards-incompatible changes will be considered bugs and will result in a redesign.
Alternatives¶
A few alternatives exist. For instance, instead of augmenting fcoords and so on with new definitions, we could either define new fields and leave fcoords to refer to centroids (or delete it for those objects), or we could define vector fields for these that are of shape (N, Ncell, 3) and refer to the vertices of the data.
Additionally, we could be more explicit about what refers to what; we could have different namespaces for vertices.
Another alternate idea would be to mimic the particle method for namespacing and positions; this would result in things like ("field_type", "hex_vertex_x") and so on. Or, we could do ("hex_vertex", "x") and similar.
Open Questions¶
- Should we get rid of particle_type and replace it with a classification such as centroids, discrete, vertex and so on?
- How should we handle namespacing for fields that may be defined at multiple places (face and vertex, for instance)?