
c++ – Collision detection between transformed meshes (not primitives)

Expanding on my comment to try to give this old question some closure: common non-primitive polygon mesh collision detection functions include…

Both of these techniques work only for convex hulls (meshes that have no holes, caves, or divots), but they're way faster than the alternatives.
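As one illustration of this kind of convex-only test (not necessarily the exact routines listed above), the separating axis theorem (SAT) says two convex shapes are disjoint exactly when there is some axis along which their projected extents don't overlap. Here's a minimal sketch, assuming you already have world-space vertex lists for both hulls and a precomputed set of candidate axes (each hull's face normals plus cross products of their edge directions); the names are illustrative, not from any particular engine:

```cpp
#include <cfloat>
#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Project every vertex of a hull onto an axis and record the min/max extent.
static void projectOntoAxis(const std::vector<Vec3>& verts, const Vec3& axis,
                            float& outMin, float& outMax) {
    outMin = FLT_MAX;
    outMax = -FLT_MAX;
    for (const Vec3& v : verts) {
        float p = dot(v, axis);
        if (p < outMin) outMin = p;
        if (p > outMax) outMax = p;
    }
}

// Separating axis test between two convex hulls given in world space.
// 'axes' holds the candidate separating axes: face normals of both hulls
// plus cross products of their edge directions.
bool convexHullsOverlapSAT(const std::vector<Vec3>& hullA,
                           const std::vector<Vec3>& hullB,
                           const std::vector<Vec3>& axes) {
    for (const Vec3& axis : axes) {
        float minA, maxA, minB, maxB;
        projectOntoAxis(hullA, axis, minA, maxA);
        projectOntoAxis(hullB, axis, minB, maxB);
        // A gap along any candidate axis proves the hulls are disjoint.
        if (maxA < minB || maxB < minA) return false;
    }
    // No separating axis found among the candidates: report an overlap.
    return true;
}
```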

For that reason, many game engines will make (or allow the designer to author separately) a CPU-side physics-only version of the mesh that’s reduced to its convex hull for collision purposes. Meshes that have important concavities can often be broken down into parts that are each convex, so you can run these fast algorithms on each part – see approximate convex decomposition for details on those techniques.
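Once a mesh has been decomposed, the per-part tests compose in the obvious way: two decomposed meshes collide if any pair of their convex parts overlaps. A minimal sketch, with a hypothetical ConvexPart type and whatever pairwise convex test (SAT, GJK, ...) you plug in:

```cpp
#include <vector>

// Hypothetical convex part: in practice this would hold world-space vertices
// (and perhaps precomputed face normals) for one piece of the decomposition.
struct ConvexPart { std::vector<float> vertexData; };

// Two decomposed meshes collide if any pair of their convex parts overlaps;
// 'overlapTest' is whatever pairwise convex test you use.
template <typename OverlapTest>
bool decomposedMeshesCollide(const std::vector<ConvexPart>& partsA,
                             const std::vector<ConvexPart>& partsB,
                             OverlapTest overlapTest) {
    for (const ConvexPart& a : partsA)
        for (const ConvexPart& b : partsB)
            if (overlapTest(a, b))
                return true;  // stop at the first overlapping pair
    return false;
}
```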

If your mesh is concave but well represented by a heightfield (e.g. a terrain without caves or overhangs), then you can reduce collision detection to a 2D sweep over the heightfield, checking whether the highest terrain height within the colliding object's footprint is above the lowest point of the object there.
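A minimal sketch of that 2D sweep, assuming a regular-grid heightfield and a collider described by its rectangular footprint and its lowest point (the Heightfield type and names here are hypothetical):

```cpp
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <vector>

// Regular-grid heightfield: heights[z * width + x] is the terrain height at
// cell (x, z), with cells spaced 'cellSize' apart in world units.
struct Heightfield {
    int width = 0, depth = 0;
    float cellSize = 1.0f;
    std::vector<float> heights;

    float heightAt(int x, int z) const {
        x = std::clamp(x, 0, width - 1);
        z = std::clamp(z, 0, depth - 1);
        return heights[z * width + x];
    }
};

// Returns true if the terrain pokes above the collider's lowest point anywhere
// inside the collider's rectangular footprint [minX,maxX] x [minZ,maxZ]
// (world units). This is the 2D sweep described above.
bool collidesWithHeightfield(const Heightfield& hf,
                             float minX, float maxX,
                             float minZ, float maxZ,
                             float colliderBottomY) {
    int x0 = int(std::floor(minX / hf.cellSize));
    int x1 = int(std::floor(maxX / hf.cellSize));
    int z0 = int(std::floor(minZ / hf.cellSize));
    int z1 = int(std::floor(maxZ / hf.cellSize));

    float highest = -FLT_MAX;  // highest terrain sample under the footprint
    for (int z = z0; z <= z1; ++z)
        for (int x = x0; x <= x1; ++x)
            highest = std::max(highest, hf.heightAt(x, z));

    return highest >= colliderBottomY;
}
```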

If you need to find collisions with more general concave meshes that aren't heightfields and don't break down neatly into convex parts (meaning they have so many concavities that you'd end up with something close to one convex hull per triangle anyway), or worse, non-manifold geometry (meshes that don't have a clearly defined interior and exterior), then unfortunately the only solution is to test for intersection against each polygon in the mesh. If you have to collide two such meshes, then you need to test each pair of triangles for intersection (yuck!). About the best you can do here is organize the triangles into some kind of acceleration structure, like a spatial hash, BSP tree, or octree, that lets you quickly find relevant triangles without visiting every single one.
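As a sketch of the acceleration-structure idea, here is a simple spatial hash that buckets triangle indices by the uniform grid cells their bounding boxes touch, so a query only visits triangles near the query box instead of the whole mesh. Types and names are hypothetical, and the candidates it returns still need exact triangle-triangle tests afterwards:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 a, b, c; };

// Combine integer cell coordinates into one hash key (classic prime multipliers).
static std::uint64_t cellKey(int x, int y, int z) {
    return std::uint64_t(x) * 73856093u ^
           std::uint64_t(y) * 19349663u ^
           std::uint64_t(z) * 83492791u;
}

class TriangleSpatialHash {
public:
    explicit TriangleSpatialHash(float cellSize) : cellSize_(cellSize) {}

    // Insert a triangle index into every cell its bounding box touches.
    void insert(int triIndex, const Triangle& t) {
        Vec3 lo{std::min({t.a.x, t.b.x, t.c.x}),
                std::min({t.a.y, t.b.y, t.c.y}),
                std::min({t.a.z, t.b.z, t.c.z})};
        Vec3 hi{std::max({t.a.x, t.b.x, t.c.x}),
                std::max({t.a.y, t.b.y, t.c.y}),
                std::max({t.a.z, t.b.z, t.c.z})};
        forEachCell(lo, hi, [&](std::uint64_t key) {
            cells_[key].push_back(triIndex);
        });
    }

    // Collect candidate triangle indices whose cells overlap the query box.
    // May contain duplicates for triangles spanning several cells.
    std::vector<int> query(const Vec3& lo, const Vec3& hi) const {
        std::vector<int> result;
        forEachCell(lo, hi, [&](std::uint64_t key) {
            auto it = cells_.find(key);
            if (it != cells_.end())
                result.insert(result.end(), it->second.begin(), it->second.end());
        });
        return result;
    }

private:
    // Visit every grid cell overlapped by the box [lo, hi].
    template <typename Fn>
    void forEachCell(const Vec3& lo, const Vec3& hi, Fn fn) const {
        int x0 = int(std::floor(lo.x / cellSize_)), x1 = int(std::floor(hi.x / cellSize_));
        int y0 = int(std::floor(lo.y / cellSize_)), y1 = int(std::floor(hi.y / cellSize_));
        int z0 = int(std::floor(lo.z / cellSize_)), z1 = int(std::floor(hi.z / cellSize_));
        for (int z = z0; z <= z1; ++z)
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                    fn(cellKey(x, y, z));
    }

    float cellSize_;
    std::unordered_map<std::uint64_t, std::vector<int>> cells_;
};
```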

If you can afford to approximate to some fixed resolution, and your mesh doesn't cover "too huge" an area, you can also create a signed distance function (SDF) from the mesh and store it in a 3D array, where the value at each lattice point is the distance from there to the closest point on the mesh (with negative values for points "inside"). These are expensive to create, but fast to query, especially for small colliders. When you transform the mesh, you transform your query point by the inverse of that transform, so you don't have to recalculate the SDF every time you move the mesh.
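A minimal sketch of the query side, assuming the SDF has already been baked into a regular 3D grid in the mesh's local space; the baked grid never changes when the mesh moves, because the query point is mapped into local space first (types and names are hypothetical):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Signed distance field sampled on a regular grid in the mesh's local space.
// Negative values mean the sample point is inside the mesh.
struct SdfGrid {
    Vec3 origin{};          // local-space position of lattice point (0,0,0)
    float spacing = 1.0f;   // distance between neighbouring lattice points
    int nx = 0, ny = 0, nz = 0;
    std::vector<float> values;  // size nx * ny * nz

    float at(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return values[(z * ny + y) * nx + x];
    }

    // Trilinearly interpolated distance at a point given in the mesh's local space.
    float sample(const Vec3& p) const {
        float fx = (p.x - origin.x) / spacing;
        float fy = (p.y - origin.y) / spacing;
        float fz = (p.z - origin.z) / spacing;
        int x = int(std::floor(fx)), y = int(std::floor(fy)), z = int(std::floor(fz));
        float tx = fx - x, ty = fy - y, tz = fz - z;
        auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
        float c00 = lerp(at(x, y,     z),     at(x + 1, y,     z),     tx);
        float c10 = lerp(at(x, y + 1, z),     at(x + 1, y + 1, z),     tx);
        float c01 = lerp(at(x, y,     z + 1), at(x + 1, y,     z + 1), tx);
        float c11 = lerp(at(x, y + 1, z + 1), at(x + 1, y + 1, z + 1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }
};

// The sphere centre must first be mapped into the mesh's local space with the
// inverse of the mesh's world transform; then one SDF sample answers the query.
bool sphereHitsSdf(const SdfGrid& sdf, const Vec3& sphereCentreInMeshLocal, float radius) {
    return sdf.sample(sphereCentreInMeshLocal) < radius;
}
```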

All of these approaches get much more expensive the more polygons are in your mesh, so it’s common for the CPU-side physics mesh to be significantly lower-poly than the display mesh, whether it’s crafted by hand or automatically decimated by some mesh simplification algorithm on import.
