And I frankly find it fascinating.
Usually when doing terrain blending you make your trees, stones and whatnot blend into the terrain. This can be done in a lot of different ways: blending to the ground texture the closer you get, aligning the normals at the edge, or fading the transparency close to the ground.
All of these techniques run on each and every terrain object, and there can be a lot of objects on screen at the same time that need to blend seamlessly into the ground.
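As a rough sketch, the first of those variants could look something like the fragment snippet below. All of the names here are hypothetical, and how you look up the terrain height under the object is up to you; the point is just that every single object pays this cost itself:

```hlsl
// Per-object blending sketch (hypothetical names): fade the object's own
// texture toward the ground texture as the fragment nears the terrain surface.
sampler2D _ObjectTex;
sampler2D _GroundTex;
float _TerrainHeight;   // terrain surface height under this object (however you fetch it)
float _BlendRange;      // world-space height range over which to blend

fixed4 BlendToGround(float2 uv, float3 worldPos)
{
    fixed4 objectCol = tex2D(_ObjectTex, uv);
    fixed4 groundCol = tex2D(_GroundTex, worldPos.xz); // world-planar ground UVs
    // 0 at the terrain surface, 1 once we are _BlendRange above it.
    float t = saturate((worldPos.y - _TerrainHeight) / _BlendRange);
    return lerp(groundCol, objectCol, t);
}
```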
They work really well! Just look at the new Battlefront games, the Far Cry games, etc.
Here’s an example of terrain blending in one of the Far Cry games, where they shape the bottom of the trees and blend the textures based on the heightmap data for that specific location (source: https://www.gdcvault.com/play/1025480/Terrain-Rendering-in-Far-Cry):
Nintendo needed a similar technique for the terrain in Zelda: Breath of the Wild, but they probably didn’t want to spend all those expensive CPU and GPU cycles when their hardware is a bit more limited.
So instead of drawing the terrain and then blending objects into the ground one at a time, they do the reverse: first they render every object that needs to blend with the terrain, then they render the terrain and blend the terrain into all the rendered objects!
To be able to do this they sample values from the depth buffer. That’s why the objects need to be rendered before the terrain; otherwise the terrain has no clue what it should blend into.
In this glitched Zelda video you can see for yourself what it looks like if the terrain never renders. (Pro tip: pause the video and use the two keys just right of M on your keyboard, comma and period, to step through the frames.)
I’ve grabbed these two frames to show the blending in action (look closely at the brick road):
OK, so to try it out myself I created a simple scene in Unity, and to show off what I’m doing I used Amplify Shader Editor for the work (it’s often easier to get an idea across when it’s visual rather than a wall of code).
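For those who’d rather read code than nodes, my starting point boils down to roughly the shader below. This is a hand-written sketch of the same idea for Unity’s built-in pipeline, not the actual Amplify output: the terrain draws after the opaque objects, reads their depth from _CameraDepthTexture, and fades its alpha where it gets close to them. It assumes the camera is generating a depth texture (e.g. Camera.main.depthTextureMode |= DepthTextureMode.Depth; in a script) and that the terrain sits in the transparent queue so it doesn’t end up in that texture itself:

```hlsl
Shader "Sketch/TerrainDepthBlend"
{
    Properties
    {
        _MainTex ("Terrain Texture", 2D) = "white" {}
        _BlendDistance ("Blend Distance", Float) = 0.5
    }
    SubShader
    {
        // Drawn after the opaque objects, so _CameraDepthTexture already
        // holds their depth ("render the objects first, the terrain last").
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float _BlendDistance;
            UNITY_DECLARE_DEPTH_TEXTURE(_CameraDepthTexture);

            struct v2f
            {
                float4 pos       : SV_POSITION;
                float2 uv        : TEXCOORD0;
                float4 screenPos : TEXCOORD1; // z carries this fragment's eye depth
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
                o.screenPos = ComputeScreenPos(o.pos);
                COMPUTE_EYEDEPTH(o.screenPos.z);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Eye depth of whatever was rendered beneath/behind the terrain.
                float sceneZ = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(
                    _CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
                // Eye depth of this terrain fragment.
                float terrainZ = i.screenPos.z;

                // Raw distance to the object beneath, along the view direction.
                float diff = sceneZ - terrainZ;

                // Fade the terrain out as it approaches the object beneath,
                // letting the object show through at the seam.
                fixed4 col = tex2D(_MainTex, i.uv);
                col.a = saturate(diff / _BlendDistance);
                return col;
            }
            ENDCG
        }
    }
}
```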
The first thing I noticed is that using the depth buffer “raw” to blend my objects works, but the result is heavily dependent on the angle between the camera and the ground. If I look straight down on a flat plane under my terrain, the distance between my terrain surface and the object beneath is shorter than if I look at a steep angle, because the same world-space gap stretches over a longer distance along the view ray the more grazing the view gets.
Notice the depth between the red and the black line along the view direction (red = object beneath the terrain, black = terrain).
At first I thought I had misunderstood what Nintendo was doing, since there are a lot of objects aligned with the terrain where the blend happens (like the bricks in the example above, but also the overhangs on mountains in the terrain), and this simply wouldn’t cut it.
But after some thinking I realized we can simply reverse this effect: calculate the tangent of the angle between the terrain normal and the view direction (camera vector) and divide the distance we used earlier (between the terrain and the object beneath) by it.
It’s not perfect, but it’s way better than the result before, and visually no different from what Nintendo is doing in BotW.
This is the same scene again but with the fix:
And here’s a picture of the nodes in Amplify Shader Editor if you are curious about the math:
http://emilmeiton.com/Blog/ZeldaTerrainBlending/ZeldaBlendingNodes.JPG
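In code, my reading of that graph comes down to the snippet below, extending the sketch from earlier. You’d pass the world normal and world position down from the vertex shader (UnityObjectToWorldNormal(v.normal) and mul(unity_ObjectToWorld, v.vertex).xyz); the clamps are my own additions to keep the division safe:

```hlsl
// In the fragment shader, replacing the blend line from the earlier sketch.
float3 viewDir  = normalize(_WorldSpaceCameraPos - i.worldPos);
float  cosAngle = abs(dot(normalize(i.worldNormal), viewDir));
float  sinAngle = sqrt(saturate(1.0 - cosAngle * cosAngle));

// Tangent of the angle between the terrain normal and the view direction.
// The max() is my addition, avoiding a divide by zero when looking straight
// down (where the raw result was already fine to begin with).
float tanAngle = sinAngle / max(cosAngle, 0.001);

// Divide the raw, view-dependent depth difference by the tangent so the
// blend width stays roughly constant as the camera angle changes.
float correctedDiff = diff / max(tanAngle, 0.1);
col.a = saturate(correctedDiff / _BlendDistance);
```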
I absolutely think this technique is great, and it’s obvious to me that to make it work really well they need some great terrain tools to sculpt the terrain to match the placed objects (especially the overhangs), probably with some kind of brush that pushes the terrain up or down to meet the objects’ geometry.
Cheers!