Behind the Scenes - Escape Room

From Verge3D Wiki

Revision as of 15:33, 14 October 2021

The Escape Room is an interactive short created using Blender 2.93 and Verge3D 3.8. The following is a behind-the-scenes look at the making of the Escape Room.

The Escape Room was developed by Route 66 Digital as an internal R&D project and was released to the public so they could join in the fun. When we started this project, we had a few specific topics we wanted to learn more about: memory management, redraws, lighting, and light probes.



For us, memory management was one of the biggest concerns with respect to making very large experiences. Most of this was my own fault. My background in video animation led to very large models and textures, not suited for online use. I was making detailed models with hundreds of parts each and an equally large number of independent textures, or buying models online and using them without optimization. For this and all future projects, I knew I needed a new process. For the room, I ditched the idea of modeling in NURBS and used Blender's native polygon modeling. I spent time trying to create high-resolution models with my usual detail, planning to bake their textures and normals down to lower-poly versions. But since the room is mostly composed of simple cubes, I skipped this step and created only low-poly models, adding bevels and subdivision surfaces where I needed detail for the maps, knowing I would rely on textures to make up the detail where applicable.


After completing the initial build, I had roughly 1.4 million triangles, approximately 1,200 unique objects, 86 lights, and roughly 700 MB in textures. Yes, you are right... that would never work. Because our goal was to determine the effects of atlas textures, I kept a lot of detail in for the texture baking, knowing that I would be reducing the poly count and combining objects once the textures were baked. This process took the longest. Initial testing and noise levels in the maps required that I bake the textures at 8K with 10,000 samples using Cycles. Over the next three months I tested many different baking plugins and found that, at least for me, the best results came from setting the bakes up by hand. There are several YouTube videos about how to bake multiple textures to a map. What they never seem to tell you is a very important secret: noise. More importantly, they don't mention how to overcome it. I thought Blender would handle it during the rendering of the bake, but it does not. The good news is that the secret to removing the noise is built into Blender. Blender's compositor has a denoiser that works quite well, as long as the smallest details in the textures being denoised are about 10x larger than the noise; otherwise the denoiser will effectively erase them. This was especially apparent with text. To avoid losing the text to the denoise filter, I removed all the items that had text and kept those as independent maps, using PNG instead of JPG to keep the quality up.
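To get a feel for why hundreds of megabytes of textures are unworkable in a browser, here is a rough back-of-the-envelope sketch. The figures it prints are illustrative estimates, not measurements from the actual project: compressed JPG/PNG files typically decompress to raw RGBA on the GPU, so the on-disk size understates the real cost.

```python
# Rough GPU-memory estimate for baked atlas textures.
# All numbers here are illustrative assumptions, not measurements
# from the Escape Room project files.

def texture_bytes(width, height, channels=4, mip_overhead=4/3):
    """Uncompressed GPU footprint of one texture, including ~33% for mipmaps."""
    return int(width * height * channels * mip_overhead)

# A single 8K RGBA bake target:
eight_k = texture_bytes(8192, 8192)
print(f"8K atlas: {eight_k / 2**20:.0f} MiB")   # ~341 MiB uncompressed

# The same atlas downscaled to 2K for delivery:
two_k = texture_bytes(2048, 2048)
print(f"2K atlas: {two_k / 2**20:.0f} MiB")     # ~21 MiB
```

This is why baking at 8K is fine as a working resolution for denoising and cleanup, but the delivered atlases need to be scaled down before export.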


Once the maps were completed, I reduced and optimized the scene geometry and applied the baked maps. Since I had captured all the lighting and shadows in the baked textures, I was able to delete all the lights. The final scene has 72,331 vertices, 134,253 triangles, 134 objects, and 0 lights. Zero lights was key to lowering redraws, which in turn affects the performance of the application. As an added bonus, the rendered scene looked more in line with a video than it would have with physical lighting. So that's a win-win.
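Working from the before-and-after figures quoted in this article, a quick calculation shows the scale of the reduction:

```python
# Scene-size reduction, using the figures quoted in the article.
triangles_before, triangles_after = 1_400_000, 134_253
objects_before, objects_after = 1_200, 134

tri_reduction = 1 - triangles_after / triangles_before
obj_reduction = 1 - objects_after / objects_before

print(f"Triangles: {tri_reduction:.0%} fewer")  # 90% fewer
print(f"Objects:   {obj_reduction:.0%} fewer")  # 89% fewer
```

The object count matters as much as the triangle count: in WebGL each object typically costs at least one draw call per frame, so combining 1,200 objects into 134 directly reduces the redraw work described above.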


With three of the four goals completed, it was time to move on to light probes. The glass walls in the middle of the room reflected the HDR background, and we needed them to reflect the room. There were other objects in the room that needed reflection attention as well. I started by placing small reflection cubes throughout the scene but quickly realized the performance price. Although not perfect, I settled for one large reflection cube.
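The performance price of many small probes comes from how cube reflection probes work: each probe captures the scene once per cube face, so every additional probe multiplies the render work. A minimal sketch, using assumed probe counts and the final scene's object count as a stand-in for draw calls per view:

```python
# Why many small reflection probes get expensive: each cube probe
# renders the scene once per face. The probe counts below are
# illustrative assumptions, not figures from the project.

CUBE_FACES = 6

def probe_renders(num_probes, draw_calls_per_view):
    """Extra scene renders and draw calls needed to capture all probes."""
    renders = num_probes * CUBE_FACES
    return renders, renders * draw_calls_per_view

# A dozen small probes vs. one large probe, assuming ~134 draw
# calls per view (one per object in the final scene):
print(probe_renders(12, 134))  # (72, 9648)
print(probe_renders(1, 134))   # (6, 804)
```

This is the trade-off behind settling for one large reflection cube: less accurate reflections, but an order of magnitude less capture work.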


At this point the research was complete, and a project like this would usually be backed up and stored for reference, but I had another idea: the Escape Room. Once the project was exported to glTF, the focus became all about puzzles and JavaScript.

The game interaction uses very standard Verge3D features: on drag, on rotate, on click, etc. We did notice that for drag events it's better to have the camera off angle rather than straight on; otherwise the combination lock would rotate in unpredictable directions, making it impossible to enter the combination. I used raycasting to navigate, along with a constraint material to keep the camera within the bounds of the room. Since this was going to be available on mobile phones, I set up different tween targets for mobile and desktop.


Luckily for me, I was able to hand off the design of the menu system to our programming team, and they produced some great JavaScript that allowed me to implement a preloader with video and a menu system. Game inventory items, hints, camera rotation controls, and sound controls were added. We kept the instructions to a minimum. The video was created in Apple Motion using actual screenshots of the game.

We hope you enjoy the game.