Behind the Scenes - Escape Room

Escape Room by Route 66 Digital

The Escape Room is an interactive short created using Blender 2.93 and Verge3D 3.8. The following is a behind-the-scenes look at the making of the Escape Room.

The Escape Room was developed by Route 66 Digital as an internal R&D project and was released to the public so they could join in the fun. When we started this project we had a few specific items we were trying to learn more about: memory management, redraws, lighting, and light probes.

For us, memory management is always one of the biggest concerns. It affects which devices can run the application, the frames per second (FPS), and the load times. Up to this project, most of the memory consumption came from how we made and textured models. My background in video animation made for some very large models and textures, not suited for online use. I was making detailed models with hundreds of parts per model and an equally large number of independent textures, or buying models online and using them without optimization. For this and all future projects, I knew I needed a new process. For the room, I ditched the idea of modeling in NURBS and used the native polygon modeling in Blender. I spent time trying to create high-resolution models with my usual detail and had planned to bake those textures and normals down to a lower poly. But the room is mostly comprised of simple cubes, so I skipped this step and created only low-poly models, adding bevels and subdivision surfaces where I needed to get the details for the maps (a minimal sketch of that kind of setup follows below). The other reason I skipped this process was the normal maps: they were going to be very large, and I would have needed several. So I opted not to use normal maps for this project.
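For reference, this is roughly what that kind of modifier setup looks like in Blender's Python API (bpy). The values here are illustrative placeholders, not the project's actual settings.

```python
import bpy

# Add light bevels and subdivision to a low-poly piece so the
# detail shows up in the baked maps (placeholder values).
obj = bpy.context.object

bevel = obj.modifiers.new(name="Bevel", type='BEVEL')
bevel.width = 0.005      # small bevel to catch edge highlights in the bake
bevel.segments = 2

subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2         # viewport subdivision
subsurf.render_levels = 2  # subdivision used when baking/rendering
```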

Blender - wireframe

After completing the initial build, I had roughly 1.4 million triangles, approximately 1200 unique objects, 86 lights, and roughly 700 MB in textures. Next came the process of baking the atlas maps. This process took the longest. Initial testing and the noise levels in the maps required that I bake the textures at 8K with 10,000 samples using Cycles. Over the next three months I tested many different baking plugins and found that, at least for me, the best results came from setting the bakes up by hand. There are several YouTube videos about how to bake multiple textures to a map. What they never seem to tell you is the very important secret: noise. More importantly, they don't mention how to overcome it. I thought Blender would have handled it during the rendering of the bake, but it does not. The good news is that the secret to removing the noise is built into Blender. Blender's compositor has a Denoise node that works quite well as long as the smallest detail in the textures being denoised is about 10x larger than the noise; otherwise the denoiser will effectively erase the small details. This was especially apparent with text. To avoid losing the text during a denoise pass, I ended up removing all the items that had text and kept those as independent maps, using PNGs instead of JPGs to keep the quality up. A rough sketch of the bake-and-denoise setup is shown below.

Another big time saver was UV Packmaster. It optimized the UVs and packed them in tighter and faster than if I had positioned them manually. It also provided a way to set the size of the islands, giving more space to map areas with small details or text. I only wish I had learned that before I had spent days rendering out the first few maps.
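The two stages can be sketched with bpy as follows. The image name, file paths, and exact settings are placeholders (the resolution and sample count simply mirror the numbers above), and in practice the setup was done through the UI rather than a script.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 10000          # high sample count to keep bake noise manageable

# 8K target image for the atlas
atlas = bpy.data.images.new("atlas_bake", width=8192, height=8192)

# Every material on the objects being baked needs an Image Texture node
# pointing at the atlas, and that node must be active so Cycles knows
# where to write the bake.
for obj in bpy.context.selected_objects:
    for slot in obj.material_slots:
        mat = slot.material
        if mat is None or not mat.use_nodes:
            continue
        tex = mat.node_tree.nodes.new('ShaderNodeTexImage')
        tex.image = atlas
        mat.node_tree.nodes.active = tex

# Bake lighting and color into the atlas (objects must share one UV layout),
# then save the raw result.
bpy.ops.object.bake(type='COMBINED')
atlas.filepath_raw = "//atlas_bake.png"
atlas.file_format = 'PNG'
atlas.save()

# Denoise the baked image with the compositor's Denoise node.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

img_node = tree.nodes.new('CompositorNodeImage')
img_node.image = atlas
denoise = tree.nodes.new('CompositorNodeDenoise')
composite = tree.nodes.new('CompositorNodeComposite')
tree.links.new(img_node.outputs['Image'], denoise.inputs['Image'])
tree.links.new(denoise.outputs['Image'], composite.inputs['Image'])

# Match the output resolution to the atlas and render to run the compositor.
# Note: rendering also renders the active view layer before compositing,
# so keep that cheap or do this in a throwaway scene.
scene.render.resolution_x = 8192
scene.render.resolution_y = 8192
scene.render.resolution_percentage = 100
scene.render.filepath = "//atlas_denoised.png"
bpy.ops.render.render(write_still=True)
```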

Atlas Maps

Once the maps were completed, I reduced and optimized the scene geometry and applied the baked maps. I had captured all the lighting and shadows in the baked textures, so I was able to delete all the lights (a minimal sketch of that cleanup follows below). The final scene has 72,331 vertices, 134,253 triangles, 134 objects, and 0 lights. Zero lights were key to lowering redraws, which in turn affects the performance of the application. As an added bonus, the quality of the rendered scene looked more in line with a video than if we had used physical lighting. So that's a win-win.
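A minimal bpy sketch of that cleanup, assuming a single baked atlas. The file name is a placeholder, and depending on the setup the atlas might feed an Emission shader instead of Base Color.

```python
import bpy

# One shared material driven by the baked atlas (placeholder file name).
baked = bpy.data.materials.new("AtlasBaked")
baked.use_nodes = True
nt = baked.node_tree
tex = nt.nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images.load("//atlas_denoised.png")
bsdf = nt.nodes["Principled BSDF"]
nt.links.new(tex.outputs['Color'], bsdf.inputs['Base Color'])

# Assign it to every mesh that shares the atlas UV layout.
for obj in bpy.data.objects:
    if obj.type == 'MESH':
        obj.data.materials.clear()
        obj.data.materials.append(baked)

# Lighting and shadows now live in the bake, so the lights can go.
for obj in list(bpy.data.objects):
    if obj.type == 'LIGHT':
        bpy.data.objects.remove(obj, do_unlink=True)
```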

With three of the four goals completed, it was time to move on to light probes. The glass walls in the middle of the room reflected the HDR background, and we needed them to reflect the room. There were other objects in the room that needed reflection attention as well. I started by placing small reflection cubemaps throughout the scene but quickly realized the performance price. Although not perfect, I settled for one large reflection cubemap (see the sketch below).
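In Blender 2.9x, adding that single probe looks roughly like this; the location and influence values are placeholders rather than the project's actual numbers.

```python
import bpy

# Add one large Reflection Cubemap probe that covers the whole room.
bpy.ops.object.lightprobe_add(type='CUBEMAP', location=(0.0, 0.0, 1.5))
probe = bpy.context.object
probe.data.influence_distance = 6.0   # big enough to enclose the room
probe.data.falloff = 0.5
probe.data.clip_start = 0.1
```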

At this point the research was complete, and normally the project would be backed up and stored for reference, but I had another idea: the Escape Room. Once the project was exported to glTF, the focus became all about puzzles and JavaScript.

The game interaction uses very standard Verge3D features: on drag, on rotate, on click, and so on. We did notice that for drag events it's better to have the camera off-angle than straight on; otherwise the rotation of the combination lock would go in unpredictable directions, making it impossible to enter the combination. I used raycasting to navigate, along with a constraint material to keep the camera within the bounds of the room. Since this was going to be available on mobile phones, I set up different tween targets for mobile and desktop.

Luckily for me, I was able to hand off the design of the menu system to our programming team, and they spit out some great JavaScript that allowed me to implement a preloader with video and a menu system. Game inventory items, hints, camera rotation controls, and sound controls were added. We kept the instructions to a minimum. The video was created in Apple Motion using actual screenshots of the game.

We hope you enjoy the game.