- 2020-03-02 at 7:45 pm #24125
I’m trying to make sure I’ve optimized my application for performance. I’m looking over the section in the manual on optimization, and I have a question regarding draw calls.
When dealing with a model with multiple shaders, is it best to break up the meshes so that each object uses a single material, or is it better to combine as many meshes as you can into a single object and just assign multiple materials to that object using poly selections?
I’m attaching two screenshots from the console for my application, one for each approach. I can’t tell which would be better for performance and load time.
Attachments: (screenshots not shown)

2020-03-03 at 9:20 am #24184
Yuri Kovelenov (Developer)
From what I see, the second scene (with more meshes) will work slightly more efficiently. In practice, though, there will not be much difference, as the number of draw calls is quite low in both cases.
What matters a lot is the number of unique shaders in your scene. Shader compilation is slow in WebGL. For that reason it is always better to have, e.g., one shader and create variations of it by modifying colors or swapping textures with Puzzles.
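A minimal sketch of Yuri's point in plain JavaScript (this is not the Verge3D or Puzzles API; all names here are illustrative): each distinct material roughly corresponds to one compiled shader program, so sharing one material across meshes and varying only its color keeps compilations to a minimum.

```javascript
// Sketch, assuming one compiled shader program per distinct material object.
function countShaderPrograms(meshes) {
  // Distinct material references ~ distinct programs to compile.
  return new Set(meshes.map(m => m.material)).size;
}

const sharedMat = { name: 'pbr-base' };

// Ten meshes, one shared material, per-mesh color set at runtime
// (the "vary colors with Puzzles" approach): only one program compiles.
const shared = Array.from({ length: 10 }, (_, i) =>
  ({ material: sharedMat, color: `hsl(${i * 36}, 80%, 50%)` }));

// Ten meshes, each with its own material: ten programs compile.
const unique = Array.from({ length: 10 }, (_, i) =>
  ({ material: { name: 'pbr-' + i } }));

console.log(countShaderPrograms(shared)); // 1
console.log(countShaderPrograms(unique)); // 10
```

The real engine's program-caching rules are more nuanced, but the shape of the trade-off is the same: fewer unique shaders, fewer compiles.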
Another thing is textures. Big textures will slow down both loading and rendering. Sometimes it is better to add more geometry rather than use normal maps, etc.
Finally, avoid overly complex shaders. Procedural nodes such as Noise are slow, and so is post-processing.
2020-03-03 at 4:34 pm #24219
Thank you for looking at that Yuri. I’ll try to keep all of this in mind as we continue to develop Verge content.
The one suggestion I think I’d have the most difficulty with would be limiting shaders and changing variations with Puzzles. I would think this would make for a difficult look-dev process. Does anyone here use this approach? If so, any suggestions on how best to work that way? I’m not even sure how to use separate properties for shaders that are shared among different mesh objects.

2020-03-03 at 6:23 pm #24223
jem (Licensee)
I do use the approach that Yuri described. I think that the best way is to pack the UVs from multiple objects onto one map. You can color code that map and use it in a shader to differentiate objects and feed values into shader parameters. The industrial robot demo uses a similar technique. In the case of the robot, multiple objects share the same shader and the color/roughness/metallic values are stored in shared maps.
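A small sketch of the technique jem describes, in plain JavaScript (the names are illustrative and not taken from Verge3D or the industrial robot demo): pack per-object material parameters into one shared map, and have every object's shader sample its own region of that map.

```javascript
// Sketch: a tiny 4x1 shared "parameter texture". Each texel packs
// [roughness, metallic] for one object; all objects share one shader
// that reads the map at the object's own (packed) UVs.
const paramMap = [
  [0.9, 0.0],  // object 0: rough plastic
  [0.2, 1.0],  // object 1: polished metal
  [0.5, 0.0],  // object 2: satin paint
  [0.3, 1.0],  // object 3: brushed metal
];

// Stand-in for the shader's texture lookup: each object's UVs land on
// its own texel, so one shader serves every object.
function sampleParams(paramMap, objectIndex) {
  const [roughness, metallic] = paramMap[objectIndex];
  return { roughness, metallic };
}

console.log(sampleParams(paramMap, 1)); // { roughness: 0.2, metallic: 1 }
```

In practice the map is an actual image (e.g. authored in Substance, as mentioned below) and the lookup happens per-fragment in the shared shader, but the packing idea is the same.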
As long as you are frugal with the size and quantity of your image maps, the shaders that use them should outperform a complex procedural shader. A tool like Substance can help with the production of these combined PBR maps.
As Yuri points out, you can always use puzzles to swap the maps on-the-fly if that makes sense for your scene.
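A hedged sketch of the swap-on-the-fly idea in plain JavaScript (not the Puzzles API): because the meshes share one material, reassigning a single map reference updates every mesh at once, with no new shader compilation.

```javascript
// Sketch: swap a texture slot on a shared material at runtime. Every
// mesh referencing this material picks up the change.
function swapMap(material, slot, newMap) {
  material[slot] = newMap;
  return material;
}

const mat = { name: 'shared-pbr', colorMap: 'atlas_red.png' };
swapMap(mat, 'colorMap', 'atlas_blue.png');
console.log(mat.colorMap); // 'atlas_blue.png'
```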
FWIW, the shader compile times in your screenshot look reasonable to me. I certainly have seen much worse.
2020-03-03 at 9:38 pm #24229
Thank you for elaborating on this, Jem. I agree, the example I’m currently working on is pretty tame. Some of the projects we have queued up, though, will involve much larger models with more shaders. I’m just trying to establish good practices going forward so that we don’t hit any performance issues.
Thanks again!

2020-03-05 at 6:24 am #24235