Best practices for texture usage

Hi.

I know from game engines that it is better to work with fewer high-resolution maps (up to 8K) and combined textures than with many smaller individual maps, because the latter increases the number of draw calls. Is this also the case when a model is shown with Shapespark / WebGL in a browser?
I think individual textures are the main ingredient for a really good-looking model, but that would mean working with programs like Substance Painter and with textures that are not tileable.
I also suspect that the number of displayed triangles is not the main problem, but rather the number of textures. So what is the best way to deal with them?

And I noticed that textures are not shown on reflective surfaces. For example, if I have a tree whose smaller branches and leaves are textured planes, they are not shown in the reflection, only the trunk, which is 3D geometry. Will this change some day?

Hi,

From our tests, reducing the number of draw calls is indeed very important for WebGL performance. If, for example, you have 100 objects covered with the same material, the engine automatically merges all of them into a single object, so they can be drawn with a single call.
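Our merging code isn't public, but the general idea, in a rough TypeScript sketch (the types and names are made up for illustration, not our actual API), is to concatenate the vertex data of all objects that share a material and draw the combined buffer with one call:

```ts
// Sketch of static batching: objects that share a material are merged so
// the GPU receives one draw call instead of one per object. The types and
// names here are illustrative, not Shapespark's actual API.

interface MeshObject {
  material: string;        // key identifying the shared material
  positions: Float32Array; // xyz triples, already in world space
}

// Group objects by material and concatenate their vertex buffers.
function mergeByMaterial(objects: MeshObject[]): Map<string, Float32Array> {
  const groups = new Map<string, MeshObject[]>();
  for (const obj of objects) {
    const group = groups.get(obj.material) ?? [];
    group.push(obj);
    groups.set(obj.material, group);
  }

  const merged = new Map<string, Float32Array>();
  for (const [material, group] of groups) {
    const total = group.reduce((n, o) => n + o.positions.length, 0);
    const buffer = new Float32Array(total);
    let offset = 0;
    for (const obj of group) {
      buffer.set(obj.positions, offset);
      offset += obj.positions.length;
    }
    merged.set(material, buffer);
  }
  return merged;
}

// Each merged buffer is then drawn with one WebGL call, e.g.:
//   gl.drawArrays(gl.TRIANGLES, 0, buffer.length / 3);
```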

Shapespark also automatically packs non-tileable textures into 4Kx4K texture atlases (8Kx8K textures are not widely supported by WebGL). It also downsizes large textures based on the area that the textures span in the scene. There is no need to do such optimizations manually.
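To picture the downsizing heuristic, here is a simplified sketch (the texel-density target and the power-of-two rounding are assumptions for illustration; the real criteria are internal):

```ts
// Sketch of picking a texture resolution from the surface area the texture
// spans in the scene. The texel-density constant is made up.

const TEXELS_PER_METER = 256; // assumed target texel density

// Given the total area (in m^2) covered by a texture, return a
// power-of-two size no larger than the original.
function targetTextureSize(areaM2: number, originalSize: number): number {
  const idealSize = Math.sqrt(areaM2) * TEXELS_PER_METER;
  const pow2 = 2 ** Math.floor(Math.log2(Math.max(idealSize, 1)));
  return Math.min(pow2, originalSize);
}

// Example: a 4096px texture that spans only 2 m^2 of geometry:
// targetTextureSize(2, 4096) === 256, so it can be downsized a lot
// without a visible quality loss.
```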

Note that some optimizations are not applied in the editor, so it is best to test performance in the preview mode.

> And I noticed that textures are not shown on reflective surfaces. For example, if I have a tree whose smaller branches and leaves are textured planes, they are not shown in the reflection, only the trunk, which is 3D geometry. Will this change some day?

This is a bug we missed, and we will fix it. It seems to affect textures with transparency (I’m guessing that your leaves have a transparent border); opaque textures do show up in reflections.

Hi Jan.

Thanks for making this clear.
Any idea when this bug will be fixed? Since the last fix was done within 24 hours, expectations are quite high! :wink:

I just realized that not rendering transparent objects while generating light probes was intentional. Light probes are HDR and use the alpha channel to encode a luminance multiplier (RGBM encoding), so alpha is not available to store transparency.
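For reference, RGBM stores an HDR color as LDR RGB plus a shared multiplier in the alpha channel, roughly like this (a sketch; the range constant varies between engines):

```ts
// Sketch of RGBM encoding: HDR RGB is divided by a multiplier M stored in
// the alpha channel, so alpha cannot also carry transparency.

const RGBM_RANGE = 6; // assumed maximum luminance multiplier

type RGBA = [number, number, number, number];

function encodeRGBM([r, g, b]: [number, number, number]): RGBA {
  // Multiplier needed to bring the brightest channel into [0, 1].
  let m = Math.max(r, g, b, 1e-6) / RGBM_RANGE;
  m = Math.min(Math.ceil(m * 255) / 255, 1); // quantize to 8 bits, clamp
  const scale = m * RGBM_RANGE;
  return [r / scale, g / scale, b / scale, m];
}

function decodeRGBM([r, g, b, m]: RGBA): [number, number, number] {
  const scale = m * RGBM_RANGE;
  return [r * scale, g * scale, b * scale];
}

// decodeRGBM(encodeRGBM([3, 1.5, 0.2])) ≈ [3, 1.5, 0.2], but the alpha
// slot is fully occupied by the multiplier m.
```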

But a workaround that treats all pixels with transparency lower than some threshold (say 0.2) as opaque, and keeps skipping the more transparent ones, could work well enough. This way transparent objects will be visible in reflections, but whatever is behind them won’t be.
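One way to picture the workaround (again a sketch of the idea, not the actual fix):

```ts
// Sketch of the threshold workaround: nearly opaque pixels are rendered
// as fully opaque in the light probes, the rest stay skipped (as all
// transparent surfaces are today).

const TRANSPARENCY_THRESHOLD = 0.2; // the example value from above

// True if a texel should appear (as opaque) in the light probes.
function opaqueInProbe(alpha: number): boolean {
  const transparency = 1 - alpha;
  return transparency < TRANSPARENCY_THRESHOLD;
}

// A leaf texel with alpha 0.95 passes the test and shows up in
// reflections; a mostly transparent border texel with alpha 0.1 is
// skipped, so the leaf silhouette is preserved.
```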

To answer your question: we will try to implement and push the workaround tomorrow for scenes hosted on our servers, and also include it in the next desktop app release (one to two weeks from now; not sure yet).

Okay, thank you. I already finished the exterior project today, so there is no need to rush.