Lighting and Rendering / Blender

This page describes lighting, rendering, and background properties that can be used with Verge3D for Blender.


Verge3D is designed to match the output of Blender's EEVEE renderer as closely as possible. It supports physically-based shading, lights, shadows, and image-based lighting (IBL).

Environment Lighting

Environment lighting is a very important component of the Verge3D graphics pipeline. You can illuminate your entire scene with an environment map alone, without using any light objects. See the Scooter demo as an example of this approach.

The default cube template provides an HDR texture for image-based lighting. You can replace this texture with your own file, or set up environment lighting from scratch. Here is the basic World nodes setup, which uses the same texture for both environment lighting and background:

When using HDR textures, make sure you set the Color Space setting to Linear.
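Environment maps are typically stored in the equirectangular (lat-long) layout. The mapping from a world-space direction to texture coordinates can be sketched in plain Python; dir_to_equirect_uv is a hypothetical helper for illustration (not part of Verge3D), and the axis convention shown is an assumption:

```python
import math

def dir_to_equirect_uv(x, y, z):
    # Map a unit direction vector to equirectangular (lat-long) UV coordinates,
    # the layout commonly used by HDR environment textures. With this convention
    # the forward direction (0, 0, -1) maps to the center of the image.
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 + math.asin(y) / math.pi
    return u, v
```

For example, straight up (0, 1, 0) maps to the top row of the texture (v = 1), and straight down to the bottom row (v = 0).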


In some cases, image-based lighting alone is not enough to illuminate your scene. If you need additional light sources, dynamic shadows, or movable lights (such as car headlights), you can use direct light sources.

Verge3D supports the following light types:

In addition, you can assign Shadow properties to your Point, Sun, or Spot lights. See the corresponding section for more info.

Reflection Cubemap Light Probes

Reflection Cubemap Light Probes are objects that add indirect lighting locally by capturing a reflection cubemap of their surroundings. This type of light probe adds specular indirect lighting to the scene.

For usage example, check out the following demo from the asset store: Light Probe.

The following properties are supported:

general probe settings:
Type
type of the influence volume: Sphere or Box. Only objects located inside this volume are affected by the probe's lighting.
Radius/Size
controls the size of the influence volume. You can also change the object's scaling to make the shape of the influence volume non-uniform.
Intensity
the intensity of the indirect lighting. Any value different from 1.0 is not physically correct.
Clipping Start
near clip distance. Objects located closer than this value won't be rendered into the reflection cubemap.
Clipping End
far clip distance. Objects located further than this value won't be rendered into the reflection cubemap.
object visibility settings:
Visibility Collection
limit objects that should appear on the reflection cubemap to this collection. If not specified, all scene objects are used.
Invert Visibility Collection
invert the selection of objects visible to this probe if Visibility Collection is specified.
Custom Parallax
enable custom settings for parallax correction. This group of settings defines a parallax volume, which is used to project the lighting captured by the probe. If Custom Parallax is not enabled, the parallax effect is calculated based on the Type and Radius/Size of the influence volume.
Type
type of the parallax volume: Sphere or Box.
Radius/Size
the size of the parallax volume.
Custom Influence
enable custom influence settings. This group of settings allows defining a collection of objects affected by this light probe. The Influence Collection (if specified) is used instead of the Type and Radius/Size general probe settings.
Influence Collection
limit objects affected by this light probe to this collection. If specified, it is used instead of the Type and Radius/Size general probe settings.
Invert Influence Collection
invert the selection of objects affected by this probe if Influence Collection is specified.
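To illustrate what parallax correction does, here is a minimal sketch of a sphere-parallax lookup: instead of sampling the cubemap with the raw reflection vector, the reflected ray is intersected with the parallax sphere, and the direction from the probe center to the hit point is used. The helper below is hypothetical (not Verge3D's actual shader code) and assumes the shaded point lies inside the sphere:

```python
import math

def parallax_corrected_reflection(pos, refl_dir, probe_center, radius):
    # Intersect the ray pos + t*refl_dir with the parallax sphere, then return
    # the cubemap lookup direction: from the probe center to the hit point.
    # Assumes refl_dir is normalized and pos lies inside the sphere.
    ox = [pos[i] - probe_center[i] for i in range(3)]
    b = sum(ox[i] * refl_dir[i] for i in range(3))
    c = sum(v * v for v in ox) - radius * radius
    t = -b + math.sqrt(b * b - c)  # farthest ray/sphere intersection
    hit = [ox[i] + t * refl_dir[i] for i in range(3)]
    n = math.sqrt(sum(v * v for v in hit))
    return [v / n for v in hit]
```

When the shaded point coincides with the probe center, the corrected direction equals the raw reflection vector; the further the point is from the center, the stronger the correction.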

To see the results of Reflection Cubemap objects in Blender's viewport, you first need to bake their cubemaps via the Bake Cubemap Only or Bake Indirect Lighting buttons, both located in the Indirect Lighting panel on the Render Properties tab.

The Cubemap Size property controls the size of the cubemap texture used by Reflection Cubemap objects.

The IBL Environment Mode setting also affects cubemaps generated by Reflection Cubemap objects.

Due to implementation specifics, there are differences in how Reflection Cubemap light probes behave in Blender and in Verge3D:

Reflection Plane Light Probes

Reflection Plane Light Probes are used to apply real-time reflections (indirect lighting) to planar objects, such as mirrors, floors, walls, etc.

For usage example, check out the following demo from the asset store: Light Probe.

The following properties are supported:

Distance
Influence distance of the probe.
Falloff
Controls how fast the probe influence decreases.
Clipping Offset
Near camera clipping for objects rendered in the light probe.
Visibility Collection
Collection of the objects visible for the probe.

Planar reflection probes can greatly reduce the performance of your scene, since they multiply the number of draw calls by a factor of N+1, where N is the number of reflection planes. To make rendering faster, specify a limited set of objects in the Visibility Collection property.
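The N+1 cost can be estimated with simple arithmetic. The helper below is hypothetical (actual numbers depend on the scene), but it shows why restricting the Visibility Collection pays off:

```python
def planar_probe_draw_calls(base_draw_calls, num_planes, visible_per_plane=None):
    # Each reflection plane re-renders the scene once. Restricting the probe's
    # Visibility Collection lowers the per-plane cost from the full scene down
    # to just the objects in that collection.
    if visible_per_plane is None:
        visible_per_plane = base_draw_calls  # worst case: everything reflected
    return base_draw_calls + num_planes * visible_per_plane
```

For example, a 100-draw-call scene with two planes costs roughly 300 draw calls, but only 120 if each plane reflects just 10 objects.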


By default, Blender and Verge3D render the same image for both background and environment lighting. To render them separately, use an advanced World nodes setup based on the Is Camera Ray output of the Light Path node. For example, to set the background to solid grey while keeping the HDR map for environment lighting:
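The node logic can be sketched in plain Python: Is Camera Ray is 1 for rays coming directly from the camera (the visible background) and 0 for rays used for lighting and reflections. world_color is a hypothetical helper for illustration, not part of any Verge3D API:

```python
def world_color(is_camera_ray, hdr_sample, background_color=(0.5, 0.5, 0.5)):
    # Mimics a World node tree where Light Path's "Is Camera Ray" output
    # selects a solid background color for camera rays, while lighting and
    # reflection rays keep sampling the HDR environment map.
    return background_color if is_camera_ray else hdr_sample
```

Camera rays thus see the grey background, while the scene is still lit and reflects the HDR environment.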

Global Rendering Properties

Global rendering properties are accessible on Blender's Render Properties panel.

Cubemap Size
texture size to use for environment lighting:
64, 128
do not use, 256 is the minimum value supported by Verge3D.
256
optimum quality with low memory consumption (recommended).
512
better quality with moderate memory consumption and reduced performance. Use it to render high-quality reflections, e.g. for jewelry.
1024
best quality with high memory consumption and low performance (generally not recommended).
2048, 4096
do not use, 1024 is the maximum value supported in Verge3D.
View Transform
additional color correction applied to Verge3D renderings:
Standard
no additional color correction is applied. Switch to this method if you don't need color correction, as it works slightly faster than Filmic.
Filmic
Blender's default method.
Filmic Log, Raw, False Color
unsupported, Verge3D will use Standard instead.
Enable Shadows and Shadow Map Side
shadow properties, read more about these here.
Anti-Aliasing
select which anti-aliasing algorithm to use for the scene:
Auto
use the system default method.
MSAA 4x
prefer multisample anti-aliasing with 4x samples if the target hardware supports it.
MSAA 8x
prefer multisample anti-aliasing with 8x samples if the target hardware supports it.
MSAA 16x
prefer multisample anti-aliasing with 16x samples if the target hardware supports it.
FXAA
force fast approximate anti-aliasing (FXAA).
None
disable anti-aliasing.
Use HDR Rendering

enable high-dynamic-range rendering.

If activated, Verge3D will use 16-bit float textures as rendering buffers. This feature can significantly improve the rendering of the Bloom post-processing effect as well as the smoothness of node-based gradient textures. The downside is increased GPU memory consumption and reduced performance.

This feature is not related to HDR textures which are commonly used to produce image-based lighting, thus activating it won't improve rendering of such textures.
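The memory cost of HDR render buffers can be estimated with a rough sketch, assuming RGBA buffers (the actual formats are an implementation detail):

```python
def framebuffer_bytes(width, height, use_hdr):
    # RGBA render target: 1 byte per channel in LDR mode, 2 bytes per channel
    # (16-bit float) when Use HDR Rendering is enabled.
    bytes_per_channel = 2 if use_hdr else 1
    return width * height * 4 * bytes_per_channel
```

A 1920x1080 HDR buffer takes twice the memory of its LDR counterpart, which is where the increased GPU memory consumption comes from.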

IBL Environment Mode
PMREM (slow)
high quality (default value).
Light Probe + Cubemap (medium)
reduced quality of image-based specular reflections, better performance.
Light Probe (fast)
disabled image-based specular reflections, highest performance.
Outlining Effect
see below.
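For reference, the GPU memory taken by an environment cubemap grows quadratically with the Cubemap Size setting described above. A rough estimate, assuming a 16-bit float RGBA format at 8 bytes per pixel (an assumption, not a documented format):

```python
def cubemap_memory_mb(size, bytes_per_pixel=8):
    # Six faces of size x size pixels; 8 bytes per pixel corresponds to an
    # RGBA16F texture (an assumption; actual formats and mipmaps may differ).
    return 6 * size * size * bytes_per_pixel / (1024 * 1024)
```

Under these assumptions, a 256-pixel cubemap takes about 3 MB, while a 1024-pixel one takes about 48 MB, which illustrates why the larger sizes are generally not recommended.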

Ambient Occlusion

Ambient Occlusion is a rendering technique that improves a scene's realism by adding soft shadows from indirect (ambient) lighting, based on how much each point in the scene is exposed to ambient light.

Blender uses Ground Truth Ambient Occlusion (GTAO), and Verge3D implements the same technique under the hood.

Verge3D supports the following AO settings which can be found in the Ambient Occlusion section on the Render Properties panel:

Ambient Occlusion
Enable Ambient Occlusion in the scene.
Distance
The radius (in system units) within which ambient occlusion is calculated. Higher values make the effect more noticeable by darkening and expanding the occluded areas, but can also decrease performance. Lower values make the occlusion less noticeable.
Factor
The strength of the occlusion effect.
Trace Precision
Higher precision means more accurate occlusion at increased performance cost. Lower precision means better performance but the effect appears less prominent.
Bent Normals
Use modified (or "bent") normals to sample the environment instead of the original ones. The modified normals represent the least occluded direction and make environment lighting a bit more realistic.

Outline Rendering

Outline rendering (aka silhouette edge rendering) is a common non-photorealistic rendering (NPR) technique that can significantly enhance the visual perception of your scene. This effect can be used for various applications such as e-learning, games, architecture visualization, and technical drawing.

To use object outlining (and optional glowing) in your Verge3D application, first enable the effect on the Blender's Render Properties panel:

then use the outline puzzle to apply it to your object(s).

The outline rendering does not work inside AR/VR sessions. Use other methods to highlight your objects, such as animation or changing material's color.

You can tweak outlining using the following properties:

Enabled — enable/disable the effect.

Edge Strength — outlining strength factor.

Edge Glow — intensity of additional glowing (rendered beyond the main outline edge).

Edge Thickness — outline edge thickness factor.

Pulse Period — pulse period in seconds. Set a non-zero value to make the effect pulsate.

Visible Edge Color — visible edge color.

Hidden Edge Color — color of the outline edge being rendered behind any other scene objects.

Render Hidden Edge — enable/disable rendering of the outline edge behind other scene objects.
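The pulsating behavior controlled by Pulse Period can be sketched as a simple sine oscillation. This is a plausible approximation for illustration, not necessarily Verge3D's exact formula:

```python
import math

def outline_pulse(time_sec, pulse_period):
    # Oscillates the outline intensity between 0 and 1 over the given period.
    if pulse_period <= 0:
        return 1.0  # pulsation disabled: constant full intensity
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * time_sec / pulse_period))
```

With a 2-second period, the outline reaches full intensity a quarter period in, then fades and brightens in a smooth cycle.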

Though it's possible to render glowing objects with it, in most cases outline rendering is used to improve the visual clarity of your scene. If you need glow from lamps or other bright objects, consider using the Bloom post-processing effect instead.

Per-Object Rendering Properties

Verge3D supports the following additional rendering properties for your geometry objects:

Rendering Order
Modifies the rendering order for a particular object. The smaller the index, the earlier the object will be rendered. In most cases, you need to tweak this value when using Blend transparency to eliminate transparency artifacts.
Frustum Culling
Enables/disables the frustum culling optimization for the object. Uncheck this option for skinned objects that can move outside their original bounds, to prevent them from being culled while still visible.
Receive Shadows
Whether to render shadows on the given object. See here for more info.
HiDPI Compositing
Render object using HiDPI compositing pass. See below for more info.
Fix Ortho Zoom
Apply the inverse orthographic camera zoom as a scaling factor for this object. Enable this property for objects parented to an orthographic camera, so that they don't move or scale when the user zooms the camera.

If your object is still zoomed in/out, clear its Parent Inverse matrix:

Fit to Camera Edge
See here for more info.
Visibility Breakpoints
Enable object visibility breakpoints. See here.
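The effect of the Rendering Order property can be illustrated with a small sketch: objects are drawn in ascending order of this index, so a transparent object can be forced to render after the objects seen through it. sort_for_rendering is a hypothetical helper, not part of the Verge3D API:

```python
def sort_for_rendering(objects):
    # Objects with a smaller Rendering Order index are drawn first; Python's
    # sort is stable, so ties keep their original scene order.
    # Each object is a (name, rendering_order) pair.
    return sorted(objects, key=lambda obj: obj[1])
```

For example, giving a glass object a higher index than the liquid inside it ensures the liquid is already in the frame when the glass blends over it.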

Rendering on HiDPI (Retina) Screens

As of today, most mobile and many desktop screens have high pixel density (so called "Retina" displays). These displays allow you to substantially increase quality of your renderings. The downside of rendering many pixels is reduced performance.

There are two approaches to making your content look better without making your scenes too slow:

The latter approach can be easily achieved by enabling the HiDPI Compositing property located on the Object Properties panel:

For usage example, check out the following demo from the asset store: Ring.
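The performance impact of HiDPI rendering comes from the squared device pixel ratio: rendering at full device resolution processes dpr^2 times more pixels than rendering at CSS resolution. A quick sketch of the arithmetic:

```python
def pixels_rendered(css_width, css_height, device_pixel_ratio):
    # A canvas rendered at full device resolution covers dpr times more pixels
    # in each dimension, i.e. dpr^2 times more pixels in total.
    return int(css_width * device_pixel_ratio) * int(css_height * device_pixel_ratio)
```

On a typical dpr=2 "Retina" display, full-resolution rendering processes four times as many pixels, which is why restricting HiDPI compositing to selected objects is the cheaper option.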

Fit to Camera Edge

Fit to Camera Edge is a technique to draw screen-space UI elements based on Blender models. This approach to UI design is more "native" to the 3D artist than using HTML/CSS, and does not require external tools. But there is more to it: since the UI elements are genuine 3D objects, you can apply shaders, lighting, animation, morphing, and more, making them truly interactive and seamlessly integrated into the scene.

When you parent some object to the camera, the following settings appear on the Object Properties panel:

Horizontal
Horizontal canvas edge to fit the object to: None — no horizontal fit, Left — fit to the left edge, Right — fit to the right edge, Stretch — scale the object horizontally to fit the screen.
Vertical
Vertical canvas edge to fit the object to: None — no vertical fit, Top — fit to the top edge, Bottom — fit to the bottom edge, Stretch — scale the object vertically to fit the screen.
Fit Shape
Canvas fit shape. Box — use object's bounding box, Sphere — use object's bounding sphere to fit the object on the screen.
Fit Offset
Additional offset used to fit the object on the screen. Effectively, this value extends the object's bounding box or sphere by the specified absolute value.

To fix possible issues with camera fit, clear the object's Parent Inverse matrix:

Visibility Breakpoints

Visibility Breakpoints allow you to show/hide content depending on the 3D viewport's width, height, or orientation. The most important use case of this feature is adapting your scene to different screen sizes and orientations. E.g., you may have two different models for portrait and landscape screen orientations.

If breakpoints are assigned to the current camera, Verge3D tries to switch to an alternative camera in the scene (one whose visibility breakpoints are satisfied); if no alternative camera is found, nothing happens.

Use Blender's Duplicate Linked feature to share geometry between two objects. One object will be rendered in portrait mode and the other in landscape mode. This way you can save a lot of GPU memory and decrease app loading time.

You can configure the breakpoints on the Object Properties panel:

Min Width
Minimum canvas width at which the object stays visible.
Max Width
Maximum canvas width at which the object stays visible.
Min Height
Minimum canvas height at which the object stays visible.
Max Height
Maximum canvas height at which the object stays visible.
Orientation
Screen orientation at which the object stays visible.
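The breakpoint rules above can be sketched as a single predicate. object_visible is a hypothetical helper for illustration; Verge3D evaluates these settings internally:

```python
def object_visible(canvas_w, canvas_h, min_w=0, max_w=float("inf"),
                   min_h=0, max_h=float("inf"), orientation="any"):
    # An object stays visible only while the canvas size lies within all of
    # its breakpoint ranges and the screen orientation matches.
    current = "landscape" if canvas_w >= canvas_h else "portrait"
    if orientation != "any" and orientation != current:
        return False
    return min_w <= canvas_w <= max_w and min_h <= canvas_h <= max_h
```

For example, a model restricted to landscape orientation is hidden on a 360x640 phone screen but shown on a 1920x1080 desktop canvas.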

Line Rendering

With this feature you can render Blender objects by using lines. The most common use case of Line Rendering is drawing curve objects, which do not have any geometry on their own. However, you can also apply this technique to regular meshes and surfaces:

Line Rendering is activated in Verge3D Settings located on the Object Data Properties panel:

Here you can also assign the Emission color and the width of the rendered lines.

Clipping Planes

Clipping planes (aka section planes, cross-section planes, mesh sections) are a technique used to reveal the internal arrangement of complex objects, such as buildings, cars, appliances, gadgets, machines, etc.

To add a new clipping plane, use the Clipping Plane menu item from the Blender's Add Object menu:

The objects in your scene will be clipped on the negative-Z side of the clipping plane object.

For usage example, check out the following demo from the asset store: Clipping Planes.

Clipping planes have the following properties:

Affected Objects
Collection of the objects clipped by the plane. If empty, all scene objects will be clipped.
Swap clipped and unclipped sides.
Clip Shadows
Clip shadows cast from the clipped objects.
Union Planes
Construct the union of all clipping planes affecting the object, rather than their intersection.
Filled Cross-Section
Fill cross-section between the clipping plane and the affected objects.
Cross-Section Color
Cross-section diffuse color and opacity.
Render Side
Cross-section render side. Specify Double-sided to render complex geometry with cuts and holes.
Cross-Section Size
Cross-section plane size. Increase this value if your scene is large.
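The clipping logic can be sketched in plain Python: a point is clipped when it lies on the negative-Z side of a plane, and multiple planes combine their clipped half-spaces as a union or an intersection. The exact mapping of the Union Planes flag onto any/all below is an assumption, and point_clipped is a hypothetical helper, not Verge3D's shader code:

```python
def point_clipped(point, planes, union=True):
    # Each plane is an (origin, z_axis) pair; a plane clips its negative-Z
    # half-space. union=True: the clipped region is the union of all
    # half-spaces (any plane clips the point); union=False: their
    # intersection (all planes must clip it).
    def signed_dist(origin, z_axis):
        return sum((point[i] - origin[i]) * z_axis[i] for i in range(3))
    results = [signed_dist(o, z) < 0 for o, z in planes]
    return any(results) if union else all(results)
```

With two opposing planes, the intersection mode clips nothing (no point is behind both), while the union mode clips everything on either negative side.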

Got Questions?

Feel free to ask on the forums!