
jem

Forum Replies Created

Viewing 15 posts - 61 through 75 (of 173 total)
  • in reply to: How to fix ground plane #26073
    jem
    Customer

    Hello @mrburns,
    Please post your test Blend file to the thread and we can take a look at it.
    1. I am not sure if you are saying that the cube is clipping the camera or if it is improperly framed by the web page in sneak peek mode. I think that this relates to your #3 question.
    2. I suspect that you have an HDRI lighting your scene in addition to the point light that is parented to your camera. HDRIs do not move with the camera, so that might explain your color shift. You can test this: go to the World tab, deselect Use Nodes, and try sneak peek again.
    3. The standard behavior of the v3d-container div is to fill whatever HTML container it is placed in. When you view the 3D scene directly in a browser, the container is the browser window, so it fills your screen. You could embed the V3D application in a simple 800x800px iframe, and your scene would be guaranteed to render at 800x800px. There are other HTML techniques that do the same thing (and better) if you want to play with CSS. I suspect that this issue (#3) is the cause of your first issue. Your viewport in Blender has a different aspect ratio than your viewport in the browser. You have Blender set to 1:1, and browsers are often close to 2:1 (depending on your monitor and a bunch of other factors). That could easily explain the undesired framing. You should also check your camera limits. If they are set too tight, your camera will be pushed toward the subject.
    Again, if you could post your file, I could take a look.

    Jeremy Wernick

    in reply to: How to fix ground plane #25941
    jem
    Customer

    @mrburns, try this setup:
    1. Add a ground plane to the default cube scene.
    2. Move the default cube to a location above the ground plane.
    3. Point the camera at the center of the cube.
    4. Parent both the ground plane and the light source to the camera.
    5. Disable panning in the Verge3D camera controls.
    6. Press sneak peek.

    Technically, the cube is stationary and the world moves, but to the end-user, it seems that the cube is moving.

    Here is a quick screen capture of these steps:

    Here is a video of the result:

    …(while tenting hands) excellent…

    Jeremy Wernick

    in reply to: progressive rendering #25824
    jem
    Customer

    Hi Yuri. Yes, your solution works for me. Thank you.
    I still think about JavaScript before puzzles, but I am trying to break that habit.

    In the long term, I want to come up with a workflow to automate the production and export of a scene with various LODs with a single click. This process has many manual steps right now. Also, I need to develop a set of puzzles that loads the LODs more intelligently. My current solution is very manual and brittle. This is a criticism of my solution, not of Verge3D.
    Thanks again.

    Jeremy Wernick

    in reply to: progressive rendering #25713
    jem
    Customer

    Hi Yuri,
    In the scenario that I was describing, I had a 3D scene with a large number of triangles. The customer generated the geometry from CAD, and it had a high level of detail. It would have been very time consuming to clean up all of the geometry in the CAD files to reduce the size of the BIN file. The goal of the project was to reduce the initial load time that the user would experience and preserve the high level of detail. To do this, I created two 3D scenes: a high LOD and a low LOD.
    -The high LOD scene is the original scene as exported from CAD.
    -The low LOD scene was produced from the high LOD scene. I made a very rough reduction of LOD using the tools in Blender, such as decimate and delete.
    The low LOD file was about 10% the size of the high LOD file. Verge3D was able to load the low LOD file in just a few seconds, but the appearance was poor (as should be expected).
    Once Verge3D renders the low LOD file to the screen, I use JS to initiate loading of the high LOD file. This process gives the user something to look at and inspect while their computer downloads and compiles the high LOD file.
    The code is trivial. I added a function call to the runCode() function that looks something like this:

    function runCode(app) {
        app.ExternalInterface.loadHighLOD();
    }

    It calls a function in the puzzles that loads the high LOD. See the attached screenshot. This is an older example, and the puzzles and JS need to be updated to work with 3.1, but the idea should still work.
    The tricky part is making your puzzles work correctly, depending on whether the low or high LOD file is loaded.
    It would be nice to have a toolchain and framework that automated some of these steps and was able to load different LODs intelligently, but, as I said, this is not urgent. Thanks!

    Jeremy Wernick

    in reply to: progressive rendering #25628
    jem
    Customer

    There is a related feature that I would love to see in the product: support for multiple levels of detail (LOD). It would be nice to load low LOD models quickly first and then load high LOD models in the background.
    I can do this today, but it takes a little coding and some puzzles. It would be nice to have a standard solution for this. This is not an urgent request.

    Jeremy Wernick

    in reply to: 3D Passenger Drone Configurator #25515
    jem
    Customer

    @GlifTek, try enabling HDR rendering under the Verge3D export settings. I think that this is the secret. Since the headlights and rotor lights are not clamped to 1.0 when using HDR, the bloom threshold can be set much higher. Set the lights to a very high value. You can then choose a bloom threshold that picks up the lights but not the reflections.

    Jeremy Wernick

    in reply to: Web GL Issues with Version 3 #25457
    jem
    Customer

    @cadtot, glad the suggestion helped. Keep in mind that your battery life will be terrible now. That does not matter to me. 99% of the time I work with the AC adapter plugged in. I do turn the switchable graphics back on if I need to use my laptop on a long flight. You cannot plug the PSU from these workstations into an AC outlet on an airplane because they draw too many watts.

    Jeremy Wernick

    in reply to: Web GL Issues with Version 3 #25445
    jem
    Customer

    @cadtot, I have a similar Dell workstation, and I have seen errors like this. Your workstation has two GPUs. You have the discrete Nvidia Quadro and the GPU integrated into the Intel CPU die. Dell typically sets these machines up to use switchable graphics to maximize battery life. So, most of the time, they use the integrated GPU and only switch to the discrete GPU when needed. I was never impressed with the switching functionality.
    In my case, I went into the BIOS and disabled switchable graphics. My workstation only runs off of the Quadro. This change eliminated WebGL errors for me. Of course, my battery life is terrible, and the machine is a toaster now when running 3D graphics, a small price to pay.

    Jeremy Wernick

    in reply to: Verge3D 3.1 pre2 available! #25440
    jem
    Customer

    This is exciting. I especially like react.js support. That works well with the rest of our technology stack. Great work guys!

    Jeremy Wernick

    in reply to: Verge3D 3.0 for Blender Released! #24280
    jem
    Customer

    Great work! I am going to get it now! :yahoo:

    Jeremy Wernick

    in reply to: Fine tuning performance (draw calls) #24223
    jem
    Customer

    Hi Brandon,
    I do use the approach that Yuri described. I think that the best way is to pack the UVs from multiple objects onto one map. You can color code that map and use it in a shader to differentiate objects and feed values into shader parameters. The industrial robot demo uses a similar technique. In the case of the robot, multiple objects share the same shader and the color/roughness/metallic values are stored in shared maps.
    As long as you are frugal with the size and quantity of your image maps, the shaders that use them should outperform a complex procedural shader. A tool like Substance can help with the production of these combined PBR maps.
    As Yuri points out, you can always use puzzles to swap the maps on-the-fly if that makes sense for your scene.
    FWIW, the shader compile times in your screenshot look reasonable to me. I certainly have seen much worse.

    Jeremy Wernick

    in reply to: Old Watermill #23697
    jem
    Customer

    Hi Mikhail,
    Are you suggesting that I use puzzles to animate the normal map or is there another technique? I imagine that I could animate the magnitude of the water waves and move the UV origin with puzzles. Thank you.

    Jeremy Wernick

    in reply to: Old Watermill #23681
    jem
    Customer

    I agree this is fantastic! Thanks for sharing. I also appreciate that the artist shared some of the screenshots from Blender.
    Does anyone care to share how the artist created the water ripple effect? I have been trying to create this effect for some time.

    Jeremy Wernick

    jem
    Customer

    I have one more hint for using this script. If you want a specific Verge3D export option to be applied to all of the GLTF files, set that Verge3D export option in the gltf_mass_export_example.blend file. All of the .blend files and .gltf files will inherit that option.
    For example, you might want to enable LZMA compression for all exports. In that case, check the LZMA checkbox in the Verge3D section of the render properties tab in gltf_mass_export_example.blend. This technique avoids having to remember to enable that option on hundreds of separate Blender files.

    Jeremy Wernick

    jem
    Customer

    @GlifTek, sorry for my delay in responding.
    I have attached an archive of a simple project that uses this script. You will probably need to edit the python script to suit your requirements. I think that it is reasonably simple. I am not a programmer, and I was able to cobble it together.

    How to use this script:
    1. Place all of your objects in the object_library.blend file.
    2. Edit the CSV file. The CSV has two columns, ObjectName and BlenderFileName. This allows the GLTF file name and the object name to differ if needed.
    3. Open gltf_mass_export_example.blend.
    4. Edit the gltf_export_script.py script. At a minimum, you will need to change the value of the fp variable to match your environment.
    5. Run the script.
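    To illustrate the two-column format, here is what a CSV for this script might look like (the object and file names below are hypothetical, not from the attached project). The script reads it with Python's csv.DictReader, so the header row must match the column names exactly:

    ```python
    import csv
    import io

    # Hypothetical my_export.csv contents. The header row must be exactly
    # "ObjectName,BlenderFileName" for the script's row['ObjectName'] /
    # row['BlenderFileName'] lookups to work.
    sample_csv = """ObjectName,BlenderFileName
    Cube,cube_part
    Gear_01,gear
    """

    reader = csv.DictReader(io.StringIO(sample_csv), dialect='excel')
    for row in reader:
        print(row['ObjectName'], '->', row['BlenderFileName'] + '.gltf')
    # prints:
    # Cube -> cube_part.gltf
    # Gear_01 -> gear.gltf
    ```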

    For each object listed in the CSV, the script will append the object from object_library.blend, save it as a separate .blend file, export it as a separate .gltf file, and then delete the object from the working .blend file.

    You will end up with a directory of blend files and another directory of gltf files.

    The script is flexible. I have used it to perform batch transformations. There are some commented examples included.

    Give it a try and see what you can do with it. I find this type of scripted batch processing to be very powerful.

    import bpy
    import csv

    # application directory
    fp = "D:\\verge3d\\verge3d\\applications\\gltf_mass_export_example\\"
    # subdirectory where exported .blend files will be placed
    dir_blender = "blender_files"
    # subdirectory where exported .gltf files will be placed
    dir_gltf = "gltf_files"
    # .blend file containing all of the objects to be extracted and saved as .gltf files
    combined_object_data_file = "object_library.blend"
    # CSV file with the names of the objects to be extracted and the name of the .gltf file to create from each object
    csv_file_name = "my_export.csv"

    with open(fp + csv_file_name) as csvfile:
        reader = csv.DictReader(csvfile, dialect='excel')
        for row in reader:
            # deselect all objects
            bpy.ops.object.select_all(action='DESELECT')
            obj_name = row['ObjectName']
            file_name = row['BlenderFileName']
            # append the object from the library file
            directory = fp + combined_object_data_file + '\\Object\\'
            filepath = directory + obj_name
            bpy.ops.wm.append(filepath=filepath, filename=obj_name, directory=directory)
            # make the imported object active
            bpy.context.view_layer.objects.active = bpy.context.selected_objects[0]
            # if you need to apply any transformations to each object, use functions such as the following:
            #bpy.ops.transform.resize(value=(0.0254, 0.0254, 0.0254))
            #bpy.ops.object.transform_apply(location=False, rotation=True, scale=True)
            #bpy.context.active_object.name = bpy.context.active_object.name.upper()
            # save the object as its own .blend file, export it to .gltf, then remove it
            bpy.ops.wm.save_as_mainfile(filepath=fp + dir_blender + '\\' + file_name + '.blend')
            bpy.ops.export_scene.v3d_gltf(filepath=fp + dir_gltf + '\\' + file_name + '.gltf')
            bpy.ops.object.delete()
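    One portability note, not from the original post: the hard-coded backslash separators tie the script to Windows. A minimal sketch (with hypothetical directory and object names) of building the same paths with os.path.join, which picks the correct separator for the host OS:

    ```python
    import os

    # Hypothetical application directory -- replace with your own, like fp above
    fp = os.path.join("applications", "gltf_mass_export_example")

    # Blender addresses an object inside a .blend file with a pseudo-path:
    #   <path to .blend>/Object/<object name>
    library = os.path.join(fp, "object_library.blend")
    directory = os.path.join(library, "Object")
    filepath = os.path.join(directory, "Cube")  # "Cube" is a hypothetical object name

    # output paths for the per-object .blend and .gltf files
    blend_out = os.path.join(fp, "blender_files", "cube_part.blend")
    gltf_out = os.path.join(fp, "gltf_files", "cube_part.gltf")
    print(filepath)
    print(gltf_out)
    ```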

    Jeremy Wernick
