Blender Tutorial: How to Render a 3D VR Video from Blender
Could VR deliver more realistic experiences? If so, how? These are questions every VR creator has been working on. The most immersive media gives viewers the feeling that they are seeing the scene in person, directly with their own eyes. We perceive the real world around us with two eyes that sit 2–3 inches apart. Whatever object we look at, its image is projected onto our left and right retinas at slightly different positions, and this binocular disparity helps us perceive depth and scale: the larger the disparity, the closer we feel the object is to us (depth). For a given perceived size, the further away an object is from us, the larger we know it actually is (scale).
When we watch content in VR mode, VR goggles or headsets use a separate input channel for each of our eyes and thus ‘immerse’ us in the scene. If the content is 2D, the system automatically generates two views on the two display screens, with the left display’s portion of the scene shifted slightly horizontally relative to the right display’s.
In contrast, 3D content is produced with two cameras offset from each other, capturing footage with a different binocular disparity for each eye, and can thus give viewers a much more genuine sense of depth and scale. This Blender tutorial will walk you through using a pair of virtual cameras to render a piece of 3D VR content with Blender.
Assuming you’ve got your 3D scene ready, here is a brief summary of the workflow we’ll adopt:
1. Configure the render engine
- Change the render engine to ‘Cycles Render.’
- Set the output format.
2. Set the stereo 3D display mode
3. Configure the camera
- Change the camera type to a 360-degree one
- Make your camera a stereo pair
- Set the interocular distance
- Consider where the convergence plane should be
- Set the convergence plane distance and finalize the position of the stereo pairs
4. Render the scene out and upload onto VeeR!
Step 1: Configure the render engine
First, switch the render engine from ‘Blender Render’ to ‘Cycles Render’: Cycles is a path tracer and is far better at rendering photorealistic 3D scenes than the classic engine. Go to ‘render layers’ in the properties panel, check ‘views,’ select ‘Stereo 3D,’ and then check both left and right.
Then click on the camera icon and go to ‘output.’ Choose a destination folder. If your final product is a still 3D image, select an image format. If you are making an animation, you can export it as a video or as individual frames. The video option is easier but a little risky, because you’d lose the whole render if an error occurs halfway through; rendering individual frames is safer, though it leaves you the extra legwork of assembling them afterwards. With that decided, under ‘Views Format,’ select ‘Stereo 3D.’
Moving on to the output format: because our final product will be two renders, one for each eye, we need to decide on the layout the two renders are presented in. Since we are rendering for VR, choose a format that presents the two eyes’ renders in parallel, without overlapping. So set the Stereo Mode to either ‘Top-Bottom’ or ‘Side-by-Side.’ Platforms like VeeR recognize the top/left render as the left-eye input channel and the bottom/right render as the right-eye input channel.
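If you prefer to script the setup, the same render settings can be applied from Blender’s Python console. This is a sketch against the bpy API of the Blender 2.7x era the tutorial describes; the output path is a hypothetical example, so substitute your own.

```python
import bpy

scene = bpy.context.scene

# Use the Cycles path tracer instead of the classic 'Blender Render' engine.
scene.render.engine = 'CYCLES'

# Enable stereo 3D multiview rendering (left and right views).
scene.render.use_multiview = True
scene.render.views_format = 'STEREO_3D'

# Pack both eyes into a single output, stacked top-bottom
# ('SIDEBYSIDE' is the other VR-friendly option).
scene.render.image_settings.views_format = 'STEREO_3D'
scene.render.image_settings.stereo_3d_format.display_mode = 'TOPBOTTOM'

# Destination folder (hypothetical path, relative to the .blend file).
scene.render.filepath = '//renders/'
```

This only mirrors the UI clicks above; it can be handy when you configure many scenes the same way.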
Step 2: Set the stereo 3D display mode
You don’t usually wear a VR headset to preview the scene while you are creating it, so click ‘Window’ > ‘Stereo 3D’ and select ‘Anaglyph’ to preview your work. An anaglyph 3D video overlays two differently color-filtered videos, one for each eye; wear a pair of red-cyan glasses so that each eye sees only a single image, just as you do when watching a 3D movie in the cinema.
Step 3: Configure the camera
Change your camera type to ‘panorama,’ and then ‘equirectangular,’ as described in this blog. This gives you render results in 360 degrees.
Then select your camera, go to ‘data’ in the properties panel, and scroll down to ‘Stereoscopy.’ Select ‘Off-Axis’ under Stereoscopy: this is the ideal mode, since it is the closest to how human vision works.
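The camera settings from this step can also be scripted. Again a sketch against the Blender 2.7x bpy API, assuming the scene already has an active camera:

```python
import bpy

# Data-block of the scene's active camera.
cam = bpy.context.scene.camera.data

# 360-degree equirectangular panorama (available with the Cycles engine).
cam.type = 'PANO'
cam.cycles.panorama_type = 'EQUIRECTANGULAR'

# Off-axis stereoscopy, the mode closest to human vision.
cam.stereo.convergence_mode = 'OFFAXIS'
```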
Next we need to set the ‘Interocular Distance’ and the ‘Convergence Plane Distance.’ Set the interocular distance based on how large you want your 3D objects to be perceived. Click on one of your objects and press ‘N’ to check its dimensions. Calculate the ratio of the object’s real-world scale to the model’s scale. The interocular distance you set for your stereo pair should be that ratio times a normal human pupillary distance. It’s safe to use Blender’s default interocular distance value, 6.291, as a normal human pupillary distance.
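The scaling rule above boils down to one multiplication. A minimal helper, using Blender’s default value as the baseline pupillary distance (the function name is ours, not Blender’s):

```python
def scaled_interocular_distance(real_scale, model_scale, pupillary_distance=6.291):
    """Interocular distance following the rule of thumb above:
    (real-world scale / model scale) x a normal human pupillary distance."""
    ratio = real_scale / model_scale
    return ratio * pupillary_distance

# A model built at half its real-world scale doubles the interocular distance.
scaled_interocular_distance(2.0, 1.0)
```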
In Blender, the convergence plane is the grey plane you can see in the 3D viewport after changing the camera to a stereo pair: it’s where the two cameras converge. Visual discomfort or brain fatigue can easily occur when viewers stare at a virtual object that is too far from the convergence plane; viewers can tolerate a larger object-to-plane distance when the interocular distance is smaller and the convergence plane distance is larger. Blender therefore recommends setting the convergence plane distance to at least 30 times the interocular distance.
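That guideline is easy to sanity-check in code. A small sketch (helper names are ours) that flags a convergence distance that falls short of the 30x rule:

```python
def min_convergence_distance(interocular_distance):
    """Smallest convergence plane distance the 30x guideline allows."""
    return 30.0 * interocular_distance

def is_comfortable(convergence_distance, interocular_distance):
    """True if the chosen convergence distance meets the 30x guideline."""
    return convergence_distance >= min_convergence_distance(interocular_distance)
```

For the default interocular distance of 6.291, any convergence plane distance of roughly 188.73 or more passes the check.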
So far we’ve covered all the parameters but haven’t yet decided exactly where we want the camera to be: it’s still at the default position. To position the camera properly, first consider where you want the convergence plane to be. There’s no standard answer, so just keep two facts in mind:
- There is only a limited zone around the convergence plane where viewers can look at objects comfortably.
- As illustrated by the image below, virtual objects that are closer to the camera than the plane will have a pop-out effect (right), while objects behind the plane will be perceived as ‘deep into’ the screen (left).
Therefore, consider what your main characters/objects are, or where your main story happens, and drag your camera to move the convergence plane to that place (the plane moves with the camera, given a fixed convergence plane distance). To avoid visual discomfort for viewers, keep in mind: a) where important characters are densest, or where viewers will focus for the longest time, and b) which objects you want to pop out and which you want to sit ‘deep into’ the scene.
Once you’ve found the perfect position for the convergence plane by moving the camera, press ‘0’ on the Numpad to view from the camera’s perspective. If the shot doesn’t capture everything you want, you can enlarge the convergence plane by increasing the convergence plane distance, then drag the camera pair further away from the scene so the plane stays in the same position as before.
Step 4: Render the scene out and upload onto VeeR
Now all the configuration is done. It’s recommended that you render a low-resolution test first to check that the final product is what you want. Once you have your final 3D VR work rendered out, remember to upload it onto VeeR. Just select 3D top-bottom or 3D side-by-side when uploading, and VeeR will handle everything else to get your work ready to be viewed in VR mode. We look forward to enjoying your 3D VR content with millions of users on the platform!
For more about Blender, also check out Why Blender Is The 3D Animation Software You Need For Your VR Projects.
VeeR VR is a leading VR content platform with the mission of empowering everyone to create and share virtual reality content. Within a year of its establishment, it’s become a phenomenon sweeping through the 360 community, and has been featured on Google Daydream, Samsung Gear VR and HTC Vive. While we believe that VR is the future of storytelling, we want to encourage all VR lovers to create beyond boundaries.