The Oculus Connect 2 conference for virtual reality developers in September gave Oculus the chance to announce a number of surprise partnerships, along with fresh content, new demos, and new talks. But without a doubt the most anticipated talk was given by the legendary John Carmack, whose résumé includes everything from laying the groundwork for every first-person shooter in existence to serving as a rocket-scientist CEO.
As a newcomer to the cinematic virtual reality (VR) field, I was rather surprised to hear Carmack come out in favor of a 180-degree, 2-camera, wide-angle stereoscopic video rig over the 3D 360-degree format. Clearly, 360 videos that encompass your entire view offer a greater level of immersion. So why was Carmack in favor of a lesser format?
See Carmack’s talk on cameras here:
Apart from the obvious problems of stitching videos seamlessly, Carmack explains that today’s hardware decoders can only play 4K video at 30fps at a reasonable bit rate, which you can stretch around a sphere for a monoscopic 2D 360 video of acceptable quality (watch the video linked above for more detail). And while this works okay for scenic captures, 30fps is simply too slow for fast-paced action sequences. Contrary to the film industry standard of 24fps, in the world of VR experiences a higher frame rate does in fact translate to a more comfortable viewing experience.
With that in mind, shooting at 60fps in 4K means cutting the bit rate in half, resulting in significantly reduced video quality. And if you also want to shoot in stereoscopic 3D rather than monoscopic 2D, you have to cut the bit rate in half again, ultimately leaving you with a quarter of the bit rate of the original 360 video.
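The arithmetic behind that trade-off can be sketched in a few lines. The 40Mbps baseline below is a made-up reference number, not a figure from Carmack’s talk; only the halving relationships come from the argument above:

```python
BASE_BITRATE_MBPS = 40  # hypothetical decoder budget for 4K @ 30fps

def effective_bitrate(fps=30, stereo=False, base=BASE_BITRATE_MBPS):
    """Bit rate left per mono, 30fps-equivalent stream.

    Doubling the frame rate halves the bits available per frame;
    stereo halves the budget again (one view per eye must share
    the same decoder throughput).
    """
    rate = base
    rate /= fps / 30        # 60fps -> half the bits per frame
    if stereo:
        rate /= 2           # two views share the decoder budget
    return rate

print(effective_bitrate())                     # 40.0 -> mono 30fps baseline
print(effective_bitrate(fps=60))               # 20.0 -> half
print(effective_bitrate(fps=60, stereo=True))  # 10.0 -> a quarter
```

The exact numbers depend on your decoder, but the halvings are what matter: 60fps stereoscopic 360 leaves you a quarter of the quality you started with.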
Until new encoders or view-dependent 360 videos come along, 180-degree 2-camera rigs seem to be the ideal choice for non-scenic captures to maximize pixel fidelity. To test this, let’s take a look at how to hack together a 3D camera rig without breaking the bank (see here for more information on 3D camera rigs).
I chose the Rokinon 8mm lens for its high performance at a low price
One of the challenges in creating a 3D shooting rig is mounting the cameras close enough together that their interpupillary distance, or IPD (the distance between the two cameras), matches that of the average human. More on that later, but for now let’s start with a $59.99 Ikea BOSSE bar stool.
Next, take a quick trip to your nearest hardware store and pick up two large L-brackets, a couple of washers, and 1/4”-20 UNC screws, which will attach to the tripod mount on the bottom of each camera. Cost: $7.42.
I used a digital caliper to measure the distance between my own eyes. Ideally you want to match the IPD of the cameras to the IPD of your eyes, but often this will not be possible due to camera size. As a result, you may have to increase or decrease the IPD between your cameras, which shifts the apparent scale of the environment you are capturing: lowering the IPD makes everything look bigger, while raising it shrinks the scale of the world. Ultimately, I decided on an IPD of 69mm, which has a negligible effect on scale (for reference, my own IPD is 62mm).
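As a rough illustration of that scale shift, a common approximation treats the perceived scale of the world as the ratio of viewer IPD to camera IPD. The function below is my own simplified model, not a formula from any stereoscopy standard:

```python
# Crude approximation: the captured world appears scaled by
# viewer_IPD / camera_IPD (a ratio > 1 means "looks bigger").
def perceived_scale(camera_ipd_mm, viewer_ipd_mm=62.0):
    return viewer_ipd_mm / camera_ipd_mm

print(round(perceived_scale(69.0), 3))  # 0.899 -> world looks ~10% smaller
print(round(perceived_scale(40.0), 2))  # 1.55  -> narrow rig, world looks bigger
```

At 69mm against my 62mm eyes, the mismatch is about 10%, which in practice I found negligible.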
Now for the fun part.
After I had my holes punched out and brackets mounted, I put two layers of white heat shrink over top to prevent the brackets from scratching the bottom of the camera.
One screw is purposely left out so the camera can pivot and be removed when needed
The next and arguably hardest challenge is to Genlock your two cameras so they record in sync. Traditional 3D filmmaking relies on expensive production-ready cameras that have a Genlock port as well as a timecode sync port, which keep every camera and microphone recording in phase without drifting apart. Consumer-grade DSLRs like the ones I am using do not allow for this, so we have to find an alternative solution. By wiring two remote shutter triggers together, it’s possible to trick the cameras into a temporary Genlock.
In my experience, this solution is only temporary and the cameras will lose sync after around 10–15 minutes of recording. If you buy cameras manufactured in the same batch, drift can be reduced.
$5 each on Amazon.ca
A little less conversation, a little more action……..
Tools everyone should have.
A bit of soldering with some heat shrink and we have our janky-Genlock system!
After you start up video recording on both cameras, simply hit the remote shutter. Both cameras will take a picture while still recording video and that should give you an approximate point to sync in post production.
And there you have it.
After importing your clips into After Effects, zoom in on your timeline until you can navigate frame by frame. Then, with the clips highlighted, hit “L” twice to bring up the waveform display and sync the two videos using the sound peak (or gap) caused by your remote shutter click.
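Eyeballing the waveform works fine, but the same offset can also be found programmatically by cross-correlating the two audio tracks; the shutter click gives both a sharp peak to lock onto. This standalone sketch uses synthetic clicks rather than real camera audio:

```python
import numpy as np

def sync_offset(a, b):
    """Estimate how many samples track a lags behind track b
    by cross-correlating the two waveforms. The shutter click
    gives both tracks a sharp, easy-to-match peak."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic demo: the same click, 480 samples later in the second track.
rate = 48000
left = np.zeros(rate)
left[1000] = 1.0                 # shutter click in camera 1's audio
right = np.roll(left, 480)       # camera 2 started 10 ms later

print(sync_offset(right, left))  # 480 -> trim 480 samples to align
```

On real footage you would load each camera’s audio track into an array first; the cross-correlation peak then tells you the trim, which you can sanity-check against the waveform in After Effects.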
Even though my Canon DSLRs can only record 1080p at a maximum of 30fps, I still wanted the end product to be 60fps for a smoother experience. I thought this would be impossible until a good friend of mine introduced me to the Smooth Video Project, which plays 24fps videos at 60fps by interpolating the frames in between. To do this in After Effects, I used a plugin called Twixtor, which is traditionally used for creating slow-motion effects by warping and interpolating missing frames. By asking After Effects to interpret my 30fps video on a 60fps timeline, I could then use Twixtor to fill in the missing frames. While not always the case, using Twixtor can also allow for a slightly better pseudo-Genlock, as you have finer control over your frames.
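For intuition, the crudest possible interpolation is a linear blend of neighbouring frames. Twixtor and SVP instead do optical-flow warping, which handles motion far better, but the toy blend below shows the basic idea of synthesizing in-between frames:

```python
import numpy as np

def double_fps(frames):
    """Naive 30fps -> 60fps interpolation by averaging neighbours.

    This is only a toy stand-in for Twixtor/SVP: real interpolators
    warp pixels along estimated motion vectors rather than blending.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a.astype(np.float32) + b) / 2)  # synthetic in-between
    out.append(frames[-1])
    return out

# Two 2x2 grayscale "frames" -> three frames, the middle one blended.
f0 = np.zeros((2, 2), dtype=np.uint8)
f1 = np.full((2, 2), 100, dtype=np.uint8)
print([float(f.mean()) for f in double_fps([f0, f1])])  # [0.0, 50.0, 100.0]
```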
Using Twixtor like this is extremely taxing on most systems. I have a pretty decent desktop computer, and a two-minute video still took 8 hours to render at a measly 25Mbps bit rate.
Now that your frames are synced up and Twixtor has been applied, you can merge your clips into one video by using the 3D Glasses plugin included in After Effects. Feel free to play around with the alignment settings until you find what works best for you. Remember to set the 3D View to ‘Stereo Pair (Side by Side)’.
Set ‘3D View’ to ‘Stereo Pair (Side by Side)’, NOT ‘Balanced Colored Red’
I use MaxVR to play back my videos in ‘180 Panoramic Dish’ mode with ‘Side by Side 3D’ turned on, but other free programs like Whirligig should play these videos just fine. One thing to note: the Cardboard Theater app on my Samsung Galaxy S4 cannot play videos at 60fps, so I have to re-encode at 30fps for smooth playback. This may not be the case for other, newer phones, but I haven’t tested beyond my personal phone.
By recording in only 180 degrees, it’s possible to maximize pixel fidelity and incorporate 3D without a drastic drop in resolution. Comparing my footage to a GoPro rig, it’s clear that the resolution is not quite as good (4K vs 1080p), but the larger sensors on the DSLRs allow for much better dynamic range. For future rigs, it should be pretty simple to replace the Canon DSLRs with 4K cameras (A7R II, GH4, etc.) and achieve much better picture quality. To get cameras produced in the same batch, I would suggest contacting the manufacturer or the closest distributor for the brand of camera you select.
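A quick back-of-envelope comparison of angular resolution (horizontal pixels divided by horizontal field of view; the pixel counts are nominal sensor widths, and real fisheye coverage will vary) shows why 180 degrees helps, and what 4K cameras would buy:

```python
def px_per_degree(h_pixels, fov_degrees):
    """Horizontal pixels available per degree of field of view."""
    return h_pixels / fov_degrees

print(px_per_degree(3840, 360))  # 4K around a full 360 sphere: ~10.7 px/deg
print(px_per_degree(1920, 180))  # this 1080p 180 rig: ~10.7 px/deg, comparable
print(px_per_degree(3840, 180))  # a 4K 180 rig: ~21.3 px/deg, double
```

In angular terms, a 1080p 180-degree rig already roughly matches a 4K 360 rig, and swapping in 4K cameras would double it.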
Download the 1080p fugazi-60fps demo here for Oculus Rift and GearVR (51 seconds | 166MB): https://drive.google.com/a/tobiasch...
Download the 1080p 30fps demo here for Cardboard (51 seconds | 164MB): https://drive.google.com/a/tobiasch...