So, from the michaelkammes.com camp: big news – Kate Miller has agreed in principle to spend the rest of her life with me, and that suits me just fine. It’s a good thing, or so I’m told. I personally think she’s nuts for spending 3 years with me, let alone the next 70. But as soon as ink hits paper, she’s mine and I’m hers, and our geek powers shall set you free.
And while I’ve known for a while she was the one, how to pop the question in a way that would be good fodder for dinner parties and post production industry acronym discussions – that angle eluded me. GEEK, NERD, + MARRIAGE PROPOSAL Google searches popped up cool ideas, but nothing that married (ahem) my life choice of Post Production with my life mate choice of Kate.
Then it dawned on me: 3D. Its future viability in this world aside, it offers a technical and logistical challenge, and would be something she wouldn’t see coming. Two things which could be a whole ‘lotta fun to exploit.
My concept was simple: insert my 3D self into 3D scenes, and have the last scene culminate with my proposal. 3D would enable my arm to reach out ‘towards’ her, which, in turn, gave me the idea of using the old trick of 4D for some humorous purposes. 3D scenes of weather allowed for the 4D gimmicks, and my frequent industry lectures (and they are lectures, I can drone) gave me the scenario she wouldn’t see coming.
So, I did it, and here is the video. But if you’re here, you probably want to know the geek portion of it. Jump past the video for a more in-depth description.
For the shoot, we used a Cowboy Green Screen. It took so much duct tape, so many clamps, and so many C-47s to get even a half-decent backdrop that I can’t recommend strongly enough that you not use one. Seriously, get something good. I used an Arri Softbank light kit to illuminate the green screen as flatly as possible, and lit myself as evenly as possible so that in post I could more easily fit into the soft light of each scene.
Last year, my employer Key Code Media purchased a consumer grade JVC 3D Camcorder (GS-TD1) for some workflow and multimedia development, so I was able to put it to good use. The camera has HDMI out, so I used an AJA HA5 to convert the signal to HD-SDI, then ran the HD-SDI to a TriCaster 855 Extreme unit. Overkill? You betcha. But it was there. Why not use it? Inside the TriCaster, I placed a keyer on the channel and ran the output to a preview monitor. Why? This allowed me to do a real-time previz: to see if the lighting was working on me, how it looked with the background footage, and whether the key I could pull would work. I then recorded in-camera in the MVC format, in an MP4 container – the highest resolution the camera offered.
The camera didn’t have an acceptable audio input, and the on-camera mic was less than desirable, so we ran a Countryman wireless mic via XLR to a JVC 790 camera we happened to have. It was, in effect, a ‘dual system’ shoot, and a simple clap before each take gave us a common sync point for post.
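For the curious: the reason a clap works as a sync point is that it shows up as a sharp spike in both recordings, and the offset between the two spikes can even be found automatically by cross-correlating the tracks. Here’s a minimal sketch in Python/NumPy (my actual sync was done by eye and ear in post, not with a script like this):

```python
import numpy as np

def sync_offset(camera_audio, recorder_audio):
    """Estimate the sample offset between two recordings of the same clap
    by finding the peak of their full cross-correlation. A negative result
    means the recorder's clap lands later than the camera's."""
    corr = np.correlate(camera_audio, recorder_audio, mode="full")
    # Shift the peak index so that 0 means the tracks are already aligned.
    return int(np.argmax(corr)) - (len(recorder_audio) - 1)

# Synthetic example: a "clap" spike at sample 100 vs. sample 130.
cam = np.zeros(1000); cam[100] = 1.0
rec = np.zeros(1000); rec[130] = 1.0
offset = sync_offset(cam, rec)  # -30: nudge the recorder track 30 samples earlier
```

The same idea works on real audio: correlate a second or two around the clap, then nudge one track by the resulting number of samples.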
Vince Rocca volunteered his time, advice, and funny bone, as I couldn’t do all the production work myself. We shot the four scenes in a few hours, at which point post production began.
To begin the edit: for those of you unfamiliar with MVC stereoscopic encoding – MVC encodes both eyes into one file, which contains the full resolution left eye and only the *difference* between the two eyes for the right eye. This allows for a much smaller file size, and is one of the ways 3D is encoded on a Blu-ray disc. Sadly, MVC is relatively new, and being stereoscopic based, its use is still niche. I did, however, know of MVC to AVI Converter, which is a simple, no-frills way of extracting both individual eyes to a more common codec.
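The “full left eye plus difference” idea is easy to see in miniature. This toy sketch stores the right eye as a residual against the left; real MVC is far smarter (it uses motion-compensated inter-view prediction, not plain subtraction), but the compression intuition – two nearly identical views means tiny residuals – is the same:

```python
import numpy as np

# Toy illustration of MVC's core idea: store the left eye in full, and for
# the right eye store only its difference from the left. Real MVC uses
# inter-view motion-compensated prediction, not plain subtraction.
left  = np.array([[120, 121], [119, 122]], dtype=np.int16)  # full-resolution base view
right = np.array([[118, 121], [117, 120]], dtype=np.int16)  # dependent view

diff = right - left          # small residual values -> compresses well
decoded_right = left + diff  # the decoder reconstructs the right eye exactly
```

Because the two eyes see nearly the same scene, `diff` is mostly near-zero values, which is exactly what entropy coding eats up.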
I chose Cineform as my mezzanine codec, as I knew it offered a great balance of transcoding quality loss (i.e. virtually none) and file size. Once I had the two individual files, I knew I would be safe in whatever compositing and editorial platform I used. Plus, Cineform endorses this workflow.
3D stock footage – especially for the sight gags I wanted to pull – was tough to find. I scoured the interwebs for useable buyout footage, but found only one usable clip, and even that was lacking – the snow scene. The clip was actually of a snow-capped stream, and we added snow blowing in at an angle in After Effects to match the flying paper from production and help sell the illusion. I then remembered several stereoscopic speaking engagements I shared with Bob Kertesz, a brilliant blue/green screen production guy. I dropped him a Hail Mary email, and Bob hit up the CML – a cinematography mailing list.
David Cole, who subscribes to the CML, happened to be in the path of Hurricane Isaac, and shot a few quick scenes of the hurricane. I was able to use these for both the wind and the tropical/rain scenes. I then used the same MVC to AVI workflow to create the left and right eyes, as I had with my green screen footage.
My attempts at the 3D composite worked, but left much to be desired for the level of quality I wanted. Christian Glawe, a damn good guy with 3D post chops, offered to help me sell the effect. Christian used After Effects to bring in each eye of my footage and of each backdrop scene. He keyed out the green screen footage and placed the environment behind me, then adjusted the Z axis so I sat at the same depth as the scene. The addition of rain and snow particles, blurring of the background, and a color pass on my mediocre production lighting job also helped sell the concept. He’s got massive grey matter, yo.
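If you’ve never thought about what a keyer actually does: at its core, it decides which pixels are “green enough” to be background, then swaps in the new plate there. This naive Python/NumPy sketch is my own illustration – far cruder than what a real After Effects keyer does (no spill suppression, no soft edges, no despill), but it shows the basic decision:

```python
import numpy as np

def green_key_mask(rgb, ratio=1.3):
    """Return True where a pixel reads as green screen: green clearly
    dominates both red and blue. A crude stand-in for a real keyer."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (g > r * ratio) & (g > b * ratio)

def composite(fg, bg, mask):
    """Wherever the mask says 'background', show the environment plate."""
    out = fg.copy()
    out[mask] = bg[mask]
    return out

# Tiny 1x2 demo frame: one green-screen pixel, one "talent" pixel.
fg = np.array([[[10, 200, 10], [200, 10, 10]]], dtype=np.uint8)
bg = np.full((1, 2, 3), 50, dtype=np.uint8)
keyed = composite(fg, bg, green_key_mask(fg))
```

Production keyers also feather the mask and subtract green spill from edge pixels, which is exactly the hand-finessing that separated Christian’s composite from mine.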
The final scene was a found 3D conversion clip of Jurassic Park. Kate’s favorite movie by far is Jurassic Park, so it only made sense to incorporate it into the absurdity of the last scene I’m immersed in.
For the title cards, I began creating my own, but nothing really popped. I happened to come across Enhanced Dimensions online, and for a very reasonable cost, ScreenCandy 3D gave me some pre-built templates that not only popped, but could be manipulated in After Effects to suit my needs. Once I had these, I made the text changes in After Effects and exported each individual eye.
I’m a much better Final Cut Pro (classic) editor than an Avid Media Composer hack, but FCP, even with this plugin or this add-on, can’t match the flexibility for 3D work that Avid offers. So, I decided on Media Composer 6.0 (Mac) as my editorial platform. My first attempt was AMAing to the Cineform clips, but this proved to be riddled with hiccups and performance issues, so I instead imported each eye to DNxHD 175x, then created a stereoscopic clip inside Media Composer. Normally I wouldn’t recommend compositing work in Media Composer, but one green screen composite for each of four 15 second clips seemed like a rule I could break and still live with myself. I did the composites in Avid, with a title card between each clip. The 3D-aware title tools in Avid also made it extraordinarily easy to add the 3D special thanks credits at the end – and to view the 3D content in any way I needed during the edit, out to the 3D confidence monitor for checking the cut.
Next, I couldn’t let a project slip through my fingers without sound work. I exported an AAF from Media Composer, as well as a 2D H.264 MOV. I then did my stereo mix-to-pix in Pro Tools against the H.264, replacing the camera audio with the audio from the dual system JVC recording. This is where there was a slight hiccup. I didn’t bring the dual system audio into Avid for syncing prior to the edit (Why? Brain cramp.), so I had to do the sync in Pro Tools. Since the piece was already edited, I had no common sync point or slate. Luckily, I knew which takes I had used, so it was just a matter of a nudge here and there to get sync. Again, having a short piece made this oversight only a slight frustration.
I exported the stereo mix and imported it into Media Composer, then did DNxHD 175x QuickTime reference exports of both a side-by-side version and each individual eye. I used Telestream Episode to encode the side-by-side into a high bitrate MPEG-2 for the home media player Kate would watch, and each eye into a DNxHD 175x MOV. After I recorded Kate’s hidden reaction in 3D, I repeated the MVC to AVI process and the Media Composer import (to DNxHD 145) with that footage, and (fast) imported each eye of the previous proposal video (DNxHD 175x) back into Media Composer, so the 3D picture-in-picture could be manipulated.
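A quick aside on what “side-by-side” means, since it’s what makes 3D survive a normal 2D delivery chain: each eye gets squeezed to half width and the two halves share one ordinary frame, which the TV then unsqueezes per eye. This little NumPy sketch fakes the squeeze by dropping every other column (a real encode would use proper filtered scaling):

```python
import numpy as np

def side_by_side(left, right):
    """Pack two full-width eye frames into one frame-compatible image:
    squeeze each eye to half width (here by naive column decimation;
    real encoders use filtered scaling), then place them side by side."""
    half_left = left[:, ::2]
    half_right = right[:, ::2]
    return np.hstack([half_left, half_right])

# Two 4x4 single-channel "frames" pack into one 4x4 side-by-side frame.
L = np.full((4, 4), 1)
R = np.full((4, 4), 2)
sbs = side_by_side(L, R)
```

The payoff is that the packed frame is just ordinary video – MPEG-2, DLNA streaming, and the Blu-ray player all treat it as 2D, and only the passive 3D TV needs to know the left half belongs to one eye and the right half to the other.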
At home, I have a 65″ LG Passive 3D Television (65LW6500), connected to an LG BD670 Blu-ray player. The MPEG2 file could then be streamed via Mezzmo on my home PC to the Blu-ray player (DLNA). Easy, right?
I had a great time doing this; getting back to my creative roots, as well as the thrill of seeing Kate’s face light up. Now, I just have to see how to top this for the wedding. Wireless 4K 3D streaming of the ceremony? FUTURE I AM IN YOU.
Edit: There’s been some awesome coverage of this…if you come across a link, share it!
Right This Minute (Skype Interview)
When Geeks Wed
Art of the Guillotine
New on TV
Make Use Of
Viral Video News