
Little Frog in High Def

Musings of an NLE ronin…

GoPro Hero cameras are everywhere lately. It seems like there isn’t a production I am working on that doesn’t utilize this camera in some way. They are mounted in cars, aimed either at the driver and passengers or at the road. They are mounted on backhoes as they dig, and on drills as they burrow into the ground. They are mounted on people as they do crazy things. They get angles that you normally cannot get.

First, let me mention the three models currently available from GoPro:

Hero 3 White Edition can shoot video at 1080p30, 960p30 and 720p60, and 5MP photos at up to 3 frames per second. It can shoot timelapse at half-second to 60-second intervals. It has built-in WiFi, and can work with the GoPro WiFi remote or a free smartphone app.

Hero 3+ Silver Edition does all that, but shoots up to 1080p60 and 720p120, and shoots still photos at 10MP up to 10 frames per second.

Hero 3+ Black Edition does all that the Silver Edition does, but adds 1440p48 and 960p100, as well as 720p100 and 720p120. It also shoots in ultra-high resolutions, going up to 2.7K at 30fps and even 4K at 15fps. And it has an option called SUPERVIEW, which enables ultra-wide-angle perspectives. It can shoot 12MP stills at up to 30 frames per second. All cameras have built-in WiFi and work with the remote or smartphone app, and all perform much better in low-light situations than their predecessors.

For this post, I was provided with a Hero 3+ Black Edition camera and a slew of accessories. What is really handy about the Hero 3+ is that it can shoot in a wide variety of ways to suit various aspects of production. For example, the ultra-high frame rates it shoots make it great for smooth, conformed slow-motion shots. The ultra-HD frame sizes it shoots allow for repositioning shots in post to focus on the areas of interest. The cameras can all be controlled wirelessly from an iPhone or Android device with a free app…and you can change the settings in those apps far more easily than with the in-camera menus.

OK, so the GoPro Hero 3 line of cameras proves to be very useful, enabling you to get all sorts of footage you otherwise couldn’t. But the point of this post is to showcase workflows for ingesting the footage into various edit applications so that you can take advantage of these advanced shooting modes.

AVID MEDIA COMPOSER

Let me start with Avid Media Composer, only because that is what I have been using the most lately. If you set up the camera to shoot in normal shooting modes, like 1080p30 (29.97), 1080p24 (23.98) or 720p60, then importing is easy. Simply access the footage via AMA, and then transcode to DNxHD…either a full resolution like DNxHD 145, 175 or 220…or an offline codec like DNxHD 36, DV25 or 15:1, so you can cut in low resolution, then relink to the original footage and transcode to a higher resolution when you go to online.
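Why bother with an offline codec? Storage and responsiveness. As a rough back-of-the-envelope illustration (the bit rates below are nominal 1080 figures, my own approximations rather than official Avid specs), here is how the flavors compare per hour of footage:

```python
# Rough storage-per-hour estimate for offline vs. online DNxHD flavors.
# The bit rates are nominal 1080 figures used purely for illustration;
# check Avid's documentation for the exact numbers for your format.

def gb_per_hour(mbit_per_sec: float) -> float:
    """Convert a video bit rate in Mbit/s to gigabytes per hour."""
    return mbit_per_sec * 3600 / 8 / 1000

for codec, rate in [("DNxHD 36 (offline)", 36), ("DNxHD 145", 145), ("DNxHD 220", 220)]:
    print(f"{codec:<20} ~{gb_per_hour(rate):.0f} GB per hour")
```

Cutting the offline in DNxHD 36 keeps the project small and snappy; you only pay the full storage cost on the shots that make the final cut.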

First, go FILE>AMA LINK and you’ll get the following interface. Select the clips you want to link to:

Once you have all your clips in a bin, go to the CLIP menu and choose CONSOLIDATE/TRANSCODE:

If you shot 720p60, so that you can use the footage either at normal speed or as smooth slow motion in a 29.97 or 23.98 project, then you need to first import the footage into a project that matches the shooting settings…720p60. Then copy the bin over to your main project and cut the footage into the sequence. You will note that the footage will appear with a green dot in the middle of it, indicating it is of a different frame rate than the project:

The footage will play at the frame rate of the project, or you can adjust it to smooth slow motion…taking all of the frames shot and playing them back at a different frame rate. First, open the SPEED CHANGE interface, and then click on the PROMOTE button:

That enables more controls, including the graph. When you open the graph, you’ll note that the playback speed is different. If you shot 60fps and are in a 29.97 project, then the percentage will be 200%. Change that number to 100% and now the clip will play back in smooth slow motion.
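The number Timewarp shows is just the ratio of the source frame rate to the project frame rate, so you can work it out for any combination. Here is that arithmetic as a quick sketch (purely illustrative, not anything Avid runs):

```python
# Timewarp's "real time" speed for off-speed footage is source_fps / project_fps.
# Setting it back to 100% plays every source frame once, which is what
# produces smooth slow motion.

def realtime_percent(source_fps: float, project_fps: float) -> float:
    return 100.0 * source_fps / project_fps

print(realtime_percent(59.94, 29.97))   # 200.0 -> at 100%, the clip plays at half speed
print(realtime_percent(59.94, 23.976))  # 250.0 -> at 100%, the clip plays at 40% speed
```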

If you shot at a higher frame rate and want it to be slow motion…say 720p 120fps, then you’ll have to use the GoPro Studio app to convert that footage. The cool thing about that application is that it’ll conform the frame rate, and convert the frame size to suit your needs. I’ll get to that later.

NOTE: You can edit the footage natively via AMA. When you bring it into the main project and drop it into the timeline, it’ll be 60fps or 120fps (note the image above of the timeline and green dots…those are AMA clips, which is why one shows 119.88fps). So when you promote to Timewarp and adjust the percentage, it will play in slow motion. But know that editing native MP4 in Avid MC is anything but snappy. It will cause your system to be sluggish, because there are some formats that Avid MC doesn’t edit as smoothly as it does Avid media.

One trick you can do is to AMA the GoPro footage, cut it into the sequence, promote to Timewarp and adjust the playback speed…and then do a Video Mixdown of that. Then you’ll have a new clip of only the portion you want, slowed down. The main issue with this trick is that any and all reference to the master footage is gone. If you are doing an offline/online workflow this might not be the best idea. It’s a simple trick/workaround.

Now let’s say you shot a larger frame size, such as 2.7K or 4K, and you want to reframe inside Media Composer. The first thing you do is use AMA to access the footage. But DO NOT TRANSCODE IT. Once you transcode, the footage will be conformed to the project frame size…1920×1080 or 1280×720. Avid MC does not have project settings for 2.7K or 4K. I’ll get to the workaround for that in a second.

Once you add the clip to the timeline, you’ll notice it has a BLUE DOT in the middle of the clip. Similar to the GREEN dot, except where green indicates a frame rate difference, blue indicates frame size difference. If you then open the EFFECT MODE on that clip, FRAME FLEX will come into play.

You can then use the Frame Flex interface to reposition and resize the shot to suit your needs. If you shot a nice wide shot to make sure you captured the action, Frame Flex will allow you to zoom into that action without quality loss…unlike zooming into regular 1080 footage with the RESIZE or 3D WARP effects.

One drawback is you cannot rotate the area of interest. The other is that you cannot convert the footage to an Avid native format…something I mentioned earlier. So you can work with the 4K MP4 footage natively…which might prove to be difficult, as Media Composer doesn’t like working with native MP4 footage, much less at 4K…or use this workaround: do your reposition, and then do a VIDEO MIXDOWN. This will “bake in” the effect, but at least the footage will now be Avid media:

ADOBE PREMIERE PRO CC

The workflow for Premiere Pro CC is by far the easiest, because Premiere Pro will work with the footage natively. There’s no converting when you bring the footage in. Simply use the MEDIA BROWSER to navigate to your footage and then drag it into the project.


(the above picture has my card on the Desktop. This is only an example picture. I do not recommend working from media stored on your main computer hard drive.)

But I highly recommend not working with the camera masters. Copy the card structure, or even just the MP4 files themselves, to your media drive. Leave the camera masters on a separate drive or other backup medium.
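If you want to script that copy step, here is a minimal sketch (my own example, not part of Premiere; the paths are hypothetical) that copies the whole card structure and does a basic size check before you wipe the card:

```python
# Minimal sketch: copy a GoPro card's folder structure to a media drive,
# then verify the total byte count matches. Paths are hypothetical examples.
import shutil
from pathlib import Path

def total_bytes(root: Path) -> int:
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())

card = Path("/Volumes/GOPRO_CARD")
dest = Path("/Volumes/MediaRAID/GoPro_shoot_01")

shutil.copytree(card, dest)  # copies the entire card, folders and all

if total_bytes(card) != total_bytes(dest):
    raise SystemExit("Size mismatch…recopy before erasing the card!")
print("Card copied and sizes match.")
```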

So all you need to do is browse to the folder containing the media and drag it into the project, or drag the individual files into your project. Bam, done.

CHANGE IN FRAME SIZE

Ok, let’s say you shot 720p60…but you want to use your footage in a 1080p project. When you add the clip to the timeline, you’ll see that it is smaller:

That’s an easy fix. Simply right-click on the clip, and in the menu that appears select SCALE TO FRAME SIZE:

But what if you want this 720p 120fps footage you shot to play in slow motion? Well, that’s very easy too. Right-click on the clip in the Project, and in the menu select MODIFY>INTERPRET FOOTAGE:

Then in the interface that appears, type in the frame rate you want it to play back at. In this example, I chose 23.98.

Done…now the clip will play back slow…even if you already have it in the timeline.
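How slow will it play? It’s just the ratio of the new rate to the shot rate, and the clip gets proportionally longer in the timeline. A quick worked example (only the arithmetic, not Premiere code):

```python
# Interpret Footage: playback speed = interpreted_fps / shot_fps,
# and the clip's duration stretches by the inverse of that ratio.

def interpret(shot_fps: float, interpreted_fps: float, clip_seconds: float):
    speed = interpreted_fps / shot_fps
    return speed * 100, clip_seconds / speed  # (% of real time, new duration in seconds)

speed_pct, new_len = interpret(119.88, 23.976, clip_seconds=10)
print(f"{speed_pct:.0f}% speed: a 10-second clip now runs {new_len:.0f} seconds")  # 20% speed, 50 seconds
```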

FINAL CUT PRO X

Importing is really easy; File > Import > Media. You can either work natively, or choose the OPTIMIZE MEDIA option. Optimize media will transcode the footage to ProRes 422.

You get a nice import window with an image viewer.

Now, as I said before, you can work with the footage natively, but I’ve found that GoPro footage, because it’s H.264, likes to be optimized. I haven’t worked with native GoPro footage extensively in FCPX, so I cannot attest to how well it works compared to how it does in Premiere Pro CC. Premiere has the advantage of the Mercury Engine and CUDA acceleration with the right graphics cards.

OK, so to transcode all you need to do is right click and choose TRANSCODE MEDIA:

You get these options:

You can create ProRes master media and proxy media at the same time if you wish. Or just full-res optimized media (ProRes 422), or just proxy media (ProRes Proxy) that you can relink back to the masters when you are done editing, or transcode to full-res optimized media when you have locked picture. When you create the optimized media, or the proxy, the frame rate of the footage is retained.
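As with the Avid offline/online discussion above, storage is the main reason to cut on proxies. As a rough comparison (the bit rates below are approximate published 1080p29.97 figures and are my assumption for illustration, not Apple’s exact numbers):

```python
# Approximate storage per hour for FCPX optimized vs. proxy media at 1080p29.97.
# Bit rates are rough published figures, used here only to show the trade-off.

def gb_per_hour(mbit_per_sec: float) -> float:
    return mbit_per_sec * 3600 / 8 / 1000

print(f"ProRes 422 (optimized): ~{gb_per_hour(147):.0f} GB per hour")
print(f"ProRes Proxy:           ~{gb_per_hour(45):.0f} GB per hour")
```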

When it comes to speed changes, unlike FCP 7 and earlier, which required you to use CINEMA TOOLS, you conform the GoPro footage internally in FCPX. As long as you set the timeline to the desired editing frame rate, 23.98 for example, you can conform any off-speed clip to it by selecting it and choosing Automatic Speed from the retime menu.

OK, let’s say you shot 4K, but want to use it in a 1080 or 720 project. FCPX has what is called Spatial Conform. When set to NONE, clips go into a timeline at their native resolution. For example, a 4K clip will be at 100% scale, but will appear zoomed in. All you need to do is scale it back…to around 50% in a 1080 timeline, or around 35% in a 720 timeline…to see the entire 4K image.
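The scale value you need is just the timeline width divided by the clip width. Here it is as a quick worked example (illustrative arithmetic only, using GoPro’s 3840-wide 4K frame):

```python
# With Spatial Conform set to NONE, scale an oversized clip by
# timeline_width / clip_width to see the whole frame.

def fit_scale(clip_width: int, timeline_width: int) -> float:
    return 100.0 * timeline_width / clip_width

print(f"4K (3840 wide) in a 1080 timeline: {fit_scale(3840, 1920):.0f}%")  # 50%
print(f"4K (3840 wide) in a 720 timeline:  {fit_scale(3840, 1280):.0f}%")  # 33%
```

Anything above that scale is effectively a free reframe, since you are still working from more pixels than the timeline needs.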

GoPro STUDIO

All right, let’s take a look at the tool that GoPro provides free of charge…GOPRO STUDIO. I use this application quite a bit, not only to pull selects (only portions of clips), but also to convert the footage into an easier-to-edit codec. H.264 works OK in Premiere, better if you have CUDA acceleration. But my laptop doesn’t enable that, so I choose to use the CINEFORM codec that GoPro Studio transcodes to. I also use it to conform higher frame rates for use in Avid Media Composer…like I mentioned earlier. If I have a 120fps clip, I cannot bring that directly into Avid and transcode it at that same frame rate. So I will convert it here first, to match the frame rate of the project…then AMA link and transcode.

Importing is easy. In the main window, on the left side, simply click on the “+” button, which allows you to import clips. Grab as many clips as you want. Then when you click on a clip to select it, it opens in the center interface, where you can mark IN and OUT points…if you only want portions of the clip:

To adjust the speed of the clip, click on the ADVANCED SETTINGS button. You’ll be presented with the following interface:

In here is where you change the speed to what you want. Simply click on the frame rate drop down menu and choose the one you want:

You can also remove the fish eye distortion from the footage if you want.

If the speed change is all you need to do, then click on ADD TO CONVERSION LIST and be done with it. But since the 120fps frame rate is only available at 720p, and most of my projects are 1080, you can up-convert the size to 1080 in GoPro Studio as well. And the conversion is pretty good. For that you go into the Advanced Settings again, and in the Frame Size drop-down menu, choose the frame size you want:

If you want to convert 720p 120fps to 1080p 23.98, then the settings would look like this…I also removed FishEye:

So there you have it. Some of these workflows are just the basics, others go into more detail. But I’m sure there are lots more tips and tricks out there that some of the more “power users” of the edit systems employ. My hope is that these tips will enable you to use your GoPro Hero cameras to their fullest.

(Thanks to Scott Simmons (@editblog on Twitter) of the EditBlog on PVC, for helping me with the FCPX workflow)

A GoPro Hero 3+ Black Edition was provided to enable me to test various aspects of the workflows. GoPro was kind enough to let me keep the unit, enabling me to shoot family activities in super slow motion, or in “Ultra HD” resolutions. It was used to shoot a sledding outing, including a couple of crashes…and a couple of cat videos. They weren’t interesting enough to post…my cats are boring.


(EDIT DESK BEFORE)

It’s time. It’s time I took my 2008 MacPro out of regular use and started using my newer computer, a 2012 non-Retina MacBook Pro. Why now? Well, I was onlining a series for MSNBC using Avid Symphony (in 64-bit mode), and working with some effects took longer on the tower than on my laptop. Rendering the show took longer on the tower than on the laptop. Some tasks were lagging on the tower, but not on the laptop. By the end of the series, I was finishing everything on the laptop.

Now, I do happen to have some things on the laptop that helped. The MacPro has a Kona 3 card, and I happen to have an AJA IoXT (and BMD Ultrastudio Mini Monitor) that connect to the laptop via Thunderbolt. So that part is covered. The IoXT has Thunderbolt loop through, so I am able to then connect a Thunderbolt to DVI adapter and have a second monitor. That leaves me with a Firewire 800 port and a USB 3 port for connecting hard drives (I use one USB port for a keyboard and mouse). I do happen to have an eSATA to USB drive adapter, so I can still connect drives via eSATA, but the speeds aren’t quite the same. Things have been going smoothly thus far with USB 3 and Firewire 800 drives. The onlines I just wrapped up all were on small 1TB USB3 bus powered drives.

But if I want to edit larger projects with lots of media, I’m going to have to be able to connect larger arrays to my system: my 4-bay eSATA array…my CalDigit HDOne 8TB tower (next to my tower in the above pic). And if I want RESOLVE to really crank out the renders, I’ll need to get a compatible graphics card. Unlike a MacPro tower, which has extra slots to install such cards…eSATA, MiniSAS (HDOne) or a secondary graphics card…my laptop can’t do that. Nor can the Apple iMac, Mac Mini…or the new MacPro. Apple is betting everything on Thunderbolt…as you can see with the six Thunderbolt connectors on that new MacPro. So what are we to do now?

PCIe Thunderbolt bridges.

Because this transition happened slowly, several companies have come out with these bridges. There’s the Magma ExpressBox 3T, the mLogic mLink R, a couple of options from Sonnet Tech and more. (Sonnet just announced the first Thunderbolt 2 expansion chassis, with three slots…perfect for the new MacPro, which is shipping with Thunderbolt 2 connectors.) It turns out that a friend of mine happened to have a chassis that he was trying to offload. It was to be used in a DIT station he was going to build for a client, but that build never happened. So he sold it to me for half price.

The one I bought is the Sonnet Echo Express Pro II…which is currently discontinued. But Sonnet makes an updated version of the Echo Express, and a few others (full list of supported PCIe cards found here). What’s great about this, and the other options, is that they all have two Thunderbolt connectors. This means that you can connect your computer to the bridge, then possibly to a Thunderbolt RAID, an I/O device, a second computer monitor (if it’s Thunderbolt compatible) and so on. I have one I/O device that only has one Thunderbolt connector, so I can’t use it and an external monitor…unless I add a graphics card to the bridge. But the IoXT has loop-through, the Sonnet has loop-through, and then I connect to the DVI adapter…done.

I unpacked the Echo Express and read the instructions. Pretty simple…take off the cover, add the cards, put the cover back, add power, connect via Thunderbolt to my laptop.  No drivers needed…at least not for the bridge.  You still need to install the drivers for the cards you install…just like you would need to if you installed them into an older MacPro tower.

Now, when I installed the cards, an issue arose. The CalDigit FASTA4 card I installed…it’s a 4-port eSATA card…worked fine. My drive connected to it via eSATA showed up on my desktop. But the HDOne did not. It lit up like it should, but nothing appeared on the desktop, nor in the Disk Utility software.

I emailed both CalDigit tech support and Sonnet Tech support, explaining the situation.  CalDigit was the first to respond asking for more details, and providing a different driver for me to install. This driver was for their HD PRO 2 tower. This one is called THUNDER EXPRESS…and makes their SAS cards compatible with Thunderbolt bridges.  Sonnet emailed next stating that the PCIe card makers need to write drivers that make them “Thunderbolt compliant.”  Makes sense…and is exactly what CalDigit did with their driver.  I installed it, and sure enough the HDOne mounted fine.  How did the eSATA one work right away? Well, I downloaded the latest driver from the site and apparently that one made the card  Thunderbolt compliant.

This might be true for all the bridge options. I do know that the mLogic one was designed with the Red Rocket specifically in mind, so the driver might not be needed for that. But since I don’t have either a Red Rocket or the mLogic…I can’t say for sure.  All I can say is that if you plan on upgrading to a newer computer that has Thunderbolt ports, and you want to bring over some of your devices that connected via PCIe…make sure that the company makes drivers for them to make them Thunderbolt compliant. Don’t get stuck, like I thought I did…when I bought the box, and the HDOne didn’t mount…I was pretty frustrated. I didn’t do my homework properly. Thank goodness for CalDigit being on top of things.


(yes yes, I still have that second monitor behind the laptop. I still need it on the tower, and it’s the best place to keep it for now)

The forty-fourth episode of THE EDIT BAY is now available for download. Editors aren’t button pushers, we are storytellers. This is a story about when I had issues with a script…

To play in your browser or download direct, click here.

To subscribe to this podcast in iTunes, CLICK HERE.

(For the sake of this post, I’m going to speak in terms of editing documentary or reality or corporate presentation type projects, not scripted. The approach to music in scripted projects is a little different)

More often than not lately, editors in my end of the production spectrum have been tasked with using library music in our shows. Meaning that there isn’t a composer. Well, there MIGHT be a composer, but they simply provide us with stock music, or music used in previously scored shows. Sometimes they might be utilized to score part of a show, so we have some original music. But lately, more often than not, the music us editors add to the cut is THE music that ends up in the final project.

And this brings me to the realization that there are many editors that simply don’t know how to edit music.

This is an issue that pops up all the time, specifically when working on a show with multiple editors. Very often I’ll be watching a cut, and something odd happens midway through a scene or near the end…the music will “jump” suddenly, or shift to a different tempo midstream. When I solo the tracks I’ll note that either the music simply cuts from one cue to the next, rather clumsily, at a point where the editor wanted to change the mood of the piece. Or there might be a simple dissolve joining the parts of the music cue together. I understand how people can have trouble with this, as music can be very hard to edit, especially to a specifically timed scene. The music needs to change when you want it to change. But you cannot accomplish this simply by adding a dissolve.

This will not do.

You need to find a natural cut point in the music. Those typically happen on the beats. And not just any beat…you can’t cut to a downbeat when an upbeat is coming up…they need to both be downbeats. This is VERY HARD to explain in a blog, especially when I lack the musical vocabulary to describe it properly. My daughter will be shaking her head in shame right now. But if you listen to music, you’ll hear it have upward movements and downward ones…and beats. You can’t suddenly change direction on your beats…say, two downward beats in a row…or it will sound odd. You need to find the similar beat to cut on. This will mean that you need to adjust the timing of your cut…for sure. You might need to space your narration further apart, or the dialog, but if you do it right, if you can get it to land on the right beat, then the music can actually accentuate the statement that someone makes.

On documentary projects (and some types of reality), one trick that I have come to employ is to cut the music to the narration and interviews and recreations right away…in the “radio cut” phase. In essence, really make it a RADIO cut. Make it sound like a piece you might hear on THIS AMERICAN LIFE (don’t know this show? I can’t recommend it enough!). Make it work as a radio show that you later add images to. So first I’ll string together my narration and interviews, then I’ll hunt for the music cue I think fits best and cut it in. I’ll listen to the rough with the music, and if I’m lucky, there’ll be hits or rises that happen that might be perfect for when someone says something major. If it takes a while for that beat to hit, then I’ll adjust the music so the impact happens after the statement. I’ll need to edit the music. So I’ll see if there is a repeating movement that I can simply cut out, or make blend properly. If I want it to have more impact, I might add a sound effect to punctuate. Or sometimes another musical element…like a rising cymbal…to slowly signal a coming change in the music.

It’s also tricky when you want to edit a section with the impact at the end of a music cue, but it doesn’t “back time” properly to meet up with the first part of the cue. Then you need to get tricky and creative, and really hunt for beats that match, and adjust timing so they match well. And then there are times when you want a cue to start with one tempo, say a nice slow moment, then boom, cut to a fast-paced exciting moment, and you need the music to blend. And not just with a long dissolve; they might need to have a common beat. At times like this it’s like I’m a club DJ who needs to transition from one song to the next. But while they can adjust the speed of the songs slightly to compensate, I really can’t.

This REALLY is hard to blog about. This needs to be heard to be understood.

I guess all I can really convey is: try to blend the music better, and see if you can get a good radio edit of your footage. Having the right cadence and pauses in the right places, and even adding sound effects to give a point more impact, will really help you figure out the visuals. This is why finding the right cues matters, and why finding music that works just right can take time…a lot of time. Many times I find myself spending more time finding the right music cue for a scene than actually cutting the scene itself. And once I find the right cue, I’ll need to adjust the scene to accommodate it.

Finding the right music cue is VERY important. It is often the difference between a scene working, and it not working at all. One day your producer might watch your cut and hate it, and the next day love it, and all you did was change the music cue. In a screening not long ago, a new editor came aboard an existing show and was…new to the show style. When his act was screened, we hit this one section where the music cue really hit the producer wrong. While it was a moment of celebration, the cue used was…well, he said “Oh my god, I can’t take it. This sounds like a graduation cue! No…stop it, I can’t watch this scene…I can’t…stop it.” The wrong cue can make a good scene unwatchable.

OK, one final note I’ll make…make sure your music “buttons.” That means…make sure it ends, not just fades out. It needs to resolve, end on a “bahm BAHM bahm!” or some other musical thing that has it end. It might fade after that…meaning it won’t just cut to silence, but have that last note slowly die off. Unlike many songs you might hear on the radio where it’s just the chorus fading to silence as the song ends (I hate that), the cue needs to have an ending. Needs to button. Watch commercials and documentary shows to see what I mean.

If this isn’t difficult enough, revisions throw a wrench into the works. If we are told to cut a line here, or swap things around…that messes with the music timing. Or we are asked to move chunks of story from one act to another, or swap scenes. Now we need to do real damage control. Blend the music with the new scene, maybe find entirely new cues so they match better, because what you had before no longer works. Fixing that can take a while. Good producers know this, and allow for that time.

That’s it for this blagh post. If I get enough of you commenting, asking “what the hell do you mean? Can you show us what you mean?” then I might be persuaded to make a podcast about this. If I can find a scene that I can show to people. I’ll try to dig something up.

PS – I know one production company that specifically asks if you play a musical instrument, because they require people to do a lot of music editing, and understand how music works together. It took a lot of convincing to get that job, as I don’t play an instrument.

The forty-third episode of THE EDIT BAY is now available for download. In this one, the cutting room isn’t only my favorite place to work, it’s also my producer’s.

To play in your browser or download direct, click here.

To subscribe to this podcast in iTunes, CLICK HERE.

Let me start with a preface…I’ve been working with the same company since last December. In that time I’ve worked on four series and one pilot. So I have footage from all those shows floating in my head. Three of those shows are on the same ISIS system where I’m working, so I have access to all that footage.

Which can be dangerous. I’ve been on feature docs that are full of ‘temp images’ ripped from YouTube or some other online resource, and I’ve needed to find replacements…and the sticker price on those shocks the producers. “But that other shot is perfect, and it’s there already, can’t we just use it?” No. Or we can use it, but the quality of the footage is 360×240, and this is an HD show. “Can you bump up the quality to match?” No…I can clean it up, but only so much. And 240 to 1080…that’s quite a leap! There are many reasons you don’t do this.

Today I started doing stuff that would drive me insane if I were the online editor or assistant editor on the show. I’m on a series that just started, so we don’t have a lot of stills and stock footage to draw from just yet. The fact that we started a week early, because we have a very short time before this starts airing, doesn’t help. So I’ve been assigned an act to cut, but have darn little footage to add to it. Normally what I need to do in cases like this is add a slate stating FOOTAGE TO COME and what that footage should be…say “FIRE” or “SHOVEL DIGGING IN DIRT, CIRCA 1530.” And then I prepare a list of footage needs and give those to my producer and researchers.

But see…slates drive me nuts. I want footage there, even if it’s temp. And…well, I have this ISIS full of footage from other shows, and since I worked on those other shows I know that, for instance, in one series we have a bin full of fire stock footage, and on another show, I know that we have recreation footage of someone digging in the dirt that I might be able to make look like it’s from the 1530’s, even though it’s supposed to take place in the late 1780’s. So I KNOW we have this…but I also know that I can’t use it. Because the producers and researchers can’t track it properly, and some of it was shot specifically for another show. I KNOW I CAN’T USE IT…

…but I do, because I want to see something. I did slap TEMP on it…with the intention of saying “I want something like this, but not this.” But this stuff has a way of accidentally sneaking its way through the show’s edit and ending up in an online where suddenly we find that we can’t use it and need to replace it (this has happened before).

I emailed my researcher asking, “OK…what will be the consequence of grabbing b-roll from, say, SHOW X for this show? Or a shot of a shovel digging taken from SERIES Y? I know I shouldn’t, but here I am, needing to put SOMETHING on the screen, and knowing ‘Hey, I saw that in X,’ or ‘I know we have this shot in Y and it’ll be almost perfect.’ Shall I throw TEMP on it? Or just not EVEN go there and just slate it?”

His response?

“I will cut off a finger for each transgression.”

OK then…slates it is.

Here’s a quick little demo on the new mixer in Avid Media Composer 7, and how the UI (user interface) is now customizable.

Let’s talk stock footage.

I work in documentary TV and film, therefore I see and use stock footage. The latest two TV series I am cutting are pretty much ONLY stock footage. Very little original footage is shot for them, other than interviews and some b-roll of locations or objects. Everything else is sourced from (a term meaning “obtained from”) stock footage libraries, or from past TV shows the network has produced.

So I’m familiar with using stock footage, and with issues pertaining to it, such as the “rights” to that footage…meaning how it is licensed. Some you can license for one-time use, some for festivals only, some for domestic TV for a set number of years, but mostly the networks I work for want it for domestic and international use, in perpetuity (basically forever). And the images you use must be clearable…meaning that you have the rights to show the footage, and possibly the people or things in that footage…everything in the shot.

This is where a big issue arises. Let me give you a few examples:

1) A feature documentary I onlined had a section that talked about the subject’s childhood. What era they were raised in, what part of the country, that sort of thing. Well, at the time they didn’t have a movie camera (Super 8 was the camera of choice when they were growing up) so they didn’t have footage of their life. Thus we needed to rely on stock footage. So they searched a few companies for what they needed, found some great Super 8 from the area and era they grew up in, and downloaded it. All was grand, until we had a shot pan across a 1960s-era living room, and there, on the TV, was THE FLINTSTONES. This presented a big problem. Sure, you licensed the rights to the film from the person who shot it, but what is playing on the TV…they don’t have the rights to that. For that, we’d need to contact CBS (the network THE FLINTSTONES aired on) and pay a separate fee.

You know how sometimes in the credits at the end of movies and TV shows, they list “TV SHOW X FOOTAGE COURTESY OF” and then the network? No? I guess I’m one of the few that notices that. Anyway, that is because they got permission, and paid for that permission, from the network, and then needed to credit them. So if we wanted to use THE FLINTSTONES, we’d need to pay CBS, and I’m sure it is no small fee, and we couldn’t afford that…so…I blurred the TV. Simple solution.

2) I’m working on a TV doc about presidential assassins, and of course the assassination of JFK is featured. Now, the most famous bit of film from that incident is called the Zapruder film. That’s the iconic 8mm color film shot by Abraham Zapruder that we’ve all seen, and that was featured in Oliver Stone’s JFK. Now, I have worked on a Kennedy assassination doc before this, and I know that that particular film is very expensive to license. So much so that on the Kennedy doc, we used every single angle of the assassination BUT the Zapruder film.

So, here I am on this TV doc working on a section about Kennedy, when what should I see, from the same stock footage company as in example 1…but the Zapruder film. Now, this company is known for selling the stock footage they have for cheap…cheaper than the competition. So here was this iconic footage, not full frame, but in the center of the picture, about 50% the size of the full frame, surrounded by all sorts of sprocket holes and clutter and stuff to stylize the image. Well, no matter how much lipstick you put on it, it’s still the Zapruder film. You still need to pay the Zapruder family that huge fee in order to use this footage on a TV show. Sure, you could BUY that footage clean, but LICENSING it…that was the problem.

3) Example three comes from the same stock footage company as examples 1 and 2. I’m beginning to see why they are cheap…they must not be staffed with enough people to catch these issues. Today I needed footage of how crystals are used in current, leading-edge technologies. So I used a shot of someone using an iPad. Simple enough, right? Nope…in that shot the first thing they access is SAFARI, and then the main GOOGLE splash page shows up. Sorry, but if you want to use the GOOGLE page, you gotta pay a license fee. So I look later in the clip and what do they look up? iPAD! So the next shot is the Apple page for the iPad. Another image we’d need to license.

Dude, what’s up with that?  Sell a stock shot that you cannot clear? Someone’s not paying attention.

We did find a better example…someone using the iPad to look at schematics and then a spreadsheet (not Excel), so generic that it worked.  That shot was sourced from a different company.

The other issue I have with this same stock footage company is so different I can’t call it #4, because it’s not about licensing. No, this is about COMPLETENESS…if that is a word. If not, I hereby coin it. Nope, it isn’t underlined in red, it must be a real word. The issue is that a LOT of the footage this one company has, say of a crowd cheering, or a car racing down the street, or a forest scenic shot…does NOT have any audio on it. So this crowd is cheering, looking very loud, but is, in fact, very quiet. The trees in the scenic move in the breeze, but there is no audio for that breeze, for the wind whipping through the trees. There is no traffic noise as the car drives through Hollywood. That’s bad. That means that I now need to pay for sound effects, or look in my grab bag of sound effects I already own to see if I can build the audio to fit the picture.

This could easily have been avoided if they just included the audio with the image. You KNOW they recorded it. And audio is very important. If you see an image of people screaming and cheering a football team, but don’t hear it…even if it is happening under narration or an interview…if you don’t hear it somewhat, it’ll throw you. It’ll take you out of the moment where you are engrossed in the story, and have you wondering why that shot is odd. Why are you distracted by it? Your brain might figure out that it’s the audio. Or it might not, and just send a “something is wrong with this” signal. Audio is important.

Want to know another issue with stock footage? This one has nothing to do with the company in the examples above. NO! This is a website that is known the world over, and is an issue that plagues independent docs, and some TV ones.

YouTube.

I cannot say how many times I’ve worked on a doc, or show pitch, and have been asked to source YouTube videos. People seem to think they are free…public domain. People put this footage up for all to see, therefore we can get it to use in a doc.  Well, no, you can’t. You still need to license the footage from the owner.  Even if it is for a single event with a small audience.

This brings up a great example of FREE footage. Footage that you can ask for, and use…for free! And it all comes from our government. NASA footage…all free to use. Now, they might have low-res versions on the web, but if you call and ask, they will provide you full-quality clips. Why? It’s YOURS! You pay taxes, your taxes pay for NASA…therefore it is yours to use.

Same goes for the Library of Congress. Any images or docs contained within that they own (they store some items/footage that they don’t own, but are safeguarding for the owner, because they are important) are also free. We…the citizens of the United States…own it (remember those taxes again!), so it’s free for us to use those images on TV.

OK, back to editing, and searching through bins and bins of stock footage.

I’m working on a series of posts that are less about the technology used in the projects I work on, and more about the workflows involved. Tips and techniques that can be used no matter what edit system you happen to use.

I have blogged about going back to Avid and editing with a single-user setup on the series A HAUNTING…now I want to talk about the challenges of editing a TV series with multiple editors working on one episode at the same time.

I will mention that Avid Media Composer is involved only to illustrate that we are working from shared storage (ISIS) and using a shared project file…all accessing the same media and able to share sequences. Beyond that, it doesn’t matter, as what I am going to talk about is more generic. No matter what editing software you use, these are the challenges one faces when multiple editors work on a single show. Most of this advice can be applied to narrative shows as well as reality and documentary. In this case, I’m referring to documentary TV and some reality, since that is the majority of what I cut.

SHOW STYLE

When you edit a TV series, you need to work within the show style. You might have your own sense of style and story that you’ve used on other shows or projects, but now you need to conform to the style of the series you are working on. Who sets that style? Typically the lead editor. The lead editor might have edited the pilot, or the first episode…or just served as the main editor for a group of editors. Whoever it is, they set the style. When you join that pool of editors on that series, it’s your job to conform to that style. It’s very important for the episode, if not the whole series, to look seamless, as if it were edited by only one editor.

The first way to figure out that style, is to watch previous episodes. Take note of the music used, how dramatic emphasis is added, how visual effects (VFX) and sound effects (SFX) are used. Whenever I start a new show, that is what typically happens on the first day. Fill out the start paperwork, get the script, and get access to previous episodes so that you can familiarize yourself with the show. I will watch the episodes first, then read the script, so that I can get the show style in my head while I read, and picture how it should be cut. I might even make notes about what sort of b-roll I picture in certain spots. And if I don’t have it in the project already, then I’ll request it.

One big part of the show style is the VFX…the plugins used and how they are used. This is what I call the “language of the VFX.” Some shows will have a certain style…an approach to the subject that will dictate how the VFX are utilized. A Civil War-era show might have smokey transitions, or flash-explosion transitions. A robot reality show might have transitions and SFX that try to convey something robotic, as if we are looking at shots in a robot workshop…mechanical steel doors closing and opening as a transition, with all the SFX mechanical in nature. Another show might want to present itself as though you, the observer, are reviewing files on a government computer and surveillance system, so the effects are geared towards camera sounds, picture clicks and shutters, spy cameras and scan lines with vignettes. Or a show that explores the paranormal, so there are ghostly SFX and flash frames, light-ray transitions, eerie sci-fi music beds and transitions.

One way I make sure to stick to the show style is that I will copy and use the effects the main editor uses, so that I can mimic what they do. I might use an effect they use, so it becomes a recurring theme, or modify something they do so that it is similar, yet different enough to keep the viewer from thinking, “I saw that same effect 10 min ago.” It might draw them out of the story. I will also find the music they use, match back to the bins where that music is, and see if cues next to it are similar. If not, I’ll search for cues that closely resemble the style, yet are different enough and fit the story I’m trying to tell.

As I mentioned before, music is also key. How long does the music typically last? On one series, I had the music change every 20 seconds, pretty much every time a thought was concluded and we moved on to a different topic. A music sting happened, layered SFX and a WHOOSH light-ray transition, and we were onto the next topic. Very fast paced. Another show might be more investigative, more mysterious. So the music cues are dark, mysterious, with hits. A cue might last a minute or so, used to underscore a particular thought, and again, end with a hit to punctuate that thought and transition to the next music cue for the next thought. Or, at times…no music, to add particular emphasis to whatever is being said next. Sometimes the lack of music, when music is almost constant, punctuates a statement more than having music at that time. It might seem more important…”Oooo…there’s no music here…what they are saying must be so important, they don’t want to distract us.”

VFX

A bit more on working with VFX…meaning filters and transitions…in a show. One thing that I find very important is not to have the VFX distract the viewer from the story. The VFX are there to emphasize the story point, to punctuate what I am trying to say. If it is too flashy, or too fast, or happens on top of what the person is saying, then I’ve distracted from the story and taken the viewer out of the moment. I’m lucky that many of the producers I work with feel the same way. Story is king…let the story happen. TELL the story. The story is the cake…the VFX are the frosting. The cake is good on its own, but frosting makes it better. A pile of frosting with no cake is too sweet (although my wife will disagree with me on this). Too much sweet with little to no substance. Filters and transitions, used well, will add to your story.

Now, that’s not to say that I haven’t done over-the-top VFX. I most certainly have. I’ve worked on many reality shows and clip shows that lack a lot of substance, and to make up for that, we add flash. We will take one picture and milk it for all it’s worth…push in here FLASH, wipe there WHOOSH, pull out here BAM BAM BAM flashbulbs going off, push in to have it settle on the girl’s face. Although a bit gratuitous, it might serve a point. “Britney Spears leaves the Mickey Mouse Club…and moves on to pursue a career in music….BAM BAM FLASH FLASH WHOOSH BANG BOOM!” The VFX are there to punctuate the moment, and they have a language…paparazzi, stardom. And sometimes to cover up the fact that we really have no substance.

RECYCLE PAPER, NOT FOOTAGE

One of the challenges of working on a show that is divided up among the editors, say one editor per act, is that we might end up using the same shot or still in Act 4 that someone used in Act 2. You can avoid this by occasionally watching or skimming the other acts to see if that shot is used. Or, if a shot really works well for me, I’ll ask the other editors if they are using it, or plan to, and if so…plead my case as to why I should be able to use it. And even when we do this, when we all watch the assembled show for the first time, we’ll see duplicate footage, or hear duplicate music. At that point we’ll discuss who gets to use what, and who needs to replace it. In a perfect world, this would happen BEFORE the first screening with the EP (executive producer)…either we screen it with the producer, or he watches it alone and finds the shots…but that doesn’t always happen. Be hopeful that your EP understands the issues and just mentions the duplicate footage…rather than throwing a fit. “WTF?!?! I just saw that two acts ago! REMOVE IT!”

COMMUNICATION

Of course the biggest thing in working on a multi-editor episode is communication. Besides the “are you using this shot” type stuff, I will go to the lead editor and ask them to watch my act with me and give me feedback. They know the show, they are in charge of the show style, so they will give me pointers to make my act cut together more seamlessly with the others. Sometimes I’m the lead editor that people come to for advice. One thing I’ve found, too, is that often after the first screening, when all of us editors are milling about after getting our acts picked apart by the EP…we tend to discuss our acts, and the style used. “Hey, I really liked that clever wipe transition you used in Act 5…mind if I steal that for Act 2?” Or, “I really liked how you amped up the drama in that reenactment. I can’t figure out how to do what you did…can you help me with that?” Or we’ll ask where they found a certain sound effect, or music cue, and play off of each other. It can, at times, be like the Shakespearean-era playwrights…each taking an idea and modifying it to make it better. Only in our case, we tried to tell a story in one way, but see how someone else did it, and try their approach.

One thing I forgot to mention is that sometimes the lead editor will go through all of the show…all of the acts…and do a “style pass.” They will take all the separate acts by separate editors and make it all conform to the style of the show. This does happen in docs on occasion, but I see it more in reality. I myself have been hired specifically as the “style editor,” or “finishing editor.” I might have an act of my own, but also be in charge of the overall look of a show.

To close on an anecdotal note…I once worked on a doc series where we were very behind. There were two of us editors on this one act, and the producer would write one page, give it to me, and I’d go to work on it. Page two he’d hand off to my partner and he’d work on that. Page 3 was mine, and so on. This was tough because we weren’t editing separate acts…not even separate thoughts separated by a music cue. We were just picking up on the next page. To deal with this, we’d edit without music and effects, just get the story down and filled with b-roll and some slight pacing. And when we had assembled the act, or at least two separate thoughts, we then divvied them up and tackled them, adding music and effects. And when we finished the whole act, the other editor would take it over and smooth out all the edits and make it into one cohesive piece (they happened to be the lead on that show).

Note that narrative shows also have a show style that all the editors need to conform to. CASTLE has a very unique look and style, as do BURN NOTICE, PSYCH, LAW & ORDER SVU, MAD MEN and THE BIG BANG THEORY. Those editors also need to fit within the show style, and make it appear as though one editor cuts the whole series. And a few of these shows also happen to have two or more editors (note this in the TV series, LOST).

If you happen to follow me on Twitter, you were no doubt privy to the barrage of tweets I did while at the LACPUG meeting on Jan 23. Dan Lebental was showing off this cool editing app for the iPad, TouchEdit, and I live tweeted interesting points he made, and pictures I took.  I’d like to go a bit more in depth here.  More than 140 characters for sure.

The reason this app came about is because Dan bought an iPad and when he moved the screen from one page to another…he went “hmmm, there’s something to this.” And then he would browse through his photos, moving them about with gestures of his hand like he would if he were holding them, and he said, “hmmm, there’s something to this.” Eventually he figured out that this could mimic the tactile nature of editing film. Being able to grab your film strips and move them about, and use a grease pencil to mark your IN and OUT points. So he went out and found a few people to help him develop this. No, he didn’t do it on his own; he went through about 14 coders (if I’m remembering right) to eventually come up with version 1.0 of his software.

Who is this for? Well, he honestly said “This is designed for me. For what I want…my needs.” And I like that attitude. Because if you like something, chances are you’ll find someone else that likes that something. And that is a great way to develop a product. To fulfill a need/want/desire that you might have.

Anyway, moving on.

He showed off the basic interface:

The film strip above is the source, and the film strip below is the target…your final film. Now, the pictures of the frames don’t represent actual frames. You don’t need to advance to the next picture to be on the next frame…that’s just a visual reference to the film. Slight movement advances the film frame by frame…and there’s a timecode window on the upper left (sorry for the fuzzy picture) and the clip name on the upper right. So you can see what clip you have, and what the timecode is. You’ll scroll through the footage, or play it, until you find the section you want, and then mark your IN and OUT points. To do this you swipe your finger UP on the frame you want, to make a grease-pencil-like mark for the IN point. Now, the pencil mark won’t be on the frame you selected, it will be on the frame BEFORE the one you selected. Because you don’t want grease pencil on your actual frame. A swipe down marks the OUT point, and then you drag the selection down into the target where you want to put it.

There are a couple big “V” letters to the left of the footage on the timeline. The big “V” means you are bringing audio and video. Click it to get the small “v” and you will bring over only picture.

When you do this, you’ll note that your cut point, where your footage was dropped into the timeline, is marked with a graphic depicting splicing tape:

One thing to note too is that the GUI (graphic user interface) of the film appears to run backwards when you play or scroll it. That’s because it mimics the way actual film moves through a KEM or STEENBECK editor. Really meant for the film people. But Dan said he would take all comments on the matter, and might make it an option to play the opposite way…in case it’s too distracting.

OK, Dan flipped the iPad vertically and the interface changed:

Now we see just the source strip, and 8 tracks of audio. This is where you’d be doing your temp audio mix to picture. And with the tap of a button…

And you have a mixer, to allow you to adjust your levels.

I did mention that I felt that 8 channels wasn’t quite enough for the temp mixes I was required to do. He replied that he could perhaps add a second bank of tracks so that you could then have 16…or 24…or 32. This is a possibility on later versions.

BINS.

Dan didn’t call them bins…he said the more accurate term was “collections,” as they are the place that holds the collection of clips you have to work with. That area looks like this:

There is also the main project window. That, interestingly enough, does look like a bin-type thing, with film strips hanging down representing your projects. That’s in graphic only…the projects are actually listed below in the window:

IMPORTING

Here is the import interface:

There’s even a help menu for importing:

Importing footage can be done via iTunes Sharing, iPad Video (which is called Photos on the iPad) or Dropbox. For maintaining metadata, use iTunes Sharing or Dropbox, as iPad Video tends to drop some metadata. The footage can be low-resolution proxies, like 640×360 MP4 or H.264…or full resolution…but in a format that the iPad can work with…thus MP4 or H.264. So you can use the app as an offline editing machine, or for editing your project at high resolution and exporting to the web straight from the device.

STORING YOUR FOOTAGE

The question I had for Dan was…how do you store the footage? Well, it’s all stored on the iPad itself. There currently are no external storage options for the iPad. So you are limited in the amount of footage you can store at one time. How much depends on how compressed the footage is. A lot at low res, not much at high res. Yes, I know, VERY specific, right? Specifics weren’t mentioned.

I did ask, “What if you are editing, say, THE HOBBIT, and have tons of shots and takes…a boatload of footage. What would you do then?” His answer was “Well, you can have the footage loaded in sections…for certain scenes only. Or have multiple iPads.” I pictured a stack of iPads in a bay…one with scenes 1-10, another with 11-20, and so on. Not altogether practical, but the loading of sections seemed OK. And Dan did have three iPads present, including a Mini…so he might just be headed that second way. (joke)

Dan mentioned that he loaded an entire indie movie on a 64GB iPad at 640×360 with room to spare.
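That squares with some quick back-of-the-envelope math. Assuming a 640×360 H.264 proxy runs somewhere around 2 Mbit/s (my assumption for illustration, not a figure from the demo):

```python
# Rough capacity estimate: hours of proxy footage that fit on an iPad.
# The 2 Mbit/s proxy bit rate (and ignoring OS/app overhead) is an
# assumption for illustration only.

def hours_of_footage(storage_gb: float, mbit_per_sec: float) -> float:
    gb_per_hour = mbit_per_sec * 3600 / 8 / 1000
    return storage_gb / gb_per_hour

print(f"~{hours_of_footage(64, 2.0):.0f} hours at 2 Mbit/s on a 64GB iPad")  # roughly 70 hours
```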

EXPORT

It eventually gets to a point where you are done editing…now what? Hit the export button and you have a few options: Export final MOV to iTunes sharing, export the final MOV to Dropbox so you can share it with others, export it to your PHOTOS folder, or export the FCPXML to iTunes Sharing or Dropbox.

FCPXML you ask? Yes, that is the current way to get the “edit decision list” out of the app and reconnect to the master footage. It exports an FCPXML, meaning that it interfaces with FCP-X. But that is only in version 1.0. The TouchEdit folks did mention that a future update, version 1.1, will feature FCPXML input/output and AAF input/output (AAF support is for Avid). Good, because I was wondering how you’d edit this feature film on your iPad and then deal with it in FCP-X. That’s just temporary…other options are in the works. But Dan did say that the application is based on AV Foundation, and not QuickTime…so that points to working tightly with FCP-X…and working well with future Apple OSs.

In addition to all of this, TouchEdit has partnered with Wildfire Studios in Los Angeles. Wildfire is providing a large sound effects library to TouchEdit free of charge in version 1.0. You heard it…free SFX. In version 1.1 or 1.2, TouchEdit will add an SFX store where you can buy SFX rather cheaply.

TUTORIALS

Yes, there are already YouTube tutorials on the TouchEdit YouTube Channel, to get you up and running. Nice guys…thinking ahead!

COMPATIBILITY & PRICING

TouchEdit works on any model iPad 2 or higher…including the iPad Mini. And it will be available in early February for a price of $50.

Let’s start off 2013 with a review of a really cool product…the AJA T-TAP.

Until recently when you wanted to send a signal to an external monitor from your edit system, you needed to get an “I/O Device.” I/O meaning “In and Out,” and device being either a card that you installed internally on a tower computer, or an external box…or combination of the two. These devices allowed one to capture incoming video signals (from tape or directly from cameras or switchers), and output video signals (to client and color correction monitors). In the age of tape this was the way to get footage into your system.

But in the current age of tapeless capture, the “I” part of the “I/O” is no longer needed. All we want/need/desire is output to a client monitor…or broadcast color correction monitor. So instead of shelling out $500 to $8000 for an I/O device…you can get the AJA T-TAP for a mere $299.

The device is remarkably simple. It connects to your computer via Thunderbolt (so unfortunately it won’t work on Mac Pro towers or PC towers as they lack this connection type) and then outputs full 10-bit video via SDI or HDMI with 8 channels of embedded audio. And it’s so small, it can fit into a small compartment in your backpack, or in your pocket, and allow your edit system to be very lightweight and mobile. The T-TAP is also very versatile. It is compatible with the three major editing systems: Avid Media Composer 6 and 6.5 (and Symphony), Adobe Premiere Pro CS6 and Final Cut Pro (X and 7). Unlike other options that AJA has, the audio out of this device is only available via HDMI or SDI, so you will have to monitor audio from the client monitor, or patch audio from that monitor to your mixer…depending on the edit software you use. FCP 7 and Adobe Premiere Pro allow you to route audio through the computer speakers, while Avid Media Composer locks the audio output to the device.

The T-TAP supports resolutions from SD (525i NTSC and 625i PAL) all the way up to 2K, and frame rates of 23.98, 25, 29.97, 50 and 59.94.

I ran three real-world tests with the T-TAP, and had great success with all of them.

First…the out-of-date, end-of-line Final Cut Pro 7. After I installed the driver, I got a call from a client to make changes to a sizzle reel I had cut in FCP, so I opened the project and worked on it for two days. With this option, I was able to play audio out of my computer’s headphone jack directly into my mixer. The video offset was similar to what I used with the AJA Kona 3 and AJA IoXT, and the video output was very clean…similar to what I get from other I/O devices. And I got all the flexibility of output I have come to expect from this…now discontinued…software. It worked well.

Next I tested it with Adobe Premiere Pro CS6, on a family video project. Prior to this I hadn’t used an I/O device with CS6. I had tried CS5.5 with the AJA Kona 3, and it was less than solid: you had to use custom AJA settings, and I could see the Canvas (program monitor) output, but not the Viewer (preview). I had used CS6 to edit, but not to monitor externally. So when I launched it with the T-TAP attached, I was very pleasantly surprised to find that it worked, and worked VERY well. No longer did I need custom AJA settings…the base T-TAP driver and Adobe plugin were all I needed, and I got a solid signal from CS6. Viewer, Canvas…zero latency and no audio drift. No slowdown in performance. It simply worked, and worked well. And as with FCP 7, I could either monitor audio via the T-TAP, or route it through the direct out (headphone jack). It was perfect.

The final test was with Avid Symphony 6.5, and this was a full-on, frying-pan-to-fire test. I was hired to do a remote edit…travel to the location, cut footage as it was being shot, and turn around the edit in one day. The shoot was tapeless, shot with XDCAM EX cameras. The footage came in, I used AMA to get it into the system, edited on my 2012 MacBook Pro, and monitored externally via the T-TAP and the hotel’s HDTV. For the first part of the edit I didn’t use the device…I did everything on the laptop. That’s because Avid locks the audio output to the AJA T-TAP, meaning audio follows video, and I’d have had to monitor audio via the HDTV…a tad difficult as it was bolted to the dresser. Unlike FCP 7 and Adobe Premiere CS6, I couldn’t choose an alternate output for the audio. So I did the initial edit without the T-TAP, but when it came time to show the client my cut, I connected it to the TV and was able to play back (with zero latency and no frame offset) for the client at full quality. All while I was confined to a really small hotel table…my computer, hard drive and T-TAP barely fit, but nothing was really crammed in, and there was elbow room. The edit went smoothly.

Unfortunately I did not test this with FCP-X, as I do not have that on my system. However, I do know that it works with FCP-X, and the latest update of FCP-X and the T-TAP drivers make external viewing very solid.

Bottom line: the AJA T-TAP is amazingly simple, and it simply works. It’s great no-fuss, no-muss video output for the major editing systems. The simplicity, the price point, the small footprint and the flexibility of this little box make it a must-have in my book. It works with any Thunderbolt-equipped Mac and is perfect for low-cost, high-quality video output monitoring. AJA has a reputation, and a track record, for compatibility and stability…and that tradition is carried on with the AJA T-TAP.

(NOTE: The T-Tap review unit was returned to AJA after a 4 week test period).

Well, I’m done with A HAUNTING.  I sent off my last episode a couple weeks ago. The good news is that the ratings started out good, and only got better and better. So that means that another season is a strong possibility.  Although if it happened it might not be for a while…pre-production and writing and then production.  But now I’m getting ahead of myself.

If you want to see the episodes I edited, they can be found on YouTube: Dark Dreams and Nightmare in Bridgeport. My favorite episode that I cut has yet to air…it airs on Friday, December 7 on Destination America.

The show was fun to work on. I was cutting recreations that were more like full scenes with interviews interspersed throughout, instead of using them as b-roll over VO and interviews. It was more like cutting narrative, which I really enjoy. I had scripts that were short, so some cuts came in a minute shy…and then I had to struggle to find not only that extra minute, but another 4:30 for the international “snap ins.” I also had scripts that ran 20 pages over, and thus cuts that came in 20 minutes long. That presented its own issues: sure, I now had plenty of footage for snap ins, but with that much extra I was faced with cutting out really good scenes, often scenes that helped tie the whole story together.

We did use a lot of Boris Continuum Complete effects…and I relied a lot on the Paint Effect tricks I learned years ago. We had an editor who was an effects wiz, and he made some presets we could drop onto clips…that really helped. Tweaking those effects allowed me to familiarize myself with Boris a bit more.

On the technical side, I started the show cutting on Avid Symphony 6.0 on my MacPro octo 3.0GHz tower (with AJA Kona 3), but almost immediately began beta testing Avid Symphony 6.5 and switched to the new 2012 non-Retina MacBook Pro with the AJA IoXT…and with that came the ability to have more “voices” of audio, so I could layer more audio into my temp mix. The AAFs exported perfectly to Pro Tools. I also had to resort to working in my bedroom, as my home office is my garage, and it isn’t insulated…and we had two very hot months here in LA.

The only issue I had with Symphony 6.5 was a segmentation fault error when I tried exporting H.264 QuickTimes after working for a while in a project. It would export fine if I just opened the project and exported right away…but work for a while and then export, and I’d get that error. And during the entire time I used Symphony 6.5…including the two-month beta testing period…I only crashed twice. Pretty stable system. As opposed to the Avid 6.0.3 system I am editing with on my current gig: a shared storage setup running EditShare, on an iMac. That crashed 2-3 times a day…segmentation faults that would cause Avid to quit. Updating to 6.0.3.2 helped greatly…now I only crash once a week.

So yes, I’ve moved on to my next show, in an office with multiple editors and assistants, shared projects and shared storage. I’ll be working on Act 4 of show 103 one day, then Act 2 of show 105 the next, then re-arranging show 106 for the rest of the week. It’s a reality show, so I’m getting my feet wet in that field again.

Denise Juneau and the Montana Native American Vote

Last week I was enlisted to help edit a news package for Native American Public Telecommunications (NAPT) that would also end up on the MacNeil/Lehrer NewsHour. This was a rush job: it pertained to the 2012 election, and that was less than a week away, so we had to work quickly to get it done in time to air. Very typical for news…but something I hadn’t done before. It was a whirlwind edit.

First off…the story. Click the link above to watch the end result. Basically, it is about how important the Native American vote is to elections in Montana. While we did showcase one candidate (the first Native American to be voted into a statewide post), the main story had to be about the vote itself: if you make a piece about one candidate and air it, you have to provide equal air time to the opposing candidate. So we had to do this properly.

How did I get this job? Well, the producer is a Native American producer out of Idaho, and I have a lead into that community on several fronts. Mainly because I too am Native American (a first-generation Salish descendant, part of the Flathead Nation in northwestern Montana), but also because the camera operator runs Native Voices Public Television, and I was an intern there in college. And he is my stepfather…but that’s beside the point. I’m a decent shooter and a good editor (so I’m told), and they wanted my talent. So on Tuesday I flew from LA to Great Falls…a trip that took 11 hours, mainly due to layovers in Portland and Seattle.

I tried to pack light. I packed my 2012 MacBook Pro, AJA IoXT, mouse, assorted cabling, a 500GB portable hard drive and clothing into my backpack. In the camera bag I packed my Canon 7D, GoPro, headphones and various accessories, and then a Pelican case with a 2TB CalDigit VR. All perfectly sized for carry-on…nothing needed checking. The camera operator was bringing along a Sony HDCAM camera…tape based (one reason I was bringing my IoXT: to capture the tape)…as well as an audio kit with shotgun mic, wireless and wired lavs, a Lowel lighting kit and a Sachtler tripod. While he was slated to be the main camera guy, I brought along my 7D and GoPro to shoot extra material.

Now, while I flew into and stayed in Great Falls, we needed to go to Havre, Montana…120 miles away. So we were up early and headed out. I mounted the GoPro on the roof of the car to get driving scenics, and shot a bit out the window with the 7D as we drove. When we arrived, we needed to hit a few locations to get some interviews before the rally that evening. I’ve never worked in news, but I’ve seen enough reports to notice that they often have a wide shot of the reporter talking to someone before the interview, or a second camera shooting the interview…so I did the same, shooting a wide of the interviews to use as intros or cutaways. Between the interviews and the rally, we also got as much b-roll as possible: campaign signs, scenics, town shots, as well as the reporter/producer standup. I was glad I was there with the 7D, as pulling over to get a quick shot of a sign or a poster was really easy…a lot easier than pulling out the big HDCAM camera and sticks.

When we got to the rally I was relegated to audio duty: handed a boom mic, the wired lav and a small mixer, and charged with getting the audio and riding the levels.

The rally wrapped at 7PM and we needed to get back to the hotel. While we drove back, I offloaded the 7D and GoPro cards to my portable hard drive (loving the SD card slot in my laptop now), and then transcoded the footage into Avid Symphony. The vehicle we were in had a DC outlet, so I didn’t have to worry about power. I was very glad to have this “down time” to transcode the footage.
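
For the curious, the offload step itself is nothing fancy: copy everything off the card, and verify the copy before you trust it (or format anything). Here’s a rough sketch of that idea in Python…not the exact way I did it in the car, just an illustration, and the paths are made up:

```python
# Hypothetical sketch: offload a camera card to a portable drive and verify
# each file with a checksum before trusting the copy. All paths are made up.
import hashlib
import shutil
from pathlib import Path

CARD = Path("/Volumes/EOS_DIGITAL")               # e.g. the 7D or GoPro SD card
BACKUP = Path("/Volumes/Portable/MT_shoot/card_01")

def md5(path: Path) -> str:
    """Hash a file in 1MB chunks so large MOV/MP4 files don't eat RAM."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

BACKUP.mkdir(parents=True, exist_ok=True)
for src in sorted(CARD.rglob("*")):
    if not src.is_file():
        continue
    dst = BACKUP / src.relative_to(CARD)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)                        # copy with timestamps intact
    ok = md5(src) == md5(dst)
    print(f"{src.name}: {'verified' if ok else 'MISMATCH - recopy!'}")
```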

When we got back to the hotel, we ordered pizza and set up my remote edit station. I connected the camera to the IoXT via SDI, and that to my MacBook Pro via Thunderbolt. The CalDigit was connected via FireWire 800…fine for capturing and playing back DNxHD 145 (1080i 29.97). I was lucky enough to have an HDTV in the room, so I used that as the “client monitor,” connecting it to the IoXT via HDMI. We watched the tapes as we captured, and then the producer wrote the story (he had to write a print version, a radio version and a web/broadcast version). We already had the first part of the story: he had done it as a standup in the field. The rest of the story he recorded as temp narration with a Snowball mic and GarageBand. Then he and the camera guy went to bed…it had been a long, exhausting day. I edited a “radio cut,” just an audio stringout of the standup, narration and interview bites. That took about an hour for a 5:30 run time, and then I too hit the sack at 12:30. We agreed to meet at 6:30 AM to finish the rest of the cut.

At 6:30 we met in my room, drowned ourselves in coffee and continued to edit. After an hour we had the piece done, with a run time of 5:17. I did a quick audio pass to even things out and a very rudimentary color pass using the HDTV…then compressed a file and posted it for the clients (NAPT) to review and give notes. We hoped to have it delivered that day, but since the exec producer was traveling too, they didn’t get a chance to see it until later. So I packed everything up, backed up the media onto the external drive and the CalDigit VR (redundancy!) and headed to the airport for an 11:30 AM flight. I received notes while on the road, and when I landed (9:55) I got home, set up the drive on my main workstation, addressed the minimal notes, did a proper audio pass and color correction using my FSI broadcast monitor…and compressed it for YouTube per the client’s request. I had it uploaded to their FTP by 1AM, and it was online by 6AM…YouTube, the NAPT website and Facebook.
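
And for completeness, the upload itself is the least glamorous part of the night. Purely as an illustration (the host, login and file name below are all made up), that kind of FTP push can be scripted with nothing more than Python’s built-in ftplib:

```python
# Hypothetical sketch of the final delivery step: push the compressed
# deliverable to a client FTP. Host, credentials and paths are placeholders.
from ftplib import FTP
from pathlib import Path

DELIVERABLE = Path("/Volumes/Media/MT_vote_YouTube.mov")  # made-up file

with FTP("ftp.example.org") as ftp:          # placeholder host
    ftp.login(user="upload_user", passwd="not-a-real-password")
    ftp.cwd("/incoming")                     # placeholder remote folder
    with DELIVERABLE.open("rb") as f:
        ftp.storbinary(f"STOR {DELIVERABLE.name}", f)
    print("Upload complete:", DELIVERABLE.name)
```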

This certainly was a down-and-dirty edit, and I’m sure it took longer than most news stories do. I also know that being able to edit at least the tapeless formats natively would have sped things up, but I did have time to transcode as we drove back. Although, if we had shot entirely tapeless, I’m sure I could have had the rough cut done during the trip back. And I know that using another NLE, say Adobe Premiere, would have let me edit the formats natively and save on transcode time. But I needed solid tape capture, and Avid with the IoXT gave me that. Yes, I could have captured with the AJA tool as ProRes and brought that into Premiere (I say, anticipating your comments). I used Avid because that is what I’m used to, and it’s best to use what you know when you have a quick turnaround. One of these days I will learn that app better.

Sorry, it has been a LONG while since I posted anything about A HAUNTING.  I was going to get into the FINE cut stage of the process when I was on my first episode, but then I got buried in things like the fine cut for that episode, beginning the rough cut of episode 2, and prepping other episodes for audio mix and online.  This show took a lot of my time. One big reason: the show needed to be 43:30 for the domestic version and 48:00 for the international cut (an extra 4:30 of material called SNAP INS that we put at the end of the sequence, to be cut into the show by someone else later), and the schedule we had was built for cutting shows of that length.  However, some of the scripts ran a little long, and the rough cuts ended up being 62 minutes for my first episode and 68 minutes for my third.  That meant I needed extra time to cut that extra footage.  I average about 3-4 minutes a day (pushing 10-12 hours a day), so that is a few more days of work.  Which is fine…it gives us options to cut, and options for snap ins.

My second episode? Yeah, that one was a tad short: 42:50 for the rough.  So I had to extend scenes and draw out moments to make it to time, and over one long edit session my producer and I (she moved back to LA after production wrapped, so it was nice to have her in my cutting room…er…garage) figured out the extra four and a half minutes of program time for the international cut.

So now I want to talk about the FINE cut process.  This is what happens after the producer gives me notes…although if it’s my segment producer giving notes, the result might just be the second rough cut; when the EP (executive producer/showrunner) gives notes, THAT is the fine cut.  And that is what we send to the network.

The Fine Cut is one of my favorite parts of the editing process.  Because that is where I can go back and finesse the scenes, add moments, tweak the cut, do any special transitional effects that the networks love.  See, for me, the rough cut can be a chore.  I have to take this pile of footage and assemble it into something that makes sense.  Look for the best parts, put them together in some semblance of order.  Sure, I do try to finesse the scenes so they work, but I don’t spend a lot of time on this as I need to just get the cut out for the producer/director to see.  “Git ‘er done” as Larry the Cable Guy would say.

Then I get notes…and can start on the fine cut.  I can go back, look for better shots or angles (since they tend to ask, “isn’t there a better angle or take for this line?”)…mine the footage for something I might have missed. Spend time making the cut better.  Tweak the music, add more sound design to make it sound richer, or to sell the cut (or in this case, the scary moments) better. That’s the phase I just finished…on my third episode.  And it’s one of my favorite parts because I can go back and look at the other options…find great looks or moments to add to the cut to make it better. Where the rough cut has you hacking at the block of wood to get the general shape, the fine cut lets you go in with finer carving tools, add more detail and smooth out some edges (to use carving as a metaphor).

This is also the part of the post production phase where we settle on the VFX shots we will be using, and then I prep those for the VFX guy. We have had issues with a few VFX shots…the way they were set up made them difficult to pull off given the budget of the show. But most of those were dealt with by cutting the scenes differently to make them work better, to lighten the load on the lone VFX guy plugging away in his VFX cave. For this part, since we were working at full res, we’d export QuickTime movies of the footage, with handles when we could manage, along with reference QuickTimes of our often pathetic attempts at temping them (if only you saw how rough some of my VFX attempts are…yeah, not my forte).

And then we send this off to the network…and hopefully their notes won’t cause too much pain.

OH…and one note on the last episode I am working on. I have been using Avid Symphony 6.5 pretty much since the start of the series, as I’ve been beta testing it since June. It allows more “voices” of real-time audio…basically more tracks of audio.  I still get 16 tracks, but instead of them all being MONO, and needing to use two tracks for much of my audio like SFX and music, I can make them stereo tracks so they only take up one track on the timeline.  This gave me more options when I did the sound design…which, it turns out, is where I spend most of my time. Sure, I cut the picture, but a lot of the scare that happens, in my latest episode at least, is due to audio hits and cues: relying on what you hear more than what you see to sell the scare.  To me, that works a lot better than showing the ghost outright…flashing it and then hinting at what people saw tends to work better.  On the first two episodes I used mono tracks, but because I found myself very limited in what I could do, I tested using 7 mono tracks (1 for narration, 2 for interview, 4 for on-camera audio) and 9 stereo tracks (2 for music, 7 for SFX). I sent an AAF to the post mix house and they said it came into Pro Tools easily, so for the last show I had more audio tracks for sound design goodness.

All right, that does it for this episode of…A HAUNTING, the post process.