Little Frog in High Def

Adventures in Editing

Category: FCP

GoPro Hero cameras are everywhere lately. It seems like there isn’t a production I am working on that doesn’t utilize this camera in some way. They are mounted in cars to either see the driver and passengers, or aimed at the road. They are mounted on backhoes as they dig, mounted on drills as they burrow into the ground. They are mounted on people as they do crazy things. They get angles that you normally cannot get.

First, let me mention the three models currently available from GoPro:

Hero 3 White Edition can shoot video at 1080p30, 960p30 and 720p60, and 5MP photos at up to 3 frames per second. It can shoot timelapse from half second to 60 second intervals. It has built in WiFi, and can work with the GoPro WiFi remote or a free smartphone app.

Hero 3+ Silver Edition does all that, but shoots up to 1080p60 and 720p120, and shoots still photos at 10MP up to 10 frames per second.

Hero 3+ Black Edition does all that the Silver Edition does, but adds 1440p at 48fps and 960p100, as well as 720p100 and 720p120. It also shoots in ultra-high resolution, going to 2.7K at 30fps and even 4K at 15fps. And it has an option called SUPERVIEW, which enables ultra-wide angle perspectives. It can shoot 12MP stills at up to 30 frames per second. All cameras have built-in WiFi and work with the remote or smartphone app, and all perform much better in low light situations than their predecessors.

For this post, I was provided with a Hero 3+ Black Edition camera and a slew of accessories. What is really handy about the Hero 3+ is that it can shoot in a wide variety of ways that might suit various aspects of production. For example, the ultra-high frame rates it shoots make it great for smooth, conformed slow-motion shots. The ultra-HD frame sizes it shoots allow for repositioning shots in post to focus on the areas of interest. They all can be controlled wirelessly from an iPhone or Android device with a free app…and you can change the settings in those apps far more easily than with the in-camera menus.

OK, so the GoPro Hero 3 line proves to be very useful, enabling you to get all sorts of useful footage. But the point of this post is to showcase workflows for ingesting the footage into various edit applications so that you can take advantage of these advanced shooting modes.

AVID MEDIA COMPOSER

Let me start with Avid Media Composer, only because that is what I have been using the most lately. If you set up the camera to shoot in normal shooting modes, like 1080p30 (29.97), 1080p24 23.98 or 720p60, then importing is easy. Simply access the footage via AMA, and then transcode to DNxHD…either full resolutions like 145, 175 or 220…or an offline codec like DNxHD36, DV25 or 15:1 so you can cut in low resolution, and then relink to the original footage and transcode to a higher resolution when you go to online.

First, go FILE>AMA LINK and you’ll get the following interface. Select the clips you want to link to:

Once you have all your clips in a bin, go to the CLIP menu and choose CONSOLIDATE/TRANSCODE:

If you shot 720p60, so that you can use the footage either at normal speed or as smooth slow motion in a 29.97 or 23.98 project, then you need to first import the footage in a project that matches the shooting settings…720p60. Then copy the bin over to your main project and cut the footage into the sequence. You will note that the footage will appear with a green dot in the middle of it, indicating it is of a different frame rate than the project:

The footage will play at the frame rate of the project, or you can adjust it to smooth slow…take all of the frames shot and play them back at a different frame rate. First, open the SPEED CHANGE interface, and then click on the PROMOTE button:

That enables more controls, including the graph. When you open the graph, you’ll note that the playback speed is different. If you shot 60fps and are in a 29.97 project, then the percentage will be 200%. Change that number to 100% and now the clip will play back in smooth slow motion.
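
To make the numbers concrete, here is a small Python sketch of the arithmetic involved (an illustration only, not anything Avid exposes): the percentage the promoted Timewarp reports is just the ratio of the shooting rate to the project rate, and dropping it to 100% plays every captured frame, which is what makes the slow motion smooth.

```python
# Arithmetic behind the Timewarp percentage (illustration only, not an Avid API).

def conform_percentage(source_fps: float, project_fps: float) -> float:
    """Speed needed to play every source frame in real time in the project."""
    return source_fps / project_fps * 100.0

def slowmo_factor(source_fps: float, project_fps: float) -> float:
    """How many times slower the action plays at a Timewarp speed of 100%."""
    return source_fps / project_fps

if __name__ == "__main__":
    for src, proj in [(59.94, 29.97), (59.94, 23.976), (119.88, 23.976)]:
        print(f"{src:7.2f} fps in a {proj:.3f} project: "
              f"real time = {conform_percentage(src, proj):.0f}%, "
              f"100% plays {slowmo_factor(src, proj):.1f}x slower")
```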

If you shot at a higher frame rate and want it to be slow motion…say 720p 120fps, then you’ll have to use the GoPro Studio app to convert that footage. The cool thing about that application is that it’ll conform the frame rate, and convert the frame size to suit your needs. I’ll get to that later.

NOTE: You can edit the footage native via AMA. When you bring it into the main project and drop it into the timeline, it’ll be 60fps or 120fps (note the image above of the timeline and green dots…those are AMA clips, which is why one shows 119.8fps). So when you promote to Timewarp and adjust the percentage, it will play in slow motion. But know that editing MP4 native in Avid MC is anything but snappy. It will make your system sluggish, because there are some formats that Avid MC doesn’t edit natively as smoothly as it does Avid media.

One trick you can do is to AMA the GoPro footage, cut it into the sequence, promote to Timewarp and adjust the playback speed…and then do a Video Mixdown of that. Then you’ll have a new clip of only the portion you want, slowed down. The main issue with this trick is that any and all reference to the master footage is gone. If you are doing an offline/online workflow this might not be the best idea. It’s a simple trick/workaround.

Now let’s say you shot a higher frame size, such as 2.7K or 4K, and you want to reframe inside Media Composer. First thing you do is use AMA to access the footage. But DO NOT TRANSCODE IT. Once you transcode, the footage will revert to the project frame size…1920×1080 or 1280×720. Avid MC does not have settings for 2.7K or 4K. I’ll get to the workaround for that in a second.

Once you add the clip to the timeline, you’ll notice it has a BLUE DOT in the middle of the clip. Similar to the GREEN dot, except where green indicates a frame rate difference, blue indicates frame size difference. If you then open the EFFECT MODE on that clip, FRAME FLEX will come into play.

You can then use the Frame Flex interface to reposition and resize the shot to suit your needs. If you shot a nice wide shot to make sure you captured the action, Frame Flex will allow you to zoom into that action without quality loss, unlike zooming into regular 1080 footage with the RESIZE or 3D WARP effects.
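
If you are wondering how much reframing room you actually have, the sketch below works it out from the source and timeline rasters. The GoPro sizes are the published modes for the Hero 3+ Black; treat them as assumptions if you shot something different.

```python
# How far you can punch in with Frame Flex before the crop is smaller than the
# timeline raster and has to be scaled up (which is where softness creeps in).
# Source sizes are GoPro's published modes; adjust for what you actually shot.

SOURCES = {"4K (UHD)": (3840, 2160), "2.7K": (2704, 1520)}
TIMELINES = {"1080p": (1920, 1080), "720p": (1280, 720)}

def max_punch_in(source, timeline):
    """Largest zoom that still maps at least one source pixel per timeline pixel."""
    (sw, sh), (tw, th) = source, timeline
    return min(sw / tw, sh / th)

for src_name, src in SOURCES.items():
    for tl_name, tl in TIMELINES.items():
        print(f"{src_name} in a {tl_name} timeline: "
              f"up to {max_punch_in(src, tl):.2f}x punch-in")
```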

One drawback is that you cannot rotate the area of interest. The other is that you cannot convert the footage to an Avid native format…something I mentioned earlier. So you can either work with the 4K MP4 footage native…which might prove difficult, as Media Composer doesn’t handle native MP4 footage smoothly, much less at 4K…or do your reposition and then do a VIDEO MIXDOWN. The mixdown will “bake in” the effect, but at least the footage will now be Avid media:

ADOBE PREMIERE PRO CC

The workflow for Premiere Pro CC is by far the easiest, because Premiere Pro will work with the footage natively. There’s no converting when you bring the footage in. Simply use the MEDIA BROWSER to navigate to your footage and then drag it into the project.


(the above picture has my card on the Desktop. This is only an example picture. I do not recommend working from media stored on your main computer hard drive.)

But I highly recommend not working with the camera masters. Copy the card structure, or even just the MP4 files themselves, to your media drive. Leave the camera masters on a separate drive or other backup medium.

So all you need to do is browse to the folder containing the media and drag it into the project, or drag the individual files into your project. Bam, done.

CHANGE IN FRAME SIZE

Ok, let’s say you shot 720p60…but you want to use your footage in a 1080p project. When you add the clip to the timeline, you’ll see that it is smaller:

That’s an easy fix. Simply right-click on the clip, and in the menu that appears select SCALE TO FRAME SIZE:

But what if you want this 720p 120fps footage you shot to play in slow motion? Well, that’s very easy too. Right-click on the clip in the Project, and in the menu select MODIFY>INTERPRET FOOTAGE:

Then in the interface that appears, type in the frame rate you want it to play back as. In this example, I chose 23.98.

Done…now the clip will play back slow…even if you already have it in the timeline.
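
Under the hood this is just a relabeling: the same frames get a new playback rate, so the clip’s duration stretches by the ratio of the two rates. A quick Python sanity check of the numbers (illustration only, not a Premiere API):

```python
# Interpret Footage keeps every frame and just relabels the playback rate,
# so duration stretches by shot_fps / new_fps (illustration only).

def reinterpreted_duration(duration_s: float, shot_fps: float, new_fps: float) -> float:
    frames = duration_s * shot_fps          # frames actually recorded
    return frames / new_fps                 # how long they take at the new rate

print(f"{reinterpreted_duration(10.0, 119.88, 23.976):.0f} s")  # 10 s of 120p plays for 50 s
```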

FINAL CUT PRO X

Importing is really easy; File > Import > Media. You can either work natively, or choose the OPTIMIZE MEDIA option. Optimize media will transcode the footage to ProRes 422.

You get a nice box to import with an image viewer.

Now, as I said before, you can work with the footage natively, but I’ve found that GoPro footage, because it’s H.264, likes to be optimized. I haven’t worked with native GoPro extensively in FCPX, so I cannot attest to how well it works compared to how it does in Premiere Pro CC. Premiere has the advantage of the Mercury Engine and CUDA acceleration with the right graphics cards.

OK, so to transcode all you need to do is right click and choose TRANSCODE MEDIA:

Get these options:

You can create ProRes master media and proxy media at the same time if you wish. Or just full-res optimized media (ProRes 422), or just proxy media (ProRes Proxy) that you can either relink back to the camera masters when you are done editing, or transcode to full-res optimized media once you have locked picture. When you create the optimized or proxy media, the frame rate of the footage is retained.

When it comes to speed changes, unlike FCP 7 and earlier, which required you to use CINEMA TOOLS, you conform the GoPro footage right inside FCPX. As long as you set the timeline to the desired editing frame rate, 23.98 for example, you can conform any off-frame-rate clip to it by selecting it and choosing Automatic Speed from the retime menu.

OK, let’s say you shot 4K but want to use it in a 1080 or 720 project. FCPX has what is called Spatial Conform. When it is set to NONE, clips go into a timeline at their native resolution. For example, a 4K clip will sit at 100% scale, but it will be zoomed in. All you need to do is scale it back to something like 35% to see the entire 4K image.
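
The exact number depends on the timeline raster; here is a quick Python check of the fit-to-frame scale (assuming the 3840-wide UHD mode; adjust if you shot 4K Cin at 4096):

```python
# With Spatial Conform set to None the clip sits at 100% scale in source
# pixels, so fitting the whole frame means scaling by timeline/source width.

def fit_scale(source_width: int, timeline_width: int) -> float:
    return timeline_width / source_width * 100.0

print(f"4K into 1080p: scale to {fit_scale(3840, 1920):.0f}%")   # 50%
print(f"4K into 720p:  scale to {fit_scale(3840, 1280):.0f}%")   # ~33%
```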

GoPro STUDIO

All right, let’s take a look at the tool that GoPro provides free of charge…GOPRO STUDIO. I use this application quite a bit, not only to pull selects (only portions of clips), but also to convert the footage into an easier-to-edit codec. H.264 works OK in Premiere, better if you have CUDA acceleration. But my laptop doesn’t enable that, so I choose to use the CINEFORM codec that GoPro Studio transcodes to. I also use it to convert higher frame rates for use in Avid Media Composer…like I mentioned earlier. If I have a 120fps clip, I cannot bring that directly into Avid and transcode it at that same frame rate. So I will convert it here first, to match the frame rate of the project…then AMA link and transcode.

Importing is easy. In the main window, on the left side, simply click on the “+” button, which allows you to import clips. Grab as many clips as you want. When you click on a clip to select it, it opens in the center interface, where you can mark IN and OUT points…if you only want portions of the clip:

To adjust the speed of the clip, click on the ADVANCED SETTINGS button. You’ll be presented with the following interface:

In here is where you change the speed to what you want. Simply click on the frame rate drop down menu and choose the one you want:

You can also remove the fisheye distortion from the footage if you want.

If the speed change is all you need, then click on ADD TO CONVERSION LIST and be done with it. But since the 120fps frame rate is only available at 720p, and most of my projects are 1080, you can also upconvert the size to 1080 in GoPro Studio. And the conversion is pretty good. For that, go into the Advanced Settings again, and in the Frame Size drop-down menu, choose the frame size you want:

If you want to convert 720p 120fps to 1080p 23.98, then the settings would look like this…I also removed FishEye:
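
As a sanity check on what those settings imply, that single pass is a 1.5x spatial upscale plus a 5x slow-motion conform, per this bit of arithmetic (illustration only):

```python
# What converting GoPro 720p120 to 1080p23.98 in one pass works out to.

shot_w, shot_h, shot_fps = 1280, 720, 119.88
out_w, out_h, out_fps = 1920, 1080, 23.976

print(f"spatial upscale: {out_w / shot_w:.2f}x ({shot_w}x{shot_h} -> {out_w}x{out_h})")
print(f"slow-motion factor: {shot_fps / out_fps:.1f}x")
```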

So there you have it. Some of these workflows are just the basics, others go into more detail. But I’m sure there are lots more tips and tricks out there that some of the more “power users” of the edit systems employ. My hope is that these tips will enable you to use your GoPro Hero cameras to their fullest.

(Thanks to Scott Simmons (@editblog on Twitter) of the EditBlog on PVC, for helping me with the FCPX workflow)

A GoPro Hero 3+ Black Edition was provided to enable me to test various aspects of the workflows. GoPro was kind enough to let me keep the unit, enabling me to shoot family activities in super slow motion, or in “Ultra HD” resolutions. It was used to shoot a sledding outing, including a couple crashes…and a couple cat videos. They weren’t interesting enough to post…my cats are boring.

I started this blog in 2005, when I made my leap from editing on an Avid in standard definition…to editing with FCP in high definition…thus the name Little Frog in High Def (Little Frog being my Indian name from my youth). This blog was me talking about my foray into the world of HD specifically using Final Cut Pro…for broadcast TV shows. A diary of my successes and my failures…lessons I wanted to share so that people could learn from my…well, successes and failures.

So now, with the EOL of FCP and me moving back to using Avid Media Composer…and Adobe Premiere…I’d like to list off my 10 favorite things about FCP that I will miss. My favorite features that made me love the application. Don’t get me wrong, I’ll still use it for a while; my current job, as a matter of fact, will most likely use it for a few more years. Companies out here tend to do that…use what they have because it works…until it no longer works (in some cases, even when it doesn’t work). I post these in hopes that the other NLE makers will see them and go “yeah, that’s a cool feature” and try to incorporate it into their future releases.

Here are the top 10 favorite features that I’ll miss in FCP…in no particular order:

1. Resolution independence. I like that I can add HD to an SD sequence, and it works fine. 720p in a 1080p sequence looks fine too. And I can take 1080, put it into a 720p sequence, and scale and reposition to show what I want to show. Adobe has this too…Avid does not. If I put a 1080p clip into a 720p project…it becomes 720p.

2. Audio mixing on the timeline, and with keystrokes. My favorite ability is to lasso audio and press the CONTROL key with the bracket and + – keys to increase and decrease audio by a few dB. Control-bracket adjusts by 3dB in either direction, – and + by 1dB. This allows for very quick and very precise audio mixing. And if I didn’t do that, just toggling Clip Overlays brings up the level lines and I can drag up or down, or quickly add keyframes for more controlled audio dips. Yeah, Avid does this too, but it isn’t as elegant. And Avid doesn’t do keyboard audio mixing. Nor does Adobe.

3. Speaking of audio…I like having more than 16 tracks of realtime audio. Most times I don’t have more than 8-14 channels of audio, but it isn’t all that rare for me to have between 24 and 48 channels of audio. I have been in that boat many a time, especially when dealing with 6 people on individual mics, the need to add b-roll audio, extensive sound design for SFX, and smoother music editing.  And yes, as a picture editor I am responsible for a lot of the pre-mix. Many clients/network execs can’t watch a cut with temp sounding audio…so it needs to sound finished.  And be very in depth.  Avid stops at 24 total tracks of audio…only 16 audible at a time.  PPro is better…it allows, well, at least 48. Although the audio mixer is track based, not clip based, and mixing audio on the timeline is lacking…more difficult than it should be.

4. The ability to work with picture files at full size on the timeline without plugins. Being able to add picture files in their full size (well, they have to be under 4000 pixels or FCP gives the über-helpful “general error”) onto my timeline and do small moves, or temp moves, on them and have them remain sharp is handy. Avid imports stills as media, unless you use the Avid Pan & Zoom plugin, which allows for manipulation but isn’t as easy as direct picture access. Adobe works like FCP in this respect…so that is good.

5. Clip enable/disable. With the click of Control-B, I could turn off clips in the timeline that I had highlighted…rendering them invisible and silent. This was a quick and easy way to see clips under clips, without turning off track visibility and un-rendering EVERYTHING. It enabled me to turn off only portions of my timeline. To be fair, Avid doesn’t need this, as you can monitor separate video tracks and go under clips without losing one render. Disabling audio files quickly, so that I can only hear the music, though…that is something Avid doesn’t do. Yeah, I could click-click-click to turn off tracks. But it is so easy to lasso/disable in two quick strokes. And I could use it to turn off clips surrounding others for easier soloing of audio elements.

6. Simple compositing on the timeline. FCP is a far better compositor than Avid…for an NLE. Adobe is good too, but the simplicity and ease with which I can composite shots in FCP dwarfs what I can do in the Avid. And I can blend elements better, and add filters to single clips only, rather than to a clip and everything below it. Composite modes are right there on the timeline for many cool effects (not all broadcast safe, so beware). Building a composite shot, or a funky transition, is easy in FCP…a tad more involved and difficult with Avid. As I said earlier, Adobe Premiere Pro does this well too.

7. The wide variety of plugins. Let’s face it, there are simply a LOT of plugins available for FCP. Enough free ones to keep you occupied and happy…and dozens more cheap ones. A few spendy ones. But really, A LOT of plugins. Did I use them all? No, I have favorites, and I don’t rely on them a lot. But when I need them, I know that I have a wide variety that I can choose from to get the look I want. Avid has darn few, and of those few, they are EXPENSIVE. The only free ones are the ones built in. There are no great free fan-made plugins for Avid. FCP had lots of people doing this for free…for fun. FCP has a great and vast plugin community.

8. Organization of materials. This is big…so big that I had a tutorial DVD that covered all aspects of this topic. I am big on organization.  But the strength of this, the beauty of it, was also a curse. If you are new to FCP, or don’t know how it dealt with assets or  just weren’t paying attention, you could hose your project in a big way, or make life difficult down the road.  So it’s a gift, and a curse…to quote Monk.  FCP allowed for organizing footage in the project, and outside of the project, on the desktop level. It kept all tape imports and tapeless imports separated by project. And renders as well.  All captured/imported media was imported into the Capture Scratch folder, into project subfolders.  This made it really easy to find only the assets used by certain projects. I liked to make one folder per project, point FCP to that project for captures and renders, and make folders for audio assets, stills, graphics…everything.  So that all assets for one project were in one location. Easy to backup, easy to transfer…easy to delete.  The danger of the way FCP did things is that if you just grabbed a picture file, or audio file from your desktop and put it into the FCP project, the original file REMAINED on the desktop. So when you transferred the media to a drive for mobile editing, or to hand off, you might forget those odd stray files. So you really had to pay attention and be organized on the desktop level, and in the application. But this was a REALLY powerful way of doing things.

Adobe does this too…so that point is moot.

Avid doesn’t. Avid puts ALL imported assets, regardless of project, into one location. Or, if you need to use multiple drives, into single folder locations on multiple drives. And the media isn’t accessible at the desktop level; all organization needs to be done inside Media Composer itself. I find this limiting. But it is just one way that Avid keeps track of everything, and VERY well. There are power-user things you can do, like change the MXF folder names so that you keep multiple folders, separated out by project. But you should only do this if you know what you are doing, and know how Avid does things.

9. Exporting a Quicktime file with multiple channels of discrete audio. Before MC 6, this was something ONLY Final Cut Pro did. In fact, when I asked someone how to do this from Avid as DNxHD, they responded “it can’t. And that is the reason we have one FCP station, so that we can do just that.” But now, with MC 6, I can do that too. It isn’t as smooth as it is in FCP, but it is close, and will only improve. Adobe PPro cannot do that…it has Mono, Stereo, and Dolby 5.1 options only. We’ll have to see if CS6 adds this ability.

10. The ability to import only portions of tapeless media via Log and Transfer. In Final Cut Pro you can import only portions of clips if you want. Have a 1-hour clip of nothing, then 2 min of something happening? Import only that. Premiere Pro, being native only, does not do this. All or nothing. With Avid, you have to do a few tricks…extra steps. Access via AMA, put your selects onto a timeline, and then transcode. I guess that isn’t too bad, but it’s not as slick as Log and Transfer. And again, Premiere Pro doesn’t do this.

OK…eleven things.  I will also miss the ability to open multiple projects…and especially multiple sequences.

Avid and PPro have improved, and might now include something I used to only be able to do in FCP.  Either that or I simply only have 9 things.  Either way, I’m keeping the title the same…sounds better to say “my top 10 list” rather than “my top 9 list.”  Monk knows what I’m talking about.

Please feel free to add your favorite features you will miss in the comments section. Doesn’t need to be 10, but I am interested in what tricks other people do in FCP, that aren’t doable in other apps.

To start, I’d like to point out two excellent articles about FCP-X and the future of post: a blog post by Oliver Peters, and a Creative Cow article by Walter Biscardi. Both very good and in-depth.

OK, now on to me.

I finally downloaded the FCP-X trial and explored the application for a full day. Prior to this, I had used it briefly for two hours. But now, after spending all day trying to make something with it, I discovered that I disliked just about everything about it. Every minute I spent using it made it worse, because it was backwards from the way I like to work. But I guess that is how it is designed…to be unlike any other NLE, and to do things very differently. But is the different way better? Not for me. Am I too tied to TRACKS? Maybe. Too tied to two monitors when working? Maybe. Did I dislike that I needed the Skimmer on to view footage in my EVENT, which meant the Skimmer would be on in the timeline too, so that every time I moved the mouse I’d be hit with a barrage of hyper-fast audio? Definitely.

I had a list of all the issues I had with FCP X, and I was going to gripe about every one, but then my blog went down for five days, giving me time to think about things, and I’m not going to post another rant. I am only going to say that I will not be using FCP X in the foreseeable future. Why? Well, it doesn’t solve any post issues that I currently need solving, and the whole reason I moved to Final Cut Pro in the first place was that it solved a big post issue I was facing.

You see, I started this blog many many years ago, April 2005 to be exact, when I made my leap from Avid Media Composer to Final Cut Pro. I had been using FCP for a couple years before that (starting with FCP 3) on smaller side projects, like actor demo reels, a handful of short films, a couple of corporate videos. But I didn’t think it was quite right for me to use on broadcast work…even though FCP 3 was able to do this with the RT Mac and Cinewave hardware cards. It wasn’t until FCP 4.5 came out with its native DVCPRO HD workflow that it caught my attention.

See, I was working at the time on a National Geographic series that was shot with the Varicam to tape at 23.98…720p 23.98.  But the Avid Meridians that we were using couldn’t deal with that format…they were SD only…so we dubbed all the footage to DV tape and offlined that way.  And then when the time came to online, we were faced with a big issue…The Avid Adrenaline that we were onlining with didn’t do 720p…only 1080.  So we needed to upconvert everything, and deal with the 29.97 to 23.98 frame rate difference, and that was complicated, and costly.  We went over budget by just over a hundred grand for 9 episodes.  Not good.

Shortly thereafter I was asked to edit a History Channel series on the Mexican American War…and the producer wanted to shoot with the Varicam.  I was hesitant, given my recent experience.  And while Avid did release an update shortly AFTER our online to allow for 720p onlines…I had just been to a LAFCPUG meeting where I saw FCP 4.5 demo’d showing how it could capture DVCPRO HD from tape natively.  No offline/online…it was online from the start.  And it was 720p…23.98.  Final Cut Pro offered a solution to a post production issue I needed solving.  So I leapt on it.  Then we were going to also try to shoot with the new Panasonic P2 cameras as b-roll…and FCP was the only NLE to actually work with that format as well…so it was a no brainer.  (If you want to see my experiences with that, dig into the archives).

So…with the release of FCP-X and how Apple seems to have changed the way it feels editors should work…it doesn’t offer any solutions to any post workflow needs I have.  In fact, it actually lacks many features I need for the type of work I do. Other than being able to string pictures together to tell a story, and make the audio sound decent and picture look OK…it is missing everything I need to master for broadcast.  You know the list…no OMF for audio mixing, no output to a broadcast monitor for color grading, no ability to export to Color or Resolve for grading, no way to export multi-channel audio that I need (oh, wait, with the update I now have ROLES…), and on and on.

So, instead of trying to make it work…or wait for it to eventually work…I will be looking at the alternatives.  Going back to Avid Media Composer…and exploring Adobe Premiere Pro…both of which are making advances yet retaining the basic editing methodology that editors rely on to edit quickly, and concentrate on the creative and not the technical. They solve the post issues that I am currently faced with.

OK, while I am transitioning some projects to Avid Media Composer (which I know fairly well, having used it for 10 years before I switched to FCP), I am also transitioning to Adobe Premiere Pro CS5.5. Because in playing with it, I find it VERY similar to Final Cut Pro in many ways. How I manipulate media on the timeline is very FCP-like, and how I composite graphics and layer footage, add titles, and a whole list of other things is more natural to me. That’s a big bonus because that’s how my brain works. I have always been more comfortable (and faster) with basic editing in FCP as opposed to Avid Media Composer.

Anyway…while there are many many similarities, there are enough differences to make one get a little frustrated with PPro, and swear at it.  Those differences will just take a little getting used to…you always have to learn how the other NLE does things, as it isn’t EXACTLY like FCP does things.  So it will take a little time.

What are these differences?  Well, Walter Biscardi seems to be leading the charge (from FCP to Premiere) and has started a list of things that are slightly different, in his quest to change NLEs.  And he has been kind enough to provide not only pointers about these differences, but has done so in video form.

Here’s part one of his “gotchas” video series.

(If you want to see other videos he has done, such as how to configure PPro to work with an AJA capture card, go here)

OK, I can finally come up for air.  August was a VERY busy month for me.  On top of my regular day job, I took on two side jobs (after hours work).  One was really easy…online a 23 min reality show.  That was straightforward and I was able to do it in two four-hour nights.

But the other one…well, that was a doozie. It’s the one I blogged about last time…the one that required the DNxHD Quicktime with 12 channels of embedded audio. That wasn’t the only tricky part. The show, which was edited at 23.98, needed to be delivered at 29.97. This was easy, actually. Because of MIX AND MATCH (available since Avid MC 4) I could easily convert the timeline and have it look right. We were given the uprezzed project (they edited low res, DNxHD36 from XDCAM EX, uprezzed to DNxHD175) at 23.98. Then I would open that project and remove all the matte graphics…all the lower thirds and other keyable graphics they had in the project. Because while the media will convert to 29.97, the mattes won’t.

I put the prepped sequence in a new bin.  Then created a 1080i59.94 (29.97) project, and dragged the bin into it.  I then opened the bin, and double clicked on the sequence.  I was prompted with a message stating “This sequence is of a different frame rate than the project.  Would you like to convert it to 29.97?” Why yes, I would!  So it did.  When we tried this with the mattes in the sequence, it said “whoa whoa whoa…I can’t do that.  You have matte graphics in here!”  (I’m paraphrasing)

Now, when I did this, the timecode was way off.  I mean, the original sequence was 48:00:00, but the converted sequence was over an hour long.  In looking at the sequence, stepping through frame by frame, I noted that several timecode numbers were missing.  At first every 5th number, so I was missing 5, 10, 15, 20, 25 and 00.  But then later, I was missing 1, 6, 12, 18, 24.  Really odd.  To correct this I loaded the sequence into the Source monitor, made a new sequence and just cut in the old sequence into the new one.  That fixed things.  We were back to picture ending at 1;48;00;00.

But what caused that? Well, it appears to be because they cut with a 23.98 Drop Frame timeline. Wait…what? 23.98 DROP FRAME timeline? But 23.98 is a non-drop frame only format…right? Well, yes. On tape, and with QT files, 23.98 is non-drop only. But apparently Avid MC 5 (not sure about earlier versions) allows you to have 23.98 drop frame sequences. I’m guessing they do this to allow you to cut to a proper drop frame time for delivery. Clever. But it does have that small hiccup of an issue. I figured out that workaround though…
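
For anyone who hasn’t had to think about drop frame in a while, here is a small Python sketch of standard 29.97 drop-frame numbering (nothing Avid-specific): no frames are discarded, the labels ;00 and ;01 are simply skipped at the top of every minute except minutes divisible by ten, which is why a drop-frame flag changes the numbers you see without changing the media.

```python
# Standard 29.97 drop-frame timecode from a frame count (illustration only).
# Labels ;00 and ;01 are skipped each minute, except every 10th minute.

def df_timecode(frame: int) -> str:
    frames_per_min = 30 * 60 - 2                  # 1798 labels used per dropped minute
    frames_per_10min = frames_per_min * 10 + 2    # 17982 real frames per 10 minutes
    d, m = divmod(frame, frames_per_10min)
    if m > 2:
        frame += 18 * d + 2 * ((m - 2) // frames_per_min)
    else:
        frame += 18 * d
    ff = frame % 30
    ss = (frame // 30) % 60
    mm = (frame // 1800) % 60
    hh = frame // 108000
    return f"{hh:02d};{mm:02d};{ss:02d};{ff:02d}"

print(df_timecode(1799))   # 00;00;59;29
print(df_timecode(1800))   # 00;01;00;02  (labels ;00 and ;01 skipped)
print(df_timecode(17982))  # 00;10;00;00  (no skip at the 10-minute mark)
```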

OK, the frame rate conversion was done, and it was delivered in high resolution; all I needed to do now was color correct it. Yes, I could do it in Avid MC, but I didn’t have that much time, I am a tad rusty with color correcting in Avid MC, and I really like the control I have with Color…so…I thought I’d go with Color. But I cannot SEND TO Color like I can with FCP. So, what did I do?

Simple.  I exported a Quicktime file from Avid MC 5.5…encoded as ProRes 422 (Because Color doesn’t work with DNxHD).  At first I tried exporting as DNxHD and then converting to ProRes with Compressor, but when I did that, I got the famous gamma shift.  But I found if I exported directly to ProRes (something that requires FCP be installed on the system) I didn’t get that gamma shift.  So I exported the QT file, and then I exported an EDL.  What made that easy is that all the video was on one layer.  Well, after I prepped the cut and moved things to one layer.  And then I launched Color and imported the EDL into Color. When you choose the option to use it as a “cut list,” Color then knows that there’s a media file that this references. So it asks for the path to the QT export.  So I selected the EDL, the path to the QT file, chose settings for 1920×1080 29.97, and clicked OK.

And Color imported the media, all chopped up…perfectly.  And yes, where there was a dissolve, Color added dissolves.

I color corrected and then…hmmm, now what?  Rendering.  The options I have are to render as QT…ProRes, AJA 10-bit codec, or Uncompressed, or as an Image Sequence.  I could do 10-bit, but that requires a LOT of space.  And I did still need to do a playback with the client, and have them give changes, and I wanted to do the changes in real time…so I opted to render as ProRes HQ, and playback in FCP.

Yes, this is adding a layer or two of compression. DNxHD175 to ProRes 422 HQ, rendered again as ProRes HQ. And then exported out as a self-contained file (when all the color notes are done), and then using AMA to bring that into Avid MC and transcode to DNxHD220 (the delivery requirement, and because we would be adding titles in MC), again being able to avoid the gamma shift (the AMA clip and transcoded material matched exactly). That’s three conversions (DNxHD to ProRes HQ, render to ProRes HQ, transcode to DNxHD)…but that is fine. DNxHD and ProRes are very good compressed formats and hold up well after many conversions. And this is no worse than what I would be doing if we, say, output to tape, color corrected on a DaVinci, then output to another tape, and captured that tape in Avid again for titling. It might be one more generation than I’d get with Resolve (as it reads the Avid media, and renders back out Avid media)…but it did hold up VERY well.

Slightly tricky? Yes…but it worked. FCP was used in this case only as a means to get the footage from Color to Avid (export a self-contained QT file)…and as a means of playback. Well, that’s not entirely true. I did do the blurring required in FCP, with Andy’s Region Blur, because it is far better than the blurring the client was able to do (more subtle). But other than that, it was just an in-between step. So it looks like I can get a bit more mileage out of COLOR while being able to move to Avid Media Composer. And I was able to convert 23.98 to 29.97 inside the Avid with very good results. Something I couldn’t do inside FCP…and if I used Compressor, it would end up taking quite a while to render.

So Apple came out with this shiny new operating system with a really cool name…LION. And you are thinking to yourself, “Hey, I’d like to install that new OS on my computer.” OK, I can dig that. But there are a few things you should do FIRST, before you install. Especially if you use this computer to edit with Final Cut Pro…and depend on that machine to earn your keep. Because if you install LION, and things don’t continue to work as well as they used to, then that will cost you in lost time that can lead to lost money.

Step #1 – RESEARCH!!!
You need to look into whether or not your current applications will even work under the new OS. You might be shocked to find that many of them won’t. For example, Final Cut Studio 2 will work…but the INSTALLER will not. Because the INSTALLER isn’t Intel native…it requires Rosetta to work, and LION doesn’t have Rosetta. For this reason, MANY applications that rely on it won’t work. Adobe CS2 will not work on Lion. So do your research to find out if the applications you rely on will work on the LION OS. Make sure that the hardware you rely on for video input and output (capture cards) has drivers for LION.

Step #2 – CLONE YOUR WORKING SYSTEM!
Clone your current working OS drive. Get a cheapish firewire drive…something that the system can boot from…and use Carbon Copy Cloner (bombich.com/download.html) to completely copy the current working system drive to another drive. This way you have a copy of your working setup in case LION doesn’t work out. If things don’t work, just boot from that drive and erase your main drive and clone it back. You will lose a day, tops. And this keeps you from needing to reformat the drive, install the OS fresh, and all the applications fresh, then bring back all of your files, set up all the applications properly again…stuff that can take days.

You can back up any files you want manually as well…but the clone will have everything in case you forgot something.

Step #3 – DO A CLEAN INSTALL
Boot from the LION drive and then ERASE your system. Wipe it clean. Then install the Lion OS fresh. This ensures that you are getting the best possible OS install. Installing on top of the existing OS might work…it does for some…or it might not. Some people report issues, others do not. But doing it completely fresh ensures that you have the best possible install. After the install, check for any possible updates with SOFTWARE UPDATE in the System Preferences.

Step #4 – INSTALL ALL OF YOUR APPLICATIONS FROM THEIR INSTALLERS
DO NOT use Migration Assistant for applications. Install them fresh from their install disks or installer files. Because many of them, like FCP, install bits and pieces of the application throughout the OS, and Migration Assistant might miss those files. Correction, WILL miss those files. So if you want to have a good working application, install from the installers. You can migrate your files if you want…or manually drag them from the clone drive…but not the applications.

Then run the updates for the applications (if they are Apple apps, use the SOFTWARE UPDATE in the System Preferences) to update them fully. And install any drivers, firmware, other bits for other things on your system. Like Capture card software (make sure you get the latest versions of the drivers) and plugins and graphics cards drivers.

If I forgot anything, please feel free to comment and add that to the list.

Personally, if I have a good working system, I do not update, because my system is working, and I might not need any of the new things the new OS offers. I only upgraded to Snow Leopard this past March, because the new version of one application I relied on only ran on Snow Leopard.

The thirty-fifth episode of THE EDIT BAY is now available for download. A blacksmith bucks the system by making a new hammer and changing the way blacksmithing is done. So does Apple.

To play in your browser or download direct, click here.

To subscribe to this podcast in iTunes, CLICK HERE.

This all started when I sat down to give ScopeBox a spin. ScopeBox, as you can see via the link, is a way to get external scopes running on your Mac. Feed it a signal via firewire from a camera, or via a signal into a capture card connected to the computer. When I was testing it out (and I am still in the middle of testing it, so no final conclusions at this point in time), I noticed that the video levels it indicated were different from what I saw in FCP or Color. BUT, I should note that the scopes in FCP and Color didn’t show the same thing either. But this I knew. I have grown accustomed to trusting the Color scopes more than FCP’s…even though I know I am not supposed to trust either one, because software scopes are no match for hardware ones. But I felt that the ones in Color more closely represented what I saw when I did have hardware scopes on a system once.

OK, I am doing testing, looking at the comparisons when…I noticed something. I am parked on the same frame of video in FCP and Color. Kind of bright, so peaking a little over 100 IRE, blacks a little high too, muddied. BUT, when I switched back and forth from FCP to Color…the signal I got from them to ScopeBox was DIFFERENT! The image from FCP was a little hotter…brighter. Just by a couple points, but noticeable when I switched back and forth and looked at the scopes. The signal coming from COLOR was different than that coming from FCP…even though I had the same hardware involved.

By the way, the hardware involved is my MacPro Octo 3.0GHz Jan 2008 machine, outputting from my Matrox MXO2 Mini via HDMI or Component (same issue on both) into my Matrox MXO that is connected to my MacBook Pro 2.4GHz Core Duo machine.

So the image looked different. In FCP, the image was brighter. Well, the brights were brighter, and the blacks were actually more crushed too. This concerns me. Which is the PROPER video signal? Because I color correct in Color, but then output to tape in FCP. I wondered if this was an issue with my hardware…the MXO2s. So I went into work, where that machine is running an AJA KONA 3 feeding a Flanders Scientific (FSI) monitor via SDI. Using the built-in scopes on the FSI I checked this again. Sure enough, THOSE scopes didn’t match what FCP or Color was showing, and the FSI TOO was showing the offset between FCP and Color. Again, FCP was hotter and more crushed.

This is not good.  And I am sure that if I point this bug out to Apple, they will do nothing, as FCP 7 and Color 1.5 are legacy apps, with FCP X around the corner.

The full sized screen captures of the scopes from ScopeBox can be found here.  SIGNAL FROM COLOR.  SIGNAL FROM FCP.

Comments welcomed.

UPDATE: Here is a frame of video seen from FCP and Color…output via an AJA Kona 3 to an FSI monitor via SDI.  FCP scopes, and FSI scope…and Color scopes, and FSI scope. https://lfhd.net/wp-content/uploads/2019/04/ScopeComparison.jpg

Unless you live in a cave, you have probably heard that Apple announced FCP X (10) at the Supermeet at NAB. And from all the people asking me my thoughts about it, I gather they want to know what professional people think about what we saw.

Me? It put forth more questions than it answers. So much was left out, and I need to see the full app before I can really rant or rave. I simply don’t know what it can do for us broadcast professionals.

But there are plenty of other initial thoughts to read. Here are a few:

http://provideocoalition.com/index.php/ssimmons/story/fcp_x_is_shown_to_the_world._flashy_things_are_seen_questions_are_asked/

What are my thoughts on Final Cut Pro X?

http://www.digitalartsonline.co.uk/news/?newsid=3274193&olo=rss

http://www.larryjordan.biz/app_bin/wordpress/archives/1452. (Although I disagree with the title. Not all of our jaws dropped)

Does this post seem like a cop out? Well, a little. I mainly don’t want to add to the chorus going on about this release just yet. Still letting things sink in.

More later.

This great tutorial for a cool effect comes from the GeniusDV site. They offer tips and tricks for FCP as well as Avid Media Composer.  Nice!

Again I find myself in HDV Hell. Further proof that this format is pure and utter crap. OK, fine for shooting…whatever. But never ever ever NEVER work in an HDV sequence. Ever. Those who do, if they are well informed and choose to ignore the advice given here and on the forums, deserve to be flogged. Beaten to death with dead rodents.

What now you ask? Let me tell you…

I was approached to help figure out a problem with a project.  A 90-min documentary shot on HDV, edited with FCP, and using an HDV sequence setting. The show is fully rendered, yet every effort to export a Quicktime Movie, or go via Compressor (this is FCP 6, so no SHARE option), results in either a GENERAL ERROR (love that one) or an OUT OF MEMORY error.  I have 8GB of RAM, I’m out of memory?  I can’t export the full show…I can’t export even five minutes of the thing without getting that error.  Absolutely maddening.

This project is LOADED with graphics. Text effects. Tons and tons of moving text and keyframed pictures with text overlays…stills, video, text…sometimes 18 video tracks deep! It’s intense. The original editor is running a G5, and they thought that the machine was the issue. But no, I get the project and media here, on my Octocore MacPro 3.0GHz machine, and I have the same problem. So it isn’t an Intel vs PowerPC processor thing. I just can’t export a darn thing. And I have been tasked with making a DVD and an H.264 file for the web.

Well, the DVD was easy.  Just output the video from my MXO2, downconverted the signal to SD and connected to a DVD Recorder. Press record on the Recorder, play on the timeline…realtime output.  Still one of my favorite ways of doing this.  MUCH quicker than authoring.  No menus, but I don’t need those for this stuff.

But now the real trick…the H.264 QT movie.  I can’t export anything from this HDV timeline, so I am stuck in the mud.  I did notice that the Render Controls were set to render as ProRes, but that didn’t seem to be making any difference.  The render files are ProRes, but still no go.  I guess that something in there doesn’t need to be rendered, and still references the HDV master…so… I don’t know, just grasping at a cause really.  When I try to export a self contained movie, FCP says it will take 5 hours!  And then it bombs out…but note that time.  When I tried the 15 min chunk, it said just over an hour.  And still bombed out.  OK, I made a few attempts to figure out a work around for this.

ATTEMPT #1

Ok, the first thing I thought of was to make a new sequence…this one a ProRes sequence. I copied and pasted the HDV sequence content into this one and rendered. After about an hour the first 15 min was done. Not bad. And then I chose a 15 minute chunk, and exported that as a self contained file. It took 8 minutes. FAST. ProRes is definitely better than HDV…proof #1 there. But then I watched the export and…OOOPS! All the graphic elements were messed up! They didn’t line up like they used to and they were squished. OH, right! HDV is thin raster 1440×1080…anamorphic. And ProRes is full raster 1920×1080. Those clips will have some DISTORTION in their motion attributes. So when I tried to just remove the distortion, they still didn’t look right. They were way out of alignment. So no quick fix here. These would all have to be redone manually, as they were all done internally in FCP, and not in Motion or After Effects and rendered out. And there are a lot of these effects. I can’t tell you how many text effects there are. Almost more than there is footage. A lot. So that wasn’t going to work. Not without a LOT of work.
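
For anyone who hasn’t hit this before, the squish is just the raster math at work. A quick Python illustration (nothing FCP-specific) of why a thin-raster HDV sequence and a full-raster ProRes sequence disagree about geometry:

```python
# HDV 1080i is stored 1440x1080 with non-square pixels and displays as
# 1920x1080, so moving clips from a thin-raster sequence to a full-raster one
# shifts every motion attribute by a 4:3 horizontal factor (illustration only).

stored_w, display_w = 1440, 1920
pixel_aspect = display_w / stored_w
print(f"HDV pixel aspect ratio: {pixel_aspect:.3f}")   # 1.333
print(f"{stored_w} stored pixels x {pixel_aspect:.3f} "
      f"= {stored_w * pixel_aspect:.0f} square pixels")
```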

ATTEMPT #2

OK…I was able to output a DVD via my MXO2…why not try something similar.  I happen to have TWO computers, a MacPro and a MacBook Pro.  And I have two capture devices…the MXO2 and the MXO2 Mini.  Why not play out from one machine to the other, capture the signal as ProRes, and then render out that file to H.264?  That sounds like it’d work.  So I connected the devices via HDMI, and I got the capture tool to see the image.  I played from the Laptop, as the tower would need to do the capturing…as the MXO2 does capture ProRes, but it relies on the COMPUTER to do the encoding.  So the laptop isn’t quite up to snuff for 1080…it does 720p at 23.98, but this was 1080i 29.97.  So I pressed CAPTURE NOW on the MacPro…pressed PLAY on the Laptop…and recorded for 12 min when…dropped frames.  Damn.  OK…try again.  I got 8 min before the same thing.  Hmmmm…. Let me turn off the dropped frame warning, it won’t look that bad, right?  Wrong…there were two cases where I noticed it…a freeze frame for 1-2 seconds.  Unacceptable.

OK, but how about the other way around? I know that I can’t do 1080 on the laptop, but it does 720…and I do have a 720p 29.97 setting. And the MXO2 does cross-convert on output, so it should work. So I swapped the cables from IN to OUT on both sides, and tried the other way. I got 23 min in before the laptop stopped capturing due to dropped frames. Damn. I have captured 720p 23.98 in one hour chunks before, so I know that works. I guess that 29.97 is just too much. OH, and the drive I was capturing to was a 2TB G-Raid3 via FW800. Playing from a G-Drive via eSATA. OK, so machine to machine won’t work, unless I have two MacPro towers. Darn it.

ATTEMPT #3

When I was sitting here pondering what to do next, I recalled how on one job we’d play out from the Avid to a PC with a capture device and PC software that would encode straight to MPEG-1 (this was 5 years ago) while we played. VERY slick. And I was wishing for something like that when I remembered, “Hey, didn’t Blackmagic Design make something for this?” I went to their website and looked, and sure enough, the BMD Video Recorder did just what I hoped. It captured a signal straight to H.264…just what I needed. I called a local dealer, they had a bunch in stock, so I drove into Hollywood to get it. I asked “does this take HD video?” and the salesperson said, “yup.” So I bought it. When I tried to use it with the HD signal…it didn’t work. Then I looked at the manual and it states, near the end, that “this doesn’t work with HD video signals.” Son of a…! Drat…I didn’t research this well enough. I expressed my frustrations on Twitter, and a follower said, “yeah, for that you need the PRO model.” I looked on the site, and there was the nice pro model. Yeah, that will do what I want. But it is more than the other one…as it should be. And it was not only a tad outside my budget at this time, but the dealer didn’t have any in stock.

I did a test capture and compared that to the H.264 encode of the ProRes output (the bad one)…and it was noticeably softer. This H.264 is intended to look GOOD, a way to attract investors for distribution. So this model, while darn handy for quick outputs for producers or clients to review the cut, isn’t all that great for FINAL compression. Compressor is better for that. And I’ll soon see if the PRO model does a better job.

ATTEMPT #4

OK, I know that if I can get this to ProRes, things will go smoothly and I can use Compressor. But that darn full raster issue. But wait, when you capture HDV as ProRes via firewire using that little trick that Apple made available, the ProRes it is captured as is 1440×1080…anamorphic. So why can’t I make the timeline settings ProRes 422 at 1440×1080? Let me see…I go into the Sequence Settings, see that it is HDV, that there is no field dominance…so not interlaced. And I simply change the compressor from HDV 1080i60 to Apple ProRes 422.

BAM…I needed to render everything. And when I tried to do it all in one clump, I got the GENERAL ERROR and OUT OF MEMORY errors again. So I rendered in small chunks…5 min…2 min…and it worked out. When I looked at the graphics, they were perfectly fine. So now I have a ProRes sequence, and the graphics are all fine. And after I rendered out 15 min…I exported the QT and it was fast. PERFECT! But the big hitch in this is that I have to render those small chunks, and babysit them. I am still doing it, as I write this. It has been an 8-hour ordeal, but it is almost done. VERY close. I did find that the estimate that FCP gave was never right. It’d say 15 min, but really take 24 min. A two-minute estimate was really six minutes.

Now it is done, and I have exported the self contained file.  It took 30 min, but it is out and done.  SUCCESS!  Now compressing the H.264.

THE SOLUTION

Don’t edit HDV on an HDV sequence. Not if you like things to go smoothly. Yes, you will have the green render bar above the footage, and yes, eventually you will have to render everything. But if you do it while you are working, bit by bit, you avoid the one massive render at the end. And you avoid the HDV madness. So if you want to save space by not capturing HDV as ProRes, and I can understand wanting to do that, then at least use a ProRes sequence while editing. Even if the sequence is full raster, which is a good thing, the anamorphic HDV will stretch to fill the frame.

And for all you Avid people who might feel like chiming in with, “Well, if you were editing with Media Composer you wouldn’t have had this issue,” that is true. Because you’d be editing with DNxHD project and sequence settings. Just like someone in FCP land editing HDV on a ProRes sequence. In both cases, you’d need to do a full render when you were done and ready to output.

Working with Tapeless Formats in Final Cut Pro 7 via Log and Transfer.

This covers P2 (AVCIntra and DVCPRO HD), XDCAM EX, AVCHD, Canon DSLR (5D, 7D and T2i)…and one possible RED workflow. The basic workflow on getting footage off the cards and into FCP for editing.

Conforming 60p Footage to Smooth Slow Motion using Cinema Tools.

This is a quick tutorial that addresses the question “How do I get this great 60fps footage I shot into slow motion 23.98 or 29.97?”

Yeah…I’ve been busy with Tutorials.  This is what happens when I don’t have a steady job…

Last year I worked on a National Geographic special called JFK: THE LOST TAPES. This show (currently up for an Emmy for editing…congrats to the producer/editor Ron Frank) followed the events surrounding the assassination of JFK, but told entirely with TV and radio news reports. No interviews, no narration…only audio that existed at the time of JFK’s visit to Fort Worth and Dallas. It was a really neat and unique way to approach the subject…and the technical side of things was pretty unique too. Especially since this show needed to be in HD, and every single bit of footage was SD.

That’s where I came in. I not only onlined the show, but worked with the post supervisor to figure out the best workflow.

All the footage that was supplied to us was stored on DVDs and audio CDs. The CDs contained the radio news audio, and the DVDs were all viewable DVDs of all the footage the John F. Kennedy Library in Dallas had on the news reports from the time. All of them had window burn timecode (well, not the audio CDs). There were 35 audio CDs and 78 DVDs, each one between an hour and two hours long.

The audio CDs were easy…we converted all of the audio from the 44.1kHz CD files to 48kHz, 16-bit stereo files using Compressor. When it came to the DVDs, that took a little more effort.
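
If you don’t have Compressor handy, the same step can be sketched with nothing but the Python standard library. This is a rough stand-in, not what we used: it assumes the CD audio was ripped to 16-bit WAV, the filenames are made up, and audioop’s resampler (which ships with Python through 3.12) is much cruder than Compressor’s.

```python
# Rough stand-in for the Compressor pass: resample a 44.1kHz WAV to 48kHz.
# Standard library only; audioop is available through Python 3.12.
import audioop
import wave

def resample_wav(src_path: str, dst_path: str, target_rate: int = 48000) -> None:
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(params.nframes)
    converted, _ = audioop.ratecv(
        frames, params.sampwidth, params.nchannels, params.framerate, target_rate, None
    )
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(params.nchannels)
        dst.setsampwidth(params.sampwidth)
        dst.setframerate(target_rate)
        dst.writeframes(converted)

# Hypothetical filenames, for illustration:
# resample_wav("radio_cd01_track01.wav", "radio_cd01_track01_48k.wav")
```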

First off, we ripped all the DVDs using MPEG STREAMCLIP, free software that needs to be in every editor’s APPLICATIONS folder. We ripped everything to DV/NTSC. But of course this footage wouldn’t be the final footage. We’d need to go back and capture the master tapes when it came time to online…so in order for that to go smoothly, we needed the timecode of all the footage to match the window burn. It took a few days to rip the footage, and then the time came to adjust the timecode. This was relatively easy, but very time consuming, as there were a LOT of files.

To do this all you need to do is load a clip into the Viewer, then park the playhead on a frame. Note the timecode. Go to the MODIFY menu and choose TIMECODE.

You then are presented with this simple interface. You can add an AUX TC if you want, but for this one I want to modify the MAIN timecode, so that when I go to recapture, I will be able to do so frame accurately. This also helps produce accurate EDLs for ordering footage.

Now, this wasn’t without a few roadbumps. A handful of clips had timecode skips in them. The video skipped and left out a few numbers, meaning that after a certain point timecode was off, or the DVD contained two tapes tied together. In this case, we used Quicktime Pro to break the clips up into two or three separate QT files, so that when we modified the timecode, it was all accurate. (By we I mean the post supervisor and assistant editor.)

Then editing proceeded. While it was happening we were discussing the logistics of delivering an HD show consisting entirely of SD material. Initially the network wanted us to blow things up full frame…fill the screen. This would mean losing the top and bottom of the image. But the JFK Library came back saying that they would really prefer that no part of the image be cropped off. So we came to an agreement to pillarbox the entire show. That means putting the 4:3 footage into a 16:9 frame…having black bars on the left and right sides of the screen. We did toy with ways to fill in that area, like layering the footage, stretching the image to fill the sides, and reducing the opacity and blurring it…but found that it was too distracting. So a straight pillarbox was the better idea.

Now, the online of the show was going to be tricky. The issue we had was that the museum wasn’t about to ship us the masters, we couldn’t afford to get dubs of the full tapes, and we just needed bits and pieces from each tape. So just dubbing those bits and pieces would be a logistical nightmare. So we decided that it would make the most sense for me to go to Dallas and do the capture on site. We opted to save money on a bay (and if I recall, the company did FCP systems, but not with cards capable of a hardware upconvert) by sending me down with a mobile bay.

I flew to Dallas…I went to the JFK museum…I walked around Dealey Plaza and looked at the spot where JFK was shot. I stood on the grassy knoll. Things are a LOT closer than I expected them to be. It was amazing to be standing in such a historical location. Words cannot describe it.

So I won’t try…back to the workflow.

I set up shop in the machine room of a post company that the JFK museum works with. See?


For the next two days I sat with Gary Mack, from the JFK Museum. I'd say a tape number, he'd grab it and load it, we'd shuttle to the shots we needed, and we'd set the exposure (and at times the color) of a shot using the controls on the Digibeta deck, adjusting things BEFORE we captured. That meant darn little time coloring afterwards. And it was a good thing I went, because often we'd find a shot on one tape, and Gary knew that there was another tape with a better film transfer of that shot. So we'd find that one, capture it, and I'd cut it into the sequence.

Oh…that reminds me…I didn't mention how I prepped the sequence for the online. I media managed the locked cut with CREATE OFFLINE and DELETE UNUSED, with handles…meaning that it would make a new sequence, with the capture settings of my choosing (in this case, ProRes 422), and clips that only referenced the media used in the cut…with two-second handles, so that I could slip shots one way or the other to make sure things matched up. Because when you online, some slippage, a frame or even a half second, often occurs. Yes, even on an Avid. On top of the online sequence, on the top-most layer, I cut in the QT movie that I had exported of the locked offline cut, and then dropped its opacity down to 50% so that I could see if shots lined up.

So after the first day of capturing, I went to my hotel, ordered pizza, and set up my laptop and drive to go over all the footage we had captured against the cut, making sure that things lined up.

It took a while. And I found that some clips were actually off by more than the allotted 2 seconds. So I marked those for manual capture the next day. It was odd: the TC on the DVDs was off from the master in a couple of cases. I know because the QT of the offline had burned-in timecode. Having that reference QuickTime file is pretty important.

The next day I finished capturing the remaining clips, then captured the ones I had issues with. I went through the cut to make sure everything lined up. And when it did…high fives and handshakes were exchanged, and I headed home.

Now, one thing that I didn't mention earlier is that I didn't have one version to online, I had four. And this created a logistical…speed bump. I needed to online the main two-hour cut-to-clock (meaning the cut with the commercial breaks) and the two-hour seamless international version, with extra footage to make up for the lack of commercials in international markets. But then there was also a one-hour version…cut-to-clock and international seamless. Making sure that I didn't capture the same clip four times took some planning.  The solution was relatively simple: put ALL the sequences into one sequence (copy/paste) and then use the Media Manager.  This way I got one bin with all the clips referenced by all the sequences.  I then copied the smaller sequences back out of the media managed sequence.  And that first night at the hotel was a late night, as I had to check FOUR versions to make sure they all lined up.

Phew…that was quite a long article. I best stop now. If you read it in one shot…I salute you! I think this would take me two or three nights to get through. Unless I made it fun enough to read in one go.

To start off…read this post to see what the details were for the online I was doing.  These are the INITIAL details.  It got worse.  Go ahead, I’ll wait.

All done?  OK, let’s move on.

Well, after a normal shift onlining two acts (we decided to do the online act by act), and then a marathon 25-hour online session over the weekend, I am done.  And that is pretty much all I did…online.  No color correction, as there was no time.  This was the fastest and most down and dirty online I have ever done.  I finished 10 minutes before the producer had to jump onto a plane with the tape and a drive with all the media managed footage.  So all I really did was make it air-able.  Make sure that it would pass QC.  That's all there was time for.

NOW…I would like to point out that those first five formats I listed in the previous post were not the only formats that I was given.  By the time all was said and done there were H.264 files, MPEG-4 files, WMV files (which I had to convert to ProRes…so I had to buy Flip4Mac Pro) and Animation files (the graphics).  It's like they were grabbing every format they could and throwing it at me…a snowball fight that I was obviously losing.  And unfortunately, due to the rush, I was unable to convert the rest of them. I just had to render and go.

The show was initially edited on an XDCAM timeline, as that was the majority of the footage.  But that caused crashing left and right, and the machines slowed to a CRAWL.  So we switched to a ProRes HD timeline, and that seemed to solve the issue. We went HD because at the start we thought we had to deliver an HD show; it wasn't until later that we found out it was destined for SD, so I said I'd deal with that when we got to the online.  And I did.  I made a new sequence with the ProRes SD PAL Anamorphic settings and then copied and pasted the footage into it.

Actually, the first thing I did was media manage the acts from the SAN to a local drive.  I wanted ONLY the footage used in the cut, with handles, so that I could send them the drive (keeping a copy here, JUST in case) for all the changes they needed to make on their end.  They did have a copy of the original footage they gave us, but we had added a lot of footage since then, so they wouldn't be able to reconnect to all that new footage, which was scattered all over the SAN (six editors).  I wanted things to be easy…everything in one place.  So, I media managed…then moved on.

Then I moved on to the resizing.  I spent a good amount of time resizing and adjusting the distortion of the clips to fit this ProRes SD timeline.  Yes, this was time-consuming…thanks for asking.

Then I quickly applied the COLOR CORRECTOR 3-WAY and BROADCAST SAFE filters to every clip.  Then I went through the cut clip by clip, adjusting things so that they fell well within the safe area.  I didn't want to take any chances; I just wanted it to pass QC.  Now, I did color correct a few clips, as some people were greenish or had an orange hue, and I couldn't just let that go.

Then I moved on to subtitles and lower third identifiers.  I changed the fonts to match the specs, and I had to replace all the TEXT generators with TITLE 3D, because I needed the white subtitles to have a black border to be more visible, and TEXT didn’t offer that option.  Yeah, that took a while too, thanks for asking!  I had to make sure that the titles and the lower thirds all stayed within 14:9 title safe in this 16:9 picture, so for that I used ANDY’S GUIDES 3.1. VERY useful…and free.  All of his plugins are free…check them out.
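
Andy's Guides draws the boundary for you, but the underlying geometry is easy to sanity-check: a 14:9 zone centered in a 16:9 picture takes up 14/16 of the width. Here's a rough sketch of the numbers for a 1024×576 display raster (SD PAL anamorphic shown square-pixel); note this is the 14:9 picture area itself, and a broadcaster's title-safe spec may inset further within it.

```python
# Where a centered 14:9 zone sits inside a 16:9 picture, in display pixels.
# Example raster: 1024x576 (PAL 16:9 shown square-pixel).
disp_w, disp_h = 1024, 576

zone_w = disp_h * 14 // 9            # width of the 14:9 band -> 896
margin = (disp_w - zone_w) // 2      # pixels to keep clear on each side -> 64

print(zone_w, margin)                # 896 64
```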

MOVING ON…

I did each act, then pasted them into the main sequence.  In there I had to make sure that there were 32 seconds of commercial blacks (emptiness between the acts)…that every act break landed on a :00 frame (so 1:06:07:00, not 1:06:07:12)…and that the show was to time.  EXACTLY to time.  Since each act was cut by a different editor, sometimes multiple editors working simultaneously, getting the timing right up front was impossible.  So they left that up to me.  So after I got all the acts into the main sequence and found out what timing I needed to hit…I was 57 seconds short!…I started adding 'breath' to the cut.  Pauses after one scene finishes and before another begins.  And there was plenty of breath to be added…things were slammed together.  Well, after an hour or two of adding breath and fixing the audio so it still worked, I was still 26 seconds short.  GAH!  But the exec producer said that we could be up to 30 seconds short…so we were fine.
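
If you're curious what that bookkeeping looks like, here's a minimal sketch of the two checks in Python: totaling the acts plus the 32-second commercial blacks against the cut-to-clock target to see how short you are, and snapping an act-break timecode up to the next :00 frame. PAL at 25 fps; the act durations and target runtime are made-up example numbers, not the show's actual timings.

```python
# Sketch of the timing checks: how far off the clock are we, and snapping an
# act break to a :00 frame. PAL, 25 fps; all durations below are examples.
FPS = 25

def tc_to_frames(tc: str) -> int:
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(frames: int) -> str:
    ff = frames % FPS
    ss = (frames // FPS) % 60
    mm = (frames // (FPS * 60)) % 60
    hh = frames // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

acts = ["00:11:40:12", "00:13:02:05", "00:12:55:00", "00:14:10:18"]  # example acts
blacks = (len(acts) - 1) * 32 * FPS          # 32 seconds of black between acts
target = tc_to_frames("00:56:00:00")         # example cut-to-clock running time

total = sum(tc_to_frames(a) for a in acts) + blacks
print("short by", (target - total) / FPS, "seconds")

def snap_up_to_whole_second(tc: str) -> str:
    """Round an act-break timecode up to the next :00 frame."""
    f = tc_to_frames(tc)
    return frames_to_tc(f if f % FPS == 0 else f - (f % FPS) + FPS)

print(snap_up_to_whole_second("01:06:07:12"))    # -> 01:06:08:00
```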

YAAY!

So I then made all the act breaks hit the :00 mark, made sure the text was all good…and rendered.  For over an hour.  A little break.  And let me tell you, the HD downconverted fine to SD when I rendered, and the PAL DV, uncompressed 8-bit, and ProRes PAL footage looked OK.  The MPEG-4 and H.264 files were a tad rough, but since some were 320×240, I didn't expect greatness.  And the NTSC footage that I didn't have time to convert…I just had to render it and let it be.  All in all the quality was fine, but there was a stutteriness and jerkiness to that footage that looked bad.  Not really acceptable in my book, but good enough for air…this time.  I'm sure they'll color correct and convert when they have time for a later master.

I'm telling ya, this is a project that I would have LOVED to have edited with the new Avid Media Composer 4.0.  It seems built specifically for this purpose.  Because when I mix PAL and NTSC there, I don't see the stutteriness and jerkiness that I saw in FCP.  And the Motion Adaptive plugin not only dealt with that, but also upconverted or downconverted footage in very smart ways, using better technology (the only way I, as a non-engineer, can describe it) to make it work better.  BUT, this project was handed to us with 90% of the footage already captured, so we had to deal with what we were dealt.

Oh, one cool thing.  We were still editing the cold open/tease of the show when the producer had to catch the flight, so I had to leave a 1:20 hole in the show for it to fit into.  When the editor was done, he did the final online of that small segment, exported it as a self-contained QT file, and used YouSendIt (the pro version) to upload it to the client overseas.  And I used YouSendIt a lot to send them OMFs and MP4s of the cut.  Fine for stuff 2GB and under.

OK…that’s my story and I’m stickin’ to it!

I need a hobby.  Well, an inexpensive one.  I really want to get into cycling and get a road bike, or one of those single-gear, really LIGHT ones, but then I'd have to shell out cash for a bicycle and all the extras.  But at least then I'd be in shape.

So what do I do for a hobby?  Play with plugins and workflows and see what works and what doesn't.  I REALLY should play with MOTION so that I can learn that application.  That's what a friend of mine does as a hobby, and he is so good that he wrote a book on how to do really cool things with Motion.  But I get all into solving puzzles like: how can I get an Avid Adrenaline that was on a PC onto the Mac I have running FCP?

The latest thing is that soon we will have a client who wants us to finish a project of theirs.  Online it and color correct it.  They shot on P2 and captured everything at full resolution; we just need to color correct it and output.  And I have gotten really handy with COLOR, so I'd like to do it there.  Only thing is…they are on Avid Media Composer. So how on Earth do I get TAPELESS footage from Avid to FCP to then send to Color?  If it were tape, I could recapture.  But tapeless?

I know that I can use Automatic Duck to export an EDL from Avid, open that in FCP, and have a nice full sequence that will need recapturing.  And I have done this before…on tape.  But tapeless?  Can I batch capture only what is needed?

Luckily I didn't need to figure that out, because The Duck (the nickname for Automatic Duck) has a great new feature.  Not sure how new, but recent.  It allows you to export an AAF with embedded media.  Automatic Duck, through FCP, will then import that and create QuickTime reference movies that are tied to the MXF or OMFI format Avid media files.  Go watch the demo here…it's pretty slick.

So I did that.  Imported some P2, did a quick little nonsense edit.  Added a couple of filters.  Then I exported the AAF, opened Final Cut Pro (after rebooting the computer to that partition), and imported the AAF, and FCP, with the aid of The Duck importer, brought in the cut and the media…intact.  And what is REALLY cool is that it makes a bin of that footage, and you get not only the clip, but the ENTIRE clip.  So if you need to extend shots or look for more footage in that file, you can, and re-edit!

OK then…now the big test.  Can I then send this to Color (because these are DVCPRO HD QT files, it SHOULD work), color correct, and render back out to FCP again?

I did the send to Color easily…all the files showed up.  I color corrected, then rendered, and got brand new QT files (as is normal with COLOR) linked to the new exported cut.  Worked like a charm.  Not bad Wes…not bad.

Now…I wonder what I'd need to do with DNxHD 145 or 220 MXF files.  Would I need to import them and then transcode to ProRes?  Hmmm…that bears testing…