Little Frog in High Def

Adventures in Editing

    I’m working on a series of posts that are less about the technology used in the projects I work on, and more about the workflows involved. Tips and techniques that can be used no matter what edit system you happen to use.

    I have blogged about going back to Avid and editing with a single-user setup on the series A HAUNTING…now it’s time to talk about the challenges of editing a TV series with multiple editors working on one episode at the same time.

    I will mention that Avid Media Composer is involved only to illustrate that we are working from shared storage (ISIS) and using a shared project file…all accessing the same media and able to share sequences. Beyond that, it doesn’t matter, as what I am going to talk about is more generic. No matter what editing software you use, these are the challenges one faces when multiple editors work on a single show. Most of this advice can be applied to narrative shows as well as reality and documentary. In this case, I’m referring to documentary TV and some reality, since that is the majority of what I cut.

    SHOW STYLE

    When you edit a TV series, you need to work within the show style. You might have your own sense of style and story that you’ve used on other shows or projects, but now you need to conform to the style of the series you are working on. Who sets that style? Typically the lead editor. The lead editor might have edited the pilot, or the first episode…or just served as the main editor for a group of editors. Whoever it is, they set the style. When you join the pool of editors on that series, it’s your job to conform to that style. It’s very important for the episode, if not the whole series, to look seamless, as if it were edited by only one editor.

    The first way to figure out that style is to watch previous episodes. Take note of the music used, how dramatic emphasis is added, how visual effects (VFX) and sound effects (SFX) are used. Whenever I start a new show, that is what typically happens on the first day. Fill out the start paperwork, get the script, and get access to previous episodes so that you can familiarize yourself with the show. I will watch the episodes first, then read the script, so that I can get the show style in my head while I read, and picture how it should be cut. I might even make notes about what sort of b-roll I picture in certain spots. And if I don’t have it in the project already, then I’ll request it.

    One big part of the show style is the VFX…the plugins used and how they are used. This is what I call the “language of the VFX.” Some shows will have a certain style…an approach to the subject that will dictate how the VFX are utilized. A civil war era show might have smoky transitions, or flash explosion transitions. Robot reality shows might have transitions and SFX that try to convey something robotic, like we are looking at shots in a robot workshop. Like mechanical steel doors closing and opening as a transition. All the SFX being mechanical in nature. Another show might want to present itself as though you, the observer, are reviewing files on a government computer and surveillance system, so the effects are geared towards camera sounds, picture clicking and shutters, spy cameras and scan lines with vignettes. Or a show that explores the paranormal, so there are ghostly SFX and flash frames, light ray transitions, eerie sci-fi music beds and transitions.

    One way I make sure to stick to the show style is to copy and use the effects of the main editor, so that I can mimic what they do. I might use an effect they use, so it becomes a recurring theme, or modify something they do so that it is similar, yet different enough to keep the viewer from thinking, “I saw that same effect 10 min ago.” That might draw them out of the story. I will also find the music they use, and match back to the bins where that music is and see if cues next to it are similar. If not, I’ll search for cues that closely resemble the style, yet are different enough and fit the story I’m trying to tell.

    As I mentioned before, music is also key. How long does the music typically last? On one series, I had the music change every 20 seconds, pretty much every time a thought was concluded and we moved onto a different topic. Music sting happened, layered SFX and WHOOSH light ray transition and we were onto the next topic. Very fast paced. Another show might be more investigative, more mysterious. So the music cues are dark, mysterious, with hits. A cue might last 1 min or so. Used to underscore a particular thought, and again, end with a hit to punctuate that thought and transition to the next music cue for the next thought. Or, at times…no music, to add particular emphasis to whatever is being said next. Sometimes the lack of music, when music is almost constant, punctuates a statement more than having music at that time. It might seem more important…”Oooo…there’s no music here…what they are saying must be so important, they don’t want to distract us.”

    VFX

    A bit more on working with VFX…meaning filters and transitions…in a show. One thing that I find very important is not to have the VFX distract the viewer from the story. The VFX are there to emphasize the story point, punctuate what I am trying to say. If an effect is too flashy, or too fast, or happens on top of what the person is saying, then I’ve distracted from the story and taken the viewer out of the moment. I’m lucky that many of the producers I work with feel the same way. Story is king…let the story happen. TELL the story. The story is the cake…the VFX are the frosting. The cake is good on its own, but frosting makes it better. A pile of frosting with no cake is too sweet (although my wife will disagree with me on this). Too much sweet with little to no substance. Filters and transitions, used well, will add to your story.

    Now, that’s not to say that I haven’t done over-the-top VFX. I most certainly have. I’ve worked on many reality shows and clip shows that lack a lot of substance, and to make up for that, we add flash. We will take one picture and milk it for all it’s worth…push in here FLASH, wipe there WHOOSH, pull out here BAM BAM BAM flashbulbs going off, push in to have it settle on the girl’s face. Although a bit gratuitous, it might serve a point. “Britney Spears leaves the Mickey Mouse Club…and moves on to pursue a career in music….BAM BAM FLASH FLASH WHOOSH BANG BOOM!” The VFX are there to punctuate the moment, and they have a language…paparazzi, stardom. And sometimes to cover up the fact that we really have no substance.

    RECYCLE PAPER, NOT FOOTAGE

    One of the challenges of working on a show where it is divided up among the editors, say one editor per act, is that we might end up using the same shot or still in Act 4 that someone used in Act 2. You can avoid this by occasionally watching or skimming the other acts to see if that shot is used. Or, if a shot really works well for me, I’ll ask the other editors if they are using it, or plan to, and if so…plead my case as to why I should be able to use it. And even when we do this, when we all watch the assembled show for the first time, we’ll see duplicate footage, or hear duplicate music. At that point we’ll discuss who gets to use what, and who needs to replace it. In a perfect world, this would happen BEFORE the first screening with the EP…either we screen it with the producer, or he watches it alone and finds the shots…but that doesn’t always happen. Be hopeful that your EP (executive producer) understands the issues and just mentions the duplicate footage…rather than throwing a fit. “WTF?!?! I just saw that two acts ago! REMOVE IT!”

    COMMUNICATION

    Of course the biggest thing in working on a multi-editor episode is communication. Besides the “are you using this shot” type stuff, I will go to the lead editor and ask them to watch my act with me, and give me feedback. They know the show, they are in charge of the show style, so they will give me pointers to make my act cut together more seamlessly with the others. Sometimes I’m the lead editor that people come to for advice. One thing I found too is that often after the first screening, when all us editors are milling about after getting our acts picked apart by the EP…we tend to discuss our acts, and the style used. “Hey, I really liked that clever wipe transition you used in Act 5…mind if I steal that for Act 2?” Or, “I really liked how you amped up the drama in that reenactment. I can’t figure out how to do what you did…can you help me with that?” Or we’ll ask where they found a certain sound effect, or music cue, and play off of each other. It can, at times, be like the Shakespearean era playwrights…each taking an idea and modifying it to make it better. Only in our case, we each tried to tell a story one way, but then see how someone else did it, and try their approach.

    One thing I forgot to mention is that sometimes the lead editor will go through all of the show…all of the acts…and do a “style pass.” They will take all the separate acts by separate editors and make it all conform to the style of the show. This does happen in docs on occasion, but I see it more in reality. I myself have been hired specifically as the “style editor,” or “finishing editor.” I might have an act of my own, but also be in charge of the overall look of a show.

    To close on an anecdotal note…I once worked on a doc series and we were very behind. There were two of us editors on one act, and the producer would write one page, give it to me and I’d go to work on it. Page two he’d hand off to my partner and he’d work on that. Page 3 was mine, and so on. This was tough because we weren’t editing separate acts…not even separate thoughts separated by a music cue. We were just picking up on the next page. To deal with this, we’d edit without music and effects, just get the story down and filled with b-roll and some slight pacing. And when we had assembled the act, or at least two separate thoughts, we then divvied them up and tackled them, adding music and effects. And when we finished the whole act, the other editor would take it over and smooth out all the edits and make it into one cohesive piece (they happened to be the lead on that show).

    Note that narrative shows also have a show style that all the editors need to conform to. CASTLE has a very unique look and style, as do BURN NOTICE, PSYCH, LAW & ORDER SVU, MAD MEN and THE BIG BANG THEORY. Those editors also need to fit within the show style, and make it appear as though one editor cuts the whole series. And a few of these shows also happen to have two or more editors (LOST, for example).

    If you happen to follow me on Twitter, you were no doubt privy to the barrage of tweets I did while at the LACPUG meeting on Jan 23. Dan Lebental was showing off this cool editing app for the iPad, TouchEdit, and I live tweeted interesting points he made, and pictures I took.  I’d like to go a bit more in depth here.  More than 140 characters for sure.

    The reason this app came about is because Dan bought an iPad and when he moved the screen from one page to another…he went “hmmm, there’s something to this.” And then he would browse through his photos, moving them about with gestures of his hand like he would if he was holding them, and he said, “hmmm, there’s something to this.” Eventually he figured out that this could mimic the tactile nature of editing film. Being able to grab your film strips and move them about, and use a grease pencil to mark your IN and OUT points. So he went out and found a few people to help him develop this. No, he didn’t do it on his own, he went through about 14 coders (if I’m remembering right) to eventually come up with version 1.0 of his software.

    Who is this for? Well, he honestly said “This is designed for me. For what I want…my needs.” And I like that attitude. Because if you like something, chances are you’ll find someone else that likes that something. And that is a great way to develop a product. To fulfill a need/want/desire that you might have.

    Anyway, moving on.

    He showed off the basic interface:

    The film strip above is the source, and the film strip below is the target…your final film. Now, the pictures of the frames don’t represent actual frames. You don’t need to advance to the next picture to be on the next frame…that’s just a visual reference to the film. Slight movement advances the film frame by frame…and there’s a timecode window on the upper left (sorry for the fuzzy picture) and the clip name on the upper right. So you can see what clip you have, and what the timecode is. You’ll scroll through the footage, or play it, until you find the section you want, and then mark your IN and OUT points. To do this, you swipe your finger UP on the frame you want, making a grease-pencil-like mark for the IN point. Now, the pencil mark won’t be on the frame you selected, it will be on the frame BEFORE the one you selected. Because you don’t want grease pencil on your actual frame. A swipe down marks the OUT point, and then you drag it down into the target where you want to put it.

    There are a couple big “V” letters to the left of the footage on the timeline. The big “V” means you are bringing audio and video. Click it to get the small “v” and you will bring over only picture.

    When you do this, you’ll note that your cut point, where your footage was dropped into the timeline, is marked with a graphic depicting splicing tape:

    One thing to note too is that the GUI (graphic user interface) of the film appears to run backwards when you play or scroll it. That’s because it mimics the way actual film moves through a KEM or STEENBECK editor. Really meant for the film people. But Dan said he would take all comments on the matter, and might make playing the opposite direction an option…in case it’s too distracting.

    OK, Dan flipped the iPad vertically and the interface changed:

    Now we see just the source strip, and 8 tracks of audio. This is where you’d be doing your temp audio mix to picture. And with the tap of a button…

    And you have a mixer, to allow you to adjust your levels.

    I did mention that I felt 8 channels weren’t quite enough for the temp mixes I was required to do. He replied that he could perhaps add a second bank of tracks so that you could then have 16…or 24…or 32. This is a possibility for later versions.

    BINS.

    Dan didn’t call them bins…he said the more accurate term was “collections,” as they are the place that holds the collection of clips you have to work with. That area looks like this:

    There is also the main project window. That, interestingly enough, does look like a bin type thing, with film strips hanging down, representing your projects. In graphic only…they are actually listed below in the window:

    IMPORTING

    Here is the import interface:

    There’s even a help menu for importing:

    Importing footage can be done via iTunes Sharing, iPad Video (which is called Photos on the iPad) or Dropbox. For maintaining metadata, use iTunes Sharing or Dropbox, as iPad Video tends to drop some metadata. The footage can be low resolution proxies, like 640×360 MP4 or H.264…or full resolution…but in a format that the iPad can work with…thus MP4 or H.264. So you can use the app as an offline editing machine, or for editing your project at high resolution for exporting to the web straight from the device.
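    The demo didn’t cover how those proxies get made, so here is a rough, hypothetical sketch of one way to do it with ffmpeg (a tool of my choosing, not something TouchEdit prescribes; the filenames and quality settings are assumptions):

```python
import subprocess

# Hypothetical sketch: transcode a camera master into a 640x360 H.264/MP4
# proxy of the kind mentioned above. Filenames and settings are assumed.
subprocess.run([
    "ffmpeg", "-i", "master.mov",
    "-vf", "scale=640:360",           # proxy frame size
    "-c:v", "libx264", "-crf", "23",  # H.264 at a reasonable proxy quality
    "-c:a", "aac", "-b:a", "128k",    # audio kept usable for editing
    "proxy.mp4",
], check=True)
```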

    STORING YOUR FOOTAGE

    The question I had for Dan was…how do you store the footage? Well, it’s all stored on the iPad itself. There currently are no external storage options for the iPad. So you are limited in the amount of footage you can store at one time. How much depends on how compressed the footage is. A lot at low res, not much at high res. Yes, I know, VERY specific, right? Specifics weren’t mentioned.

    I did ask “what if you are editing, say, THE HOBBIT, and have tons of shots and takes…a boatload of footage. What would you do then?” His answer was “Well, you can have the footage loaded in sections…for certain scenes only. Or have multiple iPads.” I pictured a stack of iPads in a bay….one with scenes 1-10, another with 11-20, and so on. Not altogether practical, but the loading of sections seemed OK. And Dan did have three iPads present, including a Mini…so he might just be headed that second way. (joke)

    Dan mentioned that he loaded an entire indie movie on a 64GB iPad at 640×360 with room to spare.

    EXPORT

    It eventually gets to a point where you are done editing…now what? Hit the export button and you have a few options: Export final MOV to iTunes sharing, export the final MOV to Dropbox so you can share it with others, export it to your PHOTOS folder, or export the FCPXML to iTunes Sharing or Dropbox.

    FCPXML you ask? Yes, that is the current way to get the “edit decision list” out of the app and have you reconnect to the master footage. It exports an FCPXML, meaning that it interfaces with FCP-X…but that is the only option in version 1.0. The TouchEdit folks did mention that a future update, Version 1.1, will feature FCPXML Input/Output and AAF Input/Output (AAF support is for Avid). Good, because I was wondering how you’d edit a feature film on your iPad and then deal with it in anything other than FCP-X. That limitation is just temporary…other options are in the works. But Dan did say that the application is based on AV Foundation, and not QuickTime…so that points to working tightly with FCP-X…and working well with future Apple OSes.

    In addition to all of this, TouchEdit has partnered with Wildfire Studios in Los Angeles. Wildfire is providing a large sound effects library to TouchEdit free of charge in version 1.0. You heard it…free SFX. In version 1.1 or 1.2, TouchEdit will add an SFX store where you can buy SFX rather cheaply.

    TUTORIALS

    Yes, there are already YouTube tutorials on the TouchEdit YouTube Channel, to get you up and running. Nice guys…thinking ahead!

    COMPATIBILITY & PRICING

    TouchEdit works on any model iPad 2 or higher…including the iPad Mini. And it will be available in early February for a price of $50.

    Let’s start off 2013 with a review of a really cool product…the AJA T-TAP.

    Until recently, when you wanted to send a signal to an external monitor from your edit system, you needed to get an “I/O device.” I/O meaning “In and Out,” and device being either a card installed internally in a tower computer, or an external box…or a combination of the two. These devices allowed one to capture incoming video signals (from tape or directly from cameras or switchers), and output video signals (to client and color correction monitors). In the age of tape this was the way to get footage into your system.

    But in the current age of tapeless capture, the “I” part of the “I/O” is no longer needed. All we want/need/desire is output to a client monitor…or broadcast color correction monitor. So instead of shelling out $500 to $8000 for an I/O device…you can get the AJA T-TAP for a mere $299.

    The device is remarkably simple. It connects to your computer via Thunderbolt (so unfortunately it won’t work on Mac Pro towers or PC towers as they lack this connection type) and then outputs full 10-bit video via SDI or HDMI with 8 channels of embedded audio. And it’s so small, it can fit into a small compartment in your backpack, or in your pocket, and allow your edit system to be very lightweight and mobile. The T-TAP is also very versatile. It is compatible with the three major editing systems: Avid Media Composer 6 and 6.5 (and Symphony), Adobe Premiere Pro CS6 and Final Cut Pro (X and 7). Unlike other options that AJA has, the audio out of this device is only available via HDMI or SDI, so you will have to monitor audio from the client monitor, or patch audio from that monitor to your mixer…depending on the edit software you use. FCP 7 and Adobe Premiere Pro allow you to route audio through the computer speakers, while Avid Media Composer locks the audio output to the device.

    The T-TAP supports resolutions from SD (525i NTSC and 625i PAL) all the way up to 2K, and frame rates of 23.98, 25, 29.97, 50 and 59.94.

    I ran three real world tests with the T-TAP, and had great success with all three.

    First…the out of date, end-of-the-line Final Cut Pro 7. After I installed the driver I got a call from a client to do changes to a sizzle reel that I had cut in FCP. So I opened it and worked on it for two days. With this option, I was able to play audio out of my computer headphone jack directly into my mixer. The video offset was similar to what I got with the AJA Kona 3 and AJA IoXT. The video output was very clean…similar to what I get from other I/O devices. And I got all the flexibility of output I have come to expect from this…now discontinued…software. It worked well.

    Next I tested it with Adobe Premiere CS6. For this I used it with a family video project. Now, prior to this I hadn’t used an I/O device with CS6. I had tried to use CS5.5 with the AJA Kona 3, and it was less than solid. You had to use custom AJA settings, and I could see the Canvas (program monitor) output, but not the Viewer (preview). I had used CS6 to edit, but not to monitor externally. So when I launched it with the T-TAP attached, I was very pleasantly surprised to find that it worked, and worked VERY well. No longer did I need custom AJA settings; the base T-TAP driver and Adobe plugin were all that I needed, and I got a solid signal from CS6. Viewer, Canvas…zero latency and no audio drift. No slowdown in performance. It simply worked, and worked well. And like with FCP 7, I could either monitor audio via the T-TAP, or route it through the direct out (headphone jack). It was perfect.

    The final test was with Avid Symphony 6.5. And this was a full-on, frying-pan-to-fire test. I was hired to do a remote edit…travel to the location to edit footage being shot on location, and turn around the edit in one day. The shoot was tapeless, shot with XDCAM EX cameras. The footage came in, I used AMA to get it into the system, and then edited on my 2012 MacBook Pro, and I monitored externally via the T-TAP and the hotel’s HDTV. For the first part of the edit I didn’t use the device, I did everything with the laptop. That’s because Avid locks the audio output to the AJA T-TAP…meaning that audio followed video, and I’d have to monitor audio via the HDTV. A tad difficult as it was bolted to the dresser. Unlike FCP 7 and Adobe Premiere CS6, I couldn’t choose an alternate output for the audio. So I did the initial edit without the T-TAP, but when it came time to show the client my cut, I connected it to the TV and was able to play back (with zero latency and no frame offset) for the client at full quality. All while I was confined to the really small hotel table. My computer, hard drive and T-TAP barely fit…but nothing was really crammed in, there was elbow room. And the edit went smoothly.

    Unfortunately I did not test this with FCP-X, as I do not have that on my system. However, I do know that it works with FCP-X, and the latest update of FCP-X and the T-TAP drivers make external viewing very solid.

    Bottom line is…the AJA T-TAP is amazingly simple, and simply works. It’s great no-fuss, no-muss video output for the major editing systems. The simplicity, the price point, small footprint and flexibility of this little box make it a must-have in my book. It works with any Thunderbolt-equipped Mac and is perfect for low cost, high quality video output monitoring. AJA has a reputation, and track record, for compatibility and stability…and that tradition is carried on with the AJA T-TAP.

    (NOTE: The T-Tap review unit was returned to AJA after a 4 week test period).

    Well, I’m done with A HAUNTING.  I sent off my last episode a couple weeks ago. The good news is that the ratings started out good, and only got better and better. So that means that another season is a strong possibility.  Although if it happened it might not be for a while…pre-production and writing and then production.  But now I’m getting ahead of myself.

    If you want to see the episodes I edited, they can be found on YouTube: Dark Dreams and Nightmare in Bridgeport. My favorite episode that I cut has yet to air. It airs on Friday, December 7 on Destination America.

    The show was fun to work on. Cutting recreations that were more like full scenes with interviews interspersed throughout…instead of using them as b-roll over VO and interviews. This was more like cutting narrative, which I really enjoy. I had scripts that were short, so I had cuts that came in a minute short…and then needed to struggle to not only find the extra minute, but the other 4:30 for the international “snap ins.” I also had scripts that were 20 pages long, and thus my cuts were 20 min long. This presented its own issues…sure, I now had plenty of footage for snap ins, but with that much extra, I’m faced with cutting really good scenes, and often cutting scenes that help tie the whole story together.

    We did use a lot of Boris Continuum Complete effects…and I relied a lot on the Paint Effect tricks I learned years ago. We did have an editor who was an effects wiz, so he made some presets we could drop onto clips…and that really helped. Tweaking those effects allowed me to familiarize myself with Boris a bit more.

    On the technical side, I started the show cutting on Avid Symphony 6.0 on my MacPro Octo 3.0Ghz tower (with AJA Kona 3), but then almost immediately began beta testing Avid Symphony 6.5 and moved to the new 2012 MacBook Pro (non-retina) with the AJA IoXT…gaining the ability to have more “voices” of audio so I could layer more audio into my temp mix. And the AAFs exported perfectly to ProTools. I also needed to resort to working in my bedroom, as my home office is my garage, and it isn’t insulated. And we had two very hot months here in LA.

    The only issue I had with Symphony 6.5 was a Segmentation Fault error when I tried exporting H.264 QTs after working for a while in a project. It would export fine if I just opened the project and then exported. But work for a while, then export…I’d get that error. And during the entire time I used Symphony 6.5…including the two month beta testing period…I only crashed twice. Pretty stable system. As opposed to the Avid 6.0.3 system I am editing with on my current gig. Shared storage setup running EditShare, on an iMac. It crashed 2-3 times a day…segmentation faults that would cause Avid to quit. Updating to 6.0.3.2 helped greatly…now I only crash once a week.

    So yes, I’ve moved on to my next show. In an office with multiple editors and assistants. Shared projects and shared storage. I’ll be working on Act 4 of show 103 one day, then Act 2 of show 105 the next, then re-arranging show 106 for the rest of the week. Reality show, so I’m getting my toes wet in that field again.

    Denise Juneau and the Montana Native American Vote

    Last week I was enlisted to help edit a news package for Native American Public Telecommunications (NAPT) that would also end up on the MacNeil Lehrer NewsHour. This was a rush job, as it pertained to the 2012 election, and that was less than a week away. We had to work quickly to get this done in time to air. Very typical for news…but something I hadn’t done before. It was a whirlwind edit.

    First off…the story. Click on the link above to watch the end result. Basically it is about how important the Native American vote is to the elections in Montana. While we did showcase one candidate (who was the first Native American to be voted into a statewide post), the main story had to be about the vote itself. Because if you make a piece about one candidate, and air it, you need to provide equal air time to the opposing candidate. So we had to do this properly.

    How did I get this job? Well, the producer is a Native American producer out of Idaho, and I have a lead into that community on several fronts. Mainly because I too am Native American (1st generation Salish descendant, part of the Flathead Nation in northwestern Montana). But also because the camera operator runs Native Voices Public Television, and I was an intern there in college. And he is my stepfather…but that’s beside the point. I’m a decent shooter and good editor (so I’m told), and they wanted my talent. So on Tuesday I flew from LA to Great Falls…a trip that took 11 hours, mainly due to the layovers in Portland and Seattle.

    I tried to pack light. I packed my 2012 MacBook Pro, AJA IoXT, mouse, assorted cabling, a 500GB portable hard drive and clothing into my backpack. And then in the camera bag I packed my Canon 7D, GoPro, headphones and various accessories. Then a Pelican case with a 2TB CalDigit VR. All perfectly sized for carry-on…nothing needed checking. The camera operator was bringing along a Sony HDCAM camera…tape based (one reason I was bringing my IoXT…to capture the tape)…as well as an audio kit with shotgun mic, wireless and wired lavs, a Lowel lighting kit and a Sachtler tripod. While he was slated to be the main camera guy, I brought along my 7D and GoPro to shoot extra stuff.

    Now, while I was landing and staying in Great Falls, we needed to go to Havre, Montana…120 miles away. So we were up early and headed out. I mounted the GoPro on the roof of the car to get driving scenics, and shot a bit out the window with the 7D as we drove. When we arrived we needed to go to a few locations to get some interviews before the rally that evening. I’ve never worked in news, but because I have seen a few reports, I noted that often they have a wide shot of the reporter talking to someone before the interview, or a second camera shooting the interview, so I did the same. Shooting a wide of the interviews to use as intros or cutaways. Between getting interviews and the rally, we also got as much b-roll as possible: campaign signs, scenics, town shots, as well as the reporter/producer standup. I was glad that I was there with the 7D, as pulling over to get a quick shot of a sign or a poster was really easy…a lot easier than pulling out the big HDCAM camera and sticks.

    When we got to the rally I was relegated to audio duty.  Handed a boom mic and the wired lav and a small mixer, and charged with getting the audio, and riding the levels.

    The rally wrapped at 7PM and we needed to get back to the hotel. While we drove back I offloaded the 7D and GoPro cards to my portable hard drive (loving the SD card slot in my laptop now), and then transcoded them into Avid Symphony. The vehicle we were in had a DC outlet so I didn’t have to worry about power. I was very glad to have this “down time” to transcode the footage.

    When we got back to the hotel we ordered pizza and set up my remote edit station. I connected the camera to the IoXT via SDI, and that to my MBP via Thunderbolt. Then the CalDigit was connected via Firewire 800…fine for capturing and playing back DNxHD145 (1080i 29.97). I was lucky enough to have an HDTV in the room, so I used that as the “client monitor,” connecting it to the IoXT via HDMI. We watched the tapes as we captured, and then the producer wrote the story (he had to write a print version, a radio version and a web/broadcast version). We did have the first part of the story written; he did it as a standup in the field. The rest of the story he recorded as temp narration with a Snowball mic and GarageBand. And then he and the camera guy went to bed…long exhausting day. I edited a “radio cut,” just an audio stringout of the standup, narration and interview bites. That took about an hour for a 5:30 run time. Then I too hit the sack at 12:30. We agreed to meet at 6:30 AM to finish the rest of the cut.
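    (A quick sanity check on that Firewire 800 claim…my arithmetic, not from the original workflow notes: DNxHD 145 runs at roughly 145 Mb/s, or about 18 MB/s per stream, while a Firewire 800 drive typically sustains around 70-80 MB/s. Plenty of headroom for single-stream capture and playback.)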

    At 6:30 we met in my room, drowned ourselves in coffee and continued to edit. After an hour we had the piece done, with a run time of 5:17. I did a quick audio pass to even things out, a very rudimentary color pass using the HDTV…and then compressed a file and posted it for the clients (NAPT) to review and give notes. We hoped to have it delivered that day, but since the Exec Producer was traveling too, they didn’t get a chance to see it until later. So, I packed everything up, backed up the media onto the external drive and the CalDigit VR (redundancy!) and headed to the airport (11:30 AM flight). I received notes while on the road, and when I landed (9:55) I got home, set up the drive on my main workstation, addressed the minimal notes, did a proper audio pass and color correction using my FSI broadcast monitor…and compressed it for YouTube per the client’s request. I had it uploaded to their FTP by 1AM, and it was online by 6AM…YouTube, the NAPT website and Facebook.

    This certainly was a down and dirty edit. And I’m sure it took longer than most news stories do. I also know that the ability to edit at least the tapeless formats natively would have sped things up, but I did have time to transcode as we drove back. Although, if we had shot entirely tapeless, I’m sure I could have had the rough cut done during the trip back. And I know that using another NLE, say Adobe Premiere, would allow me to edit the formats natively and save on transcode time. But I needed solid tape capture, and Avid with the IoXT gave me that. Yes, I could have captured with the AJA tool as ProRes and brought that into Premiere (I say, anticipating your comments). I used Avid as that is what I was used to, and it’s best to use what you know when you have a quick turnaround. One of these days I will learn that app better.

    Sorry, it has been a LONG while since I posted anything about A HAUNTING. I was going to get into the FINE cut stage of the process when I was on my first episode, but then I got buried in things like the fine cut for that episode, beginning the rough cut of episode 2…prepping other episodes for audio mix and online. This show took a lot of my time. One big reason: the show needed to be 43:30 for the domestic version…and 48:00 for the international cut (an extra 4:30 of material called SNAP INS that we put at the end of the sequence, to be cut into the show by someone later). The schedule we had was for cutting shows of that length. However, some of the scripts were a little longer, and the rough cuts ended up being 62 min for my first episode, and 68 minutes for my third. That means that I needed to take extra time to cut that extra footage. I average about 3-4 minutes a day (pushing 10-12 hours a day) so that is a few days more of work. Which is fine…it gives us options to cut, and options for snap ins.

    My second episode? Yeah, that was a tad short.  42:50 for the rough, so I had to extend scenes and draw out moments to make it to time, and for one long edit session, my producer and I (she moved back to LA after production wrapped, so it was nice to have her in my cutting room…er…garage) figured out the extra four and a half minutes of program time for the international cut.

    So now I want to talk about the FINE cut process.  This is what happens after the producer gives me notes…although if it is my segment producer that might just end up being the second rough cut, and when the EP (executive producer/show runner) gives notes, THAT is the fine cut.  And that is what we send to the network.

    The Fine Cut is one of my favorite parts of the editing process.  Because that is where I can go back and finesse the scenes, add moments, tweak the cut, do any special transitional effects that the networks love.  See, for me, the rough cut can be a chore.  I have to take this pile of footage and assemble it into something that makes sense.  Look for the best parts, put them together in some semblance of order.  Sure, I do try to finesse the scenes so they work, but I don’t spend a lot of time on this as I need to just get the cut out for the producer/director to see.  “Git ‘er done” as Larry the Cable Guy would say.

    Then I get notes…and can start on the fine cut. I can go back, look for better shots or angles (since they tend to ask, “isn’t there a better angle or take for this line?”)…mine the footage for something I might have missed. Spend time making the cut better. Tweak the music, add more sound design to make it sound richer, or to sell the cut (or in this case, the scary moments) better. That’s the phase I just finished up now…on my third episode. And it’s one of my favorite parts because I can go back and look at the other options…find great looks or moments to add to the cut to make it better. Where the rough cut might have you hacking at the block of wood to get the general shape, the fine cut allows you to go in with finer carving tools and add more detail, smooth out some edges (to use carving as a metaphor).

    This is also the part of the post production phase where we settle on the VFX shots we will be using, and then I prep those for the VFX guy. We have had some issues with a few VFX shots that, in the way they were set up, were difficult to pull off given the budget of the show. But most of those were dealt with by cutting the scenes differently to make them work better, to lighten the load on the lone VFX guy plugging away in his VFX cave. For this part, since we were working at full res, we’d export QuickTime movies of the footage, with handles when we could manage, and reference QuickTimes of our often pathetic attempts at temping them (if only you saw how rough some of my VFX attempts are. Yeah, not my forté).

    And then we send this off to the network…and hopefully their notes won’t cause too much pain.

    OH…and one note on the last episode I am working on. I have been using Avid Symphony 6.5 pretty much since the start of the series, as I had been beta testing it since June. And it allows more “voices” of real time audio…basically more tracks of audio. I still get 16 tracks, but instead of them all being MONO, and needing to use two tracks for much of my audio like SFX and music…I can modify them to stereo tracks, and thus they only take up one track on the timeline. This gave me more options when I did the sound design. Which, it turns out, I spend most of my time doing. Sure, I cut the picture, but a lot of the scare that happens, in my latest episode at least, is due to audio hits and cues. Relying on what you hear more than what you see to sell the scare. To me, it works a lot better than seeing the ghost…flashing it and then hinting at what people see tends to work better. On the first two episodes I used mono tracks…but because I found myself very limited in what I could do, I tested using 7 mono tracks (1 for narration, 2 for interview, 4 for on camera audio) and then 9 stereo tracks (2 for music, 7 for SFX). I sent an AAF to the post mix house and they said it came into ProTools easily, so for the last show, I had more audio tracks for sound design goodness.

    All right, that does it for this episode of…A HAUNTING, the post process.

    Sorry, I’ve really let this blog, and my podcast, fall by the wayside. I’ve fallen behind on stuff I’ve wanted to write and podcast about. If you would like to know why, here’s a great article from Kylee Wall on the Creative Cow about being a post production dad.

    The short of it is that I’m working 12-15 hours a day on A HAUNTING to get it done before it starts airing. And I’m setting aside the remaining time to be with my family, so they don’t forget what I look like.

    SOON! Soon though. I have a great blog post I want to write about going from Rough Cut phase to Fine Cut phase. And a podcast about borrowing ideas from other people. So… soon. In the meantime, read that blog post by Kylee.

    This is very interesting…given my current situation.  Production company in Virginia…production happening in Virginia.  Post being a mixture of Virginia and Los Angeles.  Really curious how this would handle long form shows.

    One thing I find myself doing very often while editing remotely…me in L.A., the production company in Virginia…is exporting Quicktime files of my project for my producers at “home base” to watch.  I will do this on an Act by Act basis…when I finish an act, I’ll export it, upload to their FTP.

    Now, like most, if not all of you, I don’t like to sit and wait for a long time while this process happens.  I have stuff to do. So I want this to go fast.  And I have found a formula that makes it not only go fast, but keeps the file sizes small too.  Without making the video look too crappy.

    First off, I want to note, this is REVIEW quality. Meaning, you watch it for content, not quality.  The outputs aren’t high res, they aren’t high quality.  They are OK quality.  This is how I keep the file sizes small, and export times fast.  How fast?  Real time fast.  A 48 min sequence exports in about 50 min. OK, a little SLOWER than real time.  But what if I told you this includes a timecode window? One that I didn’t render before exporting?  Yeah, that impressed me too.

    OK, so the first thing I do is highlight all my tracks, from start to finish on the timeline. Then I do an EXPERT RENDER…meaning, “render all effects that aren’t real time effects.” Since I render as I edit, this often takes little to no time…but some stuff slips through the cracks. Then I make a new blank layer, and drop on my Timecode Generator effect. And then, without rendering again (if you did another expert render, it would want to render that timecode…for the entire length of the sequence)…I simply choose FILE>EXPORT. A window pops up asking for the export type and the location where the file should go. From there I click on the drop down menu and choose EXPORT TO QT MOVIE, set my destination and file name, and then use the following settings.

    1) This is the main export window.  I’m not going to repeat all the settings you see here, I only want to point out that I use 640×360, as I am editing a 16:9 sequence, and I make sure it is chosen in both the WIDTH AND HEIGHT section, and the DISPLAY ASPECT RATIO section.  This frame dimension must be consistent in all export window options. Oh, and USE MARKS means that the IN and OUT points I set are the range that will be exported. I will have my full timeline up, but only want to export one Act, so I mark IN and OUT for the act I want to export. Make sure that is checked, otherwise it’ll export the whole sequence.

    OK…moving on.

    2) I click on FORMAT OPTIONS to get the above menu. I make sure to enable AUDIO and VIDEO here. Even though I might have chosen to do video and audio in the previous menu, if it isn’t chosen here, you won’t get it. Gotta do it in both places. Click on AUDIO…choose 44.1 kHz and 16-bit stereo. If you want smaller QT files, make it mono, or 22.05 kHz and mono. I don’t do this. Because audio is very important. If the picture quality sucks…fine. People can see past that. But if the audio sucks, is noisy…then the QT is unwatchable. This is the one area where I keep the settings in the GOOD range.
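    (For a sense of scale…my arithmetic, not from the original post: uncompressed 16-bit stereo at 44.1 kHz is 44,100 × 16 × 2 ≈ 1.4 Mb/s, or roughly 10 MB per minute before any audio compression. Dropping to mono, or to a 22.05 kHz sample rate, halves that.)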

    OK, click on VIDEO and you get:

    3) A couple things to mention here. At first Avid defaults to SORENSON 3. So click on the drop down menu and choose H.264. If you leave the DATA RATE on AUTOMATIC, that allows you to adjust the quality slider. If you type in a number under RESTRICT TO, then you can’t. I generally keep it on AUTOMATIC and put the quality at MEDIUM. For smaller files, you can restrict to 1000 or 1500 kbps. I just find MEDIUM to be a good middle ground. Another important thing to do is change the encoding from BEST QUALITY, where it defaults, to FASTER. This is the key to the fast export times.
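    (To put rough numbers on the RESTRICT TO option…again my math, not from the original post: a 48 min act is 2,880 seconds, so video restricted to 1000 kbps works out to 1000 × 2,880 ÷ 8 ≈ 360 MB, and 1500 kbps to about 540 MB, before audio. That’s why restricting the data rate keeps these review files manageable.)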

    Click OK. Click OK again…the other OK, in the MOVIE SETTINGS window. Then click SAVE AS…and name it whatever you will. This way you don’t need to redo your settings. Just choose the preset you made and you are ready to go.

    Then watch it progress in real time.

    Now, if you want fast encoding of high res QT H.264 files…also in real time…then look at the Matrox solutions. Compress HD is a PCIe card that fits in the MacPro computers. And then there are the MAX versions of their hardware I/O devices. If you use the Matrox H.264 option, that will trigger these devices to kick in and aid the encoding process. Making high res H.264s in real time. Chew on that.

    (NOTE: I am working with footage from the Canon C300…accessed via AMA and consolidated, not transcoded. So our footage is XDCAM 422…a GOP format. And GOP formats don’t allow for SAME AS SOURCE exports. So I can’t do that and then use, say, COMPRESSOR and add the TC burn there. If your footage was DNxHD in any flavor, you’d be able to do that. But I wonder if doing that, then taking it into Compressor or Sorenson and compressing, is any quicker than the real time, direct output from Avid that I have laid out here.)
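    If you did go that export-then-compress route, the timecode burn can also be added outside Avid. Here is a purely illustrative sketch using ffmpeg’s drawtext filter (ffmpeg is my example tool, not something mentioned above; the font path, start timecode and filenames are all assumptions):

```python
import subprocess

# Hypothetical sketch: burn a timecode window onto an exported act with
# ffmpeg's drawtext filter. Font path, start TC (drop-frame 29.97) and
# filenames are assumptions for illustration.
tc = (
    "drawtext=fontfile=/Library/Fonts/Arial.ttf:"
    "timecode='01\\:00\\:00\\;00':rate=30000/1001:"
    "fontcolor=white:box=1:boxcolor=black@0.5:"
    "x=(w-text_w)/2:y=h-text_h-20"
)
subprocess.run([
    "ffmpeg", "-i", "act1_export.mov",
    "-vf", tc,
    "-c:v", "libx264", "-crf", "23",  # re-encode picture with the burn-in
    "-c:a", "aac", "-b:a", "128k",    # re-encode audio for the MP4 container
    "act1_tc_burn.mp4",
], check=True)
```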

    I’m going to avoid the tech in this post, and try to concentrate on the creative. Because to tell the truth, there isn’t a lot of tech involved, just editing. Yes, I can say things like the trim tool in Avid makes cutting this show very easy, as the trim tool, in my opinion, is one of the best tools for cutting narrative. It allows me to nuance the scenes better than I could with FCP.

    Oh, sorry, I’m talking tech. I’ll stop. I’ll stop saying things like: I’m doing my temp mix on the timeline without the audio mix tool open, using keyframes instead of chopping up the audio tracks and lowering/raising levels in the mixer and adding dissolves, like I always have done in the past. I’ll skip saying that.

    I guess I could say that I’m sticking to traditional editing, and not relying on a lot of fancy transition effects. Well, I’m using ONE, and only as a transition between scenes that denotes a passage of time. It’s a simple film leak that’s superimposed between the cuts, with speed ramps for 4-5 frames on either end to make it look like a camera starting and stopping. Other than that, it’s all straight cutting.

    And that’s what I really want to get into…the cutting. The way I approach the cutting. This show is an interesting cross of interviews, narration and recreations. That makes it a bit of a challenge to cut. See, I can’t just cut it like a narrative show, cutting the scenes as scripted. I need to make sure that the narration fits, and that the interview bites fit. The scenes are driven mainly by the sound bites, with the audio from the scenes lowered to make room for them, breaking through only at certain points so that story points are made in the narrative. It’s a balance…a tough one. Because the dialog of the actors will be the same as the interview, the interview audio needs to go over the acting, but then punctuate the scenes with audio from the scenes. And still allow space for the narration to fill in the gaps.

    Now, while they did take a lot of this into account when they shot, there are still moments where I have more narration than I do scene, so I need to recut the scene after I add the interview and VO so that I can cover what is being said. It can be tough at times, but it can also allow me to find reactions to emphasize what is being said. It’s a challenge, and all of this does add time to the edit. I’m not cutting a doc with VO and sound bites that I just need to fill in with b-roll and music…nor am I cutting straight narrative, where I can rely on performance to carry the story. I need to blend both. And more so than on previous shows I have cut that had recreations, like Andrew Jackson and the Mexican American War. Those relied mainly on VO and interview bites, and all the recreations were basically backdrop to those. Very few sound-ups were had. But this show has a 60/40 split, leaning towards performance over interview.

    OK, with the story part cut, I need to also address audio.  I tend to cut the story and the interviews and VO first, and then go back and add music and sound design. And yes, I mean sound design. More on that in a second.

    MUSIC

    This show will have a composer, and the temp tracks we are relying on are from previous shows and other cues in their library. The music will all be redone, so what I am doing is just temp, but it needs to sound like the final to make the scenes work…to sell it to the producers and network. So a good amount of time is taken on the music. And as always happens with music, the timing of the scene changes slightly when I add it, and I add sections where the music punctuates the action.

    SOUND EFFECTS

    Well, I have to say “sound design.” No, I am not an audio mixer, but I still need to do quite a bit of sound design. I need to layer music, and small hits, rises, impacts…scary SFX cues and demon breaths and all sorts of audio to make the scary parts work. I mean, you should see them without the scary music or effects. They are creepy, sure. But add the SFX and it REALLY sells it. Audio can get pretty thick…16 tracks of audio, and more than a few are stereo tracks. Go down to the WEEKS 3-4 post and see what my timeline looks like. I might have 3 tracks of video, because I might layer a couple shots, have a layer for transitions, and another for titles. But that’s it.  AUDIO? 16 tracks…the most I am allowed for real time playback. Audio is by far more involved than video.

    But again, this adds time. A lot of time.  Hunting through music cues for the right one, one that you didn’t use before. And wait, what was that one I heard when looking for something in that last act? Where is that one, it will work great here.  And then listening to all sorts of WHOOSHES and IMPACTS and ghostly audio again and again to see what might work, and what just sounds cheesy.

    So I delivered the rough cut of my first episode…and it was 58 minutes long. It needs to be 48 minutes for international distribution, with 5 minutes taken out for domestic. So it’s a tad long.  I’m awaiting notes on that one. In the meantime, I’m in the middle of Act 4 of my second episode (of 6 acts) and making headway. Just today I cut 2:48 of finished, fully mixed and sound designed video.  A little slower than usual, I try to get 4-5 min done a day. But today I was working on a scene that was full of dramatic tension buildup, and ghostly encounter, so it took a little time. I expect notes on my first episode tomorrow, so when that happens, I’ll stop work on episode 2 to address those so we can get that off to the network.  Then back to finish up the rough cut to get that to the producers. And by then my drive with the next episode will arrive.

    No rest until it’s over.  #postdontstop

    OK, this has been an odd couple weeks, as I took half a week off to vacation up at Lake Arrowhead, and then I had a tight tight deadline to get this show done.  But I’ll keep this short and sweet too.  I’ll mention the obstacles I faced, and how I solved them.

    OBSTACLE #1: The heat.

    Yes, it was getting hot in LA.  In the 90’s in the valley where my office…er…garage…is located.  And my garage lacks one major component…insulation.  So while I did buy a 12,000 BTU air conditioner, it really didn’t cool the office down at all.  And that made working out there intolerable, and dangerous for the equipment.  So, I did the only thing I could do at the time…moved into the house.  I set up a small table in my bedroom and set up my new 2012 MacBook Pro (non-retina) along with one of my Dell 24″ monitors and a speaker so that I could continue editing in a nice cool setting. I brought in my nice chair, bought a Griffin laptop mount to get the computer up to a reasonable height to match the Dell, connected the hard drive and was ready to go.  This setup helped with obstacle #2.

    OBSTACLE #2: Slow computer

    Even though it is a tower with loads of RAM (if you think 16GB is loads) and a nice graphics card (Nvidia 285GT) with a Kona 3 card…Avid Symphony seemed to struggle. I would get beach balls periodically that would last about 30-45 seconds, then finally go away. The system would lag behind my keystrokes, meaning I’d hit 5-6 keys…then wait two seconds for the Avid to catch up to me. And I would get consistent FATAL IO ERRORS…related to the Kona. And this horrid “K” key bug where I’d press “K” to pause playback, only it wouldn’t; it just slowed playback down until I released it…at which point it resumed at full speed. I’d need to hit the spacebar to stop. That happened periodically.

    So in moving into the house, I began using my laptop to edit. And let me tell you, most of those problems went away. By most I mean the “K” key issue persisted, and I got one FATAL IO ERROR…but only after I added the AJA IoXT box to the system. And then it only happened once in two weeks. And I didn’t use the IoXT all the time, as my reference monitor had to be left out in the office/garage, as I have no room for it in my bedroom setup. Ah well. But overall, the laptop performed a lot better than my tower. Even encoding an H.264 with TC burn was faster on that laptop. My 2008 MacPro is showing its age.

    OBSTACLE #3: lots of footage, lots of scenes, music to be added…

    Basically…time. I was running short on time, and I had a lot of footage to cut. In the end I went a couple days over my deadline, and ended up with a 57 min rough cut. The cut should be in the 48 min range for international, with three minutes removed for domestic. So I am 10 minutes long. No biggie, that just means that the episode will have to be attacked with a machete to cut out enough stuff to get me to time. It took me longer than usual as I had a small library of music that I needed to choose from, and I’m a bit too much of a perfectionist when it comes to music editing and temp audio mixing. It’s a blessing, and a curse. My cuts sound good…but take longer to do. It turned out to be fine, as the producers were still focused on the first episode that another editor cut…so I had some breathing room. Still, it took eighteen 12-14 hour days to get this cut done. Three days more than I was allotted. I hope the next episode will go smoother. I think it will.

    OBSTACLE #4: other things

    Yes, other things needed my attention. I was going on vacation, so I was busy trying to work and pack at the same time. Then trying to work with the kids constantly coming in because they heard some cool moment they wanted to see, and they wanted to watch me edit (at that point I switched to using headphones so they couldn’t hear things). And I was trying to deal with two onlines for MSNBC that needed tweaks here and there (Defending Casey Anthony and Ted Bundy: Death Row Tapes. Casey already aired).

    All in all I like my cut. I will need to go back and “fancify” things…rock and roll it a little. Add speed effects and cool transitions and the like. I did a bit of that while doing the rough; after seeing what the first episode’s cut had, I had to try to keep the same style, and make it “not boring.” I did mainly focus on the story, but also wanted to have SOME cool things to make it stand out. And that cool stuff takes a while. I wonder how long the editors of AMERICAN HORROR STORY get to cut a show? I’ll see if the assistant editor Misha comments here and lets us know. He follows me on Twitter, and we’ve had pizza together…so I hope he might.

    OK…the cut is done, and I’m off to eat dinner and watch a movie with my family.  Here’s a picture of my timeline:

    OK, time for another hard drive enclosure review: the RAIDAGE GAGE104U40SL-SAUF 1U 4 Bay RAID Enclosure from iStarUSA. This one is cool…it stands out. That’s why, when the makers asked me to review the unit by commenting on a previous post, I leapt at the chance. Well, after first starting to compose an email gently letting them down… “Thank you for your interest in my blog. I’m sorry, but I no longer do hardware reviews for drive enclosures as I find them dull and the same old same old…” But then I got a wild hair and clicked on the link to look at the thing.

    I liked what I saw.

    Here’s why I liked what I saw. This is a slot loading TRAYLESS hard drive enclosure. I can take bare SATA drives I buy off the shelf at Fry’s or order at newegg.com and put them in the unit right away. No trays to screw onto the drives first. Pop open the door, and in they go.

    I’m a HUGE fan of this type of enclosure, because I use bare SATA drives to archive all sorts of things. Camera masters, media-managed show masters, show outputs, stock footage, music, and sound effects. And I also use them on occasion to edit from, although that is rare. You see, I currently have a SansDigital unit connected via eSATA that I use as a trayless enclosure, although it isn’t designed to be one. Yes, you can slide the drives in, but the unit wants you to then screw them in, to keep them in place. The drives aren’t as snug in their beds as they should be…they are only held in place by the connectors. So it isn’t the best solution, which is why I mainly use it for archiving.

    But this unit is designed for the bare drives. It holds them in place without the need for trays.

    And it has nice release handles to aid in getting the disks out.

    And it’s VERY quiet. There are fans for cooling, but I don’t hear them. I hear the drives more than them, and when you close the big front door…even that sound becomes very minute. Barely noticeable. My MacPro is louder.


    And there are indicator lights on the front so you can see which slots have drives in them, and if they are active.

    OK, so we have one cool feature… that the unit takes bare SATA drives without trays. Let’s add a couple more cool features.

    CONNECTIVITY.

This unit pretty much has it all. It covers nearly all the bases: eSATA (my current connection of choice), Firewire 800 (two connectors), Firewire 400 (one connector), and USB 3.0. You can connect this to just about anything (yes, for Thunderbolt you will need an adapter). Perfect! I can connect it to my MacPro via eSATA, or to my 2012 MacBook Pro via Firewire 800 or ultra fast USB 3, and use it to back up tapeless media or files from my laptop. Or use it as my media drive. Macs used to lack USB 3, but now it’s available on their laptops…and it’s a Windows workstation standard, so on a Windows PC you have ultra fast USB 3 connectivity as well.

To answer your question before you ask it…no, you cannot connect it to your tower via eSATA and to another computer via Firewire or USB 3 and have it show up on both at the same time. It won’t work; I tried. And why two Firewire 800 ports? Loop through…daisy chaining drives is possible with this.
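For the curious, here’s a rough back-of-the-envelope on what each of those ports can move in theory. This is just a sketch using the published line rates (SATA and USB 3.0 lose roughly 20% to encoding overhead)…nothing specific to this unit, and real-world numbers always come in lower, as you’ll see in the speed tests below.

# Rough theoretical throughput of the interfaces on the back of the unit.
# Line rates are the published specs; SATA and USB 3.0 use 8b/10b encoding,
# so usable data is about 80% of the line rate. Real-world speeds run lower.
interfaces_bps = {
    "Firewire 400": 400e6,
    "Firewire 800": 800e6,
    "eSATA (3Gb/s)": 3e9 * 0.8,
    "USB 3.0 (5Gb/s)": 5e9 * 0.8,
}
for name, bps in interfaces_bps.items():
    print(f"{name:16} ~{bps / 8 / 1e6:.0f} MB/s theoretical max")

Which lines up with the numbers I measured below: Firewire 800 topping out in the 70-80MB/s range, and eSATA leaving plenty of headroom for whatever the drives themselves can feed it.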

    FOOTPRINT.

Well, it does have a pretty major footprint, meaning it takes up a big part of your desk. But you can set one of your monitors on top of it, or put it off to the side under your decks. Unlike my SansDigital, which stacks the drives vertically, this design has the drives side by side. But that is to enable the other cool thing I liked about it.

    IT’S RACK MOUNTABLE!

It takes up 1U of rack space. That stands for ONE UNIT…one rack unit, or 1.75″ high. In that respect, it takes up very little space. And since I happen to have a rack or two under my desk, it fit in perfectly. So perfectly that I’m most likely going to buy the unit when testing is over. I like it that much.

    RAID TYPES

    The unit can be configured in many ways.
– JBOD (Just a Bunch Of Disks), meaning that each drive shows up as a separate volume. Put four drives in, you see four drives appear on the desktop.
    – RAID 0
    – RAID 1
    – RAID 3
    – RAID 5

    Don’t know what those all mean? Then go here for some light reading:

    Most people use JBOD like I do, for archiving, RAID 0 for speed, or RAID 5 for speed and redundancy.
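Since people always ask how much usable space each mode leaves you, the math is simple enough to sketch in a few lines of Python. These are just the standard RAID capacity formulas assuming four identical drives…nothing pulled from iStarUSA’s documentation:

# Usable capacity for N identical drives at a given RAID level.
# Standard formulas only; check the unit's manual for specifics.
def usable_tb(n_drives, drive_tb, level):
    if level in ("JBOD", "RAID 0"):
        return n_drives * drive_tb        # all space usable, no redundancy
    if level == "RAID 1":
        return drive_tb                   # everything mirrored; some boxes
                                          # pair drives as RAID 10 (half) instead
    if level in ("RAID 3", "RAID 5"):
        return (n_drives - 1) * drive_tb  # one drive's worth goes to parity
    raise ValueError(f"unknown level: {level}")

for level in ("JBOD", "RAID 0", "RAID 1", "RAID 3", "RAID 5"):
    print(f"{level}: {usable_tb(4, 2.0, level):.0f}TB usable from four 2TB drives")

So with four 2TB drives you’d see the full 8TB in JBOD or RAID 0, 6TB in RAID 3 or RAID 5 (and a dead drive won’t kill you), and 2TB in RAID 1.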

    SPEEDS

Yes yes…”how fast is the thing?” I know that’s what you want to know. Alright, I tested it only as a JBOD unit…that’s the default setting it ships with. I tested it this way because I didn’t have four drives of the same make/model/size to test the other RAID types (those are all in my other RAID). I did have four drives of varying sizes, so I tested the speed of the unit in JBOD mode via Firewire 800 and eSATA, those being the fastest and most common connector types.

    With eSATA I got speeds in the 98MB/s to 108MB/s range. A bit faster than I get with a G-Raid connected via eSATA, or my SansDigital. VERY nice.

    Firewire 800 resulted in between 69MB/s and 82MB/s…which is typical for the other drives I have as well.

For the RAID 0 and RAID 5 testing, I relied on the manufacturer to provide the numbers. I’m sure if I had four matching drives to test with, I’d get the same numbers they did…I’m confident they were truthful in their reporting. They connected it via eSATA to a Windows machine.

    Here are the RAID 0 numbers:

    Between 111MB/s and 123MB/s using the AJA test…but upwards of 140MB/s using the ATTO benchmark. I think I trust that one better on a PC.

    And the RAID 5 numbers:

RAID 5 gave pretty much the same numbers as RAID 0: between 111MB/s and 119MB/s, and upwards of 140MB/s using the ATTO test. Now, the reason the numbers aren’t a LOT higher, like 300MB/s, is the limitation of the eSATA connection…that’s near its real-world limit on a box like this. For faster speeds, look at GigE Ethernet, Fibre and SAS connections. But for the connection types it has, that’s pretty dang decent. Perfectly fine for multiple layers of compressed video formats like ProRes and DNxHD…3-4 streams in my tests.
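To put that “3-4 streams” in context, the arithmetic checks out. Apple and Avid publish data rates of roughly 147Mb/s for ProRes 422 and 220Mb/s for ProRes 422 HQ and DNxHD 220 at 1080/29.97…so here’s a quick sanity check (my math, not the manufacturer’s, and real playback loses some bandwidth to seeks and overhead):

# How many realtime streams fit in the measured bandwidth?
# Codec figures are the published 1080/29.97 data rates in Mb/s.
codecs_mbps = {
    "ProRes 422": 147,
    "ProRes 422 HQ": 220,
    "DNxHD 145": 145,
    "DNxHD 220": 220,
}
measured_MBps = 111  # low end of the eSATA RAID 0/5 results above
for codec, mbps in codecs_mbps.items():
    streams = measured_MBps * 8 / mbps  # MB/s -> Mb/s, divided by stream rate
    print(f"{codec}: ~{streams:.1f} streams")

Call it four streams of the HQ flavors on paper, which squares with the 3-4 I actually saw once overhead takes its cut.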

No, it isn’t a speed demon, but what it offers is ease of use. It’s easy to get drives in and out, so you can buy bare SATA drives (cheaper than ones with enclosures) and swap them out for archiving camera masters, show masters, or going back and forth from project to project. And because it is rack mountable, taking up only 1U of space…it’s compact and out of the way.

    By the way, they have a pretty cool video that shows off the unit on YouTube. Check it out.

    Now that I have a laptop with USB 3, and my Tower sporting eSATA…this is on my wish list.

    The units run for $375, and can be found on Newegg.com

    (The unit was returned at the conclusion of the review)

    Week 2 was a full week.  A LONG full week.  There is a lot of footage to go through, a lot of script pages to go through…so my days are ending up being 12-14 hours long.  It’s a good thing I enjoy editing…otherwise that’d be a bit much. But I love what I do.  I guess in this situation that’s a good and bad thing.

    Well, I cannot say enough how good the C300 footage looks. It looks great.  And they are using prime lenses, so it is really sharp. And it does very well in low light.  Some scenes are very dark, but I can still see what is going on.

Now, this show is pretty unique in that it employs interviews, narration, and recreation audio.  The narration and interviews tend to cover a lot of the scripted scenes…I just have to let a few key lines be heard.  That makes editing a bit tricky.  I have to cut each scene the way you’d cut any scripted scene, yet leave enough room for the narration and interviews to cover the parts that need covering, while still letting the lines I want heard land at the right time.  And I want these scenes to make sense with the narration and interviews turned off.  So what I do first is cut the scene like I’d normally cut the scene.  Then I drag in the narration and sound bites and try to fit them in.  If I need to extend the scene a bit to cover more of either of those before I can do a sound-up on my lines, I deal with that…typically adding more pauses, looks, reactions…breath.  If I need to shorten the scene, I do so, but still try to have the dialog make sense.  Yes, it is going to be covered up by voiceover, but still, I want it to work.

On the technical end, I am working slightly longer hours because I need to group the clips myself…multicam them.  The Assistant Editor will do them for the next episode.  For this episode I was told it wasn’t done (something I agreed to), because most of the time both cameras don’t cover the same action, and it might be best to just treat them as separate takes.  That’s why they weren’t grouped…but then I found that grouping them speeds up my process.  First, I can watch both angles at the same time when previewing footage.  Second, I’m finding that more than a few times, the line readings differ from angle to angle.  They aren’t sticking strictly to the script…mainly in scenes with the kids.  They want the kids to act natural, so they have them adjust the lines sometimes to best fit how they say things.  It does help the kids give better performances, but it makes editing more…challenging.  I prefer the better performances…let me deal with getting them to match.

As for the Avid performance, one thing is plaguing me.  Well, a couple things, but the biggest is that the PAUSE button…the K key…isn’t pausing.  About one time in four it fails to stop playback, and just slows the footage down.  I then have to press the space bar to get it to stop (space is PLAY, but that also STOPS).  I’ve mentioned this on the COW forums, and the Avid ones, and I have found others posting the same issue.  There are no solutions, only workarounds.  One person suggested I remap PLAY to the K key.  I did, and that stopped the issue…but it raises new ones, like when I press the K key and then press L, things go double speed.  Don’t ask why I do that.  It’s a habit I formed in FCP: when I pressed play and then moved the playhead with the mouse, FCP picked up playing from there…it doesn’t stop unless you tell it to, even if you move the playhead.  Whereas Avid will stop if you move the playhead.  Eight years of habit…tough to break.

Now, I did try fixing it.  It first appeared under Avid 5.5.  But then I updated to 6.0…then 6.0.1.  Still happened.  I patched to 6.0.1.1…same thing.  I switched to my laptop…SAME THING.  It’s taunting me.  So many people say they don’t have this issue, but I have it on several machines, using several versions of the app.  And others report it too, so something’s up.

The second issue is that Avid still seems unable to keep up with my keystrokes, and often locks up with a spinning beachball.  After the 4th I’m going to try setting up my laptop as the main editing machine…and hope that cures it.  But my MacPro tower has 16GB of RAM, an NVIDIA card, and is running in 64 bit mode, so I don’t get why this is still happening.  On more than one occasion I’ve had to force quit because it was just locked up.

There is a third issue, one that plagued Walter Biscardi…and that is TAPE NAME.  Or rather, the lack of one.  Unlike FCP and Adobe, which assign the reel a name based on the folder you backed up to…or the name of the card if you import directly…Avid Media Composer and Symphony don’t do that.  They don’t assign any source name to the clip.  This would help me greatly in tracking down footage masters.  And it’s a big issue when it comes to going to Resolve, as Walter found out.  We don’t know how we’ll be finishing the shows just yet, so this is an issue that might affect us more, later.

    But Angus of Avid did say that they know of the issue and will be dealing with it.  Can’t wait guys, thanks.

While I’ve been editing this, I’ve been onlining a couple other shows on the side: TED BUNDY: DEATH ROW TAPES and DEFENDING CASEY ANTHONY, both for MSNBC.  These were edited with FCP, and on CASEY I’ve had to go in and do some touch up editing.  As I prep the shows for online…man, is FCP snappy and pain free.  No beach balls.  All the Avid slowness and locking up has made me really miss FCP.

On the plus side, I sent out Act One for review, just to show them the style I’m employing, and I got back good notes.  They like it…and that is a load off.  And I’m really digging the trim mode editing Avid utilizes.  I’m able to make tweaks to the sequences very fast, and I am always tweaking shots to fill a void, or to shorten them so that the lines that need to be heard, are heard.

Yesterday I took a stab at editing the show using my laptop.  The laptop in question is the new 2012 MacBook Pro…2.3GHz i7, 8GB of RAM, matte screen.  I took the external drive with the episode and connected it via Firewire 800 (glad I got the non-Retina…I need that connector).  I ordered a Thunderbolt to DVI/HDMI adapter from monoprice.com for a very reasonable $14 so I could connect it to one of my Dell 24″ monitors.  Now, the laptop sits a little low on the desk, and I’d like to get it close to the level of the Dell it’s connected to, but I didn’t get a laptop stand…not for this test.  I’m too cheap…actually, too busy to go buy one.  So I used a box.

    Yes, a box.

So as you can see, I have the laptop on the left, complete with project window, bins and mixer.  The large Dell has the Composer and timeline windows.  Nothing is feeding my broadcast monitor yet…I’m saving up for the AJA IO XT, or at least the AJA T-TAP.  Of those, only the IO XT has dual Thunderbolt ports, so I could connect both the IO box and an external monitor.  The T-TAP has one Thunderbolt port, so it’d be a choice of broadcast monitor or second computer monitor.  Not both…unless I shelled out for an Apple Display.  Not gonna happen.

    So I set out to edit, and edit I did.

The new computer was definitely faster than my old one…a 2008 octo-core 3.0GHz Mac Pro with 16GB of RAM.  It ran circles around it.  It was able to keep up with my keystrokes, where the MacPro lagged a few behind.  It scrubbed better, less skippy.  Fewer beach balls.  Faster renders.  It was great.

    Sorta.

You see, I have a script I need to follow, and I didn’t want to print it out and waste paper.  I like to look at it on the computer.  But because I was editing on that same computer, I couldn’t just glance over and read the script.  I had to hide the Avid interface, or click away from it…read…then go back.  Distracting to say the least.  And I still needed to check email, look at show notes contained in emails, tweet, and iChat with the wife.  More than a few people on Twitter suggested I get an iPad for this.  But guys, I just shelled out $2200 for a new laptop; I’m not about to shell out $500 more for an iPad just for reading the script.  The screen was small, too.  I’m used to two 24″ Dells to look at…suddenly I had a 15″ and a 24″.  It might have been better to have the second monitor be smaller as well.  I might look into that.

So I put up with it for the day, but that was it.  I switched back to the tower the next day…just so I could have my script at the ready.  But I already miss the laptop as an edit station.  It was solid…a pretty good replacement for my tower, at least for running Avid Symphony and FCP.  I haven’t tackled Premiere with it yet.  I did tackle FCP with it today, rendering out an online I’m working on at the same time.  The renders were much faster, and I didn’t get any General Errors like I did with the tower.  The laptop was better.

I might have to print out the script.  Because I’m under such a crunch, any speed boost would help.