Exploring The Video Feature Improvements On The Canon EOS-1D X
Canon have listened to DSLR movie makers and implemented important new features
Yesterday Canon launched their new flagship DSLR, the EOS-1D X. Please see my previous post for a full rundown of the tech specs; this time I will be concentrating on the improvements Canon have made for those who want to use the camera for movie making. The camera offers the first major improvement to Canon’s DSLR video functionality since the firmware update for the 5D Mark II that enabled 24p.
HDSLR video is now three years old and developments have been few and far between. We know that Canon was caught by surprise when people took so readily to using the EOS-5D Mark II for making movies. It was never designed with that in mind, but due to community pressure Canon eventually enabled some clunky audio controls and 24p through a firmware update. Subsequent cameras like the EOS-7D and the EOS-1D Mark IV had very minor improvements, such as the ability to shoot 50/60p at a reduced resolution, but there were no major advancements. The reason is that the development cycle of a new camera spans many years; the 7D and 1D Mark IV were already in development when the 5D Mark II was released, so there were only minor things Canon had time to incorporate into those cameras. What they really needed was a totally new camera, one built from the ground up with the knowledge that it would need some killer DSLR video functionality to push things forward. Enter the EOS-1D X.
USE COUPON CODE: WARP25
InSecurity Gets Warped by Java Post Production
We recently caught up with Jack Tunnicliffe, owner-operator of Java Post Production, to hear about his work on a new Canadian TV series, InSecurity. Java Post produces the color correction and visual effects for this comedy show that went to air in January 2011. Jack walked us through the effects-heavy opening sequence of InSecurity and also previewed an intriguing scene that involves a composited 'finger screen.' Read our Red Room blog post.
Red Giant Warp
Both InSecurity segments use a lot of heavy-hitting Red Giant products, including Red Giant Warp. To celebrate Java Post Production's amazing work, we are offering a 25% sale on Warp until May 16, 2011. The six powerful plug-ins in Warp give you ultimate control over shadows, reflections, glows and corner point warps.
The Shadow tool renders realistic shadows for text or subjects shot on greenscreen, while the Reflection tool creates perfect mirror-like reflections. Use Radium Glow, Glow Lite and Glow Edge to add sophisticated glows and glimmers to any project. The Corner Pin tool heightens realism in any scene with advanced features for working with warped images, importing tracking data from Mocha for After Effects, and adding secondary transforms and motion blur.
San Francisco, California/Tokyo, Japan, January 24, 2011 — Intel provided a technology preview of the Grass Valley EDIUS 6 nonlinear editing (NLE) software at the Intel Forum 2011, held in Japan on January 18, 2011. At the Tokyo press conference, Intel highlighted its newly announced 2nd Generation Intel Core processor, which embeds Intel Quick Sync Video technology.
Grass Valley presented a technology preview of a forthcoming update to EDIUS 6 that has been fully optimized for the new Intel processor’s hardware H.264 encoding, allowing “faster than realtime” encoding of full HD (1920x1080) H.264 video on a broad range of computers, from notebook PCs to desktop PCs, using the new 2nd Generation Core processor. More than 500 laptop and desktop PC platform designs based on this new processor family are expected from all major computer manufacturers worldwide. The EDIUS 6 update allows users to export to Blu-ray, or to the AVCHD format, faster than realtime directly from the EDIUS timeline.
"With our second generation Core processors, video editors using Grass Valley's EDIUS 6 will be able to complete their work faster than ever," said Fumihisa Shimono, Director of Software Marketing of Intel. "We know that in video, editing is all about performance and speed, and the combination of our second generation Core processors and EDIUS 6 deliver just that."
“Working closely with Intel, we have fine-tuned EDIUS 6 to deliver the best and fastest post-production solutions to the market," said Jeff Rosica, Executive Vice President of Grass Valley. "With H.264 and AVCHD becoming a key format for distribution, providing a faster solution is exactly what video editors need."
Grass Valley plans to release an update for EDIUS 6 and EDIUS Neo 3, both with support for Intel’s new processor, at the end of Q1 2011.
Five Questions with Codex Digital’s Marc Dando
In Hungary, while directors Mauricio Chernovetzky and Mark Devendorf shot Styria, a horror film inspired by a 19th century vampire tale by Irish novelist Sheridan Le Fanu, Budapest-based Colorfront and Codex Digital collaborated on the first-ever workflow to leverage the power of the ARRI Alexa camera in the ARRIRAW format.
The workflow allowed the filmmakers to capture full 2880 x 1620 pixel resolution, uncompressed 12-bit raw Bayer sensor data onto on-board Codex recorders and then quickly produce and grade dailies at Colorfront for review. The ARRIRAW workflow also generated other deliverables and archived the material on LTO-5 tape. Producers in Budapest and Los Angeles received sync-sound dailies in both Apple ProRes 422 and H.264 QuickTime formats via ARRI’s Webgate digital dailies service, optimized for delivery to the web, iPhone and iPad. Dailies grades applied during production will serve as the starting point for the final DI grade, to be completed at Colorfront’s facility in 2011.
Studio Daily sat down with Codex Digital president Marc Dando to talk about this new workflow.
Why did you choose to create a workflow specifically for ARRIRAW?
Cinematographers and post-production pros have welcomed the recent release of Arri’s raw digital camera format, ARRIRAW, because of its excellent image quality and the flexibility it provides during production and post. Yet there remains a good deal of confusion about some of the specifics of this format, how it can be applied with Arri’s Alexa and D-21 cameras, and how to structure a workflow to achieve optimal results. We are very excited about this format and have already seen it employed successfully on a number of productions.
Is Styria your first test of the Codex Digital workflow?
Codex has been conducting workflow tests using full ARRIRAW for several months. In November of last year, we used a Codex Onboard recorder to capture more than 60 hours of ARRIRAW for a feature film. Codex recorders have also been used in London to capture Alexa ARRIRAW data for commercials, including one that went to air in the U.K. on Christmas Day. Additionally, a short stereo 3D film was shot in Australia, where two Alexa cameras were captured by a single Codex Onboard recorder. That film was produced in conjunction with fxphd and the Australian Cinematographers Society. Currently, several films are planning to shoot using the Alexa/Codex combination.
How is the Codex Digital recorder used with ARRIRAW?
In a typical Codex ARRIRAW workflow, the user captures ARRIRAW data directly to a Codex recorder. After recording, the recorder’s data packs are removed and taken to a Codex Digital Lab or Desktop Transfer Station for QC and generation of deliverables. Although it is possible to capture Apple ProRes to SxS cards and ARRIRAW to a Codex recorder simultaneously, many feel the most robust workflow is to produce editorial deliverables directly from the digital camera originals, so that editorial is certain to see a true representation of them. The generation of deliverables can be done at the same time as archiving the material.
I imagine there must be pitfalls with this workflow as with any new workflow? What are they?
It is important to note that post tools must fully support Version 3 ARI files in order to correctly process Alexa ARRIRAW images. The sensor characteristics and ARRIRAW encoding differ significantly between the D-21 and the Alexa, so it is vital that the file format be able to describe these differences. This has led to some dangerous confusion. Some early experimenters have captured ARRIRAW from the Alexa and then taken the post path intended for the D-21. That has inevitably produced suboptimal results, and it is not a fair way to judge Alexa ARRIRAW. Codex Onboard Recorders avoid this problem: they automatically detect which flavor of ARRIRAW is in use, and downstream deliverable files are processed correctly with no manual intervention.
What are the capabilities of the Codex Digital data packs when it comes to recording ARRIRAW?
The Codex 256GB Data Pack can store 25 minutes of Alexa ARRIRAW, and the larger 512GB Data Pack can store 50 minutes of Alexa ARRIRAW data. These recorders can capture D-21 ARRIRAW 4:3/16:9, or Alexa ARRIRAW 16:9. A Codex Onboard Recorder has a live de-Bayered monitoring output for both single and dual camera recording. The recorder employs standard quality de-Bayer for dailies and editorial deliverables, and high quality de-Bayer for VFX and finishing.
The soon-to-be-released ARRI Alexa v3 firmware will embed important metadata within the image data stream, and it is this metadata that will complete the “official” release of ARRIRAW. Codex has implemented tools to allow the insertion of metadata into ARRIRAW clips captured using any version of Alexa firmware. With the roll-out of v3 camera firmware, this metadata will be automatically inserted. ARRI also provides de-Bayering tools. Codex recommends that users run their own independent tests to review material and judge which solution works best for them.
The Challenge: To film and light for a 360 degree shot that was set at night and had to be shot in the bright sunlight of the daytime!
I always cringe a little when a Producer says ‘We have this shoot that needs to be set at night, what do you think about shooting Day For Night?’
My first thought is similar to when I get asked to shoot a commercial in ‘one single shot’ or ‘the suicide method’ as a Director friend of mine calls it – ‘oh bugger’!
I always think, why don’t we just shoot it at night? I like lighting at night; it gives you a chance to create something special. However, shooting at night is not always practical, especially when your scene takes place in the middle of the Arabian Desert.
On this occasion the Day for Night approach was probably the right idea.
Source: Digital Cinematography
XPression is a high quality 3D character generator and broadcast production graphics platform incorporating over 20 years of live broadcast graphics experience. It offers a very powerful tool set with an intuitive user interface. XPression was born from a realization of the possibilities presented by combining modern PC technologies, advancements in 3D rendering and a clear understanding of broadcast and production workflow environments. The result is a powerful tool for the design and playout of sophisticated motion graphics, one that will unleash your designers’ creativity and make your productions shine.
2D has not been forgotten
Many available 3D packages neglect the 2D workflow you have grown accustomed to; not so XPression. In fact, one of the primary design goals has been never to bother the designer or operator with 3D aspects when they need 2D features, and vice versa. The result is an intuitive, balanced and flexible 2D and 3D authoring system: the best of both worlds.
Live, Live, Live!
XPression has been built from the ground up for live, on-demand graphics. Edit scenes in the editor while playing them out in the background. Modify every aspect of an object while its animation is running, or adjust roll pages while they are active on one of the outputs. Virtually anything you can think of, you can do live!
Automatic White Balance
It's very tempting to set the white balance control on your DSLR or prosumer video camera to automatic. And there are some excellent excuses for doing so: you can keep shooting when the lighting changes, you aren't likely to mistakenly balance on a white card aimed at a secondary light source, and forgetting to balance won't leave you stuck with blue faces or orange-tinted indoor scenes.
Our previous discussions of white balance have all centered on a simple concept: video cameras need to be told what color light they’re working with. Sunlight has more blue than tungsten. A cloudy day has more blue than sunlight. Shooting in the shade often introduces a greenish component to the light, and so on.
The human brain contains its own white-balancing circuit, automatically giving a consistent “look” to whatever the eye sees. The camera doesn’t. This shortcoming is overcome by pointing the camera at a white wall, car, or piece of paper and pressing a button which tells the camera “Memorize this. This is white.” And everything is fine until the light changes and you have to do the white balance drill again or risk shooting an off-color scene.
Activating an automatic white balance circuit is somewhat akin to walking around with your finger constantly on the white balance button. The only difference is that when you're shooting a scene you're filling the frame with action, not a white card. And here’s the rub: the auto-white circuit continuously assumes that the brightest portion of the scene is white.
And while that’s often the case, sometimes it's not. So the camera makes a pink shirt white and everything else in the scene shifts blue. Or maybe it's a yellow car passing in and out of frame that causes a transitory color shift. Or a backlit beige window shade that’s far and away the brightest part of the frame. Get the picture?
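That brightest-region assumption is easy to sketch in code. The snippet below is a deliberately naive illustration of the idea, not any camera's actual algorithm: it finds the brightest pixel, assumes it is white, and computes per-channel gains that would make that pixel neutral. Feed it a scene whose brightest object is a pink shirt and the gains boost green and blue across the whole frame, shifting everything else toward blue, exactly the failure described above. The scene values are made-up RGB triples for illustration.

```python
def naive_awb_gains(pixels):
    """Naive auto white balance: assume the brightest pixel is white
    and compute per-channel gains that would make it neutral gray."""
    r, g, b = max(pixels, key=sum)          # brightest pixel by summed RGB
    peak = max(r, g, b)
    return (peak / r, peak / g, peak / b)   # gains for (R, G, B)

# A scene whose brightest object is a pink shirt, not true white.
scene = [
    (200, 160, 130),   # skin tone
    (240, 180, 190),   # pink shirt -- the brightest thing in frame
    (90, 90, 90),      # neutral gray pavement
]

gr, gg, gb = naive_awb_gains(scene)
# The "correction" boosts green and blue everywhere, so the neutral
# pavement and the skin tone both shift toward blue.
print(gr, round(gg, 2), round(gb, 2))   # → 1.0 1.33 1.26
```

A real camera averages over regions and clamps its guesses, but the underlying assumption, and the failure mode, are the same.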
When it comes to color accuracy, it all boils down to a few simple questions. Do you care that different shots in a sequence might have slightly different color casts, or that the principal subject’s skin tone might shift when a light-colored car drives through the shot? Does the uncontrolled action you are capturing move from indoors to sunlight within the same shot? Will you have time to “fix it in post”? In other words: is close enough good enough?
127 Hours: Big Pictures in Small Spaces | How HD Camera Rentals Outfitted Danny Boyle
By Bryant Frazer, October 19, 2010, Film & Video
The team behind 127 Hours contacted HD Camera Rentals in Los Angeles not long after the company figured out how to strap a Silicon Imaging SI-2K POV camera — and a CineDeck recorder — onto an Olympic skier performing a 120-meter jump in an AT&T Winter Olympics 2010 spot. HD Camera Rentals kept aerodynamics and weight in mind as it selected gear for that shoot, which was executed by production company Smuggler. “Our company is called for those impossible shots you can’t pull off,” explains Michael Mansouri, a cinematographer and DIT who founded the company in 2005. “We’ve figured out the best ways to integrate this system into digital cinema without losing its true spirit. It does things film can’t. It’s small, it’s fast, and it’s for immediate gratification. We don’t try to turn our cameras into Panavisions.”