Wednesday, October 25, 2017

CineForm Goes Open Source

16 years is a long time for a piece of software to remain useful. The CineForm codec, initially developed in 2001, was designed to enable real-time consumer video editing. CineForm, Inc. was a team of engineers who knew image and video processing, but very little about codec design (which likely helped). Back then, DV (remember MiniDV tape?) was popular but too slow on consumers' Pentium III and Pentium 4 desktops for software-based video editing. We had worked out that Intel processors had plenty of speed for editing without dedicated hardware, but the cameras' compression, DV or MPEG based, was too costly to decode. So in 2001, the CineForm codec became the first "visually lossless" intermediate codec (not a proxy), replacing the source with a new file that was much faster to work with. We didn't know this was a somewhat new idea. While there were other performance-optimized codecs, like HuffYUV, CineForm was the first to offer significant compression, balancing quality, compression and speed like no codec before it. Since then, Avid DNxHD and Apple ProRes have followed similar strategies. At CineForm Inc (the start-up), the codec became a core differentiator, so open source was never felt to be a viable option (we were probably wrong).

Consumers weren't the only ones with speed and compression issues. So in 2003, the company pivoted from the consumer market to the professional market when we created the CineForm HD codec. This version shares the same DNA as today's compression, adding 10- and 12-bit support and the resolution increases needed for film and television production. Video producers aren't interested in codecs; workflow is what was sold. CineForm compression was bundled into products like Aspect HD, Prospect HD, NEO 4K and Neo 3D, selling a workflow that depended on a codec. We knew there was little value in compression modules alone: codecs typically sold for only a few dollars a unit, so the workflow obfuscated the codec's value. While the decoder was free, the encoder was not sold separately. The little start-up was not brave enough to make the encoder free, let alone open its source.

One reason for keeping it closed, you might be surprised to hear, is that the codec's core tech was very simple. The codec idea was sketched out on a single piece of paper, and its performance was first estimated by counting the number of Intel instructions needed in an Excel spreadsheet -- even that fit on a single page. The codec was primarily written and maintained by two engineers, Brian Schunck and myself, with some great early help from Yan Ye. That simplicity meant the magic of the compression was best shrouded in secrecy.

In 2011, CineForm was acquired by GoPro, which used it to drive its HD and 3D editing utility, GoPro Studio, resulting in millions of consumer-created edits. Our initial play for real-time consumer video editing finally happened… 10 years later. Now the CineForm codec could be released more widely, and it was licensed to many, including Adobe, for free distribution within Creative Cloud. But it was still not open source.

The CineForm codec has shone most brightly at times of market change: SD to HD, HD to 2K+ compressed RAW, 2K to 4K, 2D to 3D and HD to 4K. At these times of transition, CPU and GPU vendors didn't yet have hardware-optimized solutions -- try software HEVC decoding of 4K60 to see this issue. For the last few years, our computers and mobile devices have had hardware-accelerated video decoding for most of the media we produce. Even GoPro switched from the CineForm-powered GoPro Studio to the hardware-accelerated Quik for Desktop. So for a couple of years, CineForm didn't get much attention: it was used internally for prototyping, but it received very few public updates.

In 2017, the production market changed again, with new opportunities for the CineForm codec to provide solutions… particularly for greater-than-4K sources from premium 360° products, like GoPro's own Fusion and Omni cameras. The new Fusion Studio desktop software defaults to CineForm for all greater-than-4K exports. But unlike GoPro Studio, where CineForm was primarily an internal format, Fusion Studio is not an editor: the CineForm files are meant to be edited in external tools, like Adobe Premiere Pro, After Effects, DaVinci Resolve and HitFilm. While many video tools do have native CineForm support, not all do… and some can't. This gives GoPro a more compelling reason to make CineForm open source: to help support our high-resolution cameras.

The CineForm SDK is now open source, dual-licensed under Apache 2.0 or the MIT license (integrator's choice). You are welcome to build new applications around the CineForm SDK, build it into existing applications, extend it with new performance optimizations, and explore new image formats. I would love to see it go into many new places and new markets, but in particular, into popular open source players and transcoders. While the CineForm wavelet engine might be simple, 16 years of coding and optimization have complicated the source. So, to help upcoming compression-ists, the open source release includes a simplified wavelet compression modeling tool -- simplified CineForm -- for education and for some codec design and tuning.
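
The educational angle is easy to motivate: the heart of any wavelet codec is a transform that concentrates image energy into a small low-pass band, leaving detail bands that quantize and compress well. As an illustration only -- the real CineForm codec uses different wavelet filters and adds quantization and entropy coding -- here is a minimal single-level 2D Haar transform sketch in Python:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform.

    Splits an image (even dimensions assumed) into a low-pass band (LL)
    and three detail bands (LH, HL, HH). Most of the image energy lands
    in LL; the near-zero detail bands are what compress well.
    """
    a = img.astype(np.float64)
    # Horizontal pass: averages and differences of column pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Vertical pass on both intermediate results
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def inverse_haar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d: sums and differences undo the averages."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2, :] = ll + lh
    lo[1::2, :] = ll - lh
    hi = np.empty_like(lo)
    hi[0::2, :] = hl + hh
    hi[1::2, :] = hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2] = lo + hi
    out[:, 1::2] = lo - hi
    return out
```

Running the LL band through `haar2d` again gives the multi-level pyramid that wavelet codecs build on; a real codec then quantizes the detail bands before entropy coding.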

This was another great step for GoPro as it contributes more original projects to the open source community. Learn more about the CineForm SDK open source project on GitHub.

Wednesday, August 23, 2017

Eclipse results

After my last post, here is the follow-up with the results.

 Eclipse 2017 Edit from David Newman on Vimeo.

One thing I wish I had done was set up an additional camera shooting video with a locked exposure. The light levels changed fast when approaching totality, too fast for a long-interval time-lapse to deliver the real-time experience. Also, the auto exposure of the TLV and night-lapse cameras reduced the drama of the lighting change, so a locked camera would solve both issues. I used four GoPro cameras for the above video; next time I will use five or more.

Friday, August 18, 2017

Shooting the Eclipse 2017 with a GoPro

I'm planning to find my way into the path of totality, and be there with a bunch of GoPro gear. I've been asked many times how to shoot this event with a GoPro, so here are my thoughts. Disclaimer: this is my completely unpracticed opinion on shooting a total solar eclipse with a GoPro.

Some basics. The whole transit from start to finish, depending somewhat on your location, is about 3 hours. That pushes the average GoPro a little beyond its battery life in a time-lapse mode, and even further beyond in regular video mode. As three hours likely produces pretty boring video, time-lapse is the way to go. If you intend to time-lapse the entire transit, you can use any USB power brick to extend the GoPro's run time; I've done a week-long time-lapse via USB power. If you intend to shoot on battery power alone, on a full charge I typically get about 2 to 2.5 hours of time-lapse on a HERO5 Black with a 5-10 second interval. Plan to start your time-lapse about 1 hour before totality.

Timing. For a two-hour capture, my best estimate for the time-lapse interval is 5 seconds, which yields a 48-second video when played at 30p. A shorter video to share would be better, yet if you are lucky enough to have two minutes of totality, this interval only gets you 24 frames (0.8 s) of the totality. If you intend to work on the video with a speed ramp through the less exciting bits, then a 1- or 2-second interval might be better, but watch out for your battery life.
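
The interval arithmetic is easy to sanity-check. A tiny helper (30p playback assumed) reproduces the numbers in this paragraph:

```python
def timelapse_plan(capture_seconds, interval_seconds, playback_fps=30):
    """Frames captured and resulting video length for a time-lapse."""
    frames = capture_seconds // interval_seconds
    video_seconds = frames / playback_fps
    return frames, video_seconds

# Two-hour capture at a 5-second interval: 1440 frames, 48 s at 30p
frames, seconds = timelapse_plan(2 * 60 * 60, 5)

# Two minutes of totality at the same interval: 24 frames, 0.8 s
totality_frames, totality_seconds = timelapse_plan(2 * 60, 5)
```

Swap in a 1- or 2-second interval to see why battery life becomes the constraint: the frame count (and camera-on time) grows five-fold.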

Framing. As you know, the GoPro lens is very wide, so forget about getting any close-up views -- well, without mounting the GoPro against the eyepiece of a telescope, which I will be doing in one setup.

Practicing: a video frame extracted from a GoPro shooting through the eyepiece of a sub-$200 4" telescope with a solar filter.
With a wide time-lapse, consider how the light will change across the landscape, so compose your framing to capture that.

Filters. A GoPro will not be shooting through those safety filters; those are for your eyes and telescopes. Using a solar filter on a GoPro will give you a very small orange dot that moves across the frame, if you are lucky -- don't do this. If you have ND filters, you can use them or not; modern GoPros are used to shooting images that contain the sun, and the sun's image is too small on the sensor to do damage.

Exposure control. In most cases a GoPro is an auto-exposing camera. This is a good thing for those in the path of totality, as the camera will adjust for all lighting conditions, giving you a good video throughout. The downside for those not in the path of totality is that the auto-exposure will reduce the drama of the changing light levels. On a HERO4 Silver and on HERO4/5 Black, you can lock the ISO to 100 and set a fixed shutter speed, but only in the video modes with Protune enabled, so you will be left to process a lot of video into a time-lapse in post. You will also need ND16 or ND32 filters to make a locked exposure work for a correctly exposed image at the beginning of your capture.

Time-lapse video vs time-lapse photo vs night-lapse photo. Time-lapse video (TLV) is the easiest by far, producing a small MP4 that is ready to share as soon as the cell service recovers from the network load of millions of eclipse chasers filling small country towns. The downside of TLV is that there are no Protune controls; it is all automatic. The other two time-lapse modes produce JPGs (and GPRs if RAW is enabled), and you get Protune-level controls to set the look (GoPro vs Flat), white balance, ISO, sharpness, etc. If you are in the path of totality, choose night-lapse: it still works during daylight, but will take the much longer exposures needed for the dark few minutes.

My recommendations. For those willing to do color correction and assemble a time-lapse in post: Night-Lapse, auto shutter, 4-second interval, Protune Flat, native white balance (or 5500K for simpler color correction), ISO Min 100, ISO Max 800. I will enable RAW. This will produce 1800 images over 2 hours, one set of JPGs and one set of GPRs, using about 18 GB of storage. If you want a fast, easy time-lapse, use time-lapse video with a 5-second interval.
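
To see where the 1800 images and roughly 18 GB come from, here is a quick estimate. The per-file sizes are my rough assumptions, not measured values; actual JPG and GPR sizes vary with resolution and scene content:

```python
def nightlapse_storage(capture_seconds, interval_seconds,
                       jpg_mb=5.0, gpr_mb=5.0):
    """Rough shot count and storage (in GB) for a photo time-lapse.

    jpg_mb and gpr_mb are assumed average file sizes in MB; with RAW
    enabled every shot produces one JPG plus one GPR.
    """
    shots = capture_seconds // interval_seconds
    gigabytes = shots * (jpg_mb + gpr_mb) / 1024.0
    return shots, gigabytes

# Two hours at a 4-second interval: 1800 shots, ~17.6 GB at the
# assumed 5 MB per JPG and 5 MB per GPR
shots, gb = nightlapse_storage(2 * 60 * 60, 4)
```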

Wednesday, May 03, 2017

GPMF - GoPro's Metadata Format

Have you ever noticed how little metadata you get from a video file?  Take any JPEG or RAW photo from an iPhone, a GoPro or a DSLR, and you will find extensive metadata stored in Exif, a photographic metadata standard that stores hundreds of properties such as exposure, focal length and camera model.  For your average MP4, MOV or AVI, not so much, particularly without the various sidecar files which are so easily lost. Some of the earliest GoPro HD cameras didn't even say "GoPro" anywhere in their video bit-streams.  I feel a manufacturer ID is the most fundamental of metadata: answering, what made this file?  Even some of the professional cameras I worked with at CineForm weren't much better.  Metadata should also answer: how was this file made? In what environment? At what location? Was the camera moving?  Etc. So why is photo metadata ubiquitous, and video metadata spotty at best?  The lack of useful video metadata comes down to standards, or the lack of them. We have no Exif equivalent for video files, particularly for consumer video within MP4, so GoPro created one.

For GoPro cameras we couldn't just place an Exif within an MP4: the exposure changes per frame, so that would be a lot of Exif data, and Exif had no clear model for time-varying signals. This photo standard also assumes that the metadata applies to a particular frame, which doesn't work so well for gyro and accelerometer data. GoPro needed to store both high-frequency sensor data and Exif-like frame-based data describing the image exposure.

A multiple-year effort begins.

The software ecosystem would love JSON or some XML variant, but embedded camera developers will have none of that, complaining of excessive memory usage, system load, blah blah blah.  The camera developers would love nothing more than storing raw I²C packets (actually proposed once): very low overhead, but completely opaque to software developers.  There needed to be a compromise.
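
The compromise, per the published GPMF specification, is a compact key-length-value layout: a FourCC key, a one-byte type character, a one-byte per-sample size and a two-byte big-endian repeat count, with payloads padded to 32-bit boundaries. A sketch of a writer and reader in Python -- the `ACCL` sample values here are made up for illustration:

```python
import struct

def gpmf_pack(fourcc, type_char, sample_size, payload):
    """Pack one GPMF-style KLV item: 4-byte FourCC key, 1-byte type,
    1-byte per-sample size, 2-byte big-endian repeat count, then the
    payload padded to a 4-byte boundary."""
    repeat = len(payload) // sample_size
    header = fourcc.encode("ascii") + struct.pack(
        ">BBH", ord(type_char), sample_size, repeat)
    pad = (-len(payload)) % 4
    return header + payload + b"\x00" * pad

def gpmf_items(buf):
    """Yield (fourcc, type_char, sample_size, repeat, payload) items."""
    pos = 0
    while pos + 8 <= len(buf):
        fourcc = buf[pos:pos + 4].decode("ascii")
        type_byte, size, repeat = struct.unpack(">BBH", buf[pos + 4:pos + 8])
        length = size * repeat
        payload = buf[pos + 8:pos + 8 + length]
        yield fourcc, chr(type_byte), size, repeat, payload
        pos += 8 + length + ((-length) % 4)  # skip payload and padding

# A made-up 3-axis accelerometer sample: three big-endian int16 values
buf = gpmf_pack("ACCL", "s", 2, struct.pack(">3h", 10, -20, 30))
```

The low overhead (8 bytes per item, 32-bit alignment) keeps firmware happy, while the self-describing key and type let software parse streams it has never seen before.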

How I accidentally became a Firmware engineer.

To investigate a possible low-overhead metadata format, I started with the internal format used by CineForm's Active Metadata. It needed some work and extensions, but it was efficient, easy to read and write, and should appeal to both the embedded-firmware and software camps.  So I wrote a sample application to demo reading and writing metadata stored in any camera-native format.  I thought it was only a model; if the format was accepted, I assumed the camera team would write something better. Firmware liked it, used it as is, then filed bug reports against it that I had to fix. That was about two years ago, and I've been part-timing as a firmware engineer ever since, focusing on metadata.

Fast-forward to today.

Here is a taste of GoPro metadata and telemetry:

Property | Freq. (Hz) | Availability
---|---|---
3-axis accelerometer | | All HERO5 cameras
3-axis gyroscope | | All HERO5 cameras
Latitude, longitude, altitude, 2D ground speed, and 3D speed | | HERO5 Black with GPS enabled
UTC time and date from GPS | | Within the GPS stream
GPS fix | | Within the GPS stream: 0 - no lock, 2 or 3 - 2D or 3D lock
GPS precision (PDOP) | | Within the GPS stream, under 300 is good
Image sensor gain | 24, 25 or 30 | HERO5 v2.0 or greater firmware
Exposure time | 24, 25 or 30 | HERO5 v2.0 or greater firmware

You can use some of this data today within GoPro's Quik for Desktop tool, generating cool overlays, or do even more using the GoPro-acquired Dashware tools.   A random video I created 8 months ago using GPMF extracted within Dashware, with gauges rendered and composited in HitFilm:

Dashware+Telemetry from David Newman on Vimeo.

However, for this to be truly interoperable, its support needs to go beyond GoPro.  In a big step in that direction, GoPro has open-sourced the GPMF reader, as a way for third parties to collect, process and display video metadata from MP4s.

Developers, please check it out. There is extensive GPMF documentation and simple code samples.