February 2014


Hot Button Discussion

AXF Advances 
By Michael Goldman  

It has been noted in recent issues of SMPTE Newswatch that 2014 is shaping up to be a year in which standardization efforts addressing long-standing, crucial issues in electronic media data transmission will bear major fruit. Among these issues is interoperability in data archive systems. In recent years, the SMPTE Working Group TC-31FS30 has been working to develop an open specification, the Archive eXchange Format (AXF), and codify it into an official industry standard. AXF addresses the need for interchange of archived data and is intended to end years of separate, proprietary approaches to archiving, in which systems from one manufacturer cannot simply and efficiently migrate data to and from systems of another.

Longtime media technology consultant S. Merrill Weiss, chair of the AXF Working Group, is hopeful that all major documents for the AXF specification will finally be published before the end of this year. Weiss explains that the work took several years, partly because it required the committee first to develop the concepts and then to replace an initial technology approach with a more modern, IT-based methodology. As a consequence, the work grew in importance because its impact moved beyond media applications alone.

"We went around the horn twice, first using an earlier technology, and then realizing we had to take the concepts we had formulated and put them into different protocols that were more attuned to where the information technology industry was going," Weiss explains. "As a consequence, what we have done is now applicable way beyond just the media and entertainment space. If we have done our jobs well, then AXF should have virtually universal applicability for anyone who wants to store data in bulk. And by bulk, I mean anywhere from one up to millions of small or large files of any kind, which can stretch well beyond the storage capacity of any specific medium, and independent of any particular hardware manufacturer."

That last point has been a central objective of the AXF development process - "to set up a system in which we would be independent of the hardware on which the content was stored, but also independent of the operating system and other parts of the platform on which the content was initially stored, and the application sitting on top of the protocol that created it," Weiss adds. "In other words, with AXF, you can change any of those other things, and still be able to retain the content. That was the main goal."

At the end of the day, AXF was formulated as a wrapper, or container, to hold virtually unlimited collections of files and metadata related to any particular piece of content, in any combination. These containers are known as "AXF Objects" because they can package, in different ways, all the specific information that different kinds of systems would need to restore the content data later on, relying on the Extensible Markup Language (XML) to define that information so that any modern computer system receiving the data can read and recover it. In this sense, AXF Objects are agnostic to, and future-proofed against, changes in technology and formats.

Four categories of structures within AXF Objects are of particular importance. AXF Object headers hold specific XML metadata, including, among other things, unique identifiers for each AXF Object and for each file it contains. AXF generic metadata containers can package varying additional, optional metadata. AXF Object payloads contain the content data of the files held within AXF Objects, including the complete data structures and file footers necessary to build and rebuild the databases used by archive systems to manage archived files. AXF Object footers contain the same information as AXF Object headers, plus additional information generated at the time of an AXF Object's completion. The Object footers make Objects particularly resilient because they hold enough additional information for media to be transferred to entirely foreign systems, which can ingest and recover the complete content of the media, as long as they understand AXF. 
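
As a rough illustration of that four-part layout, here is a minimal Python sketch that assembles an AXF-like object. The element names, payload framing, and footer contents are hypothetical stand-ins for illustration, not the normative AXF schema.

    # Illustrative sketch of the four structures described above.
    # Element names and layout are hypothetical, not the normative AXF schema.
    import uuid
    import xml.etree.ElementTree as ET

    def build_axf_like_object(files, extra_metadata=None):
        # 1. Object header: XML metadata with unique identifiers for the
        #    Object and for each file it contains.
        header = ET.Element("ObjectHeader")
        ET.SubElement(header, "ObjectID").text = str(uuid.uuid4())
        file_ids = {name: str(uuid.uuid4()) for name in files}
        for name, fid in file_ids.items():
            ET.SubElement(header, "File", {"id": fid, "name": name})

        # 2. Generic metadata container: optional, application-defined items.
        generic = ET.Element("GenericMetadata")
        for key, value in (extra_metadata or {}).items():
            ET.SubElement(generic, "Item", {"key": key}).text = str(value)

        # 3. Payload: the file content itself, each file preceded by its ID.
        payload = b"".join(
            file_ids[name].encode() + b"\n" + data for name, data in files.items()
        )

        # 4. Object footer: repeats the header information plus data known
        #    only at completion time (here, the total payload size).
        footer = ET.fromstring(ET.tostring(header))
        footer.tag = "ObjectFooter"
        ET.SubElement(footer, "PayloadBytes").text = str(len(payload))

        return ET.tostring(header) + ET.tostring(generic) + payload + ET.tostring(footer)

    obj = build_axf_like_object(
        {"clip.mxf": b"...essence bytes...", "notes.txt": b"shot log"},
        extra_metadata={"title": "Reel 7"},
    )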

"AXF Objects are self-contained; they have the ability to support hierarchical relationships between files and folders," Weiss says. "And you can transfer Objects from one archive system into remote storage, in the Cloud, for instance, and then retrieve them from Cloud storage into some different archive system, and still be able to read them without loss of any essence or metadata. So the utility of AXF goes up as a result of that. In fact, as the trend toward no longer differentiating between memory and storage moves forward, when we get to that point, then AXF will become that much more important, because it now gives you a way to aggregate content with specified relationships between content elements, and to retain those relationships forever."

Weiss is particularly excited about the fact that AXF's design does not limit Objects based on size or scale. Indeed, AXF Objects are designed to hold any kind of file of any size.

"The limit might be can you, [the end user], handle that much data in a way that works for production or post-production workflows, but from the AXF perspective, it currently can handle over 18 Exabytes [about 17 billion gigabytes] of data in each file of collected content in spanned sets - and that many files also," he says. "But that is just where we have set the size of the number space for the moment. If we ever get to the point where we need to exceed that number, all we have to do is use a larger number space by updating the AXF standard in the future, going to a thousand or a million times the current value, depending on what the need is over time. So, effectively, AXF is unlimited in how much it can hold in the way of the size of any one file or the number of files. It should be noted, by the way, that while AXF is quite efficient at holding content of vast size, it is also efficient at holding small content stores, so it is quite scalable." 

By "spanned sets," Weiss is referring to the protocol used to store AXF Objects on more than one medium - essentially a process of automatically segmenting, storing on multiple media, and reassembling Objects, when necessary. This permits storing AXF Objects considerably larger than can be contained on the individual media on which they are stored. It contributes to the scalability of AXF, and is one of many features or structures in AXF, designed to assure that AXF Objects can be stored on any type or generation of media. Another is the notion of "collected sets," which permits archive operators to make changes to Objects or files within Objects, while still preserving all earlier versions along the way - even when write-once storage is used.

Weiss adds that AXF also provides for the use of checksums to confirm the accuracy of data stored in AXF Objects, and says there is also redundancy built into the data structures of AXF to help improve the recoverability of data in AXF Objects. These and many other features are not designed to relieve content owners of the responsibility to provide redundant storage for high-value content, he suggests; rather, AXF makes it more efficient to maintain and manage content when they do.
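
The checksum idea, in a hedged sketch: the digest algorithm and where checksums live within an Object are defined by the standard, so SHA-256 and the layout below are stand-ins for illustration only.

    # Illustrative checksum verification; SHA-256 is an example algorithm,
    # not necessarily the one AXF specifies.
    import hashlib

    def checksum(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    payload = b"archived essence"
    stored_digest = checksum(payload)   # recorded alongside the payload at write time

    # Later, on restore, recompute and compare:
    assert checksum(payload) == stored_digest, "payload corrupted"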

"AXF allows facilities to support or enable that sort of storage and gives them maximum opportunities to get content back from any and all media types should any of them get damaged," he explains. "A future version, in fact, is planned to include a provision for error correction capabilities for the content itself."

Weiss conjectures that, as the AXF standard rolls out, it may have applications across the data universe, far beyond the media and entertainment world. He adds, though, that studios and broadcast entities now have a chance to incorporate AXF into infrastructures that, until now, were frequently built on concepts diametrically opposed to what AXF is trying to do - that is, rigidly proprietary archive systems. To that end, great thought has also been put into an implementation strategy that should allow equipment manufacturers and content owners to export existing content into the AXF domain in a strategic way from their current archive systems - systems that are unlikely to be abandoned wholesale anytime soon, for business and financial reasons, among other factors.

The implementation strategy offers a path for employing AXF with current archive systems, without dumping the existing hardware that holds and manages those archives unless or until content owners are ready to do so.

"Essentially, the strategy is to help archive owners and manufacturers develop utilities that allow them to format media for AXF, if necessary, and to export to it," Weiss says. "If your tape library is in the native format of one particular manufacturer, you could with the appropriate utility, convert from that native format into AXF and store on appropriately formatted media. Then, you could take that media to a different system from a different manufacturer, and if that system had a corresponding utility to get AXF into its native format, you could move the material into that system. In this way, AXF can act as a lingua franca between systems that have not yet adopted AXF natively and have different native formats." 

An additional benefit for companies that decide to adopt AXF natively is that AXF metadata can serve as a de facto backup for an archive system's database should it become corrupted. Although it would be time-consuming, the AXF metadata could be recalled to rebuild the database - a sort of "ultimate bailout," as Weiss calls it.
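
A sketch of that rebuild, reusing the hypothetical header layout from the earlier example (again, not the normative AXF schema): scan every Object's stored metadata and reconstruct the index the database would normally hold.

    # Rebuild a name -> (object_id, file_id) index by scanning the XML
    # headers stored with each object (hypothetical layout from above).
    import xml.etree.ElementTree as ET

    def rebuild_index(object_headers):
        index = {}
        for header_xml in object_headers:
            header = ET.fromstring(header_xml)
            object_id = header.findtext("ObjectID")
            for f in header.findall("File"):
                index[f.get("name")] = (object_id, f.get("id"))
        return index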

"Such methods also can be useful when you have mergers and acquisitions and have to transfer content or combine or consolidate archives," he adds.

Weiss says the Working Group currently is dealing with comments on the first part of the AXF suite of documents, covering the basic AXF protocol and its components, and he expects that standard to be published shortly. The second document in the suite, which will detail how AXF metadata can be used to facilitate workflows, should be ready for balloting later this year, Weiss hopes, with publication around the end of the year. That part of the work is aimed at enabling content creators to employ AXF metadata to help them index and manage content as it moves through production, post-production, finishing, distribution, and eventually, into archives. 

"We don't know how quickly the world will convert to AXF," Weiss says. "It will depend on how quickly manufacturers and others develop utilities to let you convert from native formats to and from AXF. People could use it just for interchange, or natively if they choose. These are things that I can't predict. But AXF was created based on what users told us their needs were, so that they could recover their content from archives in the absence of the systems that created those archives, and in fact, even in the absence of the manufacturers whose systems created the archives, in order to protect their investments in that content. We think we have taken a big step toward helping them do that."

Some of the companies behind development of the original open AXF specification maintain a website with information about the spec, and Weiss' article in the January/February issue of the SMPTE Motion Imaging Journal covers the technical details of AXF.  

News Briefs

27 Kilometers of 8K
Japan's public broadcaster NHK continues its push to prove that its Super Hi-Vision 8K broadcast format will someday be viable, the latest development being what NHK calls a successful long-distance terrestrial broadcast of a Super Hi-Vision 8K signal. According to a recent article in PCWorld, NHK's test in mid-January sent an 8K UHDTV signal from a broadcast site in southern Japan to a receiving station 27 kilometers away using just a single UHF channel; a previous test in 2012 had sent such a signal 4.2 kilometers. NHK is working toward the goal of broadcasting all or part of the 2020 Tokyo Olympics in 8K. For more on recent advancements in the world of 8K UHDTV, check out the September 2013 issue of SMPTE Newswatch.   

Digital Workflows Evolve   
The Creative Cow website recently published a comprehensive two-part series by Deborah Kaufman talking to technology experts and adopters about digital workflow trends and issues heading out of 2013 and into 2014. In the second part of the series, Kaufman chatted with several experts about the reality that the digital transition is no longer a "transition" - it has arrived, and now needs to continue to evolve. Some of her sources talked about the importance of the ACES color management architecture to new and improved workflows; others discussed new kinds of camera arrays built around lightfield technology, methods for improving lensing systems for 4K capture, the role higher frame rates will play in the theatrical and stereoscopic paradigms, new kinds of data management and archiving systems, online collaboration methodologies, and many other important issues. Part 1 of Kaufman's coverage is available on the Creative Cow website.