Video - NewsML-G2 Quick Start Guide

1. Introduction

Now that streamed media is part of everyone’s day-to-day experience on the Web, organisations with little or no tradition of "broadcast media" production need to be able to process audio and video.

NewsML-G2 allows all media organisations, whether traditional broadcasters or not, to access and exchange audio and video in a professional workflow. Its features and Extension Points enable proprietary formats to be "mapped" to NewsML-G2, achieving freedom of exchange amongst a wider circle of information partners.

This Quick Start guide is split into two parts:

  • Part I deals with a video that is available in multiple different renditions and the example focuses on expressing the technical characteristics of each rendition of the content.

  • Part II shows an example of video content that has been assembled from multiple sources, each with distinct metadata.

We recommend reading the Quick Start Guide to NewsML-G2 Basics before this Quick Start Guide.

2. Part I – Multiple Renditions of a Single Video

The following example is based on a sample NewsML-G2 video item from Agence France Presse (but is not a guide to processing AFP’s NewsML-G2 news services).

LISTING 4: Multiple Renditions of a Video in NewsML-G2

All Scheme Aliases used in the listing below indicate IPTC NewsCodes vocabularies, except for ex-afpdescRole.

<?xml version="1.0" encoding="utf-8"?>
<newsItem xmlns="http://iptc.org/std/nar/2006-10-01/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://iptc.org/std/nar/2006-10-01/ ./NewsML-G2_2.30-spec-All-Power.xsd"
  guid="urn:newsml:afp.com:20140131:CNG.3424d3807bc.391@video_1359566"
  version="14"
  standard="NewsML-G2"
  standardversion="2.30"
  conformance="power"
  xml:lang="en-US">
  <catalogRef href="http://www.iptc.org/std/catalog/catalog.IPTC-G2-Standards_37.xml" />
  <catalogRef href="http://cv.afp.com/std/catalog/catalog.AFP-IPTC-G2_3.xml" />
  <itemMeta>
    <itemClass qcode="ninat:video" />
    <provider qcode="nprov:AFP" />
    <versionCreated>2021-10-31T11:37:23+01:00</versionCreated>
    <firstCreated>2014-01-30T13:29:38+00:00</firstCreated>
    <pubStatus qcode="stat:usable" />
  </itemMeta>
  <contentMeta>
    <icon contenttype="image/jpeg" height="62"
      href="http://spar-iris-p-sco-http-int-vip.afp.com/components/9601ac3"
      rendition="rnd:thumbnail" width="110" />
    <creditline>AFP</creditline>
    <description role="ex-afpdescRole:synthe">- Amir Hussein Abdullahian, Iranian foreign ministry's undersecretary for Arab and African affairs
      - Panos Moumtzis (man), UNHCR regional coordinator for Syrian refugees
    </description>
    <description role="ex-afpdescRole:script">SHOTLIST: KUWAIT. JANUARY 30, 2014. SOURCE: AFPTV
      -VAR inside the conference room
      -VAR of Ban Ki-moon
      -MS of King Abdullah II of Jordan
      -MS of Michel Sleiman, president of Lebanon
      -MS of Tunisian president Moncef Marzouki
      SOUNDBITE 1 - Amir Hussein Abdullahian (man), Iranian foreign ministry's undersecretary for Arab and African affairs (Farsi, 10 sec): "Those who send arms to Syria are behind the daily killings there."
      SOUNDBITE 2 - Amir Hussein Abdullahian (man), Iranian foreign ministry's undersecretary for Arab and African affairs (Farsi, 9 sec): "We regret that some countries, such as the United States, have created a very high level of extremism in Syria."
      SOUNDBITE 3 - Panos Moumtzis (man), UNHCR regional coordinator for Syrian refugees (Arabic, 12 sec): "The United Nations is providing humanitarian assistance to more than four million people inside Syria, two million of them displaced."
      SOUNDBITE 4 - Panos Moumtzis (man), UNHCR regional coordinator for Syrian refugees (Arabic, 17 sec): "The funding will first go to UN relief organizations, who are working inside Syria and in neighbouring countries. Funding will also go to the more than 55 NGOs in Syria with whom we cooperate and coordinate to deliver aid."
    </description>
    <language tag="en" />
  </contentMeta>
  <contentSet>
    <remoteContent contenttype="video/mpeg-2" href="http://components.afp.com/ab652af034e.mpg"
      rendition="ex-vidrnd:dvd" size="54593540" width="720" height="576"
      duration="69" durationunit="timeunit:seconds" videocodec="vcdc:c015"
      videoframerate="25" videodefinition="videodef:sd" colourindicator="colin:colour"
      videoaspectratio="4:3" videoscaling="sov:letterboxed" />
    <remoteContent contenttype="video/mp4-1920x1080" href="http://components.afp.com/3e353716caa.1920x1080.mp4"
      rendition="ex-vidrnd:HD1080" size="87591736" width="1920" height="1080"
      duration="69" durationunit="timeunit:seconds" videocodec="vcdc:c041"
      videoframerate="25" videodefinition="videodef:hd" colourindicator="colin:colour"
      videoaspectratio="16:9" videoscaling="sov:unscaled" />
    <remoteContent contenttype="video/mp4-1280x720" href="http://components.afp.com/5ba0d14a64f.1280x720.mp4"
      rendition="ex-vidrnd:HD720" size="71010540" width="1280" height="720"
      duration="69" durationunit="timeunit:seconds" videocodec="vcdc:c041"
      videoframerate="25" videodefinition="videodef:hd" colourindicator="colin:colour"
      videoaspectratio="16:9" videoscaling="sov:unscaled" />
  </contentSet>
</newsItem>

2.1. Document structure

The building blocks of the NewsML-G2 Item are the <newsItem> root element, with additional wrapping elements for metadata about the News Item (<itemMeta>), metadata about the content (<contentMeta>) and the content itself (<contentSet>).

The root <newsItem> attributes are:

<newsItem xmlns="http://iptc.org/std/nar/2006-10-01/"
  guid="urn:newsml:afp.com:20140131:CNG.3424d3807bc.391@video_1359566"
  version="14"
  standard="NewsML-G2"
  standardversion="2.30"
  conformance="power"
  xml:lang="en-US">

This is followed by Catalog references:

<catalogRef href="http://www.iptc.org/std/catalog/catalog.IPTC-G2-Standards_37.xml" />
<catalogRef href="http://cv.afp.com/std/catalog/catalog.AFP-IPTC-G2_3.xml" />

2.2. Item Metadata <itemMeta>

The <itemClass> property uses a QCode from the IPTC News Item Nature NewsCodes to denote that the Item conveys a video. Note that <provider> uses the recommended IPTC Provider NewsCodes, a controlled vocabulary of providers registered with the IPTC, recommended scheme alias "nprov":
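From the listing above:

<itemMeta>
  <itemClass qcode="ninat:video" />
  <provider qcode="nprov:AFP" />
  <versionCreated>2021-10-31T11:37:23+01:00</versionCreated>
  <firstCreated>2014-01-30T13:29:38+00:00</firstCreated>
  <pubStatus qcode="stat:usable" />
</itemMeta>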

2.3. Content Metadata <contentMeta>

The <icon> element tells receivers how to retrieve an image to use as an iconic image for the content, for example a still image extracted from the video. It’s possible to have multiple icons to suit different applications, each qualified by @rendition.
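In this example:

<icon contenttype="image/jpeg" height="62"
  href="http://spar-iris-p-sco-http-int-vip.afp.com/components/9601ac3"
  rendition="rnd:thumbnail" width="110" />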

Two <description> elements are qualified by @role: first a summary, second a more detailed shotlist:
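Abbreviated from the listing above:

<description role="ex-afpdescRole:synthe">- Amir Hussein Abdullahian, Iranian foreign
  ministry's undersecretary for Arab and African affairs ...</description>
<description role="ex-afpdescRole:script">SHOTLIST: KUWAIT. JANUARY 30, 2014.
  SOURCE: AFPTV ...</description>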

2.4. Video Content

Video is conveyed within the NewsML-G2 <contentSet> using the <remoteContent> element; where there are multiple alternate renditions of the SAME content, <remoteContent> can be repeated for each rendition within the same Item.

The <remoteContent> element references binary objects that exist independently of the current NewsML-G2 document. In this example there is an instance of <remoteContent> for each of three renditions of the video.

Each remote content instance contains attributes that can conceptually be split into three groups:

  • Target resource attributes enable the receiver to accurately identify the remote resource, its content type and size;

  • Content attributes enable the processor to distinguish the different business purposes of the content using @rendition;

  • Content Characteristics contain technical metadata such as dimensions, duration and format.

Frequently used attributes from these groups are described below, but note that the NewsML-G2 XML structure that delimits the groups may not be visible in all XML editors. For a detailed description of these attribute groups, see the NewsML-G2 Specification, which can be downloaded by visiting http://www.newsml-g2.org/spec and following the link to NewsML-G2.

2.5. Target Resource Attributes

This group of attributes expresses administrative metadata, such as identification and versioning, for the referenced content, which could be a file on a mounted file system, a Web resource, or an object within a content management system. NewsML-G2 flexibly supports alternative methods of identifying and locating the externally-stored content.

The two attributes of <remoteContent> that identify and optionally locate the content are Hyperlink (@href) and Resource Identifier Reference (@residref). At least one of them MUST be used to identify the target resource; they MAY be used together.

Although @href and @residref are superficially similar, their intended uses are:

  • @href locates any resource, using an IRI.

  • @residref identifies a managed resource, using an identifier that may be globally unique.

An IRI, for example:
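href="http://components.afp.com/ab652af034e.mpg"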

Resource Identifier Reference (@residref)

An XML Schema string, for example (a hypothetical identifier, since the listing above does not use @residref):
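residref="tag:example.com,2014:video_1359566"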

The provider may use @residrefformat or @residrefformaturi to specify the format of the @residref. The recommended CV is https://cv.iptc.org/newscodes/valueformat/ with a recommended scheme alias of "valfmt".

Version (@version)

An XML Schema positive integer denoting the version of the target resource. In the absence of this attribute, recipients should assume that the target is the latest available version.

Content Type (@contenttype)

The Media Type of the target resource
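For example, from the listing above:

contenttype="video/mpeg-2"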

Format (@format)

A refinement of a Content Type using a value from a controlled vocabulary:
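For example, using a fictional provider scheme with alias "ex-format" (the listing above does not use @format):

format="ex-format:mpeg2video"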

Content Type Variant (@contenttypevariant)

A refinement of a Content Type using a string:
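For example (a hypothetical value; the listing above does not use @contenttypevariant):

contenttypevariant="MPEG-2 Simple Profile"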

Size (@size)

Indicates the size of the target resource in bytes.
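From the listing above, the size of the DVD rendition:

size="54593540"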

2.6. News Content Attributes

This group of attributes of <remoteContent> enables a processor or human operator to distinguish between different components; in this case the alternative resolutions of the video.

Rendition

The rendition attribute MUST use a QCode. Providers may have their own schemes, or use the IPTC NewsCodes for rendition, which has a Scheme URI of http://cv.iptc.org/newscodes/rendition/ and recommended Scheme Alias of "rnd". This example uses a fictional provider-specific scheme with a Scheme Alias of "ex-vidrnd":
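From the listing above:

rendition="ex-vidrnd:HD1080"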

To avoid processing ambiguity, each specific rendition value should be used only once per News Item, except when the same rendition is available from multiple remote locations. In this case, the same value of rendition may be given to several Remote Content elements.

2.7. News Content Characteristics

This third group of attributes of <remoteContent> enables further efficiencies in processing by describing physical characteristics of the referenced object specific to its media type. Text, for example, may use @wordcount; audio and video have attributes appropriate to streamed media, such as @audiobitrate and @videoframerate. The attributes appropriate to video are described below.

Duration (@duration and @durationunit)

Indicates the duration of the content, in seconds by default, but it can be expressed using some other measure of temporal reference (e.g. frames) with the optional @durationunit. From NewsML-G2 2.14, the data-type of @duration is a string; earlier versions use a non-negative integer. The reason for the change is that video duration is often expressed using non-integer values.

For example, expressing duration as an SMPTE time code requires the following NewsML-G2:
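A hypothetical sketch: the 69-second video of the listing above, expressed as an SMPTE timecode (the listing itself uses seconds) with the "timeCode" value from the Time Unit CV described below:

duration="00:01:09:00" durationunit="timeunit:timeCode"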

The recommended CV for @durationunit is the IPTC Time Unit NewsCodes, whose Scheme URI is http://cv.iptc.org/newscodes/timeunit/. The recommended alias for the scheme is "timeunit".

Video Codec (@videocodec)

A QCode value indicating the encoding of the video – for example one of the encodings used in this example is MPEG-2 Video Simple Profile. This is indicated by the IPTC Video Codec NewsCodes with a recommended Scheme Alias "vcdc", and the corresponding code is "c015".
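From the listing above, for the MPEG-2 (DVD) rendition:

videocodec="vcdc:c015"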

Video Frame Rate (@videoframerate)

A decimal value indicating the rate, in frames per second [fps] at which the video should be played out to achieve the correct visual effect. Common values (in fps) are 25, 50, 60 and 29.97 (drop-frame rate):
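From the listing above:

videoframerate="25"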

Video Aspect Ratio (@videoaspectratio)

A string value, e.g. 4:3, 16:9

Video Scaling (@videoscaling)

The @videoscaling attribute describes how the aspect ratio of a video has been changed from the original in order to accommodate a different display dimension:
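From the listing above, the DVD rendition has been letterboxed:

videoscaling="sov:letterboxed"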

The value of the property is a QCode; the recommended CV is the IPTC Video Scaling NewsCodes (Scheme URI: http://cv.iptc.org/newscodes/videoscaling/).

The recommended Scheme Alias is "sov", and the codes and their definitions are as follows:

Code          Definition
unscaled      no scaling applied
mixed         two or more different aspect ratios are used in the video over the timeline
pillarboxed   bars to the left and right
letterboxed   bars to the top and bottom
windowboxed   pillar boxed plus letter boxed
zoomed        scaling to avoid any borders

Video Definition (@videodefinition)

Editors may need to know whether video content is HD or SD, as this may not be obvious from the technical specification ("HD", for example, is an umbrella term covering many different sets of technical characteristics). The @videodefinition attribute carries this information:

The value of the property can be either "hd" or "sd", as defined by the Video Definition NewsCodes CV. The Scheme URI is http://cv.iptc.org/newscodes/videodefinition/ and the recommended scheme alias is "videodef".
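From the listing above:

videodefinition="videodef:hd"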

Colour Indicator (@colourindicator)

Indicates whether the still or moving image is coloured or black and white (note the UK spelling of colour). The recommended vocabulary is the IPTC Colour Indicator NewsCodes (Scheme URI: http://cv.iptc.org/newscodes/colourindicator/) with a recommended Scheme Alias of "colin". The value of the property is "bw" or "colour":
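colourindicator="colin:colour"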

The completed Remote Content wrapper will be:
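<remoteContent contenttype="video/mpeg-2" href="http://components.afp.com/ab652af034e.mpg"
  rendition="ex-vidrnd:dvd" size="54593540" width="720" height="576"
  duration="69" durationunit="timeunit:seconds" videocodec="vcdc:c015"
  videoframerate="25" videodefinition="videodef:sd" colourindicator="colin:colour"
  videoaspectratio="4:3" videoscaling="sov:letterboxed" />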

2.8. Audio metadata

There are specific properties for describing the technical characteristics of audio, for example:

Audio Bit Rate (@audiobitrate)

A positive integer indicating kilobits per second (Kbps)

Audio Sample Rate (@audiosamplerate)

A positive integer indicating the sample rate in Hertz (Hz)
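For example, a 128 Kbps audio stream sampled at 44,100 Hz (hypothetical values; the listing in Part I carries no audio-specific attributes):

audiobitrate="128" audiosamplerate="44100"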

For a detailed description of all of the News Content Characteristics for Video and Audio content, see section News Content Characteristics in the NewsML-G2 Specification Document.

3. Part II – Multi-part Video

We recommend reading the Quick Start Guide to NewsML-G2 Basics and the preceding Part I of this guide before reading Part II.

Audio and video, including animation, have a temporal dimension: the nature of the content is expected to change over its duration. In this example, a single piece of video has been created from a number of shots (shorter segments of content from different creators) that were combined during an editing process.

Note that this complies with the basic NewsML-G2 rule that "one piece of content = one newsItem". Although the video may be composed of material from many sources, it remains a single piece of journalistic content created by the video editor. This is analogous to a text story that is compiled by a single reporter or editor from several different reports.

NewsML-G2 supports this by enabling the expression of metadata about separate identifiable parts of content using <partMeta> in addition to metadata structures that apply to the whole content.

The example video is about a retrospective exhibition in Berlin of works by the German humourist and animator Vicco von Bülow. It consists of a number of shots, so the Item provides a shotlist summarising the visual content of each shot, and a dopesheet giving an editorial summary of the video's content.

The document structure and the NewsML-G2 properties included in the example have been previously described, except for the <partMeta> wrapper, which is described in detail below. A full code listing for the example is included at the end.

The example is based on a sample NewsML-G2 video item from the European Broadcasting Union (EBU). The News Item references a multi-part broadcast video and contains separate metadata for each segment of the content, including a keyframe, and additionally describes the technical characteristics of the video.

Please note that it may resemble but does NOT represent the EBU’s NewsML-G2 implementation.

LISTING 5: Multi-part Video in NewsML-G2

All Scheme Aliases used in the listing below indicate IPTC NewsCodes vocabularies, except for the following: ex-addressType, ex-codeorigin, ex-codesource, ex-cptype, ex-descrole, ex-geo, ex-langusecode, ex-prov, ex-providercode, ex-rolecode, ex-servicecode and ex-vidrnd.

3.1. Part Metadata

NewsML-G2 Items can have many <partMeta> wrappers, each expressing properties for an identifiably separate part of the content; in this example each of the shots, or segments, which make up the video. The properties for each segment include:

  • an ID for the segment, and a sequence number

  • a keyframe, or icon that may help to visually identify the content of the segment

  • the start and end positions of the segment within the content

It is also possible to assert any Administrative or Descriptive Metadata for each <partMeta> element, if required.

The id and sequence number for the shot are expressed as attributes of <partMeta> and the <partMeta> element is repeated for each video segment. Below is a complete example of a single segment:
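A minimal sketch of a single segment, with hypothetical identifiers, URLs and values (the "ex-" scheme aliases are the fictional provider schemes noted above):

<partMeta partid="Part1_ID" seq="1">
  <icon contenttype="image/jpeg" href="http://www.example.com/keyframes/part1.jpg" />
  <timeDelim start="0" end="750" timeunit="timeunit:editUnit" renditionref="ex-vidrnd:HD1080" />
  <language tag="en" role="ex-langusecode:voiceOver" />
  <description role="ex-descrole:shot">Exterior views of the exhibition venue in Berlin</description>
</partMeta>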

These elements of video <partMeta> are discussed below.

Add keyframe using <icon>

A keyframe for the video segment is expressed as the child element <icon> with @href pointing to the keyframe image as a resource on the Web:
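From the sketch above (hypothetical URL):

<icon contenttype="image/jpeg" href="http://www.example.com/keyframes/part1.jpg" />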

Timing metadata

The <timeDelim> property indicates the start and end positions of this segment within the video, and the units being used to express these values, as shown for example:
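From the sketch above (hypothetical values; @renditionref is omitted here and discussed below):

<timeDelim start="0" end="750" timeunit="timeunit:editUnit" />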

This @timeunit uses a QCode to indicate that @start and @end are expressed in Edit Units, the smallest editable units of the content; in the case of video this is frames. Edit Unit is the assumed default value of @timeunit if this attribute is not present. It is one of the values of the IPTC Time Unit NewsCodes (recommended Scheme Alias "timeunit"), which is used in this example.

The values in the scheme are:

  • editUnit: the time delimiter is expressed in smallest editable unit of the content: frames (video) or samples (audio) and requires the frame rate or sampling rate to be known. This must be defined by the referenced rendition of the content.

  • timeCode: the format of the timestamp is hh:mm:ss:ff (ff for frames).

  • timeCodeDropFrame: the format of the timestamp is hh:mm:ss:ff (ff for frames).

  • normalPlayTime: the format of the timestamp is hh:mm:ss.sss (milliseconds).

  • seconds

  • milliseconds

The value of @start expresses the non-inclusive start of the segment of the timeline; the value of @end expresses the inclusive end of the segment of the timeline. For example, a 30-second segment at 25 frames per second may be expressed using Edit Units as:
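<timeDelim start="0" end="750" timeunit="timeunit:editUnit" />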

A following 30-second segment would start at "750" and end at "1500".

The same segment would be expressed using milliseconds as:
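<timeDelim start="0" end="30000" timeunit="timeunit:milliseconds" />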

and the following 30-second segment would start at "30000" and end at "60000".

When specifying the start and end points of a segment of video, be aware that these are unlikely to be frame-accurate for the same segment rendered in different technical formats; if frame-rates are different, the viewer is likely to see a different result for each rendition.

It is therefore highly recommended when expressing time delimiters using frames or timecodes that @renditionref is used to specify separate time delimiters corresponding to alternative renditions of the same shot, as follows:
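A sketch assuming two hypothetical renditions of the same 30-second shot, one at 25 fps (750 frames) and one at 30 fps (900 frames):

<timeDelim start="0" end="750" timeunit="timeunit:editUnit" renditionref="ex-vidrnd:HD1080" />
<timeDelim start="0" end="900" timeunit="timeunit:editUnit" renditionref="ex-vidrnd:HD720" />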

Each @renditionref identifies a corresponding @rendition in <remoteContent>:
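Continuing the sketch (other attributes omitted):

<remoteContent rendition="ex-vidrnd:HD1080" videoframerate="25" ... />
<remoteContent rendition="ex-vidrnd:HD720" videoframerate="30" ... />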

Description and Language

The example also indicates the language being used in the shot, and the context in which it is used. In this case, @role uses a QCode from a proprietary EBU scheme to indicate that the soundtrack of the shot is a voiceover in English.
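From the sketch above (the code value is hypothetical):

<language tag="en" role="ex-langusecode:voiceOver" />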

Implementers may also use the IPTC Language Role NewsCodes (recommended Scheme Alias "lrol") for this purpose.

Using <description>, we can also indicate what the viewer can expect to see in this segment:
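From the sketch above (hypothetical content):

<description role="ex-descrole:shot">Exterior views of the exhibition venue in Berlin</description>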

3.2. Video Content

The <contentSet> wrapper contains a single rendition of the video inside the <remoteContent> element. Note that the video frame rate is included, as this is required to calculate points in the timeline when using time delimiters based on Edit Unit:
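A sketch under the same conventions as Part I, with hypothetical values; note the presence of @videoframerate:

<contentSet>
  <remoteContent contenttype="video/mp4"
    href="http://www.example.com/video/berlin_exhibition_1920x1080.mp4"
    rendition="ex-vidrnd:HD1080" width="1920" height="1080"
    duration="120" durationunit="timeunit:seconds"
    videoframerate="25" videodefinition="videodef:hd" />
</contentSet>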

4. Further Resources

The IPTC Video Metadata Hub Recommendation (VMHub) was launched in October 2016 as a comprehensive solution to the exchange of video metadata between multiple existing standards. Visit the Video Metadata Hub pages on the IPTC website to learn more.