
Enhanced TV metadata increases ad effectiveness and boosts propensity to purchase

The first day of the IBC conference provided two complementary technical papers on the application of enhanced metadata to TV advertising and commerce and its potential business benefits.

The first of these was a paper from Prime Focus Technologies in India*, which argued that contextual metadata could be enhanced to maximise TV advertising yield and expand inventory, describing a trial implementation of the concept. The second, from Belgian startup Appiness, offered a way of enhancing content with commercial data to promote sales of onscreen products on a second-screen app.

Ramki Sankaranarayanan, Founder and CEO of Prime Focus Technologies, was first up, arguing that although the new digital multiscreen environment theoretically offered new opportunities for targeted TV advertising, it suffered from a number of limitations in practice.

“Modern-day publishers have a problem of populating and making sure that the inventory is actually sold,” said Sankaranarayanan, adding that they were also unhappy about low yield-levels.

Meanwhile, the content attributes currently used for targeting remained relatively crude, limited to parameters such as content type, title, genre and language, while the user profiles they were married to were equally restricted, to basics such as age, gender and location.

This meant that “from an advertiser perspective, he or she has a very limited pool of inventory to choose from,” said Sankaranarayanan, concluding that the industry needed a solution that “increases the ad inventory and relevance without impacting the viewing experience.”

The Prime Focus paper describes a method of addressing this through the use of “in-video” contextual data such as the ‘mood’ of the scene, the characters in it, their emotions and actions, and so on. This is described using certain keywords or metadata, and related to an exact point or time-interval within the video.

In practice, Prime Focus’s trial used automated tools to analyse video-frames to extract this information, which was then manually checked for accuracy. The resulting keywords were then coupled with the existing content metadata (title, genre, etc.) and user demographic data (age, location, etc.) to provide an ‘enriched’ stream of metadata which was then passed onto the ad-decision system.

“Owing to the additional layer of in-video metadata that has come in, the ad system is able to make a better decision about which ad to show – thus in turn increasing its relevance for that in-video ‘moment’,” the paper argues, pointing out that “since each frame/time interval is described using certain metadata, the number of such ‘moments’ where an ad can be shown also goes up – potentially [providing] more opportunities for a publisher/broadcaster to show an ad.”
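The enrichment pipeline the paper describes can be illustrated with a minimal sketch. All field names, keyword values and the overlap-scoring rule below are hypothetical illustrations, not details from the paper: in-video keywords for a given moment are merged with the existing content metadata and viewer demographics, and the ad decision then favours the ad whose tags best match the moment.

```python
# Hypothetical sketch of the enrichment step: merge in-video keywords with
# content metadata and viewer demographics, then score ads by keyword overlap.

def enrich_metadata(in_video_keywords, content_metadata, user_profile):
    """Combine the three metadata layers into one enriched record."""
    return {
        "keywords": set(in_video_keywords),
        **content_metadata,
        **user_profile,
    }

def choose_ad(enriched, ad_inventory):
    """Pick the ad whose tags overlap most with the in-video keywords."""
    return max(ad_inventory, key=lambda ad: len(ad["tags"] & enriched["keywords"]))

moment = enrich_metadata(
    ["kitchen", "cooking", "celebration"],        # from automated frame analysis
    {"title": "Family Feast", "genre": "drama"},  # existing content metadata
    {"age": 34, "location": "Mumbai"},            # viewer demographics
)
ads = [
    {"name": "sports drink", "tags": {"fitness", "outdoors"}},
    {"name": "cookware",     "tags": {"cooking", "kitchen"}},
]
print(choose_ad(moment, ads)["name"])  # cookware
```

Because every described time interval becomes a candidate “moment”, the same mechanism also expands the pool of placement opportunities, as the paper notes.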

Prime Focus’s trial tested the concept on a representative sample of viewers (some of them employees) divided into two groups: a control group, shown video content in which ads were randomly placed, and an experimental group, shown content populated with ‘contextual’ ads chosen using the enhanced metadata and placed at appropriate points throughout the video.

The two groups were then tested on their subsequent recall of the ads using a variety of methods, showing that in general, recall was around twice as good for the contextual ads powered by enhanced metadata as for the randomly-allocated ads (see Figure 1).

Figure 1: Ad effectiveness

Metric            Control group response (%)   Experimental group response (%)
General recall    47                           80
Theme recall      20                           46
Brand recall      22                           50
Message recall    17                           32
Purchase intent   10                           18

Source: Prime Focus Technologies in India
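The “around twice as good” claim can be checked directly from the Figure 1 numbers, which give experimental-to-control lifts ranging from roughly 1.7x to 2.3x:

```python
# Figure 1 data: metric -> (control %, experimental %)
figure_1 = {
    "General recall":  (47, 80),
    "Theme recall":    (20, 46),
    "Brand recall":    (22, 50),
    "Message recall":  (17, 32),
    "Purchase intent": (10, 18),
}

for metric, (control, experimental) in figure_1.items():
    print(f"{metric}: {experimental / control:.2f}x lift")
```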

Stephanie Scheller, Head of Business Development at Appiness in Belgium, followed Sankaranarayanan with a paper describing a different approach to enriching video metadata, explaining how her company had developed a media processor that enriches video files with metadata linked to e-commerce stores**.

In essence, the system uses machine learning to identify the video frames in certain TV series that contain particular commercial products, time-stamp them, and then associate them with a dynamic e-commerce database to provide relevant information about the products shown.
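The time-stamping and catalogue lookup described above can be sketched as follows. The SKUs, catalogue entries and detection intervals are invented for illustration; the point is only the shape of the data flow, where detector output maps playback intervals to e-commerce records:

```python
# Hypothetical sketch: detected products are time-stamped and mapped to
# records in an e-commerce catalogue so a second-screen app can surface
# the right item at the right moment of playback.

catalogue = {  # stand-in for the dynamic e-commerce database
    "sku-jacket-01": {"name": "Leather jacket", "price_eur": 199.0},
    "sku-knife-07":  {"name": "Chef's knife",   "price_eur": 49.5},
}

# Detector output: (start_seconds, end_seconds, sku)
detections = [
    (120.0, 134.5, "sku-jacket-01"),
    (310.2, 325.0, "sku-knife-07"),
]

def products_at(timestamp_s):
    """Return catalogue records for products visible at a playback time."""
    return [
        catalogue[sku]
        for start, end, sku in detections
        if start <= timestamp_s <= end
    ]

print(products_at(130.0))  # the jacket is on screen at this point
```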

The system is currently implemented in Belgium through a second-screen application called Spott. While the TV show plays on the main or ‘first’ screen, Spott displays a synchronised, enhanced version on a tablet or phone that highlights products onscreen at various points in the action, allowing the viewer to discover information about the actors and their clothing, the props used in the scenes or, for instance, the ingredients a chef is using in a cooking show. These products can then be purchased online.

Research carried out by Appiness last year among 500 users, through research company iMinds Living Lab, found that users saw the service as enriching their viewing experience: 89% of respondents classified as ‘innovators’ judged that their TV experience while using Spott would be ‘better’ (against 70% of ‘early adopter’ respondents).

Asked how likely they would be to use the application to purchase items, 41% of ‘innovators’ said they would be likely to buy something through it at least once a month, against 24% of ‘early adopters’.

* Using Metadata To Maximize Yield And Expand Inventory In TV – Contextual Advertising
** Metadata Enriching Technology As The Key To Effective Target Audience Engagement And Content Monetization

