One way to use face recognition to improve TV, as Royal Wedding guests get identified in real-time


Sky News is working with Amazon Web Services (AWS) and two AWS technology partners (GrayMeta and UI Centric) to provide one of the best examples yet of what face recognition and machine learning can offer the TV industry. For the Royal Wedding of Prince Harry and Meghan Markle on May 19 in Windsor, face recognition software in the cloud will identify each guest as they arrive at St. George's Chapel. Extended information about each guest will be streamed as a data file to the Sky News website and Sky News apps, where viewers can choose to benefit from the special 'Who's Who' second-screen (companion) experience.

The Amazon Rekognition video and image analysis service is at the heart of this innovative application. The 'machine' has been trained ahead of the wedding to recognise the guests, using photos. Everyone attending will be recognised in real time on the day, with the system able to cope with pairs or groups of people arriving together. Once the identity of a guest is confirmed, metadata files associated with them will be streamed separately to the viewing devices (such as mobile phones or tablets), and the video feed and the metadata feed will be synchronised on the device. Viewers who have switched metadata tagging on will see information about each wedding guest displayed alongside the coverage.
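Sky's exact pipeline has not been published, but the workflow described maps naturally onto two standard Rekognition API calls: indexing reference photos of guests into a face collection ahead of the event, then searching video frames against that collection on the day. A minimal sketch using boto3; the collection name, guest IDs and confidence threshold are illustrative assumptions, not Sky's production setup:

```python
import boto3

# All names here are illustrative assumptions, not Sky's actual resources.
rekognition = boto3.client("rekognition", region_name="eu-west-1")
COLLECTION_ID = "royal-wedding-guests"

def build_guest_collection(photos: dict[str, str]) -> None:
    """Train ahead of the event: index one reference photo per guest.

    `photos` maps a guest ID (e.g. "Meghan_Markle"; ExternalImageId allows
    no spaces) to the path of a reference photo.
    """
    rekognition.create_collection(CollectionId=COLLECTION_ID)
    for guest_id, path in photos.items():
        with open(path, "rb") as f:
            rekognition.index_faces(
                CollectionId=COLLECTION_ID,
                Image={"Bytes": f.read()},
                ExternalImageId=guest_id,  # label returned on later matches
                MaxFaces=1,                # assume one face per reference photo
            )

def identify_guest(frame_jpeg: bytes, min_confidence: float = 90.0) -> list[str]:
    """On the day: match a face in a video frame against the collection.

    SearchFacesByImage matches only the largest face per call, so coping with
    groups arriving together would mean cropping each detected face and
    searching it separately; this sketch shows the single-call case.
    """
    resp = rekognition.search_faces_by_image(
        CollectionId=COLLECTION_ID,
        Image={"Bytes": frame_jpeg},
        FaceMatchThreshold=min_confidence,
        MaxFaces=5,
    )
    return [m["Face"]["ExternalImageId"] for m in resp["FaceMatches"]]
```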

Sky News wanted a special user experience for the event (where the broadcaster also has the only UHD coverage). The information tags that accompany a guest appear whether you are watching in real time (true live) or after rewinding the live stream to a 'replay' experience, and they will be available with the on-demand assets after the event. If you missed the arrival of someone you wanted to see, you can return to the moment they arrived using a 'timeline', or you can select a name from the guest list and the app will take you directly to the point in the coverage where that guest arrived at the chapel.
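Sky has not published the format of its metadata feed, but the jump-to-guest feature implies each arrival event carries at least a guest identifier and its position on the stream timeline. A minimal sketch of that lookup, with invented field names and a hypothetical player interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArrivalEvent:
    guest_id: str         # e.g. the ID the recognition stage confirmed
    display_name: str
    bio: str              # the extended 'Who's Who' information
    stream_time_s: float  # arrival position on the live/VOD timeline

class WhosWhoIndex:
    """Guest-list index built from the synchronised metadata feed."""

    def __init__(self, events: list[ArrivalEvent]):
        self._by_guest = {e.guest_id: e for e in events}

    def arrival_time(self, guest_id: str) -> Optional[float]:
        """Timeline position to seek to for a chosen guest, if they have arrived."""
        event = self._by_guest.get(guest_id)
        return event.stream_time_s if event else None

# Hypothetical usage: selecting a name from the guest list seeks the player.
# t = index.arrival_time("Meghan_Markle")
# if t is not None:
#     player.seek(t)
```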

“This is innovative in the way it uses machine learning to enhance the user experience of a live-streamed major news event,” Sky and Amazon said in a press release. David Gibbs, Director of Digital News and Sports Products at Sky, adds: “We are excited by the software’s potential and the ability to give audiences new ways of consuming content. This new functionality allows Royal Wedding viewers greater insight into one of the biggest live events of the year, wherever they are.”

GrayMeta is responsible for creating the metadata, using its GrayMeta data analysis platform. Sky has combined this with Amazon Rekognition, which AWS describes as a service that can identify objects, people, text, scenes and activities using deep learning-based image and video analysis. These capabilities can be integrated into a service or application via an API.
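Those capabilities are exposed as individual API operations. For example, detecting objects and scenes in a single frame is one call; the region, file name and thresholds below are illustrative:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

# Detect objects and scenes in one frame of the coverage (file name assumed).
with open("frame.jpg", "rb") as f:
    labels = rekognition.detect_labels(
        Image={"Bytes": f.read()},
        MaxLabels=10,
        MinConfidence=80.0,
    )["Labels"]

for label in labels:
    # e.g. "Person 99.1", "Crowd 95.4"
    print(label["Name"], round(label["Confidence"], 1))
```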

UI Centric has designed and developed the front-end application and video player behind the new user experience. AWS and AWS Elemental are responsible for a good deal of the remaining technology involved, starting with live on-site encoding in Windsor for the streaming/multiscreen feed (using AWS Elemental Live).

The video is sent directly to the cloud for packaging and playout, using AWS Elemental Media Services, which also covers live-to-VOD processing (using the live feed to create on-demand assets). Sky News is using the Amazon CloudFront CDN for distribution over the Internet to viewers.
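The press material does not say which Media Services features handle the live-to-VOD step, but AWS Elemental MediaPackage covers this pattern with 'harvest jobs', which clip a window of the live stream into an on-demand asset in S3. A hedged sketch of that approach; every identifier below is invented, and the live endpoint would need a startover window configured:

```python
import boto3

mediapackage = boto3.client("mediapackage", region_name="eu-west-1")

# All identifiers below are illustrative, not Sky's actual resources.
mediapackage.create_harvest_job(
    Id="royal-wedding-arrivals-vod",
    OriginEndpointId="sky-news-live-hls",      # the live HLS endpoint to clip from
    StartTime="2018-05-19T10:00:00Z",          # clip window within the live stream
    EndTime="2018-05-19T12:00:00Z",
    S3Destination={
        "BucketName": "sky-news-vod-assets",   # where the VOD manifest lands
        "ManifestKey": "royal-wedding/arrivals/index.m3u8",
        "RoleArn": "arn:aws:iam::123456789012:role/MediaPackageHarvestRole",
    },
)
```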

AWS demonstrated its face recognition capabilities at NAB last month using football (identifying players during a match). This will be one of the first times face recognition has been harnessed for a media application, and for a live event, in Europe.

