Transform live video for mobile audiences with AWS Elemental Inference

Today, AWS is announcing AWS Elemental Inference, a fully managed AI service that automatically transforms and optimizes live and on-demand video broadcasts to engage audiences and generate revenue. At launch, AWS Elemental Inference enables video customers to adapt video content into vertical formats optimized for mobile and social platforms in real time.

With AWS Elemental Inference, broadcasters and streamers can reach audiences on social and mobile platforms like TikTok, Instagram Reels, and YouTube Shorts without manual post-production work or AI expertise.

Today’s viewers consume content differently than they did even a few years ago. However, most live broadcasts are produced in landscape format for traditional viewing. Converting these broadcasts into vertical formats for mobile platforms typically requires time-consuming manual editing that causes broadcasters to miss viral moments and lose audiences to mobile-first destinations.

How AWS Elemental Inference works

The service uses an agentic AI application that analyzes video in real time and automatically applies the right optimizations at the right moments. Capabilities such as vertical video cropping and clip generation run independently, executing multi-step transformations that require no human intervention to extract value.

AWS Elemental Inference analyzes video and automatically applies AI capabilities with no human-in-the-loop prompting required. While you focus on quality video production, AWS Elemental Inference autonomously optimizes content to create personalized content experiences for your audiences.

AWS Elemental Inference applies AI capabilities in parallel with the live video stream, achieving 6-10 seconds of latency compared to minutes for traditional post-processing approaches. This “process once, optimize everywhere” method runs multiple AI features simultaneously on the same video stream, eliminating the need to reprocess content for each capability.

The service integrates seamlessly with AWS Elemental MediaLive, so you can enable AI features without modifying your existing video architecture. AWS Elemental Inference uses fully managed foundation models (FMs) that are automatically updated and optimized, so you don’t need dedicated AI teams or specialized expertise.

Key features at launch

Vertical video creation – AI-powered cropping intelligently transforms landscape broadcasts into vertical formats (9:16 aspect ratio) optimized for social and mobile platforms. The service tracks subjects and keeps key action visible, maintaining broadcast quality while automatically reformatting content for mobile viewing.
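
AWS hasn't published how its cropping model works, but the geometry of the transformation is straightforward. The sketch below (a hypothetical illustration, not AWS code) shows the core step: computing a 9:16 crop window inside a landscape frame, centered on a tracked subject position and clamped so the window never leaves the frame.

```python
def vertical_crop(frame_w: int, frame_h: int, subject_x: int,
                  aspect_w: int = 9, aspect_h: int = 16) -> tuple[int, int]:
    """Return (left, width) of a vertical crop window inside a landscape
    frame, centered on the tracked subject's x position."""
    # The crop keeps the full frame height; width follows the target ratio.
    crop_w = round(frame_h * aspect_w / aspect_h)
    # Center the window on the subject, then clamp it to the frame bounds.
    left = min(max(subject_x - crop_w // 2, 0), frame_w - crop_w)
    return left, crop_w

# For a 1920x1080 frame, the 9:16 crop is 608 pixels wide.
# A subject near the left edge clamps the window to x=0:
print(vertical_crop(1920, 1080, subject_x=100))   # (0, 608)
# A subject at mid-frame gets a centered window:
print(vertical_crop(1920, 1080, subject_x=960))   # (656, 608)
```

A production system would additionally smooth the subject position over time so the crop window pans gently rather than jittering frame to frame.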

Clip generation with advanced metadata analysis – Automatically detects and extracts highlight clips from live content for real-time distribution. For live broadcasts, this means identifying game-winning plays, touchdowns, and emotional peaks with precise start and end points, reducing manual editing from hours to minutes.

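The service's actual metadata analysis isn't documented, but the "precise start and end points" idea can be illustrated with a toy stand-in: given a per-second highlight score (which in practice would come from an AI model), contiguous runs above a threshold become clips, padded with a little context on each side. All names and parameters below are hypothetical.

```python
def extract_clips(scores: list[float], threshold: float = 0.8,
                  pad: int = 2) -> list[tuple[int, int]]:
    """Turn a per-second highlight score into (start, end) clip boundaries.

    Contiguous runs of seconds scoring at or above `threshold` become
    clips, padded by `pad` seconds of context on each side and clamped
    to the stream duration."""
    clips, start = [], None
    for t, s in enumerate(scores):
        if s >= threshold and start is None:
            start = t                      # a highlight run begins
        elif s < threshold and start is not None:
            clips.append((max(start - pad, 0), min(t + pad, len(scores))))
            start = None                   # the highlight run ends
    if start is not None:                  # run extends to end of stream
        clips.append((max(start - pad, 0), len(scores)))
    return clips

# Ten seconds of scores with one spike around t=4..6:
scores = [0.1, 0.2, 0.1, 0.3, 0.9, 0.95, 0.85, 0.2, 0.1, 0.1]
print(extract_clips(scores))   # [(2, 9)]
```

A real pipeline would also merge clips whose padded windows overlap; this sketch leaves them separate for clarity.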
Now available

AWS Elemental Inference is available today in four AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Mumbai). You can enable AWS Elemental Inference through the AWS Elemental MediaLive console or integrate it into your workflows using the APIs. With consumption-based pricing, you pay only for the features you use and the video you process, with no upfront costs or commitments, so you can scale during peak events and optimize costs during quieter periods. Keep an eye on this space: more features and capabilities will be introduced throughout the year, including tighter integration with core Elemental services and features to help customers better monetize their video content.

To learn more about AWS Elemental Inference, visit the AWS Elemental Inference product page.

Image Credit: Fox Sports

Broadcast Beat - Production Industry Resource