<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Eyevinn Technology Blog</title>
    <link>https://www.eyevinn.se/blog.html</link>
    <description>Video streaming consulting, open source, AI innovations</description>
    <language>en-US</language>
    <lastBuildDate>Tue, 31 Mar 2026 20:03:10 GMT</lastBuildDate>
    
    <item>
      <title>Reimagining Broadcast Intercom with Open Source</title>
      <link>https://www.eyevinn.se/blog/reimagining-broadcast-intercom.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/reimagining-broadcast-intercom.html</guid>
      <pubDate>Mon, 16 Feb 2026 00:00:00 GMT</pubDate>
      <dc:creator>Alexander Björneheim</dc:creator>
      <category>open-source</category>
      <description>How we collaborated with Nordic broadcasters to build a modern, software-based intercom system that challenges decades of proprietary hardware dominance.</description>
      <content:encoded><![CDATA[<h1>Reimagining Broadcast Intercom with Open Source</h1>
<p>During a sports production event, an insightful conversation raised a simple but powerful question: <strong>Why are we still tied to complex, proprietary intercom systems when modern, open-source technologies could offer simpler, more flexible alternatives?</strong></p>
<p>That question didn&#39;t just fade away; it sparked action. It led to a unique collaboration between SVT, YLE, NRK, TV2 Norge, and Eyevinn Technology to reimagine intercom systems for live broadcast production.</p>
<h2>The Problem with Traditional Intercom</h2>
<p>Broadcast intercom systems have been dominated by proprietary hardware for decades. These systems are:</p>
<ul>
<li><strong>Expensive</strong>: Hardware-based systems require significant capital investment</li>
<li><strong>Inflexible</strong>: Difficult to scale or adapt to changing production needs</li>
<li><strong>Complex</strong>: Require specialized knowledge and maintenance</li>
<li><strong>Vendor lock-in</strong>: Tied to specific manufacturers with ongoing license fees</li>
</ul>
<p>For a medium-sized broadcaster, the cost of a traditional intercom system can easily exceed €100,000, with ongoing maintenance and upgrade costs.</p>
<h2>The Open Source Alternative</h2>
<p>Our solution leverages <strong>WebRTC</strong> technology to create a browser-based intercom system that runs on standard hardware. Key features include:</p>
<h3>Ultra-Low Latency</h3>
<p>Sub-200ms latency ensures real-time communication critical for live production environments.</p>
<h3>Zero Hardware Requirements</h3>
<p>Runs entirely in web browsers—no specialized hardware needed. Production teams can use laptops, tablets, or smartphones.</p>
<h3>End-to-End Encryption</h3>
<p>Built-in WebRTC encryption ensures secure communication channels.</p>
<h3>SIP Integration</h3>
<p>Seamless integration with existing SIP-based systems enables hybrid workflows during migration.</p>
<h2>Technical Architecture</h2>
<p>The system is built on three core components:</p>
<ol>
<li><strong>WebRTC Signaling Server</strong>: Manages connection establishment and routing</li>
<li><strong>Media Server</strong>: Handles audio mixing and distribution for large-scale productions</li>
<li><strong>Web Client</strong>: Browser-based interface with intuitive controls</li>
</ol>
<p>All components are containerized and deployable on any cloud infrastructure or on-premises.</p>
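<p>To make the signaling server&#39;s role concrete, here is a minimal sketch of the in-memory call-routing logic such a component maintains. The class and method names are hypothetical illustrations, not the actual Open Intercom implementation:</p>

```python
# Illustrative sketch (not the Open Intercom codebase): the routing
# state a WebRTC signaling server keeps -- which participants are in
# which call, and who should receive a relayed signaling message.

class SignalingRouter:
    def __init__(self):
        self.calls = {}  # call_id -> set of participant ids

    def join(self, call_id, participant):
        self.calls.setdefault(call_id, set()).add(participant)

    def leave(self, call_id, participant):
        self.calls.get(call_id, set()).discard(participant)

    def relay_targets(self, call_id, sender):
        # An SDP offer/answer or ICE candidate from `sender` is
        # forwarded to every other participant in the same call.
        return self.calls.get(call_id, set()) - {sender}
```

<p>In the real system this routing state drives WebRTC offer/answer and ICE candidate exchange; the media server then mixes and distributes the audio itself.</p>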
<h2>Real-World Impact</h2>
<p>The Open Intercom project has been deployed in production environments across multiple Nordic broadcasters. Results include:</p>
<ul>
<li><strong>90% cost reduction</strong> compared to traditional hardware solutions</li>
<li><strong>Faster deployment</strong>: New intercom channels can be provisioned in minutes, not days</li>
<li><strong>Greater flexibility</strong>: Remote production teams can join from anywhere with internet connectivity</li>
<li><strong>No vendor lock-in</strong>: Full source code ownership and transparency</li>
</ul>
<h2>Open Source Strategy</h2>
<p>The entire project is <a href="https://github.com/Eyevinn/intercom-manager">available on GitHub</a> under an open source license. This approach provides:</p>
<ul>
<li><strong>Transparency</strong>: Full visibility into how the system works</li>
<li><strong>Community contributions</strong>: Improvements and features from the broader broadcast community</li>
<li><strong>Future-proof</strong>: No dependency on a single vendor&#39;s roadmap</li>
</ul>
<h2>Try It Yourself</h2>
<p>You can experience the Open Intercom system live at <a href="https://intercom.apps.osaas.io">intercom.apps.osaas.io</a>. The demo environment showcases the full functionality including multi-party audio, low-latency communication, and intuitive controls.</p>
<h2>What&#39;s Next</h2>
<p>The Open Intercom project continues to evolve with features like:</p>
<ul>
<li>Advanced audio routing and mixing capabilities</li>
<li>Integration with production automation systems</li>
<li>Mobile apps for iOS and Android</li>
<li>Enhanced monitoring and diagnostics</li>
</ul>
<p>This project demonstrates that <strong>open source collaboration can challenge decades-old industry standards</strong> and deliver better, more cost-effective solutions for broadcasters.</p>
<hr>
<p><em>Interested in implementing Open Intercom for your production environment? <a href="mailto:info@eyevinn.se">Contact us</a> to discuss your requirements.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>AI-Powered Media Orchestration: From Concept to Production</title>
      <link>https://www.eyevinn.se/blog/ai-powered-media-orchestration.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/ai-powered-media-orchestration.html</guid>
      <pubDate>Sun, 15 Feb 2026 00:00:00 GMT</pubDate>
      <dc:creator>Alexander Björneheim</dc:creator>
      <category>ai</category>
      <description>How Eyevinn combines Claude AI with the Open Source Cloud platform to automate complex media workflows and deliver production-ready solutions, not theoretical proofs-of-concept.</description>
      <content:encoded><![CDATA[<h1>AI-Powered Media Orchestration: From Concept to Production</h1>
<p>The media industry has been discussing AI for years. But most implementations fall into two camps: <strong>overhyped vendor promises</strong> that never materialize, or <strong>theoretical research projects</strong> that never reach production.</p>
<p>At Eyevinn, we&#39;re taking a different approach: <strong>combining Claude AI with our Open Source Cloud platform to deliver production-ready automation today.</strong></p>
<h2>The Challenge: Complexity at Scale</h2>
<p>Modern video streaming workflows involve dozens of interconnected services:</p>
<ul>
<li>Transcoders</li>
<li>Packagers</li>
<li>CDN configurations</li>
<li>Live production tools</li>
<li>Monitoring and analytics</li>
<li>Ad insertion systems</li>
</ul>
<p>For broadcasters and streaming platforms, <strong>configuring these systems correctly requires deep technical expertise</strong> and often takes weeks or months. Manual configuration is error-prone and difficult to replicate across environments.</p>
<h2>The AI Solution: Intelligent Orchestration</h2>
<p>We&#39;ve developed AI-powered orchestration that can:</p>
<h3>1. Architect and Build VOD Pipelines</h3>
<p><strong>Challenge</strong>: Designing an optimal VOD processing pipeline requires expertise in transcoding profiles, packaging formats, CDN optimization, and cost management.</p>
<p><strong>AI Solution</strong>: Claude AI analyzes content requirements (resolution, bitrate, target devices) and automatically:</p>
<ul>
<li>Selects optimal transcoding profiles</li>
<li>Configures packaging for HLS and DASH</li>
<li>Sets up CDN distribution</li>
<li>Implements monitoring and analytics</li>
</ul>
<p><strong>Result</strong>: What previously took 2-3 weeks of specialist time now happens in minutes, with consistent quality across deployments.</p>
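<p>To make the profile-selection step concrete, here is a rough sketch of how content requirements might map to an ABR transcoding ladder. The rungs and bitrates are illustrative placeholders, not the profiles our orchestration actually emits:</p>

```python
# Illustrative sketch: map simple content requirements to an ABR
# ladder. The specific rungs below are example values only.

def build_ladder(max_height, low_latency=False):
    rungs = [
        {"height": 1080, "bitrate_kbps": 5000},
        {"height": 720,  "bitrate_kbps": 3000},
        {"height": 480,  "bitrate_kbps": 1500},
        {"height": 360,  "bitrate_kbps": 800},
    ]
    # Keep only rungs at or below the source resolution.
    ladder = [r for r in rungs if r["height"] <= max_height]
    if low_latency:
        # Shorter segments trade encoding efficiency for latency.
        for r in ladder:
            r["segment_duration_s"] = 2
    return ladder
```

<p>In the AI workflow, decisions like these are derived from the stated requirements (resolution, target devices, latency budget) rather than hard-coded.</p>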
<h3>2. Live Production Automation</h3>
<p><strong>Challenge</strong>: Setting up broadcast intercoms, video routers, and monitoring for live events involves coordinating multiple systems and configurations.</p>
<p><strong>AI Solution</strong>: A natural language instruction like &quot;Set up a 4-channel intercom for sports production with remote commentary positions&quot; triggers the AI to:</p>
<ul>
<li>Deploy Open Intercom instances</li>
<li>Configure audio routing</li>
<li>Set up monitoring dashboards</li>
<li>Establish backup connections</li>
</ul>
<p><strong>Result</strong>: Live production setup time reduced from hours to minutes.</p>
<h3>3. Real-Time Media Transformation</h3>
<p><strong>Challenge</strong>: Converting SRT feeds to browser-compatible streaming formats traditionally requires specialized hardware encoders.</p>
<p><strong>AI Solution</strong>: AI orchestrates containerized services to:</p>
<ul>
<li>Receive SRT input streams</li>
<li>Transcode to low-latency HLS</li>
<li>Deliver browser-compatible playback</li>
<li>Monitor quality and latency metrics</li>
</ul>
<p><strong>Result</strong>: Instant preview capabilities without hardware investment.</p>
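<p>Stripped of the orchestration layer, the transcode step amounts to a standard ffmpeg pipeline from SRT input to low-latency HLS output. The sketch below assembles such a command line with placeholder URLs; the orchestrated service runs an equivalent pipeline inside a container:</p>

```python
# Sketch: build a standard ffmpeg command line that pulls an SRT
# input and writes short-segment HLS. URLs and paths are placeholders.

def srt_to_hls_cmd(srt_url, out_dir):
    return [
        "ffmpeg",
        "-i", srt_url,                 # SRT input, e.g. srt://host:9998
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-f", "hls",
        "-hls_time", "2",              # short segments for low latency
        "-hls_list_size", "6",         # rolling live playlist window
        f"{out_dir}/index.m3u8",
    ]
```

<p>The resulting playlist is what a browser-based player picks up for instant preview.</p>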
<h2>Technical Architecture</h2>
<p>Our approach combines three key components:</p>
<h3>Claude AI (Reasoning Engine)</h3>
<ul>
<li>Understands natural language requirements</li>
<li>Makes architectural decisions</li>
<li>Generates configuration code</li>
<li>Validates deployments</li>
</ul>
<h3>Open Source Cloud (Execution Platform)</h3>
<ul>
<li>200+ community-contributed services</li>
<li>Containerized, cloud-agnostic deployment</li>
<li>Transparent, no vendor lock-in</li>
<li>Production-grade infrastructure</li>
</ul>
<h3>Eyevinn Expertise (Domain Knowledge)</h3>
<ul>
<li>Deep video streaming knowledge</li>
<li>Best practices and patterns</li>
<li>Production validation</li>
<li>Ongoing optimization</li>
</ul>
<h2>Real-World Results</h2>
<p>Our clients are seeing measurable impact:</p>
<ul>
<li><strong>ITV (UK broadcaster)</strong>: Reduced VOD pipeline deployment from 3 weeks to 2 days</li>
<li><strong>TV4 (Swedish broadcaster)</strong>: Automated live production setup saving 40+ hours per event</li>
<li><strong>Streaming platform client</strong>: Cut infrastructure costs by 35% through AI-optimized configurations</li>
</ul>
<h2>Why This Works (When Others Don&#39;t)</h2>
<p>Three critical factors differentiate production-ready AI from vendor vaporware:</p>
<h3>1. Real Infrastructure</h3>
<p>We deploy on actual cloud infrastructure (AWS, Google Cloud, Azure), not simulations. If it doesn&#39;t work in production, it doesn&#39;t count.</p>
<h3>2. Open Source Foundation</h3>
<p>All orchestrated services are open source and transparent. Clients can inspect, modify, and own their deployments. No black-box vendor dependencies.</p>
<h3>3. Domain Expertise</h3>
<p>AI is powerful, but it needs guardrails. Our video streaming expertise ensures AI-generated architectures follow best practices and avoid common pitfalls.</p>
<h2>Getting Started</h2>
<p>Interested in AI-powered media orchestration for your workflows? Here&#39;s how to start:</p>
<ol>
<li><strong>Define requirements</strong>: What workflows are time-consuming or error-prone?</li>
<li><strong>Proof of concept</strong>: We&#39;ll build a working demo of AI automation for your specific use case</li>
<li><strong>Production pilot</strong>: Deploy to a controlled production environment</li>
<li><strong>Scale</strong>: Expand to additional workflows and use cases</li>
</ol>
<h2>The Future of Media Workflows</h2>
<p>AI orchestration isn&#39;t replacing media engineers—it&#39;s <strong>augmenting their capabilities</strong>. Engineers focus on strategy and optimization while AI handles repetitive configuration and deployment tasks.</p>
<p>This shift enables:</p>
<ul>
<li><strong>Faster time-to-market</strong> for new services</li>
<li><strong>Consistent quality</strong> across deployments</li>
<li><strong>Cost optimization</strong> through intelligent resource allocation</li>
<li><strong>Reduced human error</strong> in complex configurations</li>
</ul>
<p>The future of media workflows is <strong>intelligent, automated, and open</strong>. And it&#39;s available today.</p>
<hr>
<p><em>Want to see AI orchestration in action? <a href="https://www.eyevinn.se/media-orchestration.html">Watch our demo videos</a> or <a href="mailto:info@eyevinn.se">schedule a consultation</a>.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Open Intercom with software-based audio ingest support now generally available</title>
      <link>https://www.eyevinn.se/blog/open-intercom-with-software-based-audio-ingest-support-now-general-available.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/open-intercom-with-software-based-audio-ingest-support-now-general-available.html</guid>
      <pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>open-source</category>
      <description>With support for the WHIP protocol, the open source Open Intercom solution can now bring an external audio signal into an intercom call without extra audio hardware.</description>
      <content:encoded><![CDATA[<p>With the addition of support for the <a href="https://www.ietf.org/archive/id/draft-ietf-wish-whip-01.html">WHIP protocol</a>, the open source production intercom solution Open Intercom can now bring an external audio signal into an intercom call without requiring an extra audio hardware device.</p>
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-0-403af00e.png" alt=""></p>
<p>The WebRTC-HTTP Ingest Protocol (WHIP) is designed for ingesting media into a WebRTC-based media service in a standardized way. With growing support among broadcast software and media gateways, we chose to implement this protocol as the interface for external (non-human) audio signals to join a call.</p>
<p>This is one of the many features included in the latest release of Open Intercom, which is now generally available as an <a href="https://app.osaas.io/browse/eyevinn-intercom-manager?utm_source=medium&utm_medium=blog&utm_campaign=whipga&utm_id=whipga">open web service in Eyevinn Open Source Cloud</a>. This is the quickest and easiest way to get started with Open Intercom, and as it is open source you can bring it to your own facilities whenever you like.</p>
<p>In this post we will walk you through how to get started with this new feature. We assume you already have access to an Open Intercom solution; if not, first follow <a href="https://docs.osaas.io/osaas.wiki/Service%3A-Intercom.html">this guide</a> to quickly set up your own Open Intercom solution with a few clicks of a button.</p>
<h3>Bring an external audio signal to your call</h3>
<p>Join a call in the Intercom web user interface.</p>
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-1-9a84fc67.png" alt=""></p>
<ol>
<li>Press the ‘Get WHIP URL’ button located at the bottom left of each call.</li>
<li>Enter a descriptive username for the WHIP session. The username will be shown in the participant list of the call.</li>
<li>Copy the generated URL. In this example, I have decided to call my WHIP session ‘pgm’.</li>
</ol>
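<p>Under the hood, the WHIP URL you just copied accepts a single HTTP POST carrying an SDP offer and returns an SDP answer. The sketch below shows how such a request is assembled (in Python for illustration; the SDP body is a placeholder, and a real client such as OBS negotiates a full WebRTC session):</p>

```python
import urllib.request

# Sketch of the WHIP exchange: a client POSTs an SDP offer with
# Content-Type "application/sdp" to the WHIP endpoint. The offer
# here is a placeholder; a real WebRTC stack generates it.

def build_whip_request(whip_url, sdp_offer):
    return urllib.request.Request(
        whip_url,
        data=sdp_offer.encode(),
        headers={"Content-Type": "application/sdp"},
        method="POST",
    )
```

<p>This single-request model is what makes WHIP easy for broadcast tools to adopt compared with bespoke WebRTC signaling.</p>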
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-2-8bc58e2a.png" alt=""></p>
<h3>Open Broadcaster Software (OBS)</h3>
<p>In this guide we will use the popular <a href="http://obsproject.com/">Open Broadcaster Software (OBS)</a> to be the source of this audio signal.</p>
<p>Open OBS and go to “Settings”</p>
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-3-1eddd401.png" alt=""></p>
<ul>
<li>From there, select “Stream” in the navigation menu on the left side.</li>
<li>Then, in the dropdown labelled “Service”, select WHIP.</li>
<li>In the “Server” input field, paste the WHIP URL you copied in the Open Intercom web user interface.</li>
<li>Press “Apply” followed by “OK” to apply your settings.</li>
</ul>
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-4-091e0561.png" alt=""></p>
<p>When ready, press “Start Streaming”.</p>
<p>Shortly, you should see this audio source in the participant list of the call you are in. It will show up with the name you entered in the “username” field when generating the WHIP URL.</p>
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-5-f286eff0.png" alt=""></p>
<p>If you press “Stop Streaming” in the OBS UI, this audio source will disappear from the call’s participant list.</p>
<h3>Built for the Way You Work Today</h3>
<p>With the rise of remote production, Open Intercom aligns with how modern teams want to work: distributed, flexible, and efficient. This setup demonstrates how you can integrate external audio sources with this browser-based intercom without compromising audio quality or reliability.</p>
<p>And because it’s open-source, Open Intercom can be tailored to your needs — hosted on your own infrastructure, extended with custom features, or integrated with your control UIs and production logic.</p>
<p>Whether you’re looking to augment your current system or replace it entirely, Open Intercom is built for broadcasters who value flexibility, quality, and control — without the cost or constraints of traditional intercom hardware.</p>
<p>Interested in trying Open Intercom in your setup? <a href="https://app.osaas.io/browse/eyevinn-intercom-manager?utm_source=medium&utm_medium=blog&utm_campaign=whipga&utm_id=whipga">Sign up for free at Eyevinn Open Source Cloud</a> and launch your own instance with a few clicks of a button. Explore, experiment or get in touch with us at <a href="mailto:sales@eyevinn.se">sales@eyevinn.se</a> for support and custom integration options.</p>
<h3>Additional Reading</h3>
<ul>
<li><a href="https://medium.com/@eyevinntechnology/modern-broadcasting-transformed-how-browser-based-open-intercom-replaces-costly-hardware-systems-76d8b47ba83e">Modern Broadcasting Transformed: How Browser-Based Open Intercom Replaces Costly Hardware Systems</a></li>
<li><a href="https://www.osaas.io?utm_source=medium&utm_medium=blog&utm_campaign=whipga&utm_id=whipga">Eyevinn Open Source Cloud</a></li>
</ul>
<p><img src="/blog/images/open-intercom-with-software-based-audio-ingest-support-now-general-available/image-6-82db58d8.jpg" alt=""></p>
]]></content:encoded>
    </item>
    <item>
      <title>Vibe Coding and Open Source Cloud: A Game-Changing Duo for App Development</title>
      <link>https://www.eyevinn.se/blog/vibe-coding-and-open-source-cloud-a-game-changing-duo-for-app-development.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/vibe-coding-and-open-source-cloud-a-game-changing-duo-for-app-development.html</guid>
      <pubDate>Tue, 22 Jul 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>ai</category>
      <description>How vibe coding with AI assistants, combined with Eyevinn Open Source Cloud's ready-to-use open web services, takes applications from idea to production.</description>
      <content:encoded><![CDATA[<p><img src="/blog/images/vibe-coding-and-open-source-cloud-a-game-changing-duo-for-app-development/image-0-473f78bd.png" alt=""></p>
<p>A new development paradigm is emerging at the intersection of AI-assisted coding and open source cloud infrastructure. Vibe coding, a style where developers let AI tools like ChatGPT or Claude generate much of the code, is rapidly becoming mainstream.</p>
<p>At the same time, Eyevinn Open Source Cloud (OSC) is turning hundreds of open source projects into ready-to-use cloud services, free of vendor lock-in. Together, vibe coding and open source cloud services form a winning combination for developers and startups, enabling unprecedented speed and flexibility in building applications. In this article, we’ll explore what vibe coding is, why infrastructure is the missing piece, and how Open Source Cloud fills that gap to empower creators.</p>
<h3><strong>The Rise of Vibe Coding (AI-Powered Development)</strong></h3>
<p>Vibe coding refers to a development approach where AI takes the lead on writing code, and the human developer guides or fine-tunes the process. Instead of manually writing every line, programmers collaborate with AI coding assistants to generate functions, modules, and even entire apps almost effortlessly. In many new startups, a majority of the code is reportedly AI-generated, even when the founders have strong coding skills. This “fast and fluid” approach is becoming the new norm because it dramatically accelerates development cycles without sacrificing quality.</p>
<p>Popular AI coding tools make vibe coding accessible to everyone. LLM-based copilots like ChatGPT, GitHub Copilot, and Claude Code can interpret natural language prompts and produce working code in various programming languages. More advanced agent-driven tools such as Cursor or Windsurf go further: they break down high-level tasks into subtasks, write code for each part, and integrate the pieces, all from a single prompt. With these tools, a solo developer can describe an idea (e.g. “build a web app for X with a database and user login”) and watch an AI agent scaffold much of the application automatically.</p>
<p>However, vibe coding alone doesn’t completely eliminate the work. While an AI can crank out your software logic, turning that code into a live, usable product requires infrastructure. This is where many startups still hit a bottleneck: you need to set up databases, storage, and hosting environments to run the AI-generated application. In short, even if coding feels magically easy now, someone still has to handle the cloud.</p>
<h3><strong>The Missing Piece: Infrastructure &amp; Deployment Challenges</strong></h3>
<p>Imagine a small startup that just used an AI assistant to build a prototype web application. The app’s code might be ready in record time, but the team now faces the classic question: “Where do we deploy this, and how do we hook up the needed database and storage?”</p>
<p>Traditionally, there are two options, both with downsides:</p>
<ul>
<li>Self-Hosting or DIY Infrastructure: Setting up your own servers or local environment for the database, file storage, and application runtime. This often involves installing databases (like PostgreSQL or MongoDB), configuring web servers or containers, and managing hardware or VM instances. It’s doable for seasoned developers but time-consuming and not scalable. Every hour spent wrestling with Docker, Nginx, or database config is an hour not spent improving your app’s features.</li>
<li>Proprietary Cloud Services: Using a big cloud provider (AWS, Azure, GCP, etc.) to host your app and data. While these offer convenience, they come with cost and lock-in concerns. Managed services like AWS RDS or S3 can become expensive for a bootstrapped startup, and relying on proprietary platforms can trap you in a single ecosystem. Migrating away later is painful, and you might be forced to adapt to that provider’s quirks and pricing. In short, you trade flexibility for convenience.</li>
</ul>
<p>What’s clear is that infrastructure setup remains a hurdle for quickly sharing or launching the AI-built application. As one engineer put it, having a web app running locally is one thing, but making it accessible to users or stakeholders “requires a lot of infrastructure, time and effort you don’t want to spend just to show something that is work in progress”. This gap between code running on your laptop and a live service on the internet is where many projects stumble.</p>
<h3>Open Source Cloud: Infrastructure at your fingertips</h3>
<p>Open Source Cloud (OSC) addresses this very problem by providing a buffet of managed infrastructure built entirely on open-source tools. The idea is simple: take proven open source software (databases, storage systems, streaming engines, etc.) and offer them as on-demand cloud services, similar to AWS offerings, but without proprietary tech or walled gardens. OSC turns powerful open source tools into accessible Open Web Services with MCP support, unlocking 500+ tools that AI agents can use to build real solutions and applications for you.</p>
<p>In practice, this means that as a developer you can spin up components like databases or file storage as simply as you vibe code, knowing they are open source under the hood.</p>
<p>OSC solves the infrastructure challenge for startups by making powerful open-source software easily accessible as cloud services. Instead of manually setting up databases, storage systems, or web environments, developers can instantly deploy tools like PostgreSQL, MariaDB, or MinIO storage.</p>
<p>Because OSC exclusively uses open-source software, you maintain complete flexibility. If your needs change or you decide to self-host later, you can easily migrate your infrastructure without rewriting code. Plus, OSC’s unique revenue-sharing model supports the open-source community, meaning every time you deploy a service, you’re contributing directly to the creators of the tools you use.</p>
<p>In short, OSC provides the convenience and ease of managed cloud services while ensuring the freedom, transparency, and affordability of open source — perfectly complementing AI-assisted coding to streamline your path from idea to production.</p>
<h3>Combining Vibe Coding with Open Source Cloud</h3>
<p>The real magic happens when you use AI coding agents in tandem with Open Source Cloud’s capabilities. OSC was built with an “AI agent-first” philosophy, meaning it’s designed to be easily controlled by AI assistants (not just human clicks). This is enabled by Anthropic’s Model Context Protocol (MCP) integration. MCP is essentially a standardized way for AI models to interact with external tools and services through natural language instructions. In the context of OSC, MCP allows an AI agent (such as Claude, or an agentic editor like Cursor) to directly call the OSC API and manage cloud services on your behalf.</p>
<p><img src="/blog/images/vibe-coding-and-open-source-cloud-a-game-changing-duo-for-app-development/image-1-71e3ea1c.png" alt=""></p>
<p>Consider how this works in a startup scenario: you describe your goal to an AI agent, for example, “Set up a new web application with a user database, an API server, and file storage for images”. With vibe coding, the AI can generate the application’s code (the web frontend, backend API, etc.). Thanks to OSC’s MCP support, the same AI agent can concurrently take care of the infrastructure: it will create a PostgreSQL database service for user data, allocate a MinIO storage bucket for images, and deploy the web app using Web Runner, all by communicating with the Open Source Cloud platform in the background.</p>
<p>The result is a fully working application stack assembled end-to-end by the AI, from code to cloud. You simply describe your needs in plain language, and the AI agents automatically deploy services, configure dashboards, and set up data pipelines, all without requiring deep technical knowledge.</p>
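<p>Conceptually, the agent’s side of this exchange can be pictured as a small deployment plan it assembles before driving the platform. The service identifiers below are hypothetical placeholders; in practice the agent issues the equivalent OSC API calls through MCP:</p>

```python
# Hypothetical sketch of the plan an agent assembles for the
# "web app + user database + file storage" request above.
# Service names are illustrative, not actual OSC identifiers.

def build_plan(app_name):
    return [
        {"service": "postgresql", "name": f"{app_name}-db"},
        {"service": "minio", "name": f"{app_name}-media"},
        {"service": "web-runner", "name": app_name,
         # The app deploys last, wired to the resources above.
         "depends_on": [f"{app_name}-db", f"{app_name}-media"]},
    ]
```

<p>The point is the shape of the workflow: resources first, application last, with the wiring between them decided by the agent rather than hand-edited configuration.</p>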
<p>The combination of vibe coding and open source cloud services essentially turns natural language into full-stack solutions. It’s a game-changer for productivity. A process that once took a team of specialists and weeks of effort, writing code, provisioning servers, configuring integrations, can now potentially be done in a single afternoon by one person and an AI helper.</p>
<p>For developers and founders, this means you can focus more on product vision and user experience, while the AI + open source cloud combo handles the boilerplate setup and infrastructure logic.</p>
<h3>Conclusion: A New Era of Effortless Innovation</h3>
<p>The combination of vibe coding and open source cloud infrastructure represents a fundamental shift in how software is built and launched. We are entering an era where building an application is as simple as describing what you need, and both the code and the cloud resources materialize almost automatically.</p>
<p>For developers and startup founders, this is a massive opportunity: it lowers barriers, speeds up innovation, and lets even small teams compete with the agility of much larger organizations.</p>
<p>Open Source Cloud, with its library of open web services and AI-friendly design, is a prime example of this transformation. It augments the power of AI coding tools by providing the missing puzzle piece, the environment where that AI-written code runs.</p>
<p>The result is a seamless pipeline from idea to implementation: you “vibe” with your AI to craft the software, and the open source cloud ensures it’s instantly available to users, no cloud lock-in strings attached.</p>
<p>For developers and startups, now is the time to embrace this game-changing combo. You can prototype boldly, iterate faster, and deploy confidently, knowing that the code you write (or that AI writes for you) is backed by flexible, transparent infrastructure. In practical terms, this means more time building the unique value of your product and less time wrestling with servers and services.</p>
<p>The future of app development is here, and it’s powered by AI and open source, working hand in hand.</p>
<p>Sign up for an <a href="https://www.osaas.io">Eyevinn Open Source Cloud account (for free)</a> and <a href="https://docs.osaas.io/osaas.wiki/User-Guide%3A-Enable-OSC-with-AI-agents.html">enable OSC in your AI agent to get started</a>.</p>
<p><img src="/blog/images/vibe-coding-and-open-source-cloud-a-game-changing-duo-for-app-development/image-2-183e2025.jpg" alt=""></p>
]]></content:encoded>
    </item>
    <item>
      <title>Enable Your AI Agent to Orchestrate Your Media Workflows</title>
      <link>https://www.eyevinn.se/blog/enable-your-ai-agent-to-orchestrate-your-media-workflows.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/enable-your-ai-agent-to-orchestrate-your-media-workflows.html</guid>
      <pubDate>Sat, 05 Jul 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>technology</category>
      <description>Most AI agents are limited to conversation and advice. By integrating with Open Source Cloud (OSC), your AI agent can move beyond chat to orchestrate real workflows in your media infrastructure.</description>
      <content:encoded><![CDATA[<p>Most AI agents are limited to conversation and advice. By integrating with <a href="https://www.osaas.io">Open Source Cloud (OSC)</a>, your AI agent can move beyond chat to actually orchestrate real workflows in your media infrastructure. Here are videos of some examples.</p>
<h3>Video Post-Production Orchestration</h3>
<p>Your AI agent can directly orchestrate video post-production tasks through simple conversation. In our demonstrations, the agent crops video footage and converts it to black and white, orchestrating the technical workflow automatically. What normally requires dedicated editing suites and manual operator time becomes an automated workflow managed by your AI assistant.</p>
<p><a href="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FyXhsCUuXDkI%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DyXhsCUuXDkI&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FyXhsCUuXDkI%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube">▶️ Video</a></p>
<p>The agent also orchestrates subtitle generation for video content using AI-powered transcription services. Simply provide a video file, and your agent coordinates the audio processing pipeline to create broadcast-quality subtitle files, streamlining localization workflows without manual transcription overhead.</p>
<p><a href="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FBzelV5TUenI%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DBzelV5TUenI&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FBzelV5TUenI%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube">▶️ Video</a></p>
<h3>Streaming Pipeline Orchestration</h3>
<p>For broadcast and streaming operations, the agent orchestrates video transcoding and prepares content for multi-platform distribution. This includes coordinating ABR (Adaptive Bitrate) ladder creation, managing multiple resolution variant generation, and configuring streaming-optimized encoding profiles — complex media processing pipelines that the agent orchestrates through natural language commands.</p>
<p><img src="/blog/images/enable-your-ai-agent-to-orchestrate-your-media-workflows/image-0-49edf161.png" alt=""></p>
<p><a href="https://web.player.eyevinn.technology/?manifest=https%3A%2F%2Fvod.demo.osaas.io%2Fmcp-encore-shaka-demo-audio-fylcn%2Findex.m3u8">VIDEO</a></p>
<p>The agent orchestrates complete live streaming infrastructure optimized for OBS workflows. This includes coordinating RTMP endpoint configuration, managing bandwidth allocation across services, and ensuring broadcast-quality signal paths. Technical streaming infrastructure orchestration becomes as straightforward as briefing your AI assistant.</p>
<p><img src="/blog/images/enable-your-ai-agent-to-orchestrate-your-media-workflows/image-1-b0d2d584.png" alt=""></p>
<p><a href="https://web.player.eyevinn.technology/?manifest=https%3A%2F%2Fvod.demo.osaas.io%2Fmcp-live-encoding-jawzp%2Findex.m3u8">VIDEO</a></p>
<h3>Production Communication Orchestration</h3>
<p>During live production, seamless crew communication requires coordinated systems. Our demonstrations show the agent orchestrating intercom solutions that integrate with OBS-based production workflows. The agent coordinates communication matrices, manages talkback channel routing, and handles participant workflows — ensuring reliable crew coordination during live broadcasts.</p>
<p><a href="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FqOvGJMMyH3Y%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DqOvGJMMyH3Y&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FqOvGJMMyH3Y%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube">▶️ Video</a></p>
<h3>Stream Quality Orchestration</h3>
<p>Broadcast quality assurance becomes an orchestrated process through the agent’s ability to coordinate HLS stream monitoring in real-time. It orchestrates continuity checks, detects encoding errors, analyzes bitrate consistency, and coordinates response workflows when signal issues are identified before they impact viewers. This orchestrated monitoring maintains broadcast standards without dedicated operator oversight.</p>
<p><img src="/blog/images/enable-your-ai-agent-to-orchestrate-your-media-workflows/image-2-fb0db4ca.png" alt=""></p>
<p><a href="https://web.player.eyevinn.technology/?manifest=https%3A%2F%2Fvod.demo.osaas.io%2Fmcp-hls-monitor-qt10y%2Findex.m3u8">VIDEO</a></p>
<h3>From Media Planning to Workflow Orchestration</h3>
<p>These examples demonstrate production-ready technology for media organizations. Each workflow shown in our video demonstrations represents real broadcast infrastructure orchestration through natural language interaction. Your AI agent transforms from a planning tool into a workflow orchestrator capable of coordinating complex media pipelines and broadcast infrastructure.</p>
<p>The integration bridges the gap between creative vision and technical implementation — your agent doesn’t just understand media requirements, it orchestrates their execution across your entire production environment.</p>
<p>Sign up for an <a href="https://www.osaas.io">Eyevinn Open Source Cloud account (for free)</a> and <a href="https://docs.osaas.io/osaas.wiki/User-Guide%3A-Enable-OSC-with-AI-agents.html">enable OSC in your AI agent to get started</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Build your own VOD Content Processing with Open Source Cloud</title>
      <link>https://www.eyevinn.se/blog/build-your-own-vod-content-processing-with-open-source-cloud.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/build-your-own-vod-content-processing-with-open-source-cloud.html</guid>
      <pubDate>Thu, 12 Jun 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>open-source</category>
      <description>This open source VOD pipeline combines modular web services to automatically transform uploaded videos into streamable packages with AI-generated subtitles. Built on Eyevinn Open Source Cloud, it offers production-ready automation without vendor lock-in.</description>
      <content:encoded><![CDATA[<p><strong>This open source VOD pipeline combines modular web services to automatically transform uploaded videos into streamable packages with AI-generated subtitles. Built on Eyevinn Open Source Cloud, it offers production-ready automation without vendor lock-in — deploy as-is or fork for custom needs.</strong></p>
<p>With open web services based on open source in Eyevinn Open Source Cloud you can build your own VOD Content Processing pipeline including transcoding, automatic subtitling and creation of packages for on-demand streaming.</p>
<p><img src="https://cdn-images-1.medium.com/max/1024/1*ivVqAGwCyL0Jc7lrEVCiXQ.png" alt=""></p>
<p>The <a href="https://github.com/Eyevinn/media-supply-orchestrator">open source media supply orchestration</a> brings the open web services for transcoding, automatic subtitling and packaging together, and you can either deploy it as it is, or deploy your own fork of it with the modifications you need. This solution features:</p>
<ul>
<li>Automatic generation of subtitles.</li>
<li>Transcoding and creating VOD package for streaming in HLS and MPEG-DASH.</li>
<li>Automated process triggered when a file is uploaded to a bucket and result stored in another bucket.</li>
</ul>
<p>The following open web services are used in this solution:</p>
<ul>
<li><a href="https://app.osaas.io/dashboard/service/minio-minio">MinIO Server</a> — providing storage buckets.</li>
<li><a href="https://app.osaas.io/dashboard/service/encore">SVT Encore</a> — providing the video transcoding functionality.</li>
<li><a href="https://app.osaas.io/dashboard/service/eyevinn-auto-subtitles">Subtitle Generator</a> — uses OpenAI Whisper model for transcribing the audio to text for automated captioning of the content.</li>
<li><a href="https://app.osaas.io/dashboard/service/eyevinn-shaka-packager-s3">Shaka Packager</a> — bundles the transcoded video files and the subtitle file in a VOD package ready for streaming.</li>
<li><a href="https://app.osaas.io/dashboard/service/eyevinn-web-runner">Web Runner</a> — for deployment of the media supply orchestrator.</li>
<li><a href="https://app.osaas.io/dashboard/service/eyevinn-app-config-svc">Application Config Service</a> — for managing the configuration of the media supply orchestrator.</li>
<li><a href="https://app.osaas.io/dashboard/service/valkey-io-valkey">Valkey</a> — for storing configuration values in a database.</li>
</ul>
<p>All open web services are based on open source and can be deployed to your own cloud infrastructure if you later decide to do so.</p>
<h3>Building the pipeline</h3>
<p>This video tutorial demonstrates how to build this VOD content processing pipeline using the Eyevinn Open Source Cloud web console.</p>
<p><a href="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FpJuf_fVsq3I%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpJuf_fVsq3I&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FpJuf_fVsq3I%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube">▶️ Video</a></p>
<p>The first step is to create the MinIO server and the three buckets we will use: “input”, “abrsubs” and “origin”. The <a href="https://docs.osaas.io/osaas.wiki/Service%3A-MinIO.html">MinIO service getting started guide</a> explains how to work with the MinIO service in Eyevinn Open Source Cloud.</p>
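<p>If you prefer to script the bucket setup rather than use the web console, the idea can be sketched in a few lines of TypeScript. The <code>BucketClient</code> interface below is a hypothetical minimal subset of the MinIO JavaScript client surface (<code>bucketExists</code>/<code>makeBucket</code>); treat it as an illustration under those assumptions, not the official API.</p>

```typescript
// Hypothetical minimal subset of the MinIO JS client surface used here.
interface BucketClient {
  bucketExists(name: string): Promise<boolean>;
  makeBucket(name: string): Promise<void>;
}

// The three buckets the pipeline uses.
export const BUCKETS = ["input", "abrsubs", "origin"];

// Create any bucket that does not already exist; returns the ones created.
export async function ensureBuckets(client: BucketClient): Promise<string[]> {
  const created: string[] = [];
  for (const name of BUCKETS) {
    if (!(await client.bucketExists(name))) {
      await client.makeBucket(name);
      created.push(name);
    }
  }
  return created;
}
```

<p>With the real <code>minio</code> npm package you would pass a client configured with your instance’s endpoint and credentials instead of a stub.</p>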
<p>The next step is to create a Subtitle Generator service. For this service you need an <a href="https://platform.openai.com/docs/overview">OpenAI account and OpenAI API key</a>, as it uses the OpenAI whisper-1 model for the transcription. <a href="https://docs.osaas.io/osaas.wiki/Service%3A-Subtitle-Generator.html">Configure the Subtitle Generator</a> service to have access to the buckets on the MinIO server you created in the first step.</p>
<p>The third step is to create the SVT Encore transcoder for transcoding and creating the different video variants needed for seamless streaming over the internet. Configure the transcoder to have access to the MinIO server and buckets.</p>
<p>For the Shaka Packager service, we only need to ensure that it is activated or included in the subscription plan.</p>
<p>Now we have all the open web services we will use configured and ready. The next step is to deploy the orchestrator that drives this process forward. The process is triggered when a new file with the suffix .mp4 is created in the input bucket.</p>
<h3>Orchestration</h3>
<p>For the orchestration of this process we will deploy an open source media supply orchestration <a href="https://github.com/Eyevinn/media-supply-orchestrator">project available on GitHub</a>. It is a Node.js-based application that subscribes to file events from the input bucket and sends requests to the Subtitle Generator and the SVT Encore video transcoder to start subtitling and transcoding jobs. It provides callback webhooks that the subtitle generator and video transcoder can POST updates to. The orchestration manages the process and the dependencies between tasks.</p>
<p>When a file is uploaded to the input bucket a workflow is started. A subtitling job and a transcoding job are started in parallel, and when both jobs are completed a VOD packaging job is started. When the packaging job is completed the package is uploaded to the origin bucket, the file in the input bucket is removed, and the files on the temporary storage bucket are removed.</p>
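<p>The parallel-then-sequential dependency described above can be sketched as follows. The job-starting helpers are hypothetical stand-ins for the orchestrator’s real calls to the Subtitle Generator, SVT Encore and Shaka Packager; this is a minimal illustration, not the actual orchestrator code.</p>

```typescript
// Hypothetical helpers: in the real orchestrator each of these POSTs a job
// request to the corresponding service and resolves when the service's
// webhook callback reports completion.
async function startSubtitleJob(file: string): Promise<string> {
  return `${file}.vtt`; // placeholder for the generated subtitle file
}

async function startTranscodeJob(file: string): Promise<string> {
  return `${file}.variants`; // placeholder for the ABR video variants
}

async function startPackagingJob(variants: string, subtitles: string): Promise<string> {
  return `origin/${variants}+${subtitles}`; // placeholder for the VOD package
}

// Triggered when a new .mp4 file appears in the input bucket.
export async function onFileUploaded(file: string): Promise<string> {
  // Subtitling and transcoding run in parallel...
  const [subtitles, variants] = await Promise.all([
    startSubtitleJob(file),
    startTranscodeJob(file),
  ]);
  // ...and packaging starts only once both have completed.
  return startPackagingJob(variants, subtitles);
}
```

<p><code>Promise.all</code> captures the dependency rule: the packaging step waits for both upstream jobs, and a failure in either one rejects the whole workflow.</p>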
<p>This is a simple workflow; if you want to extend or modify the orchestration you can create your own fork of the project. Contributions to the project are also welcome.</p>
<p>The orchestrator can be self-hosted outside of Eyevinn Open Source Cloud, but if you don’t have your own infrastructure you can deploy it as a Web Runner in Open Source Cloud. You configure the Web Runner to fetch the source code either from this GitHub repository or from your fork of it. The code is then downloaded and run as a <a href="https://app.osaas.io/dashboard/service/eyevinn-web-runner">Web Runner instance</a> in your team account.</p>
<p>A detailed step-by-step guide to setting up this VOD processing pipeline and the orchestration is available in the <a href="https://github.com/Eyevinn/media-supply-orchestrator">GitHub repository for the orchestrator</a>.</p>
<h3>Conclusion</h3>
<p>This open source VOD content processing pipeline demonstrates the power of combining modular web services to create a comprehensive media workflow. By leveraging the Eyevinn Open Source Cloud ecosystem, content creators and distributors can build a production-ready system that automatically transforms uploaded video files into streamable packages complete with AI-generated subtitles — all without vendor lock-in.</p>
<p>The solution’s strength lies in its flexibility and transparency. Each component, from MinIO storage to SVT Encore transcoding and OpenAI Whisper-powered subtitling, operates independently and can be replaced or extended as needs evolve. The orchestration layer ties everything together with a clean workflow that handles parallel processing and dependency management automatically.</p>
<p>Whether you’re a small content creator looking to automate your video processing or an enterprise seeking to reduce reliance on proprietary cloud services, this pipeline provides a solid foundation. You can deploy it as-is for immediate results, fork it for custom modifications, or use it as a blueprint for building your own media supply chain. With all components based on open source software, you maintain full control over your infrastructure and can migrate to your own cloud environment whenever you choose.</p>
<p>The complete setup guide and source code are available on GitHub, making it easy to get started and contribute back to the community that makes solutions like this possible.</p>
<p>Eyevinn Open Source Cloud: <a href="http://www.osaas.io">www.osaas.io</a></p>
<p><em>Join the community on</em> <a href="https://slack.osaas.io"><em>Slack</em></a> <em>for real-time support and connect with other Open Source Cloud users.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>From Concept to Collaboration: Reinventing Broadcast Intercom with Open Source</title>
      <link>https://www.eyevinn.se/blog/from-concept-to-collaboration-reinventing-broadcast-intercom-with-open-source.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/from-concept-to-collaboration-reinventing-broadcast-intercom-with-open-source.html</guid>
      <pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>production</category>
      <description>During a sports production event, a simple but powerful question sparked a unique cooperation between SVT, YLE, NRK, TV2 Norge, and Eyevinn Technology to reimagine intercom systems for live broadcast production.</description>
      <content:encoded><![CDATA[<p><img src="https://cdn-images-1.medium.com/max/1024/1*dH1M3Gib0JRinkm4n9-YjQ.png" alt=""></p>
<p>During a sports production event, an insightful conversation raised a simple but powerful question: Why are we still tied to complex, proprietary intercom systems when modern, open-source technologies could offer simpler, more flexible alternatives?</p>
<p>That question didn’t just fade away; it sparked action. It led to a unique cooperation between SVT, YLE, NRK, TV2 Norge, and Eyevinn Technology to reimagine intercom systems for live broadcast production.</p>
<h3>A New Kind of Collaboration</h3>
<p>This initiative stands out not just for its technical innovation, but for its collaborative model. Rather than waiting for traditional vendors to adapt or respond, the broadcasters chose to act. Together, they pooled resources and expertise to form a shared innovation initiative with Eyevinn Technology, a media software specialist known for its forward-thinking engineers and deep understanding of broadcast workflows.</p>
<p>This wasn’t a typical vendor-client relationship. It was a true partnership where:</p>
<ul>
<li>Broadcasters brought real-world needs and operational insights from their production teams.</li>
<li>Eyevinn Technology brought modern software engineering skills, open-source know-how, and a passion for challenging the status quo.</li>
</ul>
<p>This blend of domain knowledge and engineering excellence created a foundation for innovation tailored to actual user needs.</p>
<h3>From Challenge to Proof of Concept</h3>
<p>The working model was lean and pragmatic. First, the problem was clearly framed: proprietary intercom systems were too rigid, expensive, and confusing for newer generations of production staff. There was a desire to shift from hardware-dependent, legacy systems to software-first solutions aligned with the needs of digital-native production teams.</p>
<p>To test this ambition, the group quickly scoped and developed a Proof of Concept (PoC). The goal was simple: show that a modern, WebRTC-based intercom system could replicate, and improve upon, traditional intercom functionality while remaining intuitive and cost-effective.</p>
<p>The PoC used open standards and cutting-edge technologies, including:</p>
<ul>
<li>WebRTC for real-time browser-based communication</li>
<li>RESTful APIs to manage the WebRTC media service (Symphony Media Bridge) for media routing</li>
<li>A simple, browser-based front-end interface accessible on various devices, including mobile phones</li>
</ul>
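<p>To give a feel for the API-driven approach, a controller might assemble REST requests to the media service along these lines. The endpoint path and payload shape below are purely illustrative assumptions, not the actual Symphony Media Bridge API.</p>

```typescript
// Illustrative only: the path and payload are hypothetical, not the
// actual Symphony Media Bridge API.
export interface RestRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

// Build a request that would ask the media service to allocate a conference.
export function buildAllocateConference(baseUrl: string, conferenceId: string): RestRequest {
  return {
    url: `${baseUrl}/conferences/${conferenceId}`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ audio: true, video: false }),
  };
}
```

<p>The browser front-end would pass such a request to <code>fetch</code> and wire the returned media description into its WebRTC peer connection.</p>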
<p>The result? The concept worked, and it worked well. Even during the early tests, the user feedback was overwhelmingly positive, especially regarding ease of use, responsiveness, and the flexibility to adapt to different production environments.</p>
<h3>Agile Development Meets Organizational Transformation</h3>
<p>With a successful PoC in hand, the team didn’t pause; they accelerated. Development continued using agile methodologies, with iterative sprints, user feedback loops, and continuous testing in live environments. This allowed for rapid evolution of features and interface improvements, ensuring that the final product stayed grounded in real-world needs.</p>
<p>Crucially, this was more than a technology shift — it was a transformation process.</p>
<ul>
<li>Broadcasters were not just passive users. They were active participants in design reviews, sprint demos, and decision-making sessions.</li>
<li>Engineering and production teams worked side-by-side, ensuring that the technical direction was always aligned with operational reality.</li>
<li>Change management was built-in: the solution was introduced progressively, training and workflows were adapted gradually, and adoption was organic rather than forced.</li>
</ul>
<p>By bringing users into the development process, the transition to the new intercom system became an enabler of cultural change. Production teams became more confident and self-reliant, no longer dependent on specialized technicians to manage communication systems. The familiar tools (push-to-talk, headset integration, channel selection) were still there, but now accessible via a streamlined web interface that “just works.”</p>
<h3>Live Today, and Ready for Tomorrow</h3>
<p>This open-source intercom system is no longer a prototype. It’s now in active use in live television production — delivering the scalability, simplicity, and reliability that modern broadcasters demand.</p>
<p>You can read more about its real-world implementation here: <a href="https://medium.com/@eyevinntechnology/modern-broadcasting-transformed-how-browser-based-open-intercom-replaces-costly-hardware-systems-76d8b47ba83e">Modern Broadcasting Transformed</a></p>
<h3>A Blueprint for Future Innovation</h3>
<p>This initiative offers a blueprint for how modern broadcasting challenges can — and should — be solved:</p>
<ul>
<li>Start with user needs, not vendor roadmaps.</li>
<li>Move fast with proofs of concept that reduce risk and validate direction.</li>
<li>Collaborate across boundaries between broadcasters and software experts.</li>
<li>Iterate and scale, all while supporting organizational change.</li>
</ul>
<p>At Eyevinn, we believe this model (open, agile, and collaborative) is the future of media technology development. The success of this intercom project shows that when the right minds come together with shared purpose and clear focus, transformation doesn’t just happen. It accelerates.</p>
<p>Eyevinn Technology remains committed to driving innovation in broadcast and media solutions. For more information or enterprise support, contact <a href="mailto:sales@eyevinn.se">sales@eyevinn.se</a>.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Break Free and Own Your Insights</title>
      <link>https://www.eyevinn.se/blog/break-free-and-own-your-insights.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/break-free-and-own-your-insights.html</guid>
      <pubDate>Mon, 05 May 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>analytics</category>
      <description>Introducing Eyevinn Open Analytics: an open-source, AI-powered telemetry suite for streaming Quality of Experience where you own the data, the stack, and the roadmap.</description>
      <content:encoded><![CDATA[<p><img src="https://cdn-images-1.medium.com/max/1024/1*8HblChLdZN9_76X7-LLSOQ.png" alt=""></p>
<h3><strong>Break Free and Own Your Insights: Introducing Eyevinn Open Analytics</strong></h3>
<p>In today’s streaming landscape, delivering exceptional experiences to viewers is no longer a competitive edge; it’s a baseline requirement. Yet the tools to monitor and optimize Quality of Experience (QoE) are often expensive, opaque, and tightly coupled to proprietary platforms. With Eyevinn Open Analytics, we’re introducing a new model: one where you own the data, the stack, and the roadmap.</p>
<p>Built on open standards and designed for flexibility, this analytics suite puts modern, AI-powered telemetry within reach for every player in the streaming value chain, from nimble AVOD startups to enterprise OTT providers.</p>
<h3>Why Analytics Is a Strategic Priority</h3>
<p>Video analytics have become mission-critical for media companies aiming to:</p>
<ul>
<li>Proactively identify and resolve playback issues before users complain</li>
<li>Understand user behavior at the session level to guide feature development</li>
<li>Optimize delivery paths and CDNs to improve cost-efficiency</li>
<li>Implement data-driven A/B testing and performance benchmarking</li>
<li>Track key QoE metrics such as rebuffering, startup time, bitrate switches, and session drop-offs</li>
</ul>
<p>Eyevinn Open Analytics enables all of the above, while removing the dependency on costly vendor solutions that charge per session, stream, or endpoint.</p>
<h3>Built on Open Source: Flexibility Without Compromise</h3>
<p>At its core, Open Analytics is an end-to-end, open-source telemetry solution. Every layer of the stack, from the data collection SDKs to the processing backends and visualizations, is yours to inspect, extend, and deploy.</p>
<p>Key Technical Principles:</p>
<ul>
<li>No vendor lock-in: The stack is open-source under permissive licenses.</li>
<li>Cloud-agnostic: Deploy on any infrastructure — public cloud, private data center, or hybrid.</li>
<li>Full transparency and observability: Access the entire pipeline, from player events to dashboard output.</li>
<li>GDPR and compliance-ready: Control your data retention, processing, and storage in compliance with regional legislation.</li>
<li>Horizontally scalable architecture: Handle from thousands to tens of millions of playstarts per month.</li>
</ul>
<p>This architecture supports flexible, low-cost scaling with a Total Cost of Ownership that is 50–100% lower than traditional commercial analytics licenses at scale. A typical production deployment with 20–50M monthly playstarts can cut costs from hundreds of thousands of dollars annually to just a few thousand, without compromising capability.</p>
<h3>Analytics Meets Natural Language</h3>
<p>The real game-changer? Built-in AI prompting through integration with the Model Context Protocol (MCP).</p>
<p>Using natural language, you can query the underlying analytics database like you would interact with an analyst or BI team. For example:</p>
<ul>
<li>“Show me the top 5 error codes during live playbacks over the weekend.”</li>
<li>“Compare average rebuffering ratios for AVPlayer vs. ExoPlayer.”</li>
<li>“Which geography had the highest drop-off during ad breaks?”</li>
</ul>
<p><img src="https://cdn-images-1.medium.com/max/488/1*v_-veLi1m2et4g86LgxB9w.png" alt=""></p>
<p>Instead of learning new query languages or building custom dashboards for every question, stakeholders (from developers to product managers) can simply ask and get answers. This dramatically reduces the time from question to insight and enables true data democratization within your organization.</p>
<h3>How It Works: Architecture in Focus</h3>
<p><img src="https://cdn-images-1.medium.com/max/768/1*7718vOqJFmEWvBVEYVzvpQ.png" alt=""></p>
<p>Eyevinn Open Analytics is composed of:</p>
<ul>
<li>EPAS (Eyevinn Player Analytics Specification) — A structured standard for playback events across players like AVPlayer, ExoPlayer, and Shaka.</li>
<li>Lightweight SDKs — Embedded into the video player (e.g., AVPlayer) with just 3 lines of code.</li>
<li>Event Sink and Storage — Accepts and stores telemetry data in real-time.</li>
<li>Grafana Dashboards — Pre-built and customizable dashboards for immediate operational use.</li>
<li>MCP AI Query Layer — Enables AI-driven exploration, summaries, and prompt-based analysis.</li>
</ul>
<p>Everything runs on top of a cloud-native stack that’s easy to monitor, update, and secure.</p>
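<p>As a rough illustration of the pipeline’s first hop (player to Event Sink), a client could serialize and POST playback events along these lines. The field names below are simplified assumptions for illustration, not the exact EPAS schema.</p>

```typescript
// Simplified, hypothetical playback event; see the EPAS specification for
// the actual event schema.
export interface PlaybackEvent {
  event: string;      // e.g. "init", "heartbeat", "buffering"
  sessionId: string;  // unique per playback session
  timestamp: number;  // milliseconds since epoch
  playhead: number;   // current playback position in seconds
}

// Build the arguments you would pass to fetch(url, init) to deliver
// the serialized event to the event sink.
export function buildEventRequest(sinkUrl: string, e: PlaybackEvent) {
  return {
    url: sinkUrl,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(e),
    },
  };
}
```

<p>In a real SDK this call sits behind player event listeners, so application code never constructs events by hand.</p>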
<h3>Getting Started: From POC to Production in Minutes</h3>
<p>One of the most powerful aspects of Eyevinn Open Analytics is how quickly you can get it up and running.</p>
<p><img src="https://cdn-images-1.medium.com/max/908/1*5Js8NE2PJiYgos8aE2KLSw.png" alt=""></p>
<h3>Step 1: Deploy in the Open Source Cloud</h3>
<p>Spin up a full Open Analytics instance in minutes via the Eyevinn Open Source Cloud (OSC). You can either configure it manually or let our AI assistant guide you using natural language. Say goodbye to lengthy provisioning scripts or waiting on vendor integrations.</p>
<ul>
<li>Access your environment immediately</li>
<li>Zero hardware or DevOps knowledge required</li>
<li>Pay only for hosting or migrate later to your own infrastructure</li>
</ul>
<h3>Step 2: Integrate the SDK</h3>
<p>Add the Player Analytics SDK to your video players. This typically takes:</p>
<ul>
<li>3 lines of code</li>
<li>Less than 30 minutes of developer time</li>
<li>No changes to core playback logic</li>
</ul>
<pre><code>import VideoStreamTracker // PLAYER ANALYTICS LIBRARY

private var eventLogger: AVPlayerEventLogger // ADDED EVENTLOGGER VARIABLE

// POINT OUT WHERE TO SEND LOGS
init() {
    eventLogger = AVPlayerEventLogger(player: player, eventSinkUrl:
        URL(string: "https://eyevinnlab-epasdev.eyevinn-player-analytics-eventsink.auto.prod.osaas.io")!)
}
// initiate player…
</code></pre>
<h3>Step 3: Visualize and Explore</h3>
<p>Start monitoring your data in real time through Grafana:</p>
<ul>
<li>Use preconfigured Eyevinn dashboards or build your own</li>
<li>Connect directly to the Player Analytics database</li>
<li>Configure alerts, custom metrics, or scheduled reports</li>
</ul>
<p>Then take it further with AI-based querying via MCP, enabling anyone on your team to ask questions in plain language and receive immediate insights.</p>
<h3>Try It Today</h3>
<p>You can begin your journey with Eyevinn Open Analytics right now. Launch your free proof of concept, explore the dashboards, and experience AI-powered insights; no contracts, no commitments.</p>
<p>Get Started: <a href="https://docs.osaas.io/osaas.wiki/Solution%3A-Eyevinn-Open-Analytics.html">Eyevinn Open Analytics Guide</a></p>
<blockquote>
<p><strong>Own your data. Own your decisions.</strong></p>
</blockquote>
<p>With Eyevinn Open Analytics, you’re not just observing playback, you’re building a smarter, faster, and more agile streaming business.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Modern Broadcasting Transformed: How Browser-Based Open Intercom Replaces Costly Hardware Systems</title>
      <link>https://www.eyevinn.se/blog/modern-broadcasting-transformed-how-browser-based-open-intercom-replaces-costly-hardware-systems.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/modern-broadcasting-transformed-how-browser-based-open-intercom-replaces-costly-hardware-systems.html</guid>
      <pubDate>Mon, 28 Apr 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>production</category>
      <description>Eyevinn’s Open Intercom is revolutionizing broadcast communication by replacing expensive, inflexible hardware systems with a lightweight, browser-based solution. Sweden’s public broadcaster SVT has demonstrated how this open-source technology can handle everything from complex daily productions to spontaneous field deployments.</description>
      <content:encoded><![CDATA[<p><strong>Eyevinn’s Open Intercom is revolutionizing broadcast communication by replacing expensive, inflexible hardware systems with a lightweight, browser-based solution. Sweden’s public broadcaster SVT has demonstrated how this open-source technology can handle everything from complex daily productions to spontaneous field deployments — all while reducing costs and technical complexity. By enabling teams to communicate through any computer or smartphone, Open Intercom meets the growing demand for distributed, flexible production workflows without sacrificing quality or reliability.</strong></p>
<p>Traditional intercom systems have long been a cornerstone of live production — but they come with high costs, inflexible setups, and heavy hardware dependencies. That’s why we built <strong>Open Intercom</strong>: a lightweight, open-source, browser-based intercom solution that gives broadcasters full control over their communications — without locking them into proprietary systems.</p>
<p>One of the broadcasters leading the way with Open Intercom is <strong>SVT</strong>, Sweden’s public service television company. Their experiences show how Open Intercom can be used not only to replace traditional systems, but also to expand what’s possible in modern, distributed production environments.</p>
<h3><strong>Lightweight Setup, Heavyweight Impact</strong></h3>
<p>Open Intercom is designed to run entirely in a browser — on any computer or smartphone — with minimal setup. For broadcasters like SVT, this has meant a dramatic reduction in time and equipment required to get productions up and running.</p>
<p>For example, entire events — rallies, niche sports, and even multi-day productions — have been managed using only laptops and headsets. Field reporters, commentators, and support staff simply connect to the Open Intercom backend, which SVT hosts in a containerized environment alongside their audio bridge.</p>
<p>No matrix, no proprietary panels — just browser tabs and clear audio.</p>
<h3><strong>Rethinking the Role of Traditional Systems</strong></h3>
<p>While traditional intercom systems still have their place, SVT has found that they’re often overkill for the majority of today’s productions. As a technical producer at SVT puts it, “Only about 5% of our productions really require the complexity of traditional intercom hardware. The other 95% could easily be done with something far simpler.” By using Open Intercom for the vast majority of lightweight or remote setups, SVT avoids the high costs and rigid workflows tied to systems designed for much larger productions. It’s a shift in mindset — from one-size-fits-all infrastructure to modular, right-sized tools that scale with the actual needs of a production.</p>
<h3><strong>Works on Its Own, or Alongside Legacy Systems</strong></h3>
<p>While Open Intercom is fully capable as a standalone intercom, it also plays well with existing infrastructure. SVT uses a custom bridge with virtual audio cards to integrate Open Intercom with their legacy intercom system, allowing for hybrid workflows without complex workarounds. They simply set up a session in which Open Intercom has both speaker and microphone enabled while the virtual audio card is connected to the legacy intercom.</p>
<p><img src="https://cdn-images-1.medium.com/max/1024/1*kRNaWVko5OUYWzuN7Xrwiw.png" alt=""></p>
<p>Whether routing mix-minus feeds to commentators or enabling reporters to monitor live program audio, Open Intercom can bridge digital and analog environments without friction.</p>
<h3><strong>From Ad-Hoc Spotting to Full Remote Productions</strong></h3>
<p>SVT has used Open Intercom in creative ways that reflect its flexibility:</p>
<ul>
<li><strong>Remote spotting at ski events</strong>, where volunteers without any broadcast background report positions via Open Intercom from their phones.</li>
<li><strong>Self-produced live sports</strong>, where a single editor runs the entire production — from switching video to clipping highlights — while coordinating with commentators, all within a web browser.</li>
<li><strong>Quick setup of field comms</strong>, allowing editorial teams to rapidly deploy intercom lines to cover unexpected news events or sports moments.</li>
</ul>
<p>The takeaway: whether you need a robust daily setup or a spontaneous field deployment, Open Intercom fits.</p>
<h3><strong>Designed for Scalability — Not Complexity</strong></h3>
<p>Unlike traditional systems where each new button or device adds complexity, Open Intercom scales through simplicity. Users only join the lines they need, and production leads can easily spin up isolated groups (e.g. “Camera Ops,” “Sound,” or “Program Feed”) with just a few clicks.</p>
<p>For production teams, this means less time managing the tool — and more time focusing on the actual production.</p>
<h3><strong>Built for the Way You Work Today</strong></h3>
<p>With the rise of remote production, Open Intercom aligns with how modern teams want to work: distributed, flexible, and efficient. SVT’s setup demonstrates how you can integrate browser-based intercom into your workflows without compromising audio quality or reliability.</p>
<p>And because it’s open-source, Open Intercom can be tailored to your needs — hosted on your own infrastructure, extended with custom features, or integrated with your control UIs and production logic.</p>
<p>Whether you’re looking to augment your current system or replace it entirely, Open Intercom is built for broadcasters who value flexibility, quality, and control — without the cost or constraints of traditional intercom hardware.</p>
<p>Interested in trying Open Intercom in your setup? <a href="https://github.com/Eyevinn/intercom-frontend">Explore the code</a>, <a href="https://github.com/Eyevinn/intercom-manager/">read the docs</a>, or get in touch with us at <a href="mailto:sales@eyevinn.se">sales@eyevinn.se</a> for support and custom integration options.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Streamline Your Workflow: Browser-Based Intercom Control with Elgato Stream Deck</title>
      <link>https://www.eyevinn.se/blog/streamline-your-workflow-browser-based-intercom-control-with-elgato-stream-deck.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/streamline-your-workflow-browser-based-intercom-control-with-elgato-stream-deck.html</guid>
      <pubDate>Thu, 24 Apr 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>technology</category>
      <description>Intercom systems are critical in broadcast and live production environments, yet they’ve traditionally relied on proprietary solutions that are costly, inflexible, and hard to integrate with modern to...</description>
      <content:encoded><![CDATA[<p>Intercom systems are critical in broadcast and live production environments, yet they’ve traditionally relied on proprietary solutions that are costly, inflexible, and hard to integrate with modern tools. The <a href="https://eyevinntechnology.medium.com/the-future-of-broadcast-communication-open-source-intercom-solution-6e06ce0fddf1">Eyevinn Open Intercom</a><!-- MEDIUM LINK -->, built on WebRTC, offers a scalable, intuitive, and browser-based alternative that’s easy to deploy.</p>
<p>While Eyevinn Open Intercom already removes many of the barriers of traditional systems, control has so far required the use of a mouse — introducing potential delays in fast-paced settings.</p>
<p>By integrating Eyevinn Open Intercom with <a href="https://bitfocus.io/companion">Bitfocus Companion</a>, users can now streamline key intercom actions such as:</p>
<ul>
<li>Toggle microphone and speaker mute</li>
<li>Adjust call volume</li>
<li>Push-to-talk</li>
<li>Mute and unmute microphones across all calls</li>
</ul>
<p>All with a single button press on an Elgato Stream Deck.</p>
<p>This blog post walks through how the integration works and how you can get started.</p>
<p><img src="https://cdn-images-1.medium.com/max/556/1*IO71_-jv24wHegQ-5HEWrA.jpeg" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>On the Stream Deck, you are able to see the different actions and the line name.</p>
<h3>How it works</h3>
<p>Eyevinn has developed a Bitfocus Companion module that hosts a WebSocket server. The port of the WebSocket server can be specified in the Companion interface.</p>
<p>From the browser running Eyevinn Open Intercom, the user can connect to this WebSocket. Once an Elgato Stream Deck is connected, it appears in the Companion interface, where users can:</p>
<ul>
<li>Select buttons from a preset created by Eyevinn</li>
<li>Create custom buttons with available module actions</li>
</ul>
<p>Once configured, users can control intercom functions directly from the Stream Deck — no browser navigation required.</p>
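<p>As a rough sketch of what such an integration could exchange, the helper below builds a JSON action message of the kind a browser client might send over the module's WebSocket. The action names, payload fields, and port are illustrative assumptions for this sketch, not the module's actual protocol.</p>

```javascript
// Hypothetical message builder for a Companion-style WebSocket integration.
// The action names and payload fields below are illustrative assumptions,
// not the actual protocol of the Eyevinn Companion module.
function buildIntercomAction(action, lineName, value) {
  const allowed = ['toggle-mic', 'toggle-speaker', 'push-to-talk', 'set-volume'];
  if (!allowed.includes(action)) {
    throw new Error(`Unknown action: ${action}`);
  }
  return JSON.stringify({ type: 'intercom-action', action, line: lineName, value });
}

// In the browser, such a message could be sent over the module's WebSocket
// (the port is whatever was configured in the Companion interface):
// const ws = new WebSocket('ws://localhost:16622');
// ws.onopen = () => ws.send(buildIntercomAction('toggle-mic', 'Camera Ops'));
```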
<p><img src="https://cdn-images-1.medium.com/max/902/1*1rwquopCTePopmgiRFf64w.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<h3>Get Started</h3>
<p>Try Eyevinn Open Intercom at <a href="https://app.osaas.io/dashboard/service/eyevinn-intercom-manager">https://app.osaas.io/dashboard/service/eyevinn-intercom-manager</a>.</p>
<p>For setup and deployment instructions, visit: <a href="https://blog.osaas.io/2025/02/01/open-source-intercom-solution">https://blog.osaas.io/2025/02/01/open-source-intercom-solution</a></p>
<p>To learn how to connect Open Intercom to Bitfocus Companion, see our full guide here: <a href="https://docs.osaas.io/osaas.wiki/User-Guide%3A-Cloud-Intercom.html">https://docs.osaas.io/osaas.wiki/User-Guide%3A-Cloud-Intercom.html</a></p>
<p>Eyevinn Technology is committed to driving innovation in broadcast and media workflows. For enterprise support or more information, contact: <a href="mailto:sales@eyevinn.se">sales@eyevinn.se</a></p>
]]></content:encoded>
    </item>
    <item>
      <title>Vendor Independent Token Protection</title>
      <link>https://www.eyevinn.se/blog/vendor-independent-token-protection.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/vendor-independent-token-protection.html</guid>
      <pubDate>Wed, 19 Mar 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>open-source</category>
      <description>Most content delivery network (CDN) vendors provide a mechanism to protect the access to the content with signed tokens.  The tokens and protection mechanisms are in many cases specific to the CDN ven...</description>
<content:encoded><![CDATA[<p>Most content delivery network (CDN) vendors provide a mechanism to protect access to content with signed tokens. The tokens and protection mechanisms are in many cases specific to the CDN vendor, which adds a risk that you and your solution become locked in with one specific vendor.</p>
<p>To address this, a standard called Common Access Token (<a href="https://shop.cta.tech/products/cta-5007">CTA-5007</a>) was developed by the Consumer Technology Association. Common Access Token (or CAT for short) is a simple, extensible, policy-bearing bearer token for content access. The primary use case is to give content providers an interoperable way to enforce access policies. The token is not limited to this use case: it can also be used as an OAUTH bearer token, for URI signing, or as a general mechanism for conveying delivery policies.</p>
<h3>Common Access Token Library</h3>
<p>We at Eyevinn have developed and published a beta version of a <a href="https://www.npmjs.com/package/@eyevinn/cat">Javascript NPM library</a> (written in Typescript) to facilitate the adoption of this standard. Currently it supports basic claims such as expiration and renewal, the URI-limiting claim, audience, issuer, and replay prevention. It is released as open source; we will continue to add support for more claims (per the specification) and we welcome code contributions. It is <a href="https://github.com/Eyevinn/node-cat">available on our GitHub</a>.</p>
<h3>Common Access Token Validator Service</h3>
<p>Based on this library we have also developed and open sourced a <a href="https://github.com/Eyevinn/cat-validate">token validation service</a>. The service provides an endpoint that can be used as an authentication endpoint for a web server proxy before serving the content to the client. This project is also available as an open web service in <a href="https://www.osaas.io">Open Source Cloud</a> to further reduce the barrier to get started.</p>
<p><img src="https://cdn-images-1.medium.com/max/1024/1*-tOnHnH49f_fZbo-WIasXg.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>When a user or client requests a file, e.g. asset.txt, it provides a common access token (CAT) in an HTTP header. This token has been generated and given to the client by some mechanism outside of this context. The token contains claims that limit what the user can access. The contents of a token cannot be changed: the token carries a signature that will no longer match if the contents are altered.</p>
<p>For example, it can contain an expiration time (<strong>exp</strong>), which means that after a specific date and time the user can no longer access the content. In combination with this, the renewal claim (<strong>catr</strong>) instructs the recipient how the token can be renewed: it can specify when it should be renewed and how the new token is delivered to the client.</p>
<p>Another claim (<strong>catu</strong>) limits the URIs to which the token grants access. You can configure it to match a specific host, port, path, filename, or extension, or a regular expression.</p>
<p>Every token has a unique identifier, and a replay-prevention claim can specify whether a token may be used only once or multiple times.</p>
<p>The specification contains over 20 claims and we will not go through all of them in this post.</p>
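<p>To make the claims above concrete, here is an illustrative, decoded view of a claim set as a plain JavaScript object. A real token is CBOR-encoded and signed per CTA-5007, and the exact claim encodings differ from this sketch; the field names inside <strong>catr</strong> and <strong>catu</strong> below are simplified assumptions, not the spec's wire format.</p>

```javascript
// Illustrative, decoded view of a CAT claim set. A real token is CBOR-encoded
// and signed; the nested field names here are simplified for illustration.
const nowSeconds = Math.floor(Date.now() / 1000);
const claims = {
  iss: 'eyevinn',               // issuer
  exp: nowSeconds + 60,         // token expires in one minute
  catr: {                       // renewal: how/when a new token is delivered
    renewableBeforeExpiry: 30   // illustrative field, not a spec field name
  },
  catu: {                       // URI limitation, e.g. restrict by extension
    extension: { 'suffix-match': '.png' }
  }
};

// A validator would compare claims against the incoming request, e.g.:
const requestPath = '/blog_cat_validator.png';
const uriAllowed = requestPath.endsWith(claims.catu.extension['suffix-match']);
const notExpired = nowSeconds < claims.exp;
```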
<h4>Getting Started</h4>
<p>The easiest way to get started is to use the <a href="https://app.osaas.io/dashboard/service/eyevinn-cat-validate">Common Access Token Validation service available in Eyevinn Open Source Cloud</a>. For token usage counting we first need to set up a <a href="https://app.osaas.io/dashboard/service/valkey-io-valkey">Valkey key-value store</a>, which is also available as an open web service. Follow the instructions in the <a href="https://docs.osaas.io/osaas.wiki/Service%3A-Valkey.html">Open Source Cloud documentation</a> on how to set this up.</p>
<p><img src="https://cdn-images-1.medium.com/max/414/1*XMKpBWHea_JWojl8IvjlOA.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>Then navigate to the <a href="https://app.osaas.io/dashboard/service/eyevinn-cat-validate">Common Access Token Validator service</a> in the Open Source Cloud web console and press the button “Create validator”.</p>
<p><img src="https://cdn-images-1.medium.com/max/460/1*GeLPJ-P7IZ0JLDEDvyYGag.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>The secret signing key is a string in the format KEYID:KEYHEX, where KEYID is an identifier of the key and KEYHEX is the hexadecimal representation of the key. This is the key that will be used to verify the signature of the token.</p>
<p><img src="https://cdn-images-1.medium.com/max/406/1*76kadIMUKq9QbVls9R876Q.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>The validation endpoint in our example is <a href="https://eyevinnlab-blog.eyevinn-cat-validate.auto.prod.osaas.io/validate">https://eyevinnlab-blog.eyevinn-cat-validate.auto.prod.osaas.io/validate</a></p>
<h4>Configuring Nginx web server</h4>
<p>Now that we have the validation endpoint up and running, we can configure our Nginx web server to use it to validate that the client has access to the requested resources. Here is an example nginx configuration file.</p>
<pre><code>events {
    worker_connections  1024;
    # worker_processes and worker_connections allow you to calculate the
    # max_clients value: max_clients = worker_processes * worker_connections
}

http {
  server {
      listen 80;

      location / {
          auth_request /_oauth2_token_introspection;
          root /data/www;
          index index.html index.htm;
      }

      location = /_oauth2_token_introspection {
          internal;
          proxy_method      GET;
          proxy_pass        https://eyevinnlab-blog.eyevinn-cat-validate.auto.prod.osaas.io/validate;
      }
  }
}
</code></pre>
<p>This configuration defines that all requests on location / should use the /_oauth2_token_introspection resource for authentication. That resource in turn proxies the request to our validation endpoint using the HTTP method GET. The validation endpoint returns status code 200 when all claims in the token are fulfilled and the client is allowed access; otherwise it returns status code 401.</p>
<p>We save this configuration to a file called example-nginx.conf and then start nginx using the official Docker image.</p>
<pre><code>% docker run --rm -p 8080:80 \
  -v ./example-nginx.conf:/etc/nginx/nginx.conf \
  -v /tmp/www:/data/www \
  nginx
</code></pre>
<p>We can then try this out using curl, providing the common access token in a header, to fetch the file blog_cat_validator.png.</p>
<pre><code>% curl -v -H 'CTA-Common-Access-Token: &lt;TOKEN&gt;' \
   http://localhost:8080/blog_cat_validator.png &gt; /dev/null
&gt; GET /blog_cat_validator.png HTTP/1.1
&gt; Host: localhost:8080
&gt; User-Agent: curl/8.7.1
&gt; Accept: */*
&gt; CTA-Common-Access-Token: 0YRDoQEFoQRMU3ltbWV0cmljMjU2eL5kOTAxMDNhNzAxNjc2NTc5NjU3NjY5NmU2ZTAyNjU2YTZmNmU2MTczMDM2MzZmNmU2NTA0MWE2N2RiMjE3NDA2MWE2N2RiMjEzODE5MDE0M2Q5MDEwM2E0MDAwMjA0Nzc2Mzc0NjEyZDYzNmY2ZDZkNmY2ZTJkNjE2MzYzNjU3MzczMmQ3NDZmNmI2NTZlMDExODc4MDIxODFlMDc1MDNhZWY4ZjIzNmMxMjIzNzJmMThjNGJmNWFjNDYzNDM1WCDr0F1YoZnnTVpCg/fRmVaCfryKZIg0tz+YOMemfgKhdw==
&gt;
&lt; HTTP/1.1 200 OK
&lt; Server: nginx/1.27.4
&lt; Date: Wed, 19 Mar 2025 19:55:46 GMT
&lt; Content-Type: text/plain
&lt; Content-Length: 39913
&lt; Last-Modified: Wed, 19 Mar 2025 19:50:27 GMT
&lt; Connection: keep-alive
&lt; ETag: "67db2003-9be9"
&lt; Accept-Ranges: bytes
&lt;
</code></pre>
<p>This token was set to expire after one minute, so if we try again shortly afterwards we instead get:</p>
<pre><code>% curl -v -H 'CTA-Common-Access-Token: &lt;TOKEN&gt;' \
   http://localhost:8080/blog_cat_validator.png &gt; /dev/null
&gt; GET /blog_cat_validator.png HTTP/1.1
&gt; Host: localhost:8080
&gt; User-Agent: curl/8.7.1
&gt; Accept: */*
&gt; CTA-Common-Access-Token: 0YRDoQEFoQRMU3ltbWV0cmljMjU2eL5kOTAxMDNhNzAxNjc2NTc5NjU3NjY5NmU2ZTAyNjU2YTZmNmU2MTczMDM2MzZmNmU2NTA0MWE2N2RiMjE3NDA2MWE2N2RiMjEzODE5MDE0M2Q5MDEwM2E0MDAwMjA0Nzc2Mzc0NjEyZDYzNmY2ZDZkNmY2ZTJkNjE2MzYzNjU3MzczMmQ3NDZmNmI2NTZlMDExODc4MDIxODFlMDc1MDNhZWY4ZjIzNmMxMjIzNzJmMThjNGJmNWFjNDYzNDM1WCDr0F1YoZnnTVpCg/fRmVaCfryKZIg0tz+YOMemfgKhdw==
&gt;
&lt; HTTP/1.1 401 Unauthorized
&lt; Server: nginx/1.27.4
&lt; Date: Wed, 19 Mar 2025 19:58:31 GMT
&lt; Content-Type: text/html
&lt; Content-Length: 179
&lt; Connection: keep-alive
&lt;
</code></pre>
<p>In the logs of the validator we can also see the authorized usage of the token.</p>
<p><img src="https://cdn-images-1.medium.com/max/913/1*1B8lQ4pjmkfqEz6_IVyt9A.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<h3>Conclusion</h3>
<p>Common Access Token offers a standardized way of controlling access to content, regardless of who is responsible for delivering it. To the best of our knowledge, <a href="https://techdocs.akamai.com/edgeworkers/docs/cat">Akamai</a> is currently the only Content Delivery Network provider that offers support for Common Access Token, as the standard is relatively new, and we hope for wider adoption. Our open source library and open web service are our contribution to facilitating this adoption.</p>
<p>Join our <a href="https://slack.osaas.io/">Slack workspace</a> for Open Source Cloud real-time support and to connect with other users.</p>
<p><em>Let us know in the comments below if you are a vendor that supports Common Access Token today and we will update this blog post.</em></p>
]]></content:encoded>
    </item>
    <item>
      <title>Common Access Token Interoperability Testing with Open Source Cloud</title>
      <link>https://www.eyevinn.se/blog/common-access-token-interoperability-testing-with-open-source-cloud.html</link>
      <guid isPermaLink="true">https://www.eyevinn.se/blog/common-access-token-interoperability-testing-with-open-source-cloud.html</guid>
      <pubDate>Sat, 15 Mar 2025 00:00:00 GMT</pubDate>
      <dc:creator>Eyevinn Technology</dc:creator>
      <category>open-source</category>
      <description>Common Access Token (CAT) is a simple, extensible, policy-bearing bearer token for content access.  The primary use case for this token is to allow content providers to enforce access policies efficie...</description>
      <content:encoded><![CDATA[<p>Common Access Token (CAT) is a simple, extensible, policy-bearing bearer token for content access. The primary use case for this token is to allow content providers to enforce access policies efficiently, flexibly, and interoperably. This token is usable as an OAUTH bearer token, a URI signing token, or more generally as a mechanism for conveying delivery policy.</p>
<p><img src="https://cdn-images-1.medium.com/max/1024/1*9h9c0PHjl2yJ5YL2sAc_Iw.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>The standard was developed by the Web Application Video Ecosystem (CTA-WAVE) project and is specified in document <a href="https://shop.cta.tech/products/cta-5007">CTA-5007</a>.</p>
<p><img src="https://cdn-images-1.medium.com/max/900/1*1l_sAfwfnD0kD6TXABOLig.jpeg" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>To facilitate broader adoption, an <a href="https://app.osaas.io/dashboard/service/andersnas-nodecat">open web service in Eyevinn Open Source Cloud</a> based on <a href="https://github.com/andersnas/nodecat">open source</a> is available and can be used for interoperability testing of a CAT implementation.</p>
<p><img src="https://cdn-images-1.medium.com/max/630/1*SPXkwqj3W7cPEYzqAamP8w.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p><em>Interoperability testing with open web service in Eyevinn Open Source Cloud</em></p>
<p>When implementing and adopting a new standard, it is critical that you can validate your implementation against another party's. In this blog we describe how to get started.</p>
<p>To follow the tutorial in this blog you need an Eyevinn Open Source Cloud account. <a href="https://app.osaas.io">Sign up for an account here</a>.</p>
<p>One service (at a time) is included in the Basic tier, so you can try this out for free.</p>
<h3>Obtain an OSC Personal Access Token</h3>
<p>Your personal access token grants you access to the Open Source Cloud APIs that we will use. Navigate to Settings / API in the Open Source Cloud web console.</p>
<p><img src="https://cdn-images-1.medium.com/max/1024/1*O3RbRMcVZWMh80dJKIxj5A.png" alt=""><!-- IMAGE DOWNLOAD FAILED --></p>
<p>Copy the personal access token and save it as an environment variable OSC_ACCESS_TOKEN in your terminal.</p>
<pre><code>% export OSC_ACCESS_TOKEN=&lt;your-personal-access-token&gt;
</code></pre>
<p>Depending on which part of the CAT implementation you want to verify you can either <strong>generate</strong> a common access token to verify that your validation process is according to the specification, or <strong>validate</strong> a common access token that you have generated.</p>
<p>First we will install the <a href="https://js.docs.osaas.io">Eyevinn OSC client SDK for Javascript</a>.</p>
<pre><code>% npm install --save @osaas/client-core @osaas/client-web
</code></pre>
<h3>Generate a Common Access Token</h3>
<p>To verify that your CAT validation implementation is interoperable with another party, we will <a href="https://js.docs.osaas.io/module-@osaas_client-web.html#.generateCommonAccessToken">generate a common access token using the client SDK</a>.</p>
<pre><code>import { Context } from '@osaas/client-core';
import { generateCommonAccessToken } from '@osaas/client-web';

const ctx = new Context();
const token = await generateCommonAccessToken(
  ctx,
  {
    iss: 'eyevinn',
    sub: 'jonas'
  },
  {
    signingKey:
      '403697de87af64611c1d32a05dab0fe1fcb715a86ab435f1ec99192d79569388'
  }
);
</code></pre>
<p>Adding this to an integration test (using Jest) could for example look like this.</p>
<pre><code>import { Context } from '@osaas/client-core';
import { generateCommonAccessToken } from '@osaas/client-web';

describe('My CAT implementation', () => {
  test('can validate a token someone else generated', async () => {
    const ctx = new Context();
    const token = await generateCommonAccessToken(
      ctx,
      {
        iss: 'eyevinn',
        sub: 'jonas'
      },
      {
        signingKey:
          '403697de87af64611c1d32a05dab0fe1fcb715a86ab435f1ec99192d79569388'
      }
    );
    const result = MyCATValidator(
      token,
      '403697de87af64611c1d32a05dab0fe1fcb715a86ab435f1ec99192d79569388'
    );
    expect(result.ok).toBe(true);
  });
});
</code></pre>
<h3>Validate a Common Access Token</h3>
<p>To verify that a common access token that your implementation has generated is interoperable with another validator we could write a test for that.</p>
<pre><code>import { Context } from '@osaas/client-core';
import { validateCommonAccessToken } from '@osaas/client-web';

describe('My CAT implementation', () => {
  test('can generate a valid token', async () => {
    const token = MyCATGenerator(
      { iss: 'eyevinn' },
      '403697de87af64611c1d32a05dab0fe1fcb715a86ab435f1ec99192d79569388'
    );
    const ctx = new Context();
    const result = await validateCommonAccessToken(ctx, token, {
      signingKey:
        '403697de87af64611c1d32a05dab0fe1fcb715a86ab435f1ec99192d79569388'
    });
    expect(result.payload).toEqual({ iss: 'eyevinn', ... });
  });
});
</code></pre>
<p>These tests and additional variants can be added and executed as part of a continuous integration testing pipeline.</p>
<h3>Conclusion</h3>
<p>With this open web service you can validate that your implementation of Common Access Token is interoperable with another party and you can include this validation as part of a continuous testing workflow.</p>
<h3>Additional Resources</h3>
<ul>
<li><a href="https://shop.cta.tech/products/cta-5007">CTA-5007 specification</a></li>
<li><a href="https://docs.osaas.io/osaas.wiki/Home.html">Open Source Cloud Documentation</a></li>
<li><a href="https://js.docs.osaas.io">Open Source Cloud Javascript SDK Documentation</a></li>
</ul>
<p>Join our <a href="https://slack.osaas.io/">Slack workspace</a> for real-time support and to connect with other users.</p>
<p><em>We developed and launched</em> <a href="https://www.osaas.io/"><em>Open Source Cloud</em></a> <em>to reduce the barrier to getting started with open source and at the same time contribute to a sustainable model for open source by giving back a share of the revenue to the creator.</em></p>
<p><em>Open source provides full transparency of the building blocks your solution is built on, and prevents you from being locked in with a single vendor.</em></p>
<p><em>Building solutions based on open source requires that you build, deploy, maintain, and host it yourself. What if it could be as easy that with only a click of a button, you can have it as software as a service? And that there was an easy way to support the creator financially?</em></p>
<p><em>This is what we solve with Open Source Cloud!</em></p>
<p><a href="https://www.eyevinn.se/">Eyevinn Technology</a> helps companies in the TV, media, and entertainment sectors optimize costs and boost profitability through enhanced media solutions.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>