A livestream has been planned. Perhaps it's not even your first, and you've already gathered some experience. And yet you may be unsure how many visitors to expect, whether your infrastructure is adequately sized, or what other preparations are necessary. Whether you're broadcasting a major sporting event or a product launch, your audience expects nothing short of a great experience without delays, buffering, or reloading, no matter how busy your technology infrastructure is. At a minimum, the following questions should be addressed well in advance:
- What happens if the number of users suddenly doubles?
- What happens when all users log in at the same time to access their accounts?
- Can these users depend on adequate download speeds and good app performance at all times?
- Will all your customers enjoy the engaging experience you want to deliver?
In short, will your technology withstand the pressure of popularity? If the answer is "no", your costs rise while your audience shrinks. Viewers migrate to other sites, cutting advertising and transaction revenue. Brand perception suffers. And that's before the bad PR, especially once customers spread their dissatisfaction on social media.
It’s the opening match of a major football tournament, and we’re in the Video Quality Control Room (QCR) of a major livestream provider. The video feed goes live: from cameras to encoders, over network feeds, onto the internet, and out to viewers’ devices across the provider’s national market. As kick-off approaches, thousands (perhaps even millions) of viewers access the stream on their smart TVs or notebooks. The game begins. What happens then in the QCR? Either a collective sigh of relief or wild confusion. The livestream either runs smoothly, or it requires troubleshooting and glitch handling right from the start.
Each provider has its own technology configuration, and each livestream workflow involves hardware, software, and services from multiple vendors. These variables must be balanced individually and collectively against the set performance goals. Thorough testing answers three questions:
- Do all components – video encoders, network feeds, etc. – work as specified? Each component is tested after development, and the test routines are retained for regression testing whenever changes are made later. This is standard DevOps practice.
- Do all components work together end-to-end? The stumbling block in integration testing is getting a good result and then thinking, “OK, we’re ready.” Components can fail for many reasons, including excessive load. Therefore, integration tests must include failover scenarios to ensure that the stream continues to function even if individual components fail.
- Can the livestream scale to meet peak attendance at the specified performance level? The load tests themselves require scale and precision, and too often this is where corners are cut. If the simulated load represents too few viewers from too few locations around the world, every event starts with the nagging fear that performance may not hold up.
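To make the third question concrete, the doubling scenario from the checklist above can be sketched as a minimal load-test harness. This is an illustrative sketch, not a production tool: `fetch_segment` is a hypothetical stub that simulates a network round trip with a short sleep; in a real test it would be replaced by HTTP requests against your CDN edge, issued from agents in many geographic locations.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_segment(viewer_id: int) -> float:
    """Hypothetical stand-in for one viewer fetching a video segment.

    In an actual load test this would be an HTTP GET against the
    stream's delivery endpoint; here it just simulates latency.
    """
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))  # simulated network round trip
    return time.perf_counter() - start

def run_load_test(viewers: int, workers: int = 50) -> dict:
    """Fire one segment request per simulated viewer, report latency percentiles."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(fetch_segment, range(viewers)))
    return {
        "viewers": viewers,
        "p50_ms": round(latencies[len(latencies) // 2] * 1000, 1),
        "p95_ms": round(latencies[int(len(latencies) * 0.95)] * 1000, 1),
        "max_ms": round(latencies[-1] * 1000, 1),
    }

if __name__ == "__main__":
    # Double the simulated audience each round: does p95 latency hold steady?
    for n in (100, 200, 400):
        print(run_load_test(n))
```

The key design point is the ramp: doubling the simulated audience per round directly answers the "what if the number of users suddenly doubles?" question, and watching the p95 (not just the average) latency exposes the tail behaviour that viewers actually experience as buffering.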
The above questions, along with the detailed solutions proposed and tips for suitable software, can be found here as free downloads, provided in cooperation with Akamai: