Everything is a computer – just hit update

Architecting next-generation products for seamless mid-life upgrades.

In November 2020, Sonos initiated a program that offered a 30% discount to existing customers upgrading from previous models. Why? Following its S2 software upgrade, compatibility issues with older systems caused uproar amongst owners whose system processors were no longer fit for purpose. Users wanted and expected new features to be delivered after-market, but their ongoing loyalty cost Sonos two-thirds of its gross profit on each upgrade. Conversely, Volkswagen have recently announced their intention to offer cars “crammed full of tech and features which the car’s owner / user / subscriber can turn on and off as and when they need them – but only pay for while the gadget is being used.” These features will be installed and upgraded throughout the lifecycle of the vehicle.

With ongoing product updates a necessity in an ever-evolving marketplace with ever-changing needs, it’s clear that flexible technology platforms hold value far beyond expedience in the development phase. They offer OEMs the ability to continue delivering value throughout the product lifecycle, and, increasingly, consumers expect it.

If field upgrades are here to stay, how can we be better prepared for future software loads, so that we avoid the Sonos problem? While we can’t foresee the nature of those loads, we can ensure that our systems behave in a more predictable way, and therefore achieve more. 

Continuing with the audio theme, it’s an established fact that humans are much more tolerant of bad video than we are of bad audio, enduring poor-quality visuals with decent sound for far longer than a poppy, clicky or noisy audio stream. This audio disturbance is caused by an electronic audio system that can’t keep up with processing incoming audio samples. It will be an annoyance to the human ear but may not affect understanding. It can, however, have a catastrophic impact if audio quality is at the heart of the product’s value, or if the ‘listener’ is a voice interface on a machine – in this scenario, you risk losing the whole capability.

To understand the challenge, we must consider the processing attributes we care about. Firstly, what kind of processing do we need? This could be general-purpose processing, digital signal processing (DSP), artificial intelligence (AI) processing or IO processing. Secondly, when do we need it? The timely delivery of results is vital to the correct behaviour of a certain type of compute called “real-time processing”, or just “real-time.” Real-time is further delineated into hard and soft real-time: if a system has hard real-time deadlines, bad things happen if they are missed even once – think of a safety braking system on an industrial machine. A soft real-time system might be something like a media player – if it misses a deadline once it will recover, but miss several and pops, clicks and unhappy users ensue.
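To make the hard/soft distinction concrete, here is a minimal simulation sketch. Everything in it – the function name, the block timings and the 1 ms deadline (48 samples at 48 kHz) – is hypothetical, chosen only to illustrate the two failure modes described above.

```python
# Illustrative only: hard vs soft real-time behaviour under deadline misses.
# All names and numbers are hypothetical examples, not from any real product.

def run_schedule(processing_times_ms, deadline_ms, hard=False):
    """Process a stream of work items against a per-item deadline.

    Returns (completed, misses). A hard real-time system treats the
    first miss as catastrophic and stops; a soft real-time system keeps
    going, but every miss is an audible pop or click.
    """
    misses = 0
    completed = 0
    for t in processing_times_ms:
        if t > deadline_ms:
            misses += 1
            if hard:  # one miss is unacceptable, e.g. a safety brake
                return completed, misses
        completed += 1
    return completed, misses

# A 48-sample audio block at 48 kHz gives a 1 ms deadline per block.
times = [0.8, 0.9, 1.2, 0.7, 1.5, 0.9]       # ms per block (hypothetical)
print(run_schedule(times, 1.0))               # soft: (6, 2) – two pops
print(run_schedule(times, 1.0, hard=True))    # hard: stops at first miss
```

The soft system limps through all six blocks with two glitches; the hard system is already in failure territory at the first miss – which is exactly why the two classes demand different architectural guarantees.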

The third and most obvious attribute to consider is: how much performance do we need? Many system designers try to solve for all three by over-delivering on performance – employing significant applications processors that make up for their shortcomings by delivering far more processing than the application requires. The designer justifies the inefficiency, additional cost and energy consumption of an over-engineered system with a time-to-market advantage and an unreliable sense of comfort from having significant “head-room.” What happens if we reconsider the challenge when the precise needs of the application are unknown – because it hasn’t been conceived of yet, and won’t be installed or upgraded until long after the design has left the factory? How do you test for that?

You can’t. The best you can do is to make the system robust, to ensure that time-critical functions are protected from other parts of the application that might otherwise steal their resources and undermine their performance. The most robust way to do this is to give everything its own resources – thereby ensuring that anything that is required will always be available exactly when it is needed. 

Unfortunately, in traditional architectures, that approach results in intolerable cost, which has driven the opposite reaction – to share a far smaller number of processing resources amongst multiple tasks. We use Operating Systems (OS) to give the software programmer the illusion of having more independent resources than the underlying system contains. Operating systems do a great job for code that is not real-time, but can only do so much for time-critical execution – even Real-Time Operating Systems (RTOS) struggle with more demanding “hard real-time” requirements.
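The cost of that sharing can be sketched in a few lines. This is a deliberately crude, hypothetical model – real schedulers preempt and prioritise – but it captures the essential point: on a shared resource, the worst case for a time-critical task includes everything else queued on the same processor.

```python
# Hypothetical sketch: worst-case latency on a dedicated vs a shared core.
# A real OS scheduler is far more sophisticated; this models only the
# basic insight that shared resources inflate the worst case.

def worst_case_latency(task_ms, other_tasks_ms, dedicated):
    """Worst-case start-to-finish time for a time-critical task.

    On a dedicated core the task runs alone, so its worst case is its
    own execution time. On a shared core it may have to wait behind
    every other task contending for that core.
    """
    if dedicated:
        return task_ms
    return task_ms + sum(other_tasks_ms)

audio_ms = 0.5                     # time-critical audio work (hypothetical)
background = [2.0, 3.5, 1.0]       # e.g. UI, networking, logging (hypothetical)
print(worst_case_latency(audio_ms, background, dedicated=True))    # 0.5
print(worst_case_latency(audio_ms, background, dedicated=False))   # 7.0
```

Note that a field upgrade simply appends to the `background` list – the dedicated core’s worst case never moves, while the shared core’s worst case grows with every new feature installed.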

This is one of the reasons why System-on-Chip (SoC) solutions are frequently littered with specialist hardware to deliver features that the more flexible compute resources cannot. The result is a complex mix of disparate resources (a heterogeneous architecture) that combine to be unpredictable and difficult to program – even before the product is in the field. Now, add to that an expectation that our products will be frequently updated and upgraded over the air, and these architectures have a very uncertain future. A future in which a product’s critical capabilities (like audio) may be compromised by field upgrades that add trivial new features – more brands will be damaged and more costly recovery programs will be required.

Perhaps our mantra needs to change from “easy to use” to “easy to re-use” as we are increasingly driven to deploy technology that enables development and deployment long after the product has left the shelves. The answer must be simplicity – if only we could simply deliver flexible, raw capability that can be completely shaped and reshaped at will. We are a long way from being able to manipulate silicon in that way, but we can do more to ensure that the architectures we put into the field continue to be easy to target with upgrades and new features throughout the lifecycle.

Returning to our key attributes, our processing resource must be adaptable enough to execute general purpose processing, DSP, AI or IO processing at any given time according to the needs of the application – this delivers maximum flexibility and the greatest likelihood that enough processing will be available for any future requirement.  This processing must be delivered when it’s needed, no matter what other demands are placed on the system. This can be solved by having many processing resources, some of which are dedicated to hard real-time tasks, and others which can be grouped under the supervision of an operating system to deliver the remainder of the application. Of course, we cannot cost-effectively solve for a processing performance requirement that may grow when the device is in the field. What we can do is create an environment that is completely adaptable and predictable, where the performance that is present can be used to its fullest without compromising critical features and destroying user experiences – with no more need for expensive and unreliable “head-room”, or perhaps more appropriately “hope-room”.

In a world of field upgrades, we cannot continue to steer more demands onto single processors and trust the operating system to resolve resource and timing conflicts – that is a ticking time-bomb under our ever more important real-time data processing. The idea that this is the inexpensive option is also broken – even if you regard 30% of an application processor subsystem as “free” at the time the system is designed, we must think about the implications of our architectural choices in the after-market – ask Sonos.

xcore is designed to solve each one of these challenges for intelligent embedded systems. It’s a homogeneous multicore architecture in which fully flexible processing resources can be dedicated to the performance and timing needs of the application in real time. Individual cores can deliver hard real-time processing at nanosecond resolution, whilst farms of other cores deliver the rest of the application through familiar operating system abstractions. The xcore architecture is so predictable that timing can be guaranteed statically at design time – even if your hardware left the lab long ago.
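The shape of a static design-time guarantee can be illustrated with a simple budget check. To be clear, this is not xcore’s actual timing tool, and the cycle counts and clock rate below are invented for illustration – but on a fully predictable architecture, a check of this form can be done once, before shipping, rather than hoped for at runtime.

```python
# Illustrative design-time timing check. The numbers and function name
# are hypothetical; real static timing analysis works on actual
# worst-case instruction counts for a specific, predictable core.

def meets_deadline(worst_case_cycles, core_mhz, deadline_us):
    """Return True if the worst-case execution time fits the deadline.

    On a predictable core, worst_case_cycles is a hard bound, so this
    single comparison is a guarantee, not a probability.
    """
    worst_case_us = worst_case_cycles / core_mhz   # cycles / (cycles per us)
    return worst_case_us <= deadline_us

# Hypothetical task: 80,000 worst-case cycles on a 100 MHz core,
# against a 1 ms (1000 us) deadline -> 800 us worst case, so it fits.
print(meets_deadline(80_000, 100, 1000))    # True
print(meets_deadline(200_000, 100, 1000))   # False – redesign needed
```

The contrast with “head-room” is the point: instead of over-provisioning and hoping, a predictable architecture lets the designer prove the budget closes before the product – or any later upgrade – ships.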

The ability to reliably upgrade xcore firmware in the field has been part of our value proposition for over a decade, and millions of products in the field are already benefiting from xcore performance and predictability. These products range from toys to industrial machines, from IO protocol bridges to intelligent streetlamps, from soft real-time audio to hard real-time motor control. The xcore architecture offers a compelling platform on which to invest in a future of more diverse applications with rapidly evolving requirements.


If you would like to discuss xcore in more detail, please contact your regional Sales Director.
