
How Determinism and Heterogeneous Computing Impact Ultra Low-Latency Applications

24 January 2022

At Fidus, we predicted a massive hardware resurgence in the high-tech space, and so far, we’ve seen it happen. In the middle of a software boom, that wasn’t necessarily a popular opinion, but it makes sense. It’s difficult to differentiate a leading-edge product based on software alone, especially when it comes to latency and bandwidth requirements. Creating a differentiated product takes a thoughtful approach to heterogeneous computing — mixing hardware and software to apply the most appropriate type of computing where it’s needed at any given time.

What is Heterogeneous Computing?

Heterogeneous computing offers a mix of architectures (i.e., scalar, vector, matrix, and spatial) deployed in CPU, GPU, specialized accelerator, and FPGA sockets to improve system performance.

The current mantra, “one size doesn’t fit all,” is entirely true. Hybrid solutions require smart technology selection and the right architectural decisions to achieve the desired latency and bandwidth. By quantifying general requirements, taking determinism into consideration, and accounting for second-order effects, engineers can define precise requirements for ultra-low-latency applications and then apply the architecture and computing techniques needed to achieve them.

Add Precision to General Requirements

It’s critical to understand exactly where latency and bandwidth specifications are coming from and why each value matters for the success of your product. In our experience, engineering teams tend to think their systems need to be faster than they actually do.

When design teams say: “We need to make this car stop as fast as it can.”

The truth may be: “We need to make this car stop in time to prevent an accident.”

The difference between those two requirements could massively impact the entire development cycle, the overall cost of the proposed solution, and the end user. There are second-order effects to consider when abruptly bringing a vehicle to a stop.

It’s human nature to aim for the best/fastest solution, but that approach oversimplifies the problem and takes the solution to an extreme. When you peel back the layers enough, you start to understand what you really need at a system level and avoid over-constraining the system.

By understanding precise latency requirements, you can develop a final solution that meets exact demands. Beyond firm top-level objectives for signal speed, execution time, and other critical performance metrics, two important next steps in setting precise requirements are understanding determinism requirements and anticipating the second-order effects of the technical solutions you’re considering.

Differentiate Between Low Latency and Determinism

If you search for these terms online, you’ll probably come across pages of articles with titles like “Determinism is the New Latency.” Determinism isn’t the “new” anything. There are few applications where latency doesn’t matter at all, but there are plenty where latency doesn’t need to be aggressively minimized. Determinism implies a deadline: an action that must be completed within a specified amount of time. Latency is the delay between two events, such as the time between when an instruction is received and when it is processed.
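
The distinction can be made concrete in code. The minimal sketch below (plain Python, with a hypothetical `process_frame` function standing in for real work, and a 10 ms deadline chosen purely for illustration) measures latency for one iteration and separately checks it against a deadline; the deadline check, not the raw latency number, is what a determinism requirement constrains.

```python
import time

DEADLINE_S = 0.010  # determinism requirement: finish within 10 ms (assumed value)

def process_frame(frame):
    # Stand-in for real work (e.g., filtering a batch of sensor samples).
    return sum(frame) / len(frame)

def run_once(frame):
    start = time.perf_counter()
    result = process_frame(frame)
    latency = time.perf_counter() - start   # latency: delay between two events
    met_deadline = latency <= DEADLINE_S    # determinism: finished before the deadline?
    return result, latency, met_deadline

result, latency, ok = run_once([1.0, 2.0, 3.0, 4.0])
print(f"latency = {latency * 1e6:.1f} us, deadline met: {ok}")
```

A system can miss a determinism requirement while still having excellent average latency, which is why the worst-case check matters more than the mean.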

Consider an engineering team tasked with designing the braking system for an electric vehicle. They might decide the system requires an aggressively constrained latency between the driver pressing the pedal and the brake pads engaging, and with that deadline in mind, they’d design the software system around that exact goal. To meet such a tight deadline, the latency of each individual system component must be well understood and well designed. If activation took any longer, accidents become more likely; conversely, an unnecessarily tight deadline could drive implementation cost and complexity to impractical levels.
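
A back-of-envelope calculation shows why the activation deadline matters and where to stop tightening it. The sketch below (illustrative numbers, not figures from a real braking system) computes how far the vehicle travels during the activation latency alone:

```python
def reaction_distance_m(speed_kmh: float, latency_s: float) -> float:
    """Distance covered while the braking system is still activating."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return speed_ms * latency_s

# At 100 km/h, every 10 ms of activation latency adds roughly 0.28 m
# of travel before braking even begins.
for latency_ms in (10, 50, 100):
    d = reaction_distance_m(100.0, latency_ms / 1000.0)
    print(f"{latency_ms:4d} ms latency -> {d:.2f} m travelled")
```

Arithmetic like this turns “as fast as it can” into a concrete, defensible deadline: below some latency, the extra distance is negligible compared to the braking distance itself.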

Many designers want their systems to be deterministic in a low-latency fashion. In other words, to meet certain deterministic deadlines, each individual action or task leading up to a deadline must be implemented with low latency; this includes receiving data from sensors, processing the data, and receiving and acting on feedback. However, depending on your determinism requirements, your system might not need as little latency as you think. Designing for low latency, determinism, or some combination of the two has implications for system architecture. For a highly deterministic application, it’s imperative to profile each task the system performs and understand the intermediate deadlines that must be met. Once those deadline requirements are well understood, designers can choose which tasks to implement in standard software on application processors (CPUs) and which to accelerate on an FPGA or another acceleration device.
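
One way to start that profiling exercise is a simple task-level latency budget. The sketch below (Python, with hypothetical task names and made-up measured/deadline values) tallies per-task latencies against intermediate deadlines and flags which tasks are candidates for hardware acceleration:

```python
# Hypothetical per-task latency budget (microseconds): measured vs. deadline.
TASKS = [
    # (task name,          measured_us, deadline_us)
    ("read_sensor",         120,         200),
    ("filter_samples",      900,         500),   # over budget: acceleration candidate
    ("compute_response",    300,         400),
    ("apply_actuator_cmd",   80,         100),
]

def over_budget(tasks):
    """Return tasks whose measured latency exceeds their intermediate deadline."""
    return [name for name, measured, deadline in tasks if measured > deadline]

def end_to_end_us(tasks):
    """Total measured latency, assuming the tasks run sequentially."""
    return sum(measured for _, measured, _ in tasks)

print("acceleration candidates:", over_budget(TASKS))
print("end-to-end latency:", end_to_end_us(TASKS), "us")
```

In this hypothetical budget, only `filter_samples` blows its intermediate deadline, so it is the natural candidate to move onto an FPGA while the other tasks stay in software.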

Account for Second-Order Effects

To achieve low latency, you generally need higher processing throughput. While it’s no surprise that many high-throughput systems consume more power and dissipate more heat, it’s important to remember how those factors impact your overall design. If you need X to do Y, in a small, Z-sized housing deployed in a high-ambient-temperature environment with no fans or conduction cooling, don’t expect to build a 60-watt system. This is a classic example of second-order effects. Component manufacturers are increasing power and shrinking package size to meet industry demands, but even components consuming only a few watts can easily overheat without an adequate thermal solution.
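
That thermal constraint can be estimated early with a first-order calculation. The sketch below (Python, with assumed ambient temperature, thermal resistance, and junction limit chosen for illustration) uses the standard junction-temperature relation Tj = Ta + P × θJA to show how quickly even a few watts becomes a problem in a hot, fanless enclosure:

```python
def junction_temp_c(power_w, ambient_c, theta_ja_c_per_w):
    """First-order junction temperature estimate: Tj = Ta + P * theta_JA."""
    return ambient_c + power_w * theta_ja_c_per_w

# Assumed values: 70 C ambient (sealed enclosure), theta_JA = 25 C/W
# (small package, no heatsink or airflow), 125 C junction limit.
T_LIMIT_C = 125.0
for power_w in (1.0, 2.0, 3.0):
    tj = junction_temp_c(power_w, 70.0, 25.0)
    status = "OK" if tj <= T_LIMIT_C else "OVERHEATS"
    print(f"{power_w:.0f} W -> Tj = {tj:.0f} C ({status})")
```

Under these assumptions the budget runs out at roughly 2 W, which is exactly the kind of second-order effect worth discovering before the architecture is fixed.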

The Bottom Line

Fidus has expertise in all aspects of system design. When we approach low-latency, high-bandwidth applications, we have the knowledge base we need to challenge solution bias. We don’t let ourselves get stuck in a routine – defaulting to the same architecture or implementation again and again – and we don’t want to see that happen to our customers either.
