Achieving 3D Visualization with Low-Latency, High-Bandwidth Data Acquisition, Transfer, and Storage
High-bandwidth, low-latency solutions come with tradeoffs. Finding the right solution for 3D visualization starts with defining your requirements precisely.
At Fidus, we predicted a massive hardware resurgence in the high-tech space, and so far, we’ve seen it happen. In the middle of a software boom, that wasn’t necessarily a popular opinion, but it makes sense. It’s difficult to differentiate a leading-edge product based on software alone, especially when it comes to latency and bandwidth requirements. Creating a differentiated product takes a thoughtful approach to heterogeneous computing — mixing hardware and software to apply the most appropriate type of computing where it’s needed at any given time.
What is Heterogeneous Computing?
Heterogeneous computing offers a mix of architectures (i.e., scalar, vector, matrix, and spatial) deployed in CPU, GPU, specialized accelerator, and FPGA sockets to improve system performance.
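To make the idea concrete, here is a minimal Python sketch of the kind of matching heterogeneous computing implies. The workload names and device assignments are illustrative assumptions, not a real API or a specific design:

```python
# Illustrative sketch only: a toy "scheduler" that captures the heterogeneous-computing
# idea of matching each workload to the architecture best suited to it.
# The workload names and device assignments are hypothetical examples.

WORKLOAD_TO_DEVICE = {
    "branchy_control_logic":    "CPU (scalar)",         # irregular, decision-heavy code
    "image_filtering":          "GPU (vector/matrix)",  # wide data-parallel math
    "neural_net_inference":     "Accelerator (matrix)", # dense matrix multiplies
    "line_rate_packet_parsing": "FPGA (spatial)",       # fixed-latency streaming pipeline
}

def place(workload: str) -> str:
    """Return the compute resource this sketch would map a workload to."""
    return WORKLOAD_TO_DEVICE.get(workload, "CPU (scalar)")  # default to the CPU

if __name__ == "__main__":
    for w in WORKLOAD_TO_DEVICE:
        print(f"{w:28s} -> {place(w)}")
```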
The current mantra that "one size doesn't fit all" is entirely true. Hybrid solutions require smart technology choices and the right architectural decisions to achieve the desired latency and bandwidth results. By quantifying general requirements, taking determinism into consideration, and accounting for second-order effects, engineers can define precise requirements for ultra-low-latency applications and then apply the necessary architecture and computing techniques to achieve their goals.
Add Precision to General Requirements
It’s critical to understand exactly where latency and bandwidth specifications are coming from and why each value is important for the success of your product. In our experience, engineering teams tend to think their systems need to be faster than they actually need to be.
When design teams say: “We need to make this car stop as fast as it can.”
The truth may be: “We need to make this car stop in time to prevent an accident.”
The difference between those two requirements could massively impact the entire development cycle, the overall cost of the proposed solution, and the end user. There are second-order effects to consider when abruptly bringing a vehicle to a stop.
It’s human nature to aim for the best/fastest solution, but that approach oversimplifies the problem and takes the solution to an extreme. When you peel back the layers enough, you start to understand what you really need at a system level and avoid over-constraining the system.
By understanding precise latency requirements, you can develop a final solution that meets exact demands. Beyond firm top-level objectives for signal speed, execution time, and other critical performance metrics, two of the next steps in setting precise requirements are understanding your determinism requirements and anticipating the second-order effects of the technical solutions you're considering.
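As a rough illustration of what "precise" can look like, the sketch below turns "stop in time to prevent an accident" into a numeric latency budget. Every figure in it (speed, detection distance, deceleration) is an assumption chosen for the example, not a real automotive requirement:

```python
# Illustrative sketch: turning "stop in time to prevent an accident" into a
# concrete latency budget. All numbers are assumptions for the example.

speed_mps       = 100 * 1000 / 3600  # vehicle speed: 100 km/h in metres per second
obstacle_dist_m = 60.0               # distance at which the obstacle is detected
decel_mps2      = 8.0                # assumed braking deceleration on dry pavement

# Distance consumed by braking itself: v^2 / (2 * a)
braking_dist_m = speed_mps ** 2 / (2 * decel_mps2)

# Whatever distance is left can be spent "thinking" before the brakes engage.
margin_m = obstacle_dist_m - braking_dist_m
latency_budget_s = margin_m / speed_mps

print(f"Braking distance: {braking_dist_m:.1f} m")
print(f"Remaining margin: {margin_m:.1f} m")
print(f"Latency budget:   {latency_budget_s * 1000:.0f} ms end to end")
```

With these assumed numbers, the budget works out to a few hundred milliseconds end to end, which is a very different target from the microseconds a team might reach for by default.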
Differentiate Between Low Latency and Determinism
If you search for these terms online, you'll probably come across pages of articles with titles like "Determinism is the New Latency." Determinism isn't the "new" anything. There are few applications where latency doesn't matter at all, but there are plenty of applications where latency doesn't need to be aggressively minimized. Determinism implies a deadline: an action that needs to be completed within a specified amount of time. Latency is the delay between two points in time, for example, between when an instruction is received and when it is processed.
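A quick sketch makes the distinction visible. The latency samples and the 10 ms deadline below are made up for illustration:

```python
# Illustrative sketch of the difference between "low latency" and "deterministic".
# The latency samples and the deadline are made-up numbers.

samples_ms  = [1.2, 1.1, 1.3, 1.2, 9.8, 1.1, 1.2, 14.6, 1.3, 1.2]
deadline_ms = 10.0

mean_ms  = sum(samples_ms) / len(samples_ms)
worst_ms = max(samples_ms)
misses   = sum(1 for s in samples_ms if s > deadline_ms)

print(f"Average latency : {mean_ms:.1f} ms  (the 'low latency' view)")
print(f"Worst case      : {worst_ms:.1f} ms (the 'determinism' view)")
print(f"Deadline misses : {misses} of {len(samples_ms)}")

# A system can have an excellent average and still miss its deadline;
# determinism is about bounding the worst case, not shrinking the average.
```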
Consider an engineering team tasked with designing the braking system for an electric vehicle. They might decide the system must activate the brake pads within an aggressively constrained window after the driver hits the pedal. With that deadline in mind, they'd design the software system around that exact goal, and to hit such a specific activation time, the individual latency of each component in the chain must be well understood and well designed. If braking took any longer than the deadline, accidents become more likely; on the other hand, an unnecessarily tight deadline could drive implementation cost and complexity to impractical levels.
Many designers want their systems to be deterministic in a low-latency fashion. In other words, to achieve certain deterministic deadlines, each individual action or task leading up to a deadline must be implemented with low latency; this includes receiving data from sensors, processing data, and receiving and acting on feedback. However, it's important to remember that, depending on your determinism requirements, your system might not need as little latency as you think. Designing for low latency, determinism, or some combination of the two has implications for system architecture. For a highly deterministic application, it's imperative to profile each task the system will need to perform and understand the intermediate deadlines that must be met. Once those deadline requirements are well understood, designers can choose which tasks are implemented in standard software on application processors (CPUs) and which tasks are accelerated on an FPGA or other acceleration device.
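Here is one minimal way that profiling step might look on paper. The task names, worst-case latencies, and end-to-end deadline are hypothetical:

```python
# Illustrative sketch of profiling an end-to-end deadline across intermediate tasks.
# Task names, worst-case latencies, and the 5 ms deadline are hypothetical.

END_TO_END_DEADLINE_US = 5000  # 5 ms, assumed for the example

# (task, worst-case latency in microseconds measured on the CPU)
tasks = [
    ("read_sensor",       300),
    ("filter_samples",   2600),  # heavy DSP: a natural candidate for FPGA offload
    ("detect_event",     1800),
    ("command_actuator",  400),
]

total_us = sum(t for _, t in tasks)
print(f"Worst-case total on CPU: {total_us} us (deadline {END_TO_END_DEADLINE_US} us)")

if total_us > END_TO_END_DEADLINE_US:
    # Flag the largest contributors as candidates for hardware acceleration.
    for name, t in sorted(tasks, key=lambda x: x[1], reverse=True):
        share = 100 * t / total_us
        print(f"  {name:16s} {t:5d} us ({share:4.1f}% of budget)")
```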
Account for Second-Order Effects
To accommodate low latency, you need higher processing throughput. While it's no surprise that many high-throughput systems consume more power and dissipate more heat, it's important to remember how those factors impact your overall design. If you need X to do Y, and you need it in a small, Z-sized housing that will be deployed in a high-ambient-temperature environment (with no fans or conduction cooling), don't expect to create a 60-watt system. This is a classic example of second-order effects. Component manufacturers are increasing power and decreasing package size to meet the demands of the industry, but in some cases, components consuming only a few watts can easily overheat without an adequate thermal solution.
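A first-order junction-temperature estimate (T_junction = T_ambient + P x theta_JA) is often enough to catch the problem early. The power, ambient temperature, thermal resistance, and junction limit in the sketch below are assumed values for illustration:

```python
# Illustrative sketch of a first-order thermal sanity check.
# All values below are assumptions chosen for the example.

power_w          = 3.0    # just "a few watts"
ambient_c        = 70.0   # hot enclosure, no fans or conduction cooling
theta_ja_c_w     = 25.0   # junction-to-ambient thermal resistance, no heatsink
t_junction_max_c = 125.0  # assumed junction temperature limit

t_junction_c = ambient_c + power_w * theta_ja_c_w
print(f"Estimated junction temperature: {t_junction_c:.0f} C (limit {t_junction_max_c:.0f} C)")

if t_junction_c > t_junction_max_c:
    print("Over the limit: this part needs a better thermal path,")
    print("a lower-power part, or a relaxed performance target.")
```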
The Bottom Line
Fidus has expertise in all aspects of system design. When we approach low-latency, high-bandwidth applications, we have the knowledge base we need to challenge solution bias. We don’t let ourselves get stuck in a routine – defaulting to the same architecture or implementation again and again – and we don’t want to see that happen to our customers either.