The debate over the future of edge computing is still going strong in some corners of the electronics industry. Like most new technologies, it may not live up to all the hype, but this important computing paradigm will likely create immense value in a few key areas and computing applications. As an electronics engineer or systems architect, it’s your job to figure out what those applications are and how they can be practically implemented in commercial systems.
To highlight the important time-critical applications enabled by edge computing, we prepared this article covering four applications where edge computing creates significant value for end users and systems architects. The goal is to cut through the hype and shed some light on the more practical aspects of this important technology.
Four Applications in an Edge Computing Ecosystem
The edge computing model is based on a simple concept: bring the compute required by some applications closer to the end user, eliminating the need to send data to the cloud and thus reducing network traffic. Edge computing can be a critical enabler of applications that require high compute and low latency simultaneously. These two requirements tend to work against each other, as data-intensive service delivery typically carries higher latency.
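A rough latency budget illustrates the trade-off. The sketch below compares a cloud path against an edge path; all of the delay figures are hypothetical assumptions chosen only to show the structure of the calculation, not measurements of any real network.

```python
# Illustrative latency budget: cloud vs. edge inference path.
# All delay numbers are hypothetical assumptions for illustration only.

def total_latency_ms(uplink_ms: float, compute_ms: float, downlink_ms: float) -> float:
    """Sum the one-way uplink, processing, and downlink delays."""
    return uplink_ms + compute_ms + downlink_ms

# Cloud path: the long haul to a remote data center dominates the budget.
cloud = total_latency_ms(uplink_ms=40.0, compute_ms=10.0, downlink_ms=40.0)

# Edge path: a nearby edge server cuts network transit dramatically,
# even if its compute is slightly slower than a hyperscale data center.
edge = total_latency_ms(uplink_ms=5.0, compute_ms=12.0, downlink_ms=5.0)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")  # cloud: 90 ms, edge: 22 ms
```

Even with a modest compute penalty at the edge, the shorter network transit dominates the end-to-end budget in this illustration.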
The four application areas outlined below are chosen because they are time-critical, yet they tend to require more compute than could typically be fit onto the end device. These are also just a few areas where edge computing can offer a low-latency solution; systems designers could certainly envision many more application areas where bringing processing closer to end users creates major value and improves service delivery.
Edge AI Processing
AI is probably the highest-compute application being implemented in consumer and commercial devices. Typically, AI processing is performed in the cloud as part of a larger application whenever compute resources are not available on end devices or user equipment. With edge computing hardware that is specialized for AI processing (either on-chip or in a co-processor architecture), computation time and load can be significantly reduced.
As part of model development for deployment in an edge computing system or on end-user devices, acceleration steps such as quantization and pruning can be implemented to further reduce the computational requirements of neural networks. These techniques will be discussed in more depth in a later article.
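As a minimal sketch of one such acceleration step, the snippet below shows symmetric 8-bit post-training quantization of a weight tensor. The per-tensor scale choice and shapes are simplified for illustration; a production flow would use a framework's quantization toolchain rather than hand-rolled code like this.

```python
import numpy as np

# Minimal sketch of symmetric 8-bit post-training quantization of a
# weight tensor -- one common acceleration step for edge deployment.

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 values plus a per-tensor scale factor."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32, and the reconstruction error
# is bounded by half a quantization step (scale / 2).
err = np.max(np.abs(dequantize(q, scale) - w))
print(q.nbytes, w.nbytes)  # 4096 16384
```

The 4x memory reduction also shrinks the model download to the edge node, and int8 arithmetic maps well onto the AI accelerators mentioned above.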
With a sufficiently high-compute processor or chipset architecture, and model optimization practices like those just described, it’s possible to segment low- and high-compute tasks between the end device and an edge server without increasing traffic in the network backhaul. On-device pre-processing can also reduce the amount of data sent over wireless links, further improving latency in service delivery.
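The device/edge split can be sketched as follows. The device performs cheap pre-processing (here, simple strided downsampling) so that only a small tensor crosses the wireless link, while the edge server runs the heavy model. The `edge_infer` function is a hypothetical stand-in for real inference, not an actual API.

```python
import numpy as np

def device_preprocess(frame: np.ndarray, out_size: int = 32) -> np.ndarray:
    """Downsample a frame on-device to shrink the payload sent upstream."""
    step = frame.shape[0] // out_size
    return frame[::step, ::step].astype(np.float32)

def edge_infer(tensor: np.ndarray) -> float:
    """Placeholder for a high-compute model hosted on the edge server."""
    return float(tensor.mean())  # stand-in for real inference

# A raw 256x256 8-bit frame captured on the end device.
frame = np.full((256, 256), 128, dtype=np.uint8)

payload = device_preprocess(frame)   # runs on the low-compute device
result = edge_infer(payload)         # runs on the edge server

# The pre-processed payload is far smaller than the raw frame.
print(frame.nbytes, payload.nbytes)  # 65536 4096
```

Here the low-compute task (downsampling) stays on the device and the high-compute task (inference) moves to the edge server, and the payload crossing the wireless link shrinks 16x.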
Smart Infrastructure
Infrastructure is slowly becoming smarter, and as more data becomes available, computing workloads will continue to increase. An edge computing approach allows companies to create a more secure network for sharing and processing infrastructure data across many tasks, reducing the need for human monitoring and maintenance. Another important area supported by edge computing is the integration of data from ADAS and traffic monitoring systems to support autonomous vehicles. This high-compute area will continue to see growth, driven primarily by vehicles and infrastructure monitoring.
Smart Manufacturing
As much of the world begins to geographically diversify its supply chains, automation in smart manufacturing will see new investment and development. Edge computing can support further automation with on-demand processing that serves multiple production assets. To ensure greater security in a production environment, these systems can be deployed on-premises, which eliminates the need for a public network and gives companies greater control over manufacturing operations.
Security and Defense
This is another area where systems are becoming more complex, with more devices being interconnected and sharing more data. Devices deployed for security place greater emphasis on signal acquisition from sensors and the subsequent processing of multiple data types; the latter is where edge computing can play an important role. The data captured by advanced security systems falls within the following areas:
- Camera imaging (both still and video streaming)
- Low-frequency and high-frequency radio sensors
- Acoustic and optical sensors
- On-device processing, data fusion, and autonomous decision-making with an embedded AI model
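The fusion-and-decision step in the last item can be sketched as a simple weighted combination of per-sensor scores. The sensor names, weights, and threshold below are all hypothetical; a deployed system would use a trained model rather than the fixed weighted sum shown here.

```python
# Minimal sketch of fusing heterogeneous sensor scores into one decision.
# All sensor names, weights, and the threshold are illustrative assumptions.

def fuse_and_decide(scores: dict, weights: dict, threshold: float = 0.5) -> bool:
    """Weighted fusion of per-sensor detection confidences in [0, 1]."""
    fused = sum(weights[name] * scores[name] for name in scores)
    return fused >= threshold

scores = {"camera": 0.9, "rf": 0.2, "acoustic": 0.6}   # per-sensor confidences
weights = {"camera": 0.5, "rf": 0.2, "acoustic": 0.3}  # weights sum to 1.0

alert = fuse_and_decide(scores, weights)
print(alert)  # True: fused score is 0.45 + 0.04 + 0.18 = 0.67
```

When the fused workload exceeds what the embedded processor can handle, the same fusion step is a natural candidate to offload to a nearby edge server.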
In some environments where internet access is compromised, unreliable, or denied, an edge computing approach can offer direct access to high-compute resources without a link to the cloud. An edge server allows data capture and warehousing in a much more secure environment compared to a publicly accessible telecom network or cloud service. The defense industry in the US and Europe is currently taking this approach to embedded computing very seriously, and many new embedded products are reaching the market.
When you’re ready to design the electronics and peripherals for your edge computing systems, use Allegro PCB Designer, the industry’s best PCB design and analysis software from Cadence. Allegro users can access a complete set of schematic capture features, mixed-signal simulations in PSpice, powerful CAD features, and much more.