We all know and love the cloud. What's not to love about not having to worry about the limits of your own devices, and having near-infinite, elastic storage and compute power at your fingertips?
Well, a few things actually. In the end, as the aphorism goes, the cloud is just someone else's computer. Okay, it may be millions of computers, thoughtfully arranged in clusters in super efficient data centers -- but all those are someone else's computers.
Still, does it matter, if that someone can provide everything you need, probably more efficiently than your own organization could, along with guarantees in terms of security? In many cases, it doesn't. But it does matter a great deal when it comes to autonomous vehicles.
Autonomy and cloud don't go well together
To understand why, let's consider the notion of autonomy. Autonomy is defined as 'independence or freedom, as of the will or one's actions'. Can you be autonomous, when relying on someone else's computer? Not really.
Yes, there is redundancy, and yes, there may even be SLAs in place. But when all is said and done, using the cloud means you are connecting to someone else's computer, usually over the internet. When you are in a moving vehicle, and this vehicle relies on cloud-based compute for its essential functions, what happens if you run into connectivity issues?
This is not the same as a lag in loading your favorite cat pictures. A lag in a moving vehicle scenario is a matter of life and death. So what can be done in situations like these? Enter edge computing.
Edge computing is the notion of having compute as close to the data as possible, in scenarios where data is generated outside of the data center. What this translates to in real life is very small, prefabricated data centers.
Small is a relative term, of course. Is something the size of a container small? Maybe, if you compare it to a data center like the ones cloud providers have. But it's not something most of us could, or would, have in our homes.
Still, our homes are hosts to some of the primary use cases for edge computing. Connected IoT devices and sensors communicating in smart home or smart city scenarios are a good match for edge computing. Full-blown, these scenarios could involve a substantial number of devices, collecting and sharing substantial amounts of data.
A Vapor IO Kinetic Edge micro data center operating alongside a cellular tower. Edge data centers can come in many sizes and shapes. Image: Vapor IO
In scenarios like this, incurring the cost of a round trip to the cloud does not make sense. Using a small, local data center is much more viable. Of course, this raises the question -- how small is small, and how local is local?
A container deployed by your local 5G antenna is relatively small, and relatively local. A couple of computers running controller software in your basement, connecting to your devices over wi-fi, is smaller, and more local. Devices that come with their own on-board compute and can connect to each other without a central controller are smaller and more local still.
All of the above can be considered edge computing examples, and can be applied to autonomous vehicles, too -- just replace 'basement' with 'trunk'. The smaller you go, the more local you can get, and thus you gain in round-trip times; this is the advantage that edge computing provides. The flip side of this is, the smaller you go, the less compute power you can accommodate, and thus you lose in compute times.
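To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. All the latency and compute figures are illustrative assumptions, not measurements; the point is simply that total response time is the round trip plus the computation, and the two pull in opposite directions as the deployment shrinks.

    # Back-of-the-envelope view of the edge-versus-cloud trade-off.
    # All numbers below are illustrative assumptions, not measurements.

    def response_time_ms(network_rtt_ms: float, compute_ms: float) -> float:
        """Total response time: round trip to the compute, plus the compute itself."""
        return network_rtt_ms + compute_ms

    # Hypothetical figures: the cloud computes faster but sits farther away;
    # the on-vehicle computer is slower but right next to the data.
    scenarios = {
        "cloud data center": response_time_ms(network_rtt_ms=80.0, compute_ms=5.0),
        "5G edge micro DC": response_time_ms(network_rtt_ms=10.0, compute_ms=15.0),
        "on-vehicle computer": response_time_ms(network_rtt_ms=0.5, compute_ms=30.0),
    }

    for name, total in scenarios.items():
        print(f"{name}: {total:.1f} ms total")

Under these made-up numbers, the on-vehicle computer wins despite having the slowest processor -- exactly the dynamic described above.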
Moore's law going strong, but computing on the edge is complex
Moore's law, the empirical rule that the number of transistors on a chip (and with it, compute power) roughly doubles every two years, has been questioned for a while now, but somehow it still seems to be in effect. As a result, an average mobile phone today has more compute power than was available around the globe a few decades back. In 1969, the Apollo astronauts had access to only 72KB of computer memory. By comparison, a 64GB cell phone today carries almost a million times more storage.
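The arithmetic behind that comparison is easy to check; a quick Python snippet, using decimal units for simplicity:

    # 1969 Apollo-era memory versus a 64GB phone, in decimal units.
    apollo_memory_bytes = 72 * 1_000            # ~72KB
    phone_storage_bytes = 64 * 1_000_000_000    # 64GB

    ratio = phone_storage_bytes / apollo_memory_bytes
    print(f"{ratio:,.0f}x")  # ~888,889, or "almost a million times more"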
This is what makes edge computing viable today. The trade-off between compute power and network latency is an essential difference between edge computing and cloud computing. But there is more: although in theory running workloads at the edge should not differ much from running them in the cloud, in practice standards for edge computing are still in flux.
When we talk about the edge, we must somehow differentiate between data consumers and data producers in the network. As on the internet at large, nodes in edge networks are not symmetrical in their capabilities. Edge networks include many IoT devices, which act almost exclusively as data producers. That makes IoT standards key for edge networks.
The thing to know about IoT standards is this: there are a bunch of them. That goes a long way toward explaining why it's so hard to build bridges among different IoT systems, and it hinders edge networks as well. Recent case in point: Google dropping support for otherwise perfectly functional Nest APIs that allowed integration with third parties.
As you might expect, though, the cavalry is assuming the form of open-source initiatives. The Linux Foundation is stepping up to the challenge of standardizing the edge; its newly minted LF Edge seeks to remedy this problem. Arpit Joshipura, the Linux Foundation's general manager for Edge and IoT, said: "In order for the broader IoT to succeed, the currently fragmented edge market needs to be able to work together to identify and protect against problematic security vulnerabilities and advance a common, constructive vision for the future of the industry."
LF Edge is realizing this vision with five projects. These support emerging edge applications in areas such as non-traditional video and connected things that require low latency (up to 20 milliseconds), faster processing, and mobility.
Hardware, software, networking and standards all need to evolve
Edge computing is better suited to applications that depend on short, predictable response times, and autonomous vehicles fall squarely in that category. In fact, applying computation at the edge can reduce the amount of data that needs to be transmitted, cutting response times even further.
Modern AI chips -- specialized hardware to run machine learning algorithms -- can work at the edge, too. With processors like the ones GreenWaves produces, which combine low power consumption with adequate compute power, the content produced by rich data sensors can be analyzed on the fly, locally, rather than sent to the cloud for analysis.
The energy spent analyzing the data locally and sending only the results is far less than the energy needed to transmit the raw data. Effectively, this can be seen as a compression method. Besides saving energy, which means longer battery life and more power left over for compute, this opens the door to a layered architecture.
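To see why local analysis behaves like compression, consider a toy sketch: one uncompressed camera frame versus the handful of detection results an on-board model might actually send upstream. The frame size, detection format, and per-byte radio cost here are all assumptions made for illustration.

    import json

    # A toy comparison: raw sensor data versus locally computed results.
    RAW_FRAME_BYTES = 1920 * 1080 * 3    # one uncompressed RGB camera frame
    ENERGY_PER_BYTE_NJ = 100             # hypothetical radio cost, in nanojoules

    # What on-board inference might actually need to transmit:
    detections = [
        {"label": "pedestrian", "confidence": 0.97, "box": [412, 300, 488, 520]},
        {"label": "car", "confidence": 0.91, "box": [900, 410, 1400, 700]},
    ]
    result_bytes = len(json.dumps(detections).encode())

    saved_mj = (RAW_FRAME_BYTES - result_bytes) * ENERGY_PER_BYTE_NJ / 1e6
    print(f"raw frame: {RAW_FRAME_BYTES:,} bytes, results: {result_bytes:,} bytes")
    print(f"~{RAW_FRAME_BYTES // result_bytes:,}x fewer bytes, ~{saved_mj:,.0f} mJ saved on radio")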
Real-time processing can be done on the vehicle, minimizing network overhead, while data that needs further analysis or permanent storage can be sent to the cloud later. This is essential for autonomous vehicles, which require very fast processing. The notion has not gone unnoticed by vendors such as Dell: in 2017, Dell Technologies proposed a three-tier topology for the computing market at large, dividing it into 'core', 'cloud' and 'edge'.
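A minimal sketch of that layered pattern follows, with placeholder sensors and a hypothetical upload step: the time-critical path runs entirely on the vehicle, while readings queue up for the cloud to collect when connectivity allows.

    import queue
    import time

    # Tier 2/3 traffic: readings waiting to be shipped to core/cloud later.
    deferred_uploads: "queue.Queue[dict]" = queue.Queue()

    def read_sensors() -> dict:
        # Placeholder for camera/lidar/IMU ingestion.
        return {"timestamp": time.time(), "obstacle_distance_m": 12.4}

    def act_on(reading: dict) -> None:
        # Tier 1, the hard real-time path: runs on the vehicle
        # and never waits on the network.
        if reading["obstacle_distance_m"] < 15:
            print("edge decision: brake")

    def drive_loop(ticks: int) -> None:
        for _ in range(ticks):
            reading = read_sensors()
            act_on(reading)                # decide locally, right now
            deferred_uploads.put(reading)  # analyze/store in the cloud, later

    def flush_to_cloud() -> None:
        # Runs opportunistically, e.g. when parked or on good connectivity.
        batch = []
        while not deferred_uploads.empty():
            batch.append(deferred_uploads.get())
        print(f"uploading {len(batch)} readings for offline analysis and storage")

    drive_loop(ticks=3)
    flush_to_cloud()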
Edge or cloud computing is not necessarily an either/or choice. Most likely, we need both for use cases like autonomous vehicles. Image: Dell Technologies
When humans are in the driver's seat, they perform computations on the fly, too. All the information coming in through sound, vision, movement, and our other senses is processed to give us a sense of the environment, and of what we need to do to keep the vehicle on its desired course. In autonomous vehicles, this information is ingested through sensors, and it needs to be processed on the fly as well. Ideally, processing happens on the vehicle itself; failing that, as close to the vehicle as possible.
Minimizing round trips is essential for safety. This is where 5G can help, too -- not only by offering higher network speeds, but also by changing the paradigm: unlike existing communication networks, 5G relies on a multitude of small, local relays. This should improve both response times and resilience in the network.
How exactly to go about this, however, is not something everyone agrees upon. It may be the open-source way, using off-the-shelf, customizable and programmable Raspberry Pi hardware. Or it may be the Google way, using Coral, a Raspberry Pi-like AI board built around Google's Edge TPU, which the company says is ready for business.
This is by no means a settled landscape. We can expect more AI chip providers to step into the game, and some -- Habana, for example -- are doing so already. The takeaway, however, is that edge computing entails a complex ecosystem in which hardware, software, networking, and standards must evolve in lockstep.
We still have some way to go in terms of collecting the data and training the algorithms needed for operational and safe autonomous vehicles, and edge computing holds the key.
From zdnet.com