
What is Edge Computing? Definition and Cases Explained.

Edge Computing: computing performed physically or logically as close as possible to where data is created and commands are executed. It offers significant advantages in latency reduction for applications that rely on real-time decision making.
Essentially, it refers to a concept where computing is performed close to the location where data is created and where the resulting actions are taken. This computing could be done on the end device itself – such as a cellphone, a surveillance camera, a drone or an autonomous vehicle – or it may be performed a few hops away, such as on a locally connected server next to a cellphone tower or in a small, local data center. The important takeaway is that edge computing is done geographically or logically as close to the source of the data as possible, in order to reduce network traffic and latency.《Glossary: What is a Data Center?》
It Was Acceptable in the 80s – the First Kind of Edge Computing
The basic concept of edge computing is not new. When networking and computer technology were first introduced in the 1960s, they relied on mainframes – huge, monolithic computing systems placed in an office or laboratory basement – which people accessed through “dumb” terminals connected via local or wide area networks. With mainframes, computing was performed in a central location. The 1980s, however, introduced the PC (Personal Computer), which miniaturized the computer and brought it to our desks and homes. Computing power was now at the “edge” – the physical location where data was input and processed. This was a shift from a centralized to a distributed form of computing.
The Re-Centralization of Computing in the Cloud
By the early 2000s, another shift started to take place. With the rise of the Internet and related networking technologies enabling higher connection speeds and greater bandwidth, it became possible to transmit more data faster to a remote location for processing or storage. Centralizing computing power allowed for greater efficiency by pooling and sharing resources, while remaining accessible from anywhere via the public Internet. Amazon introduced its EC2 (“Elastic Compute Cloud”) service in 2006, and since then more and more computing workloads have moved to remote infrastructure, creating demand for technologies such as SaaS (Software as a Service) and PaaS (Platform as a Service). A simple example familiar to everyone is Google’s Gmail and Drive, which host and store email and files in the cloud and can be accessed from anywhere via a web browser. Many businesses, from small startups to huge corporations, now run some or all of their computing workloads – such as ERP and accounting systems, web services and applications – using public cloud services such as Amazon AWS, Microsoft Azure or Google Cloud. This has become known as the Cloud Computing era, marking a shift back from distributed to centralized computing.《Glossary: What is Cloud Computing?》
The history of centralized vs distributed computing.
So if it has technically been around since the 1980s, why has edge computing suddenly become a hot topic again? The key is the arrival of new technologies such as Artificial Intelligence (AI), Big Data and the IoT (Internet of Things). While the edge computing of the PC era (still in use today) relied on manual data input performed by humans, these new technologies automate both the input and the resulting action, and are controlled and managed from a central location in the cloud. However, they also require real-time decision making, which is only possible when computing is done on or close to the end device itself.

Too Much Data, Not Enough Time – Bringing Back Decentralization
Although the centralized nature of cloud computing provides excellent advantages in terms of cost and resource efficiency, flexibility and convenience, it also comes with some disadvantages. Since all this computing and storage is performed remotely, data needs to travel back and forth across the public Internet or other networks, from its point of origin to a centralized data center (the “cloud”) where it is processed and stored, and then back again to the user. A small amount of data can be transmitted quickly – but because of the speed of light, there will still be a delay of at least a few milliseconds. More commonly, a larger amount of data will take longer – from seconds to even minutes – since there is only a limited amount of bandwidth available across the network to transmit and receive it. In many situations, this delay is a perfectly acceptable trade-off for the lower cost and flexibility of cloud computing. However, there are now more and more situations where this delay is a big problem.
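To get a feel for the numbers, here is a rough back-of-the-envelope sketch in Python. The distances, bandwidth and payload size are illustrative assumptions, not measurements of any particular network:

```python
# Illustrative figures only - not measurements of any real network.
SPEED_IN_FIBER_KM_PER_S = 200_000   # light travels at roughly 2/3 c in optical fiber

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

def transfer_time_s(payload_mb: float, bandwidth_mbps: float) -> float:
    """Seconds needed to push a payload through a link of the given bandwidth."""
    return payload_mb * 8 / bandwidth_mbps

print(f"cloud 2,000 km away: {round_trip_ms(2000):.1f} ms round trip")   # ~20 ms
print(f"edge node 5 km away: {round_trip_ms(5):.2f} ms round trip")      # ~0.05 ms
print(f"100 MB over 100 Mbps: {transfer_time_s(100, 100):.0f} s")        # ~8 s
```

Even in this best case, the remote data center adds tens of milliseconds before any processing happens, and bulk data transfer adds whole seconds on top of that.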
Edge computing is adopted in situations where any delay is unacceptable.
When Every Millisecond Counts
Let’s look at one interesting example – computer vision, a kind of AI (Artificial Intelligence) used in video surveillance and facial recognition systems (such as face ID building access systems, vehicle license plate recognition systems in parking lots, or even city-wide CCTV systems used by law enforcement). In order to recognize a human face – including gender or age – and match it to a database of existing records (such as an employee photo or a wanted criminal), the system relies on a machine learning model. Although the model is generated and tested before the system is deployed, the computer vision system also needs to run it every time a human face is recorded, to perform real-time matching and recognition. This is called inferencing, and it requires a certain amount of processing power. It is easy to run this kind of workload in the cloud, but the delay of sending data back and forth across the network will be noticeable – especially if you need to stand for an extra second in front of the face ID terminal. And that’s just for one camera – if there are tens or even hundreds of cameras in a large building or a whole city, and in busy areas such as train stations or airports, the data of thousands of people will need to be sent back and forth across the network, leading to either longer delays or extremely high network connection costs, as more and more bandwidth is required to send and receive terabytes of video or image data.《Glossary: What is Machine Learning?》
Edge computing is ideal for facial recognition systems.
That’s where edge computing is more beneficial – these workloads can be processed locally to minimize both delay and network bandwidth usage, either on a small device connected directly to the video camera, or on an edge server located on the local area network. The data is processed immediately, results are delivered back to the device or application for action, and only the relevant data is sent on to the cloud, reducing bandwidth needs.
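In sketch form, such an edge inferencing loop might look like the following. The camera, model and cloud endpoint here are hypothetical placeholders – a real deployment would use an actual inference runtime (such as ONNX Runtime or TensorRT) and a camera SDK – but the pattern is the point: raw video stays local, and only small results travel upstream.

```python
import time
from dataclasses import dataclass

@dataclass
class Match:
    person_id: str
    confidence: float

def capture_frame() -> bytes:
    """Placeholder: grab one frame from the locally connected camera."""
    return b"<raw frame bytes>"

def run_model_locally(frame: bytes) -> list:
    """Placeholder: on-device inferencing against the stored face database."""
    return [Match(person_id="employee-042", confidence=0.97)]

def send_to_cloud(event: dict) -> None:
    """Only small, relevant results leave the edge - never the raw video."""
    print("uploading event:", event)

frame = capture_frame()                      # raw video data stays local
for match in run_model_locally(frame):       # inferencing happens at the edge
    if match.confidence > 0.9:               # forward only confident matches
        send_to_cloud({"person": match.person_id,
                       "confidence": match.confidence,
                       "timestamp": time.time()})
```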
Security and Privacy at the Edge
Apart from the issue of latency, another reason edge computing is preferable in many situations is data security and privacy. Take the example of a smart home, which will often feature an online voice assistant device such as an Amazon Echo. This kind of device monitors sounds within your home and – like the previous example – uses a machine learning model to detect voice commands and match them to particular actions (such as turning on the lights or performing an internet search for tomorrow’s weather). It’s unlikely the homeowner would want these voice commands – and all the other private audio recorded in their house – to be sent over the internet to a remote location for matching and recognition, where the data could potentially be hacked or exposed, or even sold to other companies. Therefore, it is preferable that the processing of these voice commands is done on the device, at the edge, with the recording deleted once the command is executed.《Learn More: AIoT Application – 「Do You Know About AIoT? The Practical Applications of Combining Artificial Intelligence with IoT」》
Edge computing is ideal where data privacy is paramount, such as for smart home devices.
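A minimal sketch of that on-device flow is shown below. The recognizer is a hypothetical stand-in for an embedded speech model; the key point is that the raw audio never leaves the device and the recording is discarded once the command has been executed.

```python
from typing import Optional

KNOWN_COMMANDS = {
    "turn on the lights": lambda: print("lights on"),
    "tomorrow's weather": lambda: print("fetching forecast..."),
}

def recognize_locally(audio: bytes) -> Optional[str]:
    """Placeholder: an on-device model maps audio to a known command phrase."""
    return "turn on the lights"

def handle_utterance(audio: bytes) -> None:
    command = recognize_locally(audio)       # matching happens at the edge
    if command in KNOWN_COMMANDS:
        KNOWN_COMMANDS[command]()            # trigger the mapped action
    del audio                                # drop the recording - nothing is uploaded

handle_utterance(b"<raw microphone samples>")
```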
However, distributing and decentralizing compute also has a potential downside for security, since each location on a network represents a vulnerability that could be exploited by hackers. While the centralized nature of cloud computing allows computing devices to be managed and protected better and threats to be detected more easily, edge computing is more challenging, with more types of devices in remote locations that may have less robust security protections. Therefore, any company implementing an edge computing strategy must also treat security as a serious consideration, ensuring that all devices on the network are maintained and updated in a unified manner with new security patches, and feature robust protection measures such as data encryption and firewalls.
Edge Computing and 5G – Perfect Partners
Edge computing will also be broadly adopted in the next generation of 5G cellular networks. A network architecture known as MEC (Multi-access Edge Computing / Mobile Edge Computing) enables cloud computing capabilities and an IT service environment to be placed at the edge of a cellular network, such as at cellular base stations or other RAN (Radio Access Network) edge nodes. This allows the many different applications that rely on the high bandwidth and high transmission speeds of 5G to be processed as close as possible to the user, at the periphery of the cellular network, in order to meet the strict latency and reliability requirements of 5G while also helping network operators to reduce their network backhaul costs.

Glossary:
What is 5G?
What is MEC (Multi-access Edge Computing)?

For example, autonomous drones deployed for package delivery, bridge inspection or crop dusting will be enabled by combining 5G wireless radio communications technology with edge computing. As an aerial vehicle, a drone needs both low-latency, highly reliable wireless communications to send and receive large volumes of data, and artificial intelligence capabilities (built on machine learning technology) to make independent decisions in real time, based on data collected both from its surrounding environment and from the remote systems and applications managing it. However, placing enough computing power onboard the drone to run these machine learning models would make it heavier, reducing battery capacity and flight time. Therefore, some or all of the computing workload can instead be performed on an edge server and the results immediately relayed back to the drone using a 5G service category known as URLLC (Ultra-Reliable and Low Latency Communications), ensuring the drone can make decisions immediately, without the delays or drop-outs that could be catastrophic for an aerial vehicle.
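In pattern terms, the offload might look like the sketch below. The endpoint name, payload format and timeout are assumptions made for illustration; the essence is that heavy inferencing runs on a nearby edge server, with a simple onboard fallback if the link ever fails.

```python
import json
from urllib import error, request

EDGE_SERVER = "http://edge-node.local:8080/infer"  # hypothetical MEC endpoint

def offload_decision(sensor_data: dict, timeout_s: float = 0.05) -> str:
    """Ask the nearby edge server for a flight decision; hover if no answer in time."""
    try:
        req = request.Request(
            EDGE_SERVER,
            data=json.dumps(sensor_data).encode(),
            headers={"Content-Type": "application/json"},
        )
        with request.urlopen(req, timeout=timeout_s) as resp:
            return json.loads(resp.read())["action"]
    except (error.URLError, TimeoutError):
        return "hover"  # safe onboard fallback if the edge link ever fails

print(offload_decision({"altitude_m": 30, "obstacle_distance_m": 4.2}))
```

Note the tight 50 ms timeout: the scheme only works because the MEC server sits one or two hops away on the radio access network, not across the public Internet.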

Learn More:
eMBB Solution:《An Immersive VR Stadium Experience with 5G eMBB Technology》
mMTC Solution:《A Smart City Solution with 5G mMTC Technology》
URLLC Solution:《An Autonomous Vehicles Network with 5G URLLC Technology》
Autonomous delivery drones will depend on edge computing technology.
Hardware Built for the Edge – Small, Efficient, Flexible
Now that we know why edge computing is necessary, how do we go about implementing it? Servers built for cloud computing are large and power-hungry, designed to deliver as much performance as possible, and usually require a cool, air-conditioned, dust-free environment. Since they are deployed in huge numbers, they are usually optimized for a single purpose – such as storage, CPU computing, or GPU acceleration – and are designed for huge cloud data centers where space and power are more readily available.

Edge computing, on the other hand, needs to be performed close to the user's location in downtown or urban areas – in an office cabinet, for example, or at the base of a cell phone tower. Space is therefore restricted, and the power supply may be limited or expensive. In addition, there may be no air conditioning to maintain a perfectly cooled environment, and since the server will usually be deployed as a single unit, it needs to offer a good balance of compute, storage, networking, expansion and GPU support.
GIGABYTE has a Solution – Edge Servers for Every Situation
GIGABYTE now offers customers a range of servers specifically designed for edge computing, such as our H242 Series 2U 4-node edge server. These servers are built for edge computing applications such as MEC (Multi-access Edge Computing / Mobile Edge Computing) used to build 5G networks, featuring a compact form factor (short depth and height) and lower power consumption requirements, while still offering capable computing performance (with AMD EPYC or Intel Xeon processors) to run demanding virtualized workloads at the edge.

GIGABYTE’s edge server systems also feature a good balance of memory capacity, storage and other expansion capabilities (including PCIe Gen 4.0 support to utilize the latest high-speed networking technologies), and even accelerator card support (such as for NVIDIA’s T4 GPGPU) to run inferencing workloads such as computer vision or speech recognition models, supporting AI-enabled applications and services.
《Recommended for you: High Density Server H242-Z10 & H242-Z11》
GIGABYTE’s H242 Series multi-node server for edge computing.
Conclusion
Although it is already in use today, edge computing will play an even greater role in enabling the revolutionary new technologies on the near horizon. The day when you can effortlessly stream an 8K video on your mobile phone, or step into an autonomous taxi for your ride home, is no longer on the edge of our imagination – in a few years it will be reality. And that will be thanks to edge computing, made possible by GIGABYTE and our industry partners.
RELATED ARTICLES
Over 100,000 Pokémon Fanatics Gather to Catch ‘Em All - See How GIGABYTE’s High Density Servers Help Maintain Law & Order

Large-scale events can lead to a sudden surge in crowds, creating cellular network congestion. Even if the user capacity of a cell tower is upgraded, the network operator may still be unable to cope with an abrupt increase in demand. In 2019, the Industrial Technology Research Institute (ITRI) therefore designed and built a “Private Cell Mobile Command Vehicle”, which can deploy a pre-5G private cellular network to avoid the problem of commercial network traffic jams. The vehicle provides the New Taipei City Police Department with smooth, uninterrupted cellular network service, allowing police staff to remotely monitor real-time footage of large-scale events and deploy police resources where needed, increasing the efficiency of event security and safety. GIGABYTE’s H-Series High Density Servers help support ITRI’s “Private Cell Mobile Command Vehicle” by reducing the complexity of back-end infrastructure – each server combines computing, storage and networking into a single system, allowing resources to be centralized and shared across different applications. The servers also optimize the use of time and manpower by combining and centralizing applications to simplify management procedures.

[Digital Tour] Find Your 5G Edge

GIGABYTE will illustrate the key functions and applications that make 5G a highly anticipated technology evolution, and the pivotal role MEC (Multi-access Edge Computing) plays in bringing 5G into our lives. Let’s take a digital tour to experience the splendid 5G future, enabled by GIGABYTE’s edge computing solutions!

Seize New Business Opportunities with Practical Applications of 5G

Recently, 5G communications technology has been frequently talked about in relation to various major industries – it even seems that in order to keep up with technology trends, everything should be connected with 5G! When discussing 5G, however, key features such as high bandwidth, low latency and high connection density must be mentioned. As the coming of 5G will revolutionize our future lives, let’s get a clearer understanding of 5G through some real-world examples – and see what key role GIGABYTE plays in this next-generation communications technology!