Understanding the Edge and the World of Connected Devices

Oct. 29, 2018
The explosion of connected devices creates both challenges and opportunities for companies seeking to understand edge computing. From the DCF Special Report on Edge Computing.

This is the second entry in a four-part Data Center Frontier special report series that explores edge computing from a data center perspective. This entry explores the intersection of the edge and the world of connected devices.

So much has changed as the digital transformation in our industry takes place. Beyond being just a marketing term, becoming a digital entity means supporting advanced use cases, mobile users, and new types of connected services.

Download the full report.

Our users and IT platforms are far more distributed, and much of this ties together with connected devices, users, and IoT. Furthermore, that market continues to grow. Recent research from Accenture found that the Internet of Health Things (IoHT) is already delivering tangible cost savings, but continuous investment is essential. The report goes on to state that by introducing more connectivity, remote monitoring and information gathering, IoHT can encourage better use of healthcare resources, more informed decisions, a reduction in inefficiencies and waste, and the empowerment of health consumers. Estimates from the report show that the value of IoHT will top $163 billion by 2020, with a compound annual growth rate (CAGR) of 38.1 percent between 2015 and 2020.

A growing market is a big reason data center and business leaders are actively investing in IoT solutions. Furthermore, they’re investing in solutions that can support IoT devices and all of the users accessing that data. This is specifically where edge computing fits. In the latest AFCOM State of the Data Center Industry study, we found that 81% of respondents indicated that the purpose for edge compute capacity was to support and enable IoT.

Four in 10 respondents have either deployed or planned to deploy edge compute capacity.

Before we go further, let’s take a bit of a ‘thought’ journey when it comes to the edge.

‘Know Thyself’ and Your Edge Requirements

Requirements assessment in edge data center infrastructure design

Remember, edge solutions aren’t just another data center site. They’re smaller, use-case specific, and designed as dense environments that help you process more services and user data.

Use Case Definition

It’s important to take a step back and look at the long-term strategy of your own organization. Are you growing? Will you be supporting remote users? Are you trying to deliver new types of connected services? If you see that edge is a fit, take the next steps to write up a good business plan and technology strategy to support it. You don’t have to be an edge expert to clearly define your own use case, and there are great providers who can help you on this journey. However, it’s important to align infrastructure and business to ensure that your strategy can take off. From there, it’s key to work with the right people who can bring that vision to life, which brings us to the next point.

The Latency Budget

To an end user, latency is the reason that downloading a movie takes so long, but to a content provider the number of milliseconds it takes to complete a function can be measured in customer dissatisfaction and cost. To a business, latency can also mean lost business or a lost competitive edge.

Even at the speed of light, the round trip to a central data center, for example a facility located in a Tier I market, means an accumulation of transmission costs. A study conducted by ACG Research estimated that caching content locally in a metro population can save approximately $110 million over a five-year period. If we were to apply the same logic to a company running an IIoT parts-tracking application, the hard costs of transmission could be measured, but the associated cost of degraded application performance would be incalculable.
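To see why distance alone eats into the latency budget, here is a minimal back-of-the-envelope sketch. The distances and the roughly 200,000 km/s propagation speed of light in fiber are illustrative assumptions, not figures from the ACG study, and real paths add routing, queuing, and processing delay on top.

# Back-of-the-envelope round-trip propagation delay over fiber.
# Assumes light travels through fiber at roughly 200,000 km/s (about 2/3 of c)
# and ignores routing, queuing, and processing delays, which only add more.

FIBER_KM_PER_MS = 200.0  # ~200,000 km/s, expressed per millisecond

def round_trip_ms(one_way_km: float) -> float:
    """Propagation-only round-trip time for a given one-way distance."""
    return 2 * one_way_km / FIBER_KM_PER_MS

for km in (10, 100, 1000, 3000):  # metro edge site vs. a distant Tier I market
    print(f"{km:>5} km one way -> ~{round_trip_ms(km):.1f} ms round trip (propagation only)")

Even before any equipment touches the packet, a 3,000 km round trip costs roughly 30 ms, which is exactly the overhead edge proximity is meant to remove.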

Security

This is a big one, and it adds a key complication to deploying the edge. Basically: ‘what happens to my data?’ You’re going to have to take some extra time to define your data requirements and management policies. Is the data transient, or will it be stored at the edge? What data is being processed? What is the connectivity control method around the data? Again, all of this will need to be defined and integrated into your own edge solution. That said, you can absolutely still build compliance and regulation into an edge architecture. However, you’ll need to take extra precautions to ensure data security and control. Take into consideration the location of the edge, storage systems at the edge, how the data will be processed, and who will have access. The cool part is that software-defined solutions allow you to integrate with core data center systems and support powerful data locality policies. This can really impact industries like pharma, healthcare, and other regulated organizations.
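As a purely hypothetical sketch of how a data locality policy might be expressed in a software-defined edge stack, the example below checks whether a given edge site is allowed to hold a regulated workload’s data. The policy fields, site attributes, and helper names are illustrative assumptions, not any specific product’s API.

# Hypothetical data-locality policy check for an edge workload.
# The policy fields and site metadata are illustrative; a real software-defined
# platform would enforce rules like these in its own scheduler or policy engine.
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    region: str
    encrypted_at_rest: bool

@dataclass
class DataPolicy:
    allowed_regions: set       # where regulated data may be stored or processed
    require_encryption: bool   # e.g., for pharma or healthcare records
    transient_only: bool       # process at the edge, but never persist there

def site_permitted(site: EdgeSite, policy: DataPolicy) -> bool:
    if site.region not in policy.allowed_regions:
        return False
    if policy.require_encryption and not site.encrypted_at_rest:
        return False
    return True

# Example: a healthcare workload that must stay in approved regions, encrypted.
policy = DataPolicy(allowed_regions={"us-east", "us-central"},
                    require_encryption=True, transient_only=True)
print(site_permitted(EdgeSite("chicago-edge-01", "us-central", True), policy))  # True

The point is not the code itself but that locality, encryption, and retention decisions become explicit, auditable rules rather than assumptions.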

The Latency Index

Oftentimes, we discuss ‘slowness’ or latency without really understanding what this means to the business or the network. So, let’s put a few numbers behind the terminology. Latency is the time required to transmit a packet across a network, and it may be measured in different ways: round trip, one way, and so on. Latency may be affected by any element in the chain used to transmit data: the workstation, WAN links, routers, the local area network (LAN), the server, and ultimately, in the case of very large networks, it may be limited by the speed of light. From there, we have impacts on throughput, the quantity of data sent or received per unit of time, as well as packet loss, which reflects the number of packets lost per 100 packets sent by a host.
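One simple way to put a number on round-trip latency is to time how long it takes to open a TCP connection to a target host. The sketch below is a rough approximation under that assumption; the host name is a placeholder, connection setup includes handshake overhead beyond one pure round trip, and a production tool would collect many samples and report percentiles.

# Rough round-trip latency estimate: time a TCP connection setup to a host.
# This captures the whole path (LAN, WAN links, routers, server stack), not
# just propagation delay, and includes handshake overhead.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established, handshake complete
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [tcp_rtt_ms("example.com") for _ in range(5)]  # placeholder host
    print(f"min {min(samples):.1f} ms / max {max(samples):.1f} ms over {len(samples)} samples")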

So, when latency is high, the sender spends more time idle (not sending any new packets), which reduces how fast throughput grows. A recent study showed that latency has a profound effect on TCP bandwidth. Unlike UDP, TCP has an inverse relationship between latency and throughput: as end-to-end latency increases, TCP throughput decreases. Measurements taken with a latency generator between two PCs connected via fast Ethernet (full duplex) show a drastic reduction in TCP throughput as round-trip latency increases.
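The mechanism is the bandwidth-delay product: with a fixed amount of unacknowledged data allowed in flight, a sender can push at most one window per round trip. The sketch below assumes the classic 64 KB TCP receive window with no window scaling, purely for illustration; the exact ceilings in the original test setup will differ.

# Why TCP throughput falls as round-trip latency rises: with a fixed window,
# the sender can have at most one window of data "in flight" per RTT.
# Assumes the classic 64 KB receive window with no window scaling (illustrative).

WINDOW_BYTES = 64 * 1024  # 65,536-byte receive window

def max_tcp_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on throughput, in megabits per second, for a given RTT."""
    return (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1_000_000

for rtt in (1, 10, 50, 100):
    print(f"RTT {rtt:>3} ms -> ceiling of ~{max_tcp_throughput_mbps(rtt):.1f} Mbps")

At 1 ms the window allows roughly 524 Mbps, while at 100 ms the same window caps the connection near 5 Mbps, regardless of how much raw bandwidth the link has.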

From there, we have another serious issue: packet loss. Packet loss has two major impacts on the speed of data transmission:

  • Packets will need to be retransmitted (even if only the acknowledgment packet was lost and the data packets were delivered)
  • The TCP congestion window will not permit optimal throughput

With 2% packet loss, TCP throughput is between 6 and 25 times lower than with no packet loss.
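To put rough numbers on that interaction, the widely cited Mathis et al. approximation bounds steady-state TCP throughput by MSS / (RTT x sqrt(p)), where p is the packet loss rate. The sketch below uses that formula with an assumed 1,460-byte segment size and a 20 ms round trip; the figures are illustrative, not measurements from the study referenced above.

# Mathis et al. approximation for loss-limited TCP throughput:
#   throughput <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22
# Illustrative only; real connections are also shaped by window limits and timeouts.
import math

MSS_BYTES = 1460  # typical Ethernet maximum segment size
C = 1.22

def mathis_throughput_mbps(rtt_ms: float, loss_rate: float) -> float:
    rtt_s = rtt_ms / 1000
    return (MSS_BYTES * 8 / rtt_s) * (C / math.sqrt(loss_rate)) / 1_000_000

for loss in (0.0001, 0.01, 0.02):  # 0.01%, 1%, and 2% packet loss
    print(f"loss {loss:>7.2%} at 20 ms RTT -> ~{mathis_throughput_mbps(20, loss):.1f} Mbps ceiling")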

Although some packet loss is unavoidable, when it happens consistently, user performance and access will suffer.

Regardless of the situation, keep in mind that packet loss and latency have a profoundly negative effect on TCP bandwidth and should be minimized as much as possible. This is where edge computing solutions specifically play a key role. They help remove much of this latency by bringing key data points and resources much closer to the user.

What the edge can do for you

So, beyond combating the all-important latency challenge, what else can the edge do for you? Here’s the cool part: edge solutions revolve around the use case. Are you trying to deliver an application or an entire virtual desktop? Or are you trying to deliver data that needs to be analyzed close to users or their systems? To that extent, edge use cases can include:

  • Software-defined solutions that can be provisioned based on the needs of your application
  • Branch and micro data centers
  • Hybrid cloud connectivity
  • IoT processing (Azure IoT Edge, for example; see the sketch after this list)
  • Firewall and network security
  • Internet-enabled devices and sensors collecting and analyzing real-time data
  • Connecting entire networks of devices
  • Asset tracking
  • Streamlined research
  • Pharmaceutical, manufacturing, and corporate inventory applications
  • Reduced latency for specific services
  • Support for delivery requirements of latency-sensitive data points and applications
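To make the IoT-processing item concrete, here is a minimal, vendor-neutral sketch of the filter-and-aggregate pattern an edge node typically applies before sending anything upstream. The sensor values, threshold, and the forward_to_cloud stub are hypothetical placeholders rather than part of any particular platform such as Azure IoT Edge.

# Vendor-neutral sketch of edge-side IoT processing: aggregate raw sensor
# readings locally and forward only a compact summary upstream, cutting both
# latency-sensitive round trips and backhaul traffic.
from statistics import mean

def summarize(readings, alert_threshold=75.0):
    """Reduce a window of raw readings to a single summary record."""
    return {
        "count": len(readings),
        "avg": round(mean(readings), 2),
        "max": max(readings),
        "alert": max(readings) > alert_threshold,  # flag out-of-range values locally
    }

def forward_to_cloud(summary):
    # Hypothetical stub; in practice this would publish to an IoT hub or message broker.
    print("forwarding:", summary)

window = [68.2, 70.1, 69.8, 77.4, 71.0]  # e.g., temperature samples from one sensor
forward_to_cloud(summarize(window))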

What does this mean for edge data centers?

The introduction of 5G will accelerate the trend of edge data center networks extending their reach closer to end users than ever before. The evolution of edge computing, advancements in wireless networking ranging from the imminent rollout of 5G to highly efficient mobile connectivity and data center solutions, and access to smarter mobile and wearable devices have all contributed to a rich environment for the proliferation and growth of next-generation solutions and technologies.

Moving forward, edge facilities will house applications that can easily be defined as “mission critical.” With the advent of technologies such as 5G, the distance from the edge to a user group may often be measured in feet and not miles. 5G, coupled with edge proximity to devices and users, can offer some of the most powerful experiences and create amazing competitive advantages.

To that extent, it’s critical to plan and design your edge ecosystem properly.

This Data Center Frontier series, focused on edge computing, will cover additional topics over the coming weeks.

Download the full Data Center Frontier Special Report on Edge Computing, courtesy of BASELAYER

Explore the evolving world of edge computing further through Data Center Frontier’s special report series and ongoing coverage.

About the Author

Bill Kleyman

Bill Kleyman is a veteran, enthusiastic technologist with experience in data center design, management and deployment. Bill is currently a freelance analyst, speaker, and author for some of our industry's leading publications.
