Edge storage: What it is and the technologies it uses



Large, monolithic datacentres at the heart of enterprises could give way to hundreds or thousands of smaller data stores and devices, each with their own storage capacity.

The driver for this is organisations moving their processes to the business “edge”. Edge computing is no longer simply about putting some local storage into a remote or branch office (ROBO). Rather, it is being driven by the internet of things (IoT), smart devices and sensors, and technologies such as autonomous cars. All these technologies increasingly need their own local edge data storage.

Industry analysts Gartner confirm business data is moving from the datacentre to the cloud and the edge. The firm identifies four use cases for edge storage: distributed clouds and datacentres, data processing at the edge, content collaboration and access, and digital ingest and streaming.

This isn’t an exhaustive list – applications such as autonomous vehicles that sit outside enterprise IT are driving edge computing too. Meanwhile, industrial processes, sensors and IoT are all drivers that push more computing to the edge.

The market for edge storage is being shaped by changes in storage technology and by applications for edge computing. Increasingly, edge devices need persistent storage that is robust and secure, but applications also demand performance that goes beyond the SD or micro-SD cards found in early generation IoT devices and single board computers.

A few years ago, edge computing was most closely associated with remote or branch office (ROBO) deployments. For storage, ROBO was about providing at least some level of backup or replication to secure data, especially if a device failed, and caching or staging data before sending it to the datacentre for further processing. This batch-based approach worked well enough in retail and other environments with fairly predictable data flows.

But adding storage by way of a networked PC, a small server or a NAS device only really works in office or back office environments, because they are static, environmentally stable and usually reasonably secure.

Today’s business edge is much larger and covers much more hostile operating environments. These range from the factory floor, with edge devices attached to manufacturing equipment and power tools, to cameras and other sensors out in the environment, to telecoms kit and even vehicles.

Enrico Signoretti, an analyst at GigaOm, describes these environments as the industrial edge, remote edge or far edge. Storage needs to be reliable, easy to manage and – given the number of devices firms might deploy – cost-effective.

Edge applications require storage that is robust and secure, both physically and virtually – often through encryption – and able to withstand temperature fluctuations and vibration. It needs to be persistent, but draw little power. In some cases, it also needs to be fast, especially where firms want to apply artificial intelligence (AI) to systems at the edge.

According to Alex McDonald, Europe, Middle East and Africa (EMEA) chair at the Storage Networking Industry Association (SNIA), the “storage and memory product technologies that provide residences for edge-generated data include SSDs, SSD arrays, embedded DRAM [dynamic random-access memory], flash and persistent memory”.

In some cases, storage and compute systems need to be adapted to operate in a much wider range of environments than conventional IT. This requires physical robustness and security measures. Single-board computers, for example, often rely on removable memory cards. Although encryption protects against data loss, it will not prevent someone physically removing the memory module.
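Encryption at rest is the standard mitigation for a removed card. As a rough illustration – the function names and file layout are assumptions for this sketch, and a production device would keep the key in a TPM or secure element rather than anywhere near the card – a device might seal each record before it is written:

```python
# Illustrative sketch: sealing data at rest so a removed card yields only
# ciphertext. Assumes the third-party "cryptography" package; in practice
# the key would live in a TPM or secure element, not on the card itself.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def write_sealed(path: str, key: bytes, payload: bytes) -> None:
    nonce = os.urandom(12)  # AES-GCM requires a unique nonce per write
    with open(path, "wb") as f:
        f.write(nonce + AESGCM(key).encrypt(nonce, payload, None))

def read_sealed(path: str, key: bytes) -> bytes:
    with open(path, "rb") as f:
        blob = f.read()
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)  # raises if tampered
```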

“Ruggedised and enhanced specification devices will support environments that require additional safeguarding in embedded applications, from automotive to manufacturing,” says McDonald.

Organisations working with edge computing are also looking at storage class memory (SCM), NVMe-over-fabrics, and hyper-converged infrastructure (HCI).

Hyper-converged infrastructure, with its on-board storage, is perhaps best suited to applications that may need to scale up in the future. IT teams can add HCI nodes relatively easily – even in remote locations – without adding significant management overheads.

But for the most part, edge computing’s storage requirements are relatively small. The focus is not on multiple terabytes of storage, but on systems that can handle time-sensitive, “perishable” data that is then analysed locally, passed on to a central system – usually the cloud – or both.

This requires systems that can act on data immediately – running analytics, for example – before passing it on to a central store or process. This data triage needs to be nimble and, ideally, close to the compute resources. This, in turn, has prompted interest in NVMe-over-fabrics and storage-class memory.
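The triage pattern itself is simple. In the sketch below – where the summary fields, threshold and upload endpoint are all illustrative assumptions, not any vendor’s API – a window of raw readings is reduced to a compact record before anything crosses the network:

```python
# Illustrative data-triage sketch: raw, perishable readings stay on local
# storage; only a compact summary is forwarded to the central store.
import json
import statistics
import urllib.request

def summarise(readings: list[float], threshold: float) -> dict:
    return {
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "over_threshold": sum(1 for r in readings if r > threshold),
    }

def forward(summary: dict, endpoint: str) -> None:
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the raw readings never leave the device
```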

And, by putting some local storage into the device, systems designers can minimise one of edge computing’s biggest challenges – its demands on bandwidth.

Organisations that want to add data storage to their edge systems do so, at least in part, to reduce demands on their networks and centralised datacentres, or to reduce latency in their processing.

Some firms now have so many edge devices that they risk overwhelming local networks. Although the idea of decentralised computing connected to the cloud is attractive, in practice network latency, the possibility of network disruption and even cloud storage costs have prompted device manufacturers to include at least some support for local storage.
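A common pattern here is store-and-forward: every record lands in durable local storage first, then moves upstream when the link allows, so an outage costs bandwidth and time rather than data. A minimal sketch, assuming a local SQLite file as the buffer and an illustrative upload endpoint:

```python
# Illustrative store-and-forward sketch: records are queued in a durable
# local SQLite file first, then drained upstream when the uplink is usable.
# The database path, table and endpoint are assumptions for the example.
import json
import sqlite3
import urllib.request

db = sqlite3.connect("buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, body TEXT)")

def enqueue(record: dict) -> None:
    db.execute("INSERT INTO outbox (body) VALUES (?)", (json.dumps(record),))
    db.commit()

def drain(endpoint: str) -> None:
    rows = db.execute("SELECT id, body FROM outbox ORDER BY id").fetchall()
    for row_id, body in rows:
        try:
            urllib.request.urlopen(urllib.request.Request(
                endpoint, data=body.encode(),
                headers={"Content-Type": "application/json"}))
        except OSError:
            return  # uplink down; leave the rest queued for the next attempt
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()
```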

A growing number of vendors also make edge appliances that work alongside (or, more accurately, just behind) IoT devices to gather data from them. Some are data transfer devices, such as Google’s Edge Appliance, while others take on some of the AI processing themselves, offloading it from the network.

By doing this, systems architects can provide a more robust form of edge computing. More data is processed near to the sensor or device, decisions can be made more quickly via analytics or AI, and the amount of data sent to the corporate LAN or cloud service can be vastly reduced.

Adding storage to the edge, directly or via appliances, also allows for replication or batch-based archiving and makes it easier to operate with intermittent or unreliable connections, especially for mobile applications. Jimmy Tam, CEO of Peer Software, says that some vendors are integrating hard disk drives in combination with SSDs to allow devices to store larger data volumes at a lower cost.

“In the case where the edge storage is mainly focused as a data ingestion platform that then replicates or transmits the data to the cloud, a larger proportion of storage may be HDD instead of SSD to allow for more data density,” he says.
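A minimal sketch of that kind of tiering policy – the mount points and one-hour cutoff are assumptions for illustration, not Peer Software’s implementation – might periodically demote cold files from the SSD ingest tier to the denser HDD tier:

```python
# Illustrative SSD/HDD tiering sketch: fresh ingest lands on the fast tier,
# and files older than a cutoff are demoted to the cheaper, denser HDD tier
# to await replication to the cloud. Paths and cutoff are assumptions.
import shutil
import time
from pathlib import Path

SSD_TIER = Path("/mnt/ssd/ingest")    # fast tier for newly ingested data
HDD_TIER = Path("/mnt/hdd/archive")   # dense tier awaiting replication
MAX_AGE_SECONDS = 3600                # demote anything older than an hour

def demote_cold_files() -> None:
    cutoff = time.time() - MAX_AGE_SECONDS
    for f in SSD_TIER.iterdir():
        if f.is_file() and f.stat().st_mtime < cutoff:
            shutil.move(str(f), str(HDD_TIER / f.name))
```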

It seems unlikely that any single storage technology will dominate at the edge. As Gartner notes in a recent research report: “Although edge storage solutions possess common fundamental principles, it is not a single technology, because it needs to be tailored to the specific use cases.”

Nonetheless, Gartner expects to see more data storage technology being “edge ready”, including datacentre technologies that work better with the demands of the edge.

IoT and other edge vendors will work to improve storage performance, especially by moving to server and workstation-class storage, such as flash, NVMe and NVMe-over-fabrics, as well as storage-class memory, rather than removable card formats such as SD or micro-SD.

But the real focus looks set to be on how to manage ever larger numbers of storage-equipped devices. Developments such as 5G will only increase the applications for edge computing, so firms will look for storage that is not just rugged but self-healing and, at least in normal operations, can largely manage itself.
