
The Intelligent Edge: Here and Now

October 29, 2018

Plug the phrase “intelligent edge” into Google, and you’ll be presented with over 134 million results. If you take that as a sign that the intelligent edge is the future of computing, read on.

Gartner predicts that by 2022, 50% of enterprise-generated data will be created and processed outside the traditional, centralized data center or cloud, up from less than 10% in 2018. In response, companies are investing millions – even billions – to address the need for an intelligent edge to analyze data.

Hewlett Packard Enterprise plans to invest $4 billion in Intelligent Edge technologies and services by 2022 to help customers turn their data – from every edge to any cloud – into intelligence. Microsoft is investing $5 billion in IoT, which it says “is ultimately evolving to be the new intelligent edge.” And in August 2018, VMware unveiled its extended edge computing strategy, sharing plans to develop a framework that extends the VMware hybrid and multi-cloud environments to the edge.

Perhaps you’re thinking that investment plans and strategies don’t translate into real-world applications. In fact, the intelligent edge is already a reality…today.

 

What exactly is the intelligent edge?

Intelligent edge software analyzes data from millions of devices

We know the power of the cloud to store, manage, and process data. It has worked well in its modern context for more than a decade. But the proliferation of billions of data-generating devices challenges even cloud computing. When data must be passed from the edge to the cloud, things can quickly bog down. That’s where the intelligent edge comes into play.

When it comes to analyzing data, the intelligent edge is designed to overcome the problems of centralized data by moving the analytics and machine learning capabilities usually associated with a powerful, centralized data warehouse to a distributed edge architecture. The intelligent edge matters because companies need instant access to their data anytime, anywhere to support digital transformation initiatives, which in turn improve the operations, insights and strategies that drive growth in today’s highly competitive world.
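To make that distinction concrete, here is a minimal, purely illustrative sketch in Python (not the Edge Intelligence implementation; the record format and function names are assumptions) of the edge-side pattern: each site reduces its raw device records to a compact summary, and only those small summaries travel to the central view.

    # Illustrative only: aggregate at each edge site, then merge the compact
    # summaries centrally, instead of shipping every raw record to one place.
    from collections import Counter

    def summarize_at_edge(records):
        """Runs at an edge site: reduce raw device records to event counts."""
        return Counter(r["event_type"] for r in records)

    def merge_centrally(edge_summaries):
        """Runs centrally: combine the small per-site summaries into one view."""
        total = Counter()
        for summary in edge_summaries:
            total += summary
        return total

    # Each site ships kilobytes of summary rather than its full raw event stream.
    site_a = summarize_at_edge([{"event_type": "login"}, {"event_type": "error"}])
    site_b = summarize_at_edge([{"event_type": "error"}, {"event_type": "error"}])
    print(merge_centrally([site_a, site_b]))  # Counter({'error': 3, 'login': 1})

The point of the pattern, whatever the actual implementation looks like, is that the expensive reduction happens where the data is born, so only results cross the network.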

 

The failure of data analytics

We saw the need for a transformative approach to analytics many years ago, which is what prompted us to found Edge Intelligence. We anticipated the multiple – and serious – challenges accompanying the proliferation of data and devices.

We grasped the shortcomings. Analytics data wasn’t going to be used the same way as transaction data. Analytics had to be agile, exploratory and highly iterative. It’s not possible to anticipate how data might need to be analyzed in the future. But we knew it would require a combination of broad aggregate analytics, needle-in-the-haystack forensics, standard reporting, and ad hoc discovery – to name a few. We also knew historical data was valuable only if it could be retained indefinitely and easily accessed and combined with numerous other data sources in a highly efficient and affordable manner.

The impact of these challenges was apparent. Too many big data projects were falling flat. Even back in 2014, Analytics Magazine wrote about numerous big data analytics projects failing. In 2016, Gartner pinned the failure rate at about 60%. A year later, it revised that figure to closer to 85%.

While many reasons contributed to the failures, a prime cause was that analytics solutions were remarkably difficult to use and did not scale easily to support extremely large volumes of data. Most data is not being analyzed at all, let alone to its fullest potential. In 2012, an IDC Digital Universe study found that less than 1% of the world’s data was being analyzed.

It’s not that vendors haven’t responded. Recognizing that companies needed a quick, high-performing, scalable way to extract maximum value from their data, a wave of vendors introduced Hadoop/NoSQL data lake solutions. These were supposed to solve the big data problem… but they didn’t. In fact, some say data lakes just muddy the waters.

 

Answering the need: a new approach to big data

As we watched all of this unfold, it also became apparent that the majority of data was not going to be generated by people, but by machines and sensors. And the volume of that data was going to be enormous. In fact, IDC predicts that by 2025, the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes) – ten times the volume of data generated in 2016. It also forecasts that more than a quarter of this data will be real-time in nature, and that real-time IoT data will make up more than 95% of it.

It’s not practical to move this much data to a central site – whether the cloud or a corporate datacenter. The laws of physics, economics and the land (i.e., privacy) preclude this option. Transferring large volumes of data to a central location before performing analytics takes too long, costs too much and raises additional privacy concerns.

 

An intelligent edge for our data-driven world

So what’s needed to solve the most difficult challenges at the intersection of big data, edge computing, machine learning and the Internet of Things (IoT)? The following elements are essential to enable a new globally distributed, big data analytics platform:

 

  • A rethink of the fundamental way data is stored and accessed. The new platform needs to make it possible to analyze data any way you choose – including in unbounded and unanticipated ways – with fast, consistent performance, without having to decide up front which data you might need to analyze and without regular redesign efforts.

 

  • A novel approach to big data analytics. The decades-old, highly controlled and rigid centralized data warehouse isn’t going to scale for the future. The modern approach harnesses the power of a big data warehouse, real-time data streaming and machine learning to create a distributed, federated environment at the edge – all managed centrally. With the intelligent edge, there is no longer a need to “move” data: it can be analyzed instantly, whether it was generated seconds or years ago, without shipping it across geographies to a central location. And the analytics themselves are distributed within edge computing and hybrid cloud environments in a flexible, efficient manner.

 

  • A distributed system that is fast and easy to use. Companies need to be up and running and seeing value in weeks, not months or years. An intelligent edge solution maintains the ease and familiarity of SQL and industry-standard tools, yet can scale to ingest and analyze petabytes of data from millions of devices across the globe, operating autonomously in a zero-touch environment on modest, commodity storage and compute (see the sketch after this list).
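As a toy illustration of that “familiar SQL, data stays in place” idea, the Python sketch below fans the same standard SQL statement out to two hypothetical sites and brings back only the small aggregated result sets. SQLite simply stands in for the per-site stores; it is not the actual platform, and the table, column and site names are made up.

    # Hypothetical sketch of querying data in place: the same SQL runs at every
    # site, and only compact results (not raw rows) return to the requester.
    import sqlite3

    QUERY = ("SELECT device_id, COUNT(*) AS errors FROM events "
             "WHERE level = 'error' GROUP BY device_id")

    def query_site(seed_rows):
        """Stand-in for one edge site: a local store plus local query execution."""
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE events (device_id TEXT, level TEXT)")
        conn.executemany("INSERT INTO events VALUES (?, ?)", seed_rows)
        return conn.execute(QUERY).fetchall()

    results = {
        "site_boston": query_site([("d1", "error"), ("d1", "info"), ("d2", "error")]),
        "site_berlin": query_site([("d3", "error")]),
    }
    for site, rows in results.items():
        print(site, rows)  # e.g. site_boston [('d1', 1), ('d2', 1)]

Because each site answers with an aggregate rather than raw events, the analyst keeps writing ordinary SQL while the heavy lifting stays close to where the data was generated.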

 

This is exactly what guided our design of the Edge Intelligence Analytics Platform.

 

Building on the foundation for the future

Whether you call it a key element of “digital transformation” or subscribe to the idea of “data as the new oil,” we all know that data is an organization’s most valuable asset. Those that can harness its full potential are the ones that will excel in today’s digital, global economy. Fortunately, the intelligent edge is here now, making that simple and affordable.

Investments being made in the intelligent edge are sure to have a transformative impact on how data can be applied across all industries. As the use cases and vertical markets continue to take shape, Edge Intelligence will extend its Big Data Analytics platform, forge new partnerships, and be at the forefront of supporting the highest value digital transformation initiatives.

Ready to see proof that the intelligent edge is here… now? Request a no-obligation demonstration, and we’ll show you how to gain instant insight into vast amounts of globally distributed data.

 

Posted on October 29, 2018 — Kate Mitchell is the CEO and Co-Founder of Edge Intelligence
