The industry's first distributed analytics platform built for edge computing & hybrid cloud environments
As the volume of data generated by connected sources (sensors, machines, people) rises to terabyte and petabyte levels, transferring it across geographies to a central warehouse for analysis costs too much time and money.
We’ve developed a new architecture for analyzing big data, one capable of consuming unlimited amounts of data across geographically dispersed locations without moving any data to a central site for analysis. Querying, event processing and machine learning run directly across distributed data sources as though the data were in a single central location.
Extending Data Analytics to the Cloud... and the Edge
Control where data is collected and stored: at the edge, in the cloud, on-premises, or any combination thereof. You provide the storage and compute resources; we provide the software.
For edge locations, deploy on commodity hardware. Decoupling storage from processing keeps cost, footprint and scale efficient as data grows. For public cloud locations, choose any provider and data-center location.
Create a network topology of arbitrary shape and depth. Analyze data from individual locations, from groups of locations, or across the network as a whole. Locations can be added and removed easily as requirements change.
Simple to Use and Secure for Sensitive Data
A cloud-based or on-premises portal, with support for two-factor authentication, serves as the central point for command, control and query.
The platform runs autonomously. There is no need to design, tune or optimize for performance.
Software is deployed from a central location and data is automatically replicated and synchronized. Data can be automatically aged without any intervention.
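To make the automatic data-aging idea concrete, here is a minimal sketch of a time-based retention policy. The platform applies such policies without intervention; the function, record shape and 30-day threshold below are hypothetical, purely for illustration.

```python
import time

# Hypothetical retention window; the platform would manage this automatically.
RETENTION_SECONDS = 30 * 24 * 3600  # e.g. keep 30 days of data

def age_out(records, now=None):
    """Drop records whose timestamp falls outside the retention window."""
    now = now if now is not None else time.time()
    return [r for r in records if now - r["ts"] <= RETENTION_SECONDS]

records = [
    {"id": 1, "ts": time.time()},                   # fresh record, kept
    {"id": 2, "ts": time.time() - 60 * 24 * 3600},  # ~60 days old, aged out
]
kept = age_out(records)
```

In the platform itself this happens continuously and without operator action; the sketch only shows the effect of an age-based policy.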
The platform is inherently secure. Encryption is applied to all in-flight and at-rest data. All internal communications within the platform are authenticated. Direct access to data from any node is prohibited.
The Ease and Familiarity of SQL, without Limitation
Data management and access are performed with full-featured, standard SQL commands. Customers realize immediate value without developing specialized expertise, and integration with BI, machine learning and DevOps tools is straightforward.
The similarities with conventional SQL databases stop there. We’ve developed a patented, scalable architecture that pre-optimizes data for the fastest response to all query types and formats, supporting both structured and semi-structured data.
The architecture supports highly distributed data, network-speed loading, and fast response to any query type across any geography.
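As an illustration of the standard-SQL claim: the same query text a BI tool would issue against an ordinary database should run unchanged against the platform. The sketch below uses SQLite purely as a stand-in engine; the `readings` table and its columns are hypothetical.

```python
import sqlite3

# SQLite stands in for the distributed engine; only standard SQL is used,
# with no platform-specific dialect. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (site TEXT, temp REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("edge-1", 21.5), ("edge-1", 22.5), ("edge-2", 19.0)],
)

# An ordinary aggregate query; per the text, the platform would answer the
# same SQL across geographically dispersed locations.
rows = conn.execute(
    "SELECT site, AVG(temp) FROM readings GROUP BY site ORDER BY site"
).fetchall()
```

Because nothing here is proprietary SQL, existing BI and DevOps tooling can issue such queries without modification.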
Complex Event Processing for Edge & Fog Computing
Processing data at the edge has additional advantages. Data can be ingested at network speed, with complex event processing functions that take action in near real time.
Available event processing functions include pre-aggregation, matching against blacklists, detecting anomalous conditions and identifying temporal patterns.
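The listed functions can be sketched in a few lines. Below is a minimal, hypothetical illustration of edge-side event processing combining a sliding-window pre-aggregation, a blacklist match and a simple anomaly threshold; none of the names or thresholds reflect the platform's actual API.

```python
from collections import deque

# Hypothetical blacklist of device identifiers.
BLACKLIST = {"device-13"}

def process(events, window=3, threshold=100.0):
    """Flag blacklisted sources and anomalous sliding-window averages."""
    recent = deque(maxlen=window)  # sliding window for pre-aggregation
    alerts = []
    for ev in events:
        if ev["source"] in BLACKLIST:
            alerts.append(("blacklist", ev["source"]))
            continue
        recent.append(ev["value"])
        avg = sum(recent) / len(recent)  # pre-aggregated window average
        if avg > threshold:
            alerts.append(("anomaly", ev["source"]))
    return alerts

events = [
    {"source": "device-1", "value": 10.0},
    {"source": "device-13", "value": 5.0},   # blacklisted source
    {"source": "device-2", "value": 500.0},  # spikes the window average
]
alerts = process(events)
```

In a real deployment these checks would run at ingest time on the edge nodes, so alerts fire in near real time without the raw stream ever leaving the location.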
Data platforms such as Hadoop and Amazon Redshift can be complemented by adding an interoperable fog computing layer at the edge and by applying data reduction techniques, lowering the overall cost of ownership.