The HyperScale Challenge

Many businesses now rely on machine-generated data. This data requires enterprises to evaluate and adopt new storage architectures that can support the ever-accelerating velocity of data ingest, the volume of data stored, and the variety of object data types. At the same time, these architectures must scale storage capacity to exabytes of data and work with the latest analytical solutions.

 

From The Extreme Edge To The Cloud


Utilizing YottaStor’s patented process, data moves from the Edge to the Cloud and the Enterprise, and back to the Edge again. YottaStor’s Edge Solutions are integrated with the major Public Cloud providers.

80% of Future Enterprise Data Growth
will Consist of Machine Generated Data

Machine Generated Data
Requires a New Workload Model

 

- Data Ingest
- Content Streaming
- Content Collaboration
- Content Dissemination
- Analytical Frameworks

 

YottaDrive Solves the HyperScale Challenge

Machine Generated Data
Collect & Analyze

YottaStor is focused on machine-generated data, the fastest-growing segment of Big Data. Machine-generated data is overwhelming traditional, POSIX-based architectural designs and rendering them obsolete. Commercial and Federal enterprises are spending hundreds of millions, if not billions, of dollars deploying advanced sensor technologies that create and capture machine-generated data. The YottaDrive is a patented, purpose-built large data object storage service that stores this data economically and exploits it for business insight.

 
 

YottaStor's Big Data Design Principles
 

Capture & Store Data Once

Write data to disk once during its lifecycle. The accelerating velocity at which data is created, together with the sheer magnitude of data being managed, will overwhelm network capacity, rendering impractical any attempt to move data electronically.
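
A back-of-envelope calculation illustrates the point. The sketch below is purely illustrative (it is not part of the YottaDrive service) and shows how long it takes to move a petabyte over a dedicated 10 Gb/s link, ignoring protocol overhead:

    def transfer_days(data_tb: float, link_gbps: float) -> float:
        """Days needed to move `data_tb` terabytes over a `link_gbps` Gb/s link."""
        data_bits = data_tb * 1e12 * 8           # terabytes -> bits
        seconds = data_bits / (link_gbps * 1e9)  # bits / (bits per second)
        return seconds / 86_400                  # seconds -> days

    print(f"{transfer_days(1_000, 10):.1f} days")  # moving 1 PB at 10 Gb/s: ~9.3 days

At daily ingest rates measured in tens or hundreds of terabytes, repeatedly copying data across the network simply cannot keep up, so data should be written where it will live.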
 

Automate Data Movement to Less Expensive Storage

The storage system must continually migrate data to less expensive storage, letting customers lower their overall storage cost. The key metric becomes cost/GB/month. Once this metric is established for a specific organization, the year-to-year planning process focuses on reducing it.
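
A minimal sketch of that planning metric, using hypothetical tier names and prices rather than YottaStor or cloud-provider rates:

    tiers = {
        # tier: (share of total data, cost in $/GB/month) -- hypothetical figures
        "hot":     (0.10, 0.023),
        "warm":    (0.30, 0.010),
        "archive": (0.60, 0.004),
    }

    blended = sum(share * cost for share, cost in tiers.values())
    print(f"Blended cost: ${blended:.4f}/GB/month")  # $0.0077/GB/month here

    # Automated migration shifts share from "hot" toward "archive", which is
    # what drives this number down in the year-to-year planning process.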
 

Design for ever-increasing Data Variety, Data Volume & Data Velocity

The storage system must scale in three dimensions: the data types it supports, which will evolve and extend over time; the daily ingest requirement, which will continue to increase; and the overall capacity of the system, which will expand at an accelerating rate.
 

Deploy a Federated, Global Namespace

Adopting namespace technologies that support billions of objects in a single, federated namespace eliminates the cost and complexity of managing many separate namespaces.

 

Process Data at the Edge

The only point in the architecture at which an organization can affordably process data is during ingest. This requires co-locating processing and storage, so that users can create and capture the metadata required to access the data in the future.
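
As a minimal sketch, assuming processing runs alongside the storage that receives the sensor data, metadata capture at ingest might look like the following; the fields are illustrative, not a YottaDrive schema:

    import hashlib
    import os
    import time

    def ingest_metadata(path: str) -> dict:
        """Build a metadata record for an object at ingest time."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "object_name": os.path.basename(path),
            "size_bytes": os.path.getsize(path),
            "sha256": digest,
            "ingest_time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }

    # The record is stored alongside the object so it can be found and accessed
    # later without re-reading the raw sensor data.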
 

Access through Web Services

Accessing storage through web services provides a level of application abstraction that is key to allowing operational optimization of the storage cloud without impacting the application layer. One important benefit of this capability is the elimination of the “location awareness” that applications must maintain in POSIX-compliant storage environments.
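
As an illustration of this abstraction, the sketch below uses the AWS S3 API via boto3 as a stand-in for any S3-compatible object store; the endpoint, bucket, and key are hypothetical and not part of YottaDrive’s documented interface:

    import boto3  # AWS SDK for Python; any S3-compatible client works the same way

    # Hypothetical endpoint, bucket, and key -- not YottaDrive-documented names.
    s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

    # Applications name objects by bucket and key, not by file-system path, so the
    # storage layer can place and move data without the application noticing.
    obj = s3.get_object(Bucket="sensor-archive", Key="2024/05/frame-000123.raw")
    payload = obj["Body"].read()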

 

Eliminate RAID

The data durability requirement is actually greater than in traditional storage environments. New approaches such as replication and erasure coding must be embraced to meet this requirement.
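
The arithmetic below compares the raw-capacity overhead of triple replication with a common 10+4 erasure-coded layout; the scheme is illustrative, not a statement of what YottaDrive uses:

    def overhead(data_shards: int, parity_shards: int) -> float:
        """Raw-to-usable capacity ratio for an erasure-coded stripe."""
        return (data_shards + parity_shards) / data_shards

    print("3x replication:      3.00x raw capacity per usable GB")
    print(f"10+4 erasure coding: {overhead(10, 4):.2f}x raw capacity per usable GB")

    # Both layouts survive multiple simultaneous failures, but erasure coding does
    # so at roughly half the raw-capacity cost of triple replication.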


 

Adopt Self-Healing, Self-Replicating Technology

In order to reduce cost/GB/month, the technology must be self-healing and self-replicating. These capabilities substantially reduce the number and cost of FTEs required to manage the storage system.

Analytics at HyperScale

Scale Changes Everything. The constantly accelerating ingest velocity means the storage system must seamlessly expand without operational disruption. It also renders infeasible traditional extract, transform, and load (ETL) architectures that move data from primary storage into a separate analytical environment for processing by traditional enterprise analytic engines.

 