Storidge’s driverless storage focuses on simplifying the user experience of managing storage. Our software turns commodity x86 hardware or virtual instances into convenient, modular units of infrastructure that can rapidly scale out to add capacity and performance. Resources across a cluster of nodes are aggregated into a common pool, which provides the building blocks for a durable data layer. Storage is provisioned from these elastic building blocks to construct data volumes for applications on demand. The data volumes are managed on top of a distributed storage platform.
Data volumes are accessible from any node of a cluster: an embedded storage orchestrator automatically moves a data volume to the container when it is rescheduled. Storage is easily provisioned through application profiles and created automatically by any supported scheduler. Like containers, Storidge data volumes are designed to be self-contained and optimized for low latency, enabling performance guarantees to be set per container or per application. This logical and performance isolation allows data volumes to be managed as independent objects. Instead of requiring administrators to manage infrastructure, embedded storage expertise translates application intent into persistent storage on demand, in seconds.
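An application profile captures intent such as capacity, redundancy, and performance limits, which the platform translates into a provisioned volume. The fragment below is an illustrative sketch only; the field names and values are assumptions, not Storidge's exact profile schema.

```yaml
# Hypothetical application profile: field names are illustrative assumptions.
capacity: 25        # volume size in GB
level: 2            # number of replicas to maintain for durability
provision: thin     # allocate blocks on demand rather than up front
iops:
  min: 100          # guaranteed floor, enforced by QoS
  max: 10000        # ceiling, isolating this workload from neighbors
```

A scheduler or plugin would reference a profile like this by name when requesting a volume, so every instance of an application gets storage with the same declared characteristics.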
The data plane presents data volumes to applications running in containers, VMs, or natively. All data access requests are processed through the data plane, whose IO path is optimized to drive low latency and support the most demanding applications. This enables consistent performance guarantees to be delivered through Quality of Service (QoS) features that support workload isolation, critical for multi-workload, multi-tenant environments. IO path optimization means our efficient stack can effectively leverage high-performance flash (NVMe), capacity flash (SAS/SATA), or capacity drives (HDDs) in multiple storage tiers, on bare metal or in the cloud. The Storidge approach delivers the best of both worlds: the operational simplicity of software-defined storage on commodity x86 servers, and the high performance and low latencies of traditional storage arrays.
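One common way to enforce a per-volume IOPS ceiling of the kind QoS features provide is a token bucket: each volume accumulates "IO credits" over time and an operation proceeds only if a credit is available. This is a minimal sketch of that idea, not Storidge's actual QoS implementation.

```python
import time

class IopsLimiter:
    """Token-bucket sketch: caps a volume at `max_iops` operations per second,
    isolating its workload from other volumes sharing the same media."""

    def __init__(self, max_iops, now=time.monotonic):
        self.max_iops = max_iops
        self.tokens = float(max_iops)  # start with one second's budget
        self.now = now                 # injectable clock for testing
        self.last = now()

    def try_io(self):
        """Return True if an IO may proceed now, False if it must be throttled."""
        t = self.now()
        # Refill in proportion to elapsed time, capped at one second's worth.
        self.tokens = min(self.max_iops,
                          self.tokens + (t - self.last) * self.max_iops)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Giving each volume its own limiter is what makes the guarantee per container or per application rather than cluster-wide.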
Data is replicated synchronously to other nodes for durability. If a node fails, the data volume is quickly rebuilt to restore the number of replicas. This rebuild is transparent to orchestration systems: it runs concurrently even while a containerized application and its volume are rescheduled to another node. The unique Storidge design distributes compute and IO loads across all nodes of the cluster, so recovery is fast and has minimal impact on applications running in the cluster.
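The durability story above can be summarized in two rules: a write is acknowledged only after every replica has it, and a node failure triggers re-replication onto a surviving node. The toy model below sketches those two rules under simplified assumptions (a made-up placement policy, whole-volume copies); it is not Storidge's actual replication protocol.

```python
class Cluster:
    """Toy model of synchronous replication and rebuild-on-failure."""

    def __init__(self, nodes, replicas=2):
        self.nodes = {n: {} for n in nodes}  # node -> {volume: data}
        self.replicas = replicas
        self.placement = {}                  # volume -> list of replica nodes

    def write(self, volume, data):
        if volume not in self.placement:
            # Illustrative placement: first `replicas` nodes in sorted order.
            self.placement[volume] = sorted(self.nodes)[: self.replicas]
        # Synchronous replication: every replica stores the data before ack.
        for node in self.placement[volume]:
            self.nodes[node][volume] = data
        return True  # acknowledged only once all replicas are written

    def fail_node(self, node):
        del self.nodes[node]
        # Rebuild: copy affected volumes from a survivor onto a spare node,
        # restoring the configured replica count.
        for volume, holders in self.placement.items():
            if node in holders:
                holders.remove(node)
                survivor = holders[0]
                spare = next(n for n in sorted(self.nodes) if n not in holders)
                self.nodes[spare][volume] = self.nodes[survivor][volume]
                holders.append(spare)
```

In the real system the rebuild copies only the lost replica's data and runs spread across all nodes, which is why recovery stays fast as the cluster grows.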
The control plane is designed with fully distributed metadata, so performance scales as nodes are added. This eliminates both the added latency of, and the need for, separate metadata nodes. Critical metadata in the IO path is always local, while replicas are maintained on peer nodes for redundancy. A “hashless” approach to data placement means the data infrastructure can seamlessly expand or contract as nodes are added or removed, without the impact of rebalancing data across nodes. This makes it simple to start small and inexpensively, then grow easily as needs evolve.
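The benefit of "hashless" placement is easiest to see by contrast with hash-based schemes such as `hash(volume) % len(nodes)`, where changing the node count remaps most volumes and forces data movement. An explicit metadata map avoids that: existing entries simply never change when nodes are added. This is a simplified sketch of the concept, not Storidge's placement algorithm.

```python
class MetadataPlacement:
    """Sketch of map-based ('hashless') placement: an explicit metadata map
    records where each volume lives, so growing the node list never relocates
    existing data."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.map = {}    # volume -> node; this map IS the placement decision
        self._next = 0   # illustrative round-robin cursor for new volumes

    def place(self, volume):
        if volume not in self.map:
            self.map[volume] = self.nodes[self._next % len(self.nodes)]
            self._next += 1
        return self.map[volume]

    def add_node(self, node):
        # New capacity becomes eligible for future volumes; entries already
        # in self.map are untouched, so no rebalancing is triggered.
        self.nodes.append(node)
```

A hash-based scheme would answer `place()` by recomputing the hash against the new node count, which is exactly what moves data; the map answers from recorded state instead.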
The control plane performs configuration, health monitoring, scheduling, provisioning, and recovery. It does this through a REST API, which is accessed by plugins, the CLI, the GUI, and direct REST requests. The API in turn communicates with control plane components on peer nodes of the cluster to take the appropriate action.
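Since plugins, the CLI, and the GUI all go through the same REST API, any client can drive the control plane directly. The sketch below builds such a request in Python; the endpoint URL, port, and JSON fields are hypothetical placeholders, not Storidge's documented API.

```python
import json
import urllib.request

# Hypothetical endpoint: the actual Storidge API routes and port may differ.
API = "http://localhost:8282/v1/volumes"

def create_volume_request(name, profile):
    """Build (but do not send) a REST request asking the control plane to
    provision a volume from an application profile."""
    body = json.dumps({"name": name, "profile": profile}).encode()
    return urllib.request.Request(
        API,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = create_volume_request("pgdata", "POSTGRES")
# urllib.request.urlopen(req) would submit it to a running control plane,
# which then coordinates with its peers to carry out the provisioning.
```

This is the same path a volume plugin or the CLI would take, which is what keeps all management interfaces consistent with one another.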
650 Castro Street, Suite 120, Mountain View, CA 94041