It's High Time To Seek Out Greener Pastures
It’s a global phenomenon that started with your smartphone and will end with your home: the Internet of Things is here. According to Gartner, Inc., roughly 20.8 billion objects – nearly three for every human on the planet – will be connected to the internet and generating data by 2020. That is up from 6.4 billion devices in 2016 and 5 billion in 2015. On top of that, each device will be gathering and storing increasingly complex data about its users.
Clearly, this doesn’t bode well for data centers that have to scale to meet the exponential demand. And it definitely doesn’t look good for data center managers who are already struggling with flat or decreasing IT budgets.
After all, scaling up is expensive, and servers take time to pay for themselves. And if you overscale? Now there are idle servers consuming power and resources that could have been put to better use. And with that exponential-growth boogeyman peering around the corner, hardware upgrades on aging infrastructure will eventually add up, too.
Even as operations teams try to support new initiatives such as DevOps and cloud-native applications to drive business agility, they also have to manage the complexities of legacy storage architectures, which are expensive, hard to manage, and create silos within the data center.
Is there a cost-effective solution to this problem?
Many Large Companies Look To SDS To Manage These Challenges
Software-defined storage (SDS) is a relatively recent storage technology that virtualizes storage services away from the storage controller and allows them to run on general-purpose server hardware. This decoupling from hardware yields cost, flexibility, and scalability benefits.
SDS transcends traditional storage virtualization. It moves functions off the external storage system and places them close to compute, enabling better load balancing, reducing operational task loads, and improving responsiveness and flexibility.
Unfortunately, this is still not enough. While the benefits of SDS are proven, most SDS solutions were built to address the challenges of the virtualized data center, before the trend toward cloud and cloud-native applications became apparent. The static nature of SDS does not meet the agility demands of cloud storage. The common workaround is to implement a software version of the traditional storage stack and run it on the server inside a virtual machine, which is an incomplete answer.
The Need For Driverless Storage
Organizations are on a path toward modernizing legacy applications, or writing new ones to be cloud-friendly. They are widely adopting container technology and microservice architectures as the foundation for new application development. As a result, application behavior and requirements for the underlying storage infrastructure have changed significantly. Containerized applications are lightweight, agile, and highly scalable compared to their counterparts running in virtual machines. They lend themselves to scale-out storage environments that are tightly integrated with orchestration systems.
Integration is critical because it enables automated storage provisioning through orchestration systems. A distributed storage system that is easy to implement and operate, and that integrates seamlessly with schedulers, is the first step toward driverless storage, where an operator is not in the path of provisioning. In addition, driverless storage must build in storage orchestration so operator intervention is not needed to handle exceptions such as storage node failures.
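To make the idea of "no operator in the provisioning path" concrete, here is a minimal, purely illustrative Python sketch. The names (`VolumeRequest`, `Provisioner`, the `profile` field) are hypothetical and not part of any real product API; the point is that the request from the orchestrator carries everything needed, so no human picks a LUN, RAID level, or target node.

```python
# Toy sketch of driverless provisioning: the orchestrator's request is
# satisfied automatically, with no operator step. All names here are
# illustrative assumptions, not a real storage API.

from dataclasses import dataclass

@dataclass
class VolumeRequest:
    name: str
    size_gb: int
    profile: str  # declared intent, e.g. "database" or "logging"

class Provisioner:
    def __init__(self):
        self.volumes = {}

    def handle(self, req: VolumeRequest) -> dict:
        # Provisioning decisions are driven entirely by the declared
        # profile in the request; no operator chooses placement or tier.
        vol = {"name": req.name, "size_gb": req.size_gb,
               "profile": req.profile, "status": "ready"}
        self.volumes[req.name] = vol
        return vol

p = Provisioner()
vol = p.handle(VolumeRequest(name="pg-data", size_gb=20, profile="database"))
print(vol["status"])  # ready
```

In a real deployment this role is played by the scheduler integration itself (for example, a volume plugin invoked by the orchestrator), but the shape is the same: request in, usable volume out, nobody paged.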
Compared to the relatively static I/O of virtual machines, container I/O is highly variable and random. The small size and high density of containers mean the storage infrastructure has to handle hundreds, if not thousands, of highly parallel I/O streams. Driverless storage must deliver automated performance management to eliminate the need for an operator to tune each application individually.
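One simple way to picture automated performance management is policy-driven I/O limits: each volume declares a tier, and the system assigns caps without per-application hand-tuning. The sketch below is an assumption-laden toy, not any vendor's mechanism; the tier names and IOPS numbers are invented for illustration.

```python
# Illustrative only: per-volume IOPS caps derived from a declared tier,
# so no operator hand-tunes individual applications. Tier names and
# numbers are made up for the example.

LIMITS = {"high": 10000, "medium": 1000, "low": 100}  # IOPS caps

def assign_iops(volumes):
    # Each volume gets a cap from its declared tier; a volume with no
    # (or an unknown) tier falls back to "low" instead of paging anyone.
    return {v["name"]: LIMITS.get(v.get("tier"), LIMITS["low"])
            for v in volumes}

caps = assign_iops([{"name": "db", "tier": "high"}, {"name": "scratch"}])
print(caps)  # {'db': 10000, 'scratch': 100}
```

The design point is that performance isolation becomes a property of the volume's declared intent, which scales to thousands of parallel streams in a way manual tuning cannot.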
As organizations increasingly move workloads to the cloud, cost becomes a key operating metric. Resources must be scaled up or down based on application workload and customer demand. Driverless storage must automatically scale capacity and performance to match increasing demand, and then release resources to optimize costs when demand decreases.
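The scale-up-and-down behavior can be boiled down to a threshold policy. The following sketch, with invented thresholds and action names, shows the shape of such a decision loop; a real system would act on live telemetry rather than two numbers.

```python
# Purely illustrative autoscaling policy: grow the storage pool when
# utilization is high, shrink it when low, otherwise hold. Thresholds
# and action names are assumptions for the example.

def scale_decision(used_gb, total_gb, grow_at=0.8, shrink_at=0.3):
    utilization = used_gb / total_gb
    if utilization >= grow_at:
        return "add-node"    # expand capacity ahead of demand
    if utilization <= shrink_at:
        return "remove-node" # release resources to cut cost
    return "hold"

print(scale_decision(85, 100))  # add-node
```

Running this policy continuously, instead of waiting for an operator to notice a capacity alarm, is what lets cost track demand in both directions.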
Storidge is developing driverless storage for modern workloads running in containers.