How to Provision Persistent Storage in One Easy Step

...Also, How to Manage
Those Newly Made Volumes

Note: for a more technical approach to using CIO, please refer to our Quick Start Guide.

I hope you’re ready for this:

cio vdadd

There, that’s it. If setup was completed properly, a brand-new 20 gigabyte volume should spring into existence, ready to be used with Docker, containers, or any application that needs persistent storage. But a 20 gigabyte volume by itself is boring, and not exceptionally practical.

Below, we’ll go over some of the neater features of CIO volume management.

Customizing Your Volume

CIO volumes can be initialized with a number of parameters. You can also change many of these parameters, including the capacity, on the fly. Some of the supported flags are as follows:


Option      Short     Description
volume      -v        Name of the volume
node        -n        Name of node to create the volume on
nodeid      -N        ID of node to create volume on 
capacity    -c        Size of volume in GB (default 20)
directory   -D        Set bind mount directory (defaults to /cio/volumes)
level       -l        Level of redundancy desired (2 or 3 for 2-copy or 3-copy)
type        -t        Type of backend device (hdd or ssd)
iops        -i        *Performance limits in min/max IOPS (cannot be used with bandwidth flag)
bandwidth   -b        *Performance in min/max MB/s (cannot be used with iops flag)
profile     -p        Template to use for volume creation (more on this later)
label       -l        User-defined labels for a volume
thick       -T        Use Thick provisioning
quiet       -q        Show created vdisk ID only

They can be used with cio vdadd and cio vdmod to adjust volumes as needed. Note that the iops and bandwidth flags take two arguments! Some examples follow:

cio vdadd --volume nginx --capacity 15 \
--level 2 --type ssd --directory NGINX \
--iops 50 100 -q

These flags may be used in the same way to adjust volumes to new values using cio vdmod (vd modify).
Note, however, that the --volume flag selects the volume instead of changing the name. CIO will attempt to
adjust the specified volume to use the new values; if it cannot, it will return an error message.

cio vdmod --volume nginx --capacity 25 --iops 100 1000

The process of initializing volumes can be shortened even further by saving volume settings to profiles.
For example, the cio vdadd command above created a volume with an NGINX configuration.

Here’s the same vdadd command, but expedited with the use of a profile:

cio vdadd -v nginx -p NGINX

Much simpler! CIO profiles are YAML files containing information about volumes. Here is a sample of what a profile looks like:


[root@swarm1 profiles]# cat NGINX
capacity: 25
directory: /cio/nginx
iops:
  min: 100
  max: 1000
level: 2
local: no
provision: thin
type: ssd
service:
  compression: no
  dedupe: no
  encryption:
    enabled: no
  replication:
    enabled: no
    destination: none
    interval: 120
    type: synchronous
  snapshot:
    enabled: no
    start: 1440
    interval: 60
    max: 10

--- # vim:syntax=yaml:ts=8:sw=2:expandtab:softtabstop=2

These files can be copied, then customized to taste and saved for future use.
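As a sketch of that copy-and-customize workflow: since profiles are plain YAML files, ordinary shell tools work on them. The MYSQL profile name, the capacity value, and the minimal NGINX profile written here are made-up examples so the sketch is self-contained; on a real CIO node you would start from an existing profile in your profiles directory instead.

```shell
# Create a stand-in profiles directory with a minimal NGINX profile.
# (Fabricated here for illustration; a real node already has profiles.)
mkdir -p profiles
cat > profiles/NGINX <<'EOF'
capacity: 25
level: 2
provision: thin
type: ssd
EOF

# Copy the profile under a new name and bump the capacity for the
# hypothetical new service.
cp profiles/NGINX profiles/MYSQL
sed -i 's/^capacity: .*/capacity: 50/' profiles/MYSQL

cat profiles/MYSQL
# On a CIO node, you would then create a volume from the new profile:
# cio vdadd -v mysql -p MYSQL
```

The commented-out cio vdadd line at the end shows where the customized profile would actually be used, per the profile example earlier in this article.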

To inspect a volume, use cio vdinfo -v <name> or cio vdinfo -V <vdisk_ID>.

To move a volume to another node, use cio vdmov -v <vdisk> -n <destination_node>.

Finally, to delete a volume, use cio vdrm -v <name>. You will be prompted to confirm the deletion; to skip the prompt, add the -y flag.

That’s all there is to basic CIO volume management! For more details, please refer to our advanced guide or our Quick Start Guide. To get started using CIO with your Docker containers and applications, please refer to the article, How to Use CIO with Docker Swarm and Containers.