Release Notes

March 18, 2018


Release 0.9.0-2428

New

  • This release adds support for data locality. For containers, local storage access reduces latency and improves application response times. Instead of relying on techniques such as labels, constraints or affinity/anti-affinity rules to ensure data locality, this feature automatically moves data to the node where a container is running after it is rescheduled or restarted following a node failure. This ensures consistently high performance and low latency for transactional apps such as databases. See the blog for details.
  • This release adds initial support for multi-zone clusters (aka stretched clusters). A storage cluster can now be defined across two zones in an AWS region, or across two datacenters connected by dark fiber. This improves the availability of the cluster and provides protection against zone failures.

Improvements

  • Adds support for the CentOS 3.10.0-862.6.3 to 3.10.0-862.11.6 kernels
  • Adds support for the Ubuntu 4.4.0-131 to -133, Ubuntu 3.13.0-154 to -155, and AWS Ubuntu 4.4.0-1062 to -1065 kernels
  • Adds support for the arm64 architecture on Ubuntu 16.04
  • Updates the API for cio info unit conversion; cio vdinfo now reports allocate% in place of used capacity
  • Adds API routes for the /vdisks and /nodeid paths
  • Uses the hostname for the instance id when /etc/hostname is set
  • Enables the firewall on storage nodes with mk_iptables.sh for Ubuntu 16.04
  • Adds the multi-zone (aka stretched cluster) feature. Use cioctl create --zone <zone1> <zone2> to start a cluster
  • Adds ZONE_LIST to the configuration file
  • Updates the node auto-rejoin feature for multi-zone (stretched) clusters
  • Improves cioctl join-token to output command strings based on balanced and unbalanced cluster nodes
  • Adds support for scale-up when only one node is in the first zone of a multi-zone cluster
  • Updates dfs code for the fio version 3.1 reporting format
  • Changes MB/s to MiB/s in cio info output and help messages. Previously reported units were already in base-2 format
  • Adds support for distributed snapshots on all nodes
  • Adds support for the snapshot CLI command cio snapshot
  • Adds the lock /etc/convergeio/cio_node_mgmt.lock to avoid conflicts between scale-up and scale-down operations
  • Adds a --nounits flag to report capacity info in bytes
  • Increases the mount path length from 127 to 160 characters for Kubernetes
  • Adds the data locality feature. Host I/Os after a vdmv operation trigger rebuilds to local microdisks. A local mdisk is allocated and mdisk[0] is freed. If a rebuild is already running, data locality exits as soon as possible
  • Changes the locality rebuild thread name from rbd to lrbd
  • Updates the data locality capability for stretched clusters
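
The MiB/s relabeling and the --nounits flag both concern base-2 capacity reporting. As a minimal sketch of that conversion (illustrative Python, not CIO's implementation; function names are hypothetical):

```python
def to_mib(num_bytes: int) -> float:
    """Convert a raw byte count to MiB (base-2), matching the new MiB/s labels."""
    return num_bytes / (1024 * 1024)

def format_capacity(num_bytes: int, nounits: bool = False) -> str:
    """Render capacity as raw bytes (--nounits style) or as MiB."""
    if nounits:
        return str(num_bytes)
    return f"{to_mib(num_bytes):.1f} MiB"

print(format_capacity(8 * 1024 * 1024))        # 8.0 MiB
print(format_capacity(8 * 1024 * 1024, True))  # 8388608
```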

Bug fixes

  • Fixed an Ubuntu 16.04 install issue with apt-get by using aptitude on certain errors
  • Fixed a garbled command that led to an empty /etc/iscsi/initiatorname.iscsi and hence issues creating a cluster
  • Fixed a bug where cioctl add allowed nodes with a different version to join the cluster
  • Fixed a bug where SSD drives were added as HDD drives after a node rejoin
  • Fixed memory corruption in the extent table group recovery logic
  • Fixed a bug where a snapshot policy was not started due to a conflicting name in the mount path
  • Fixed a bug where the backup1 and backup2 nodes did not know the zone name of a newly added node
  • Fixed missing metadata after the fourth node rejoins a multi-zone cluster (4-node cluster)
  • Fixed the cio vdlist command showing a vdisk on the wrong node after a node failure in a multi-zone cluster
  • Fixed a bug with missing metadata in the extent table after a second node rejoin
  • Fixed a get_remote_zone_copy_count "node not found" bug
  • Fixed a bug where the configuration file was not copied to all nodes
  • Fixed a bug in cio vdadd where the command failed in Kubernetes with the --capacity and --profile flags
  • Fixed a bug where the vdisk owner was incorrect after failover

Known issues

Profiles

Certain CIO capabilities have not yet been added to the Create Profile and Profile Details pages. These include directives supporting the encryption service, volume labels, file system selection, and interface container types. These capabilities are available through CLI commands and will be added to the UI in the next CIO release.

Snapshot, compression and deduplication features shown in sample profiles are not yet released.

TLS Support

The pre-release version of CIO does not include support for TLS. This security feature will be added to the next release of CIO.

Volumes

Support for modifying volume parameters, e.g. capacity and IOPS limits, has not been added to the Portainer UI. These capabilities are available through CLI commands and will be added to the UI in a future CIO release.

Release 0.9.0-2361

New

  • This release adds support for auto-recovery of nodes. A node may have been accidentally rebooted or may have failed, and was removed from the cluster. When the node becomes available again, it automatically rejoins the cluster. Workloads that previously stopped because of a lack of resources are restarted by the scheduler on the rejoined node.
  • Adds online capacity expansion of volumes with file systems. This enables applications to continue running while capacity is added to a volume that is running out of working space. Supported file systems are ext4, btrfs, and xfs.
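
The grow step differs per file system. As a sketch of how the resize tool might be selected (the tool names are the standard Linux utilities; the helper itself is hypothetical, not CIO's code):

```python
# Standard Linux online-resize utilities per file system; the selection
# helper below is illustrative only.
GROW_COMMANDS = {
    "ext4":  "resize2fs {device}",                       # grows via the device
    "xfs":   "xfs_growfs {mountpoint}",                  # grows via the mount point
    "btrfs": "btrfs filesystem resize max {mountpoint}",
}

def grow_command(fstype: str, device: str, mountpoint: str) -> str:
    """Return the shell command that would grow the given file system online."""
    try:
        template = GROW_COMMANDS[fstype]
    except KeyError:
        raise ValueError(f"online expansion not supported for {fstype}")
    return template.format(device=device, mountpoint=mountpoint)

print(grow_command("xfs", "/dev/vd0", "/mnt/data"))  # xfs_growfs /mnt/data
```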

Improvements

  • Adds support for the AWS Ubuntu 4.4.0-1043 to -1061, Ubuntu 4.4.0-122 to -129, and Ubuntu 3.13.0-146 to -151 kernels
  • Adds support for the newly released CentOS 7.5 distro
  • On bare metal servers, uses QoS performance files from the sds node for new nodes
  • Improves cluster configuration check messages in cioctl
  • Adds DigitalOcean droplets to the list of recognized VMs
  • Turns off DRIVE_CHECK on DigitalOcean droplets
  • Runs perl using only the embedded “C language locale”
  • Labels cio nodes and sets a constraint for the Portainer service to cio and manager nodes
  • Uses get_homepath() to locate the .pem file from the default AWS login ID home directory (as well as the CWD and /etc/convergeio/config)
  • Replaces ‘used’ with ‘allocate%’ in the cio vdinfo command
  • Simplifies the steps for adding a new node with the cioctl join-token and cioctl add commands
  • Adds a work queue to avoid tasks being delayed by vdmv or other CLI commands
  • Reduces the vdmv timeout from 60 seconds back to 30 seconds so other CLI commands can be processed earlier
  • Adds a warning message if vdmv takes more than 2 seconds
  • Enables the QoS feature only if real performance data is collected during cluster initialization
  • Improves getlocalip() to support a Class A network address sub-netted into many Class B address ranges, e.g. on DigitalOcean
  • Improves vdisk event messages in syslog and includes volume names when available
  • Updates the mount path to use the volume name when available to link apps to volumes
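
The getlocalip() change handles a Class A network carved into Class B sized (/16) ranges. The idea can be illustrated with Python's standard ipaddress module (the addresses below are made up for illustration):

```python
import ipaddress

# A private Class A network sub-netted into Class B sized (/16) ranges,
# as seen on providers such as DigitalOcean.
class_a = ipaddress.ip_network("10.0.0.0/8")
subnets = list(class_a.subnets(new_prefix=16))  # 256 /16 ranges

# Find which /16 range a local address falls into.
addr = ipaddress.ip_address("10.17.0.5")
match = next(net for net in subnets if addr in net)
print(match)  # 10.17.0.0/16
```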

Bug fixes

  • Fixed the auto kernel update and scst compile issue for Ubuntu 14.04 on AWS
  • Fixed cioctl and the demo script when running on AWS to detect RHEL or CentOS instances and return the correct UID symbolic name and HOME directory
  • Fixed CentOS 7.4 and Ubuntu 16.04 installs on AWS
  • Fixed events so they are sent for every volume; re-enabled events after volume expansion
  • Fixed a difference between the actual and expected number of data drives during the configuration check
  • Fixed snapshots to rotate based on the snapshot number setting
  • Restarted snapshots after a cluster reboot
  • Fixed a metadata check issue when there are fewer than 3 drives
  • Fixed cio vdinfo showing a default xfs file system on unformatted vdisks
  • Fixed a bug where a vdisk was not movable if a 3-drive node had one drive failure
  • Fixed two drives being marked faulty
  • Fixed a race condition and improved error handling
  • Fixed a bug where a volume could not be deleted due to an “invalid vdisk id” error

Release 0.9.0-2257

This release focuses on stability improvements and bug fixes.

New

This release adds support for Docker Swarm manager nodes outside of the CIO cluster. There must still be at least one manager node in the CIO cluster.

Improvements

  • Adds support for the Ubuntu 4.4.0-118 and 3.13.0-145 kernels
  • Enables sudo installs on CentOS
  • Removes the dependency on the swarm leader node and uses any manager node instead
  • Supports online capacity growth of the xfs file system
  • Adds a sys interface to disable vdmv
  • Adds 3 + 1 queues to send heartbeats; if any queue is blocked, the standby queue is used instead
  • Improves get_manager_ip() to be more tolerant of nodes that slip in and out of Swarm membership
  • Improves handling of multiple instances of node names and the wildcard character in docker node ls output
  • The demo script installs and configures Docker if this is not already done
  • Suspends QoS after vdmv to improve mount time
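
The 3 + 1 heartbeat queue scheme can be sketched as follows (a simplified illustration of failing over to a standby queue, not CIO's implementation; names and sizes are hypothetical):

```python
from queue import Queue, Full

# Three primary heartbeat queues plus one standby, per the 3 + 1 scheme.
primary = [Queue(maxsize=2) for _ in range(3)]
standby = Queue()  # used whenever a primary queue is blocked

def send_heartbeat(queue_index: int, beat: dict) -> str:
    """Try the assigned primary queue; fall back to the standby if it is full."""
    try:
        primary[queue_index].put_nowait(beat)
        return "primary"
    except Full:
        standby.put(beat)
        return "standby"

# Fill primary queue 0, then observe failover to the standby queue.
send_heartbeat(0, {"seq": 1})
send_heartbeat(0, {"seq": 2})
print(send_heartbeat(0, {"seq": 3}))  # standby
```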

Bug fixes

  • Disabled the vdadd, vdrm, vdmv and vdmod commands after cioctl reboot has been issued
  • Fixed cio vdmod not updating vdisk capacity at the vfs layer
  • Fixed a cio daemon hang caused by cio vdmod
  • Fixed an XFS issue after power off or reboot
  • Fixed vdadd failing on a remote node due to a vdmv race condition
  • Pinned CentOS installs to Docker 17.12.1.ce-1 due to /dev issues in the 18.03.0.ce-1 release
  • Fixed a bug where running cioctl reboot on a storage node did not reboot all nodes
  • Fixed an Ubuntu 14.04 node not changing to the active state on a second cioctl create
  • Fixed heartbeat issues caused by rebooting or powering off one node of a multi-node cluster

Release 0.9.0-2225

This release focuses on stability improvements and bug fixes.

New

This release adds support for Ubuntu 16.04 (4.4 kernel) for the first time. On Ubuntu 16.04, the demo release package installs Docker 17.09.1 as a workaround for a known OCI runtime issue with Docker 17.12.0. On CentOS 7 and Ubuntu 14.04, the demo release package installs the latest version of Docker, 17.12.1.

Improvements

  • Displays more useful info when the kernel version is not supported
  • Reworks the Makefile system around Linux distribution classes
  • Significantly reduces the size of the tar install packages
  • Adds support for the Ubuntu 4.4.0-115 to -117, Ubuntu 3.13.0-144, and CentOS 3.10.0-693.21.1.el7 kernels
  • Adds support for the Ubuntu 14.04 -143 kernel, which requires the RETPOLINE (Spectre 2) workaround. The same -143 kernel works on AWS
  • Adds compile support for the Ubuntu Meltdown/Spectre_1/Spectre_2 kernels -166 (general) and -1049 (AWS)
  • Enables snapshots on Ubuntu 16.04
  • Creates snapshots of a vdisk at the root instead of a subvolume
  • Classes u14 and u16 now use --force-yes
  • Disallows the cioctl remove command while a vdisk is rebuilding
  • Only allows a vdisk to be moved the number of cluster nodes minus 1 times during the first 10 minutes after a node failure. The restriction is removed after 10 minutes or when a new node joins the cluster
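
The move restriction in the last item amounts to a per-failure move budget with a time window. A simplified sketch (the function and its structure are hypothetical, not CIO's code):

```python
MOVE_WINDOW_SECS = 10 * 60  # restriction applies for 10 minutes after a failure

def moves_allowed(cluster_nodes: int, moves_done: int,
                  failure_time: float, now: float) -> bool:
    """Within 10 minutes of a node failure, cap vdisk moves at nodes - 1."""
    if now - failure_time >= MOVE_WINDOW_SECS:
        return True  # restriction lifted after the window
    return moves_done < cluster_nodes - 1

# 4-node cluster: up to 3 moves are allowed inside the window.
print(moves_allowed(4, 2, 0.0, 60.0))   # True: 2 < 3, within the window
print(moves_allowed(4, 3, 0.0, 60.0))   # False: budget exhausted
print(moves_allowed(4, 3, 0.0, 700.0))  # True: window expired
```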

Bug fixes

  • Changed all references to downloading SCST from SourceForge to the FTP server
  • Fixed an “insert scst: Exec format error” on Ubuntu 16.04
  • Fixed cioctl create being broken on CentOS 7 due to an fio version bump
  • Used Docker 17.09.1 for Ubuntu 16.04 to work around Docker’s OCI runtime issue
  • Disallowed the vdmv, vdmod and show commands before mongodb is up
  • Fixed the wrong file system being displayed in cio vdinfo output for a snapshot vdisk
  • Fixed the snapshot “start” parameter
  • Fixed snapshot vdisks that could not be removed
  • Simplified the serialize/unserialize functions and their usage by replacing double pointers with single pointers
  • Fixed two concurrent vdmv operations causing a deadlock
  • Fixed rebuild threads not being stopped after a vdisk moved to another node
  • Fixed CLI command conflicts with the node add operation

Release 0.9.0-2173

New

This version introduces initial support for the Portainer UI with an extension for Storidge CIO. After a cluster is initialized, a Portainer container is automatically launched with a persistent volume provided by CIO. When connecting to an endpoint, Portainer checks for the cio volume driver and automatically displays the Storidge extension when the cio volume plugin is discovered.

The Storidge extension provides access to a dashboard with cluster view and management. A monitor view provides cluster-wide metrics on capacity, performance and system events, while a profiles page enables easy management of application profiles.