StorPool Review – Part 3 – Connectivity & Scripting

Chris Evans | Cloud, Cloud Storage, Code, Software-Defined Storage, StorPool

This is the third in a series of posts looking at the StorPool software-defined storage platform.  In this post we will look at connectivity to non-Linux clients, support for virtual environments, and automation through scripting.

Client hosts running Linux access the StorPool platform through the native client.  This configuration uses StorPool's own networking protocol and presents block devices to the host as if they were local storage resources.  As we saw in the previous post, the performance of clients with no local storage is almost identical to that of clients presenting storage locally.

Non-Linux Clients

For non-Linux clients, connectivity is provided through the iSCSI protocol.  From version 19 onwards, StorPool uses an internal TCP/IP stack to deliver iSCSI target support, as this offers the capability to implement NIC performance acceleration.  This means the iSCSI IP addresses on the storage hosts don’t show up through standard Linux commands.  Instead, the configuration is exposed (and configured) through a set of StorPool CLI commands.  Figure 1 shows the output from a series of commands that list the network interfaces, iSCSI base name, portals (IP addresses per host) and portal group definitions for iSCSI on our test StorPool cluster. 

StorPool iSCSI Configuration

iSCSI uses the standard SCSI concepts of initiators and targets.  An initiator is a host that consumes resources; a target is a storage system that exposes storage LUNs or volumes for consumption.  The iSCSI standard identifies both targets and initiators with a structured text name called an IQN (iSCSI Qualified Name).  iSCSI LUNs are exposed to the network through a portal: the combination of an IP address and TCP port on which a target listens.  Multiple portals can be combined into a portal group, which implements load balancing and resiliency across the network.
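
The IQN format itself is standardised (RFC 3720): a registration date, the reverse DNS name of the naming authority, and an optional free-form suffix.  The short sketch below, using invented names rather than values from our test cluster, shows how such names are built and checked:

```python
import re

# An IQN (iSCSI Qualified Name) combines a registration date, the
# reversed DNS name of the naming authority, and a free-form suffix:
#   iqn.yyyy-mm.<reversed-domain>[:<identifier>]
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")

def make_iqn(domain: str, date: str, identifier: str) -> str:
    """Build an IQN from a naming-authority domain, e.g. 'storage.example.com'."""
    reversed_domain = ".".join(reversed(domain.split(".")))
    return f"iqn.{date}.{reversed_domain}:{identifier}"

def is_valid_iqn(name: str) -> bool:
    return IQN_RE.match(name) is not None

# A portal is simply an (IP address, TCP port) pair on which a target
# listens; several portals grouped together form a portal group.
portal_group = {
    "name": "pg1",
    "portals": [("10.0.0.11", 3260), ("10.0.0.12", 3260)],
}
```

For example, `make_iqn("storage.example.com", "2021-05", "vol1")` yields `iqn.2021-05.com.example.storage:vol1`, the same shape as the base names visible in Figure 1.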

In our test environment, we’ve pre-configured the iSCSI base name, portals and a portal group.  This is the first step for mapping volumes to external hosts.  Next, we need to create some volumes and host initiator definitions, then join the two together. The steps are:

  • Create an initiator configuration for the target host
  • Create a volume
  • Create a target for the volume
  • Export the volume on a portal group to the initiator

At this point the volume will be available for access via iSCSI across the network. 
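
The four steps can be sketched as a small script.  Note that the `storpool` subcommand spellings below are illustrative approximations, not verbatim CLI syntax — consult the StorPool CLI reference for the exact commands:

```python
import subprocess

def iscsi_export_plan(volume, initiator_iqn, portal_group, size="100G"):
    """Return the sequence of CLI invocations for the four steps above.

    The `storpool` subcommand spellings here are illustrative only;
    check the StorPool CLI reference for the precise syntax.
    """
    return [
        # 1. Register the host's initiator IQN with the cluster.
        ["storpool", "iscsi", "initiator", initiator_iqn, "create"],
        # 2. Create the backing volume.
        ["storpool", "volume", volume, "create", "size", size],
        # 3. Create an iSCSI target backed by the volume.
        ["storpool", "iscsi", "target", "create", volume],
        # 4. Export the target on a portal group to that initiator.
        ["storpool", "iscsi", "export", volume,
         "portalGroup", portal_group, "initiator", initiator_iqn],
    ]

def run_plan(plan, dry_run=True):
    """Print each command; execute only when dry_run is False."""
    for argv in plan:
        print(" ".join(argv))
        if not dry_run:
            subprocess.run(argv, check=True)
```

Building the plan separately from executing it makes the script easy to review (or log) before any commands touch the cluster.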

The following short video demonstrates this process by attaching an iSCSI volume to a Windows 10 host.

We use iSCSI volumes widely in the lab, connected to our VMware vSphere clusters and to Windows hosts. Figure 2 shows the connectivity into one VMware ESXi server, with each of the two volumes defined as a separate target.

Figure 2 – iSCSI LUNs in vSphere

Hypervisor Support

In addition to VMware ESXi support, which includes hardware offload acceleration, StorPool also supports KVM, OpenStack, OpenNebula, OnApp and CloudStack frameworks. We’ll cover Kubernetes and container support in a future post.

Scripting

Modern storage solutions must provide the ability to automate commands via scripting and API support. The StorPool platform implements a REST API, along with a command-line interface built on top of the same API. There is also support for Python.

The output from StorPool CLI commands is available as CSV, JSON (raw or formatted), raw HTML, or standard text. This makes it easy to integrate commands and their output into existing toolsets.
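
As an illustration of consuming the JSON output in a script, the snippet below parses a mocked-up volume listing — the field names are invented for the example, not a verbatim capture of StorPool output — and converts sizes to GiB:

```python
import json

# Simplified mock of JSON-formatted CLI output; the real field names
# may differ -- treat this structure as illustrative only.
raw = """
{"data": [
  {"name": "vol1", "size": 107374182400, "templateName": "hybrid"},
  {"name": "vol2", "size": 53687091200,  "templateName": "ssd"}
]}
"""

volumes = json.loads(raw)["data"]

# Convert raw byte counts to GiB and build a name -> size report.
report = {v["name"]: v["size"] / 2**30 for v in volumes}
for name, gib in report.items():
    print(f"{name}: {gib:.0f} GiB")
```

In practice the same parsing logic would be fed from the CLI's JSON output mode rather than an inline string.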

In the following video example, we demonstrate the ability to clone StorPool volumes from the CLI and to integrate these into KVM. Within KVM there are multiple options for attaching persistent storage, including mapping block devices to logical volumes.

In this test, we created an Ubuntu 20.04 master template on a KVM instance, then wrote a script to clone the volume and attach each clone to a new copy of the KVM VM. The script can create multiple VMs at the same time.
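
A minimal sketch of such a cloning script is shown below. The volume names, the `baseOn` clone spelling, the attach syntax and the `virt-clone` invocation are assumptions for illustration rather than exact command grammar, and the function only builds the command list (a dry run) rather than executing anything:

```python
def clone_vm_plan(template_vol, template_vm, base_name, count):
    """Build the command sequence to stamp out `count` VM clones.

    Subcommand spellings for `storpool` and `virt-clone` are
    approximations for illustration; check the respective manuals.
    """
    plan = []
    for i in range(1, count + 1):
        vol = f"{base_name}-{i}"
        plan += [
            # Clone the master volume (spelling illustrative).
            ["storpool", "volume", vol, "create", "baseOn", template_vol],
            # Attach the clone as a block device on this host.
            ["storpool", "attach", "volume", vol, "here"],
            # Define a new VM that boots from the cloned device.
            ["virt-clone", "--original", template_vm,
             "--name", f"{base_name}-{i}", "--preserve-data",
             "--file", f"/dev/storpool/{vol}"],
        ]
    return plan
```

Because each clone is independent, the per-VM command groups could equally be dispatched in parallel to create many VMs at once.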

This simple demonstration shows that there is no impact on running applications as new VMs are created. In addition, the API delivers consistent performance as the number of VMs grows.

The Architect’s View™

Modern software-defined storage has to provide the capability to automate storage deployments. StorPool allows volume creation, update and deletion to be fully scripted, and configuration information can be extracted via the API or CLI in a wide variety of formats. For common platforms, the integration work is already done, removing the need to re-invent the wheel.

Although the majority of applications are arguably now deployed on Linux, support for non-Linux hosts (or servers that can't run the StorPool client) is available through iSCSI, including hardware acceleration.

In our next posts, we will cover:

  • Post 4 – Kubernetes and CSI support
  • Post 5 – Failure modes, managing device failures and integrity checking

This work has been made possible through sponsorship from StorPool.

Copyright (c) 2007-2021 – Post #0907 – Brookend Ltd, first published on, do not reproduce without permission.