Limits for volume types on DEDL
Each OpenStack cloud defines storage performance limits to ensure fair resource allocation between projects and prevent performance degradation caused by intensive workloads.
Following these limits keeps performance predictable and prevents “noisy neighbor” effects.
Note
This document covers HDD volume types only.
Key terms
- IOPS (Input/Output Operations Per Second)
A standard measure of storage performance: the number of read and write operations a volume can perform per second. DEDL defines performance limits in terms of IOPS to ensure fair resource sharing across all users.
- Throttling
An automatic protective mechanism that slows down input/output operations when a volume reaches its IOPS limit. Throttling prevents a single workload from monopolizing storage resources and helps maintain consistent performance across projects.
- Volume type
A predefined storage configuration that determines how a volume behaves. Each volume type can include its own Quality of Service (QoS) settings, such as IOPS caps or backend storage tiers.
Who this guide is for
This guide explains how DestinE Data Lake (DEDL) enforces performance limits for Block Storage (Cinder) volumes and what you can do to understand, verify, and, if needed, request changes.
It is intended for users who manage their own volumes and need to:
- understand how IOPS limits affect performance,
- verify actual performance, and
- know what information to provide when requesting higher performance from Support.
If you only use pre-configured virtual machines or applications and never attach or manage volumes directly, you can safely skip this article.
Prerequisites
No. 1 Tools for validating block storage performance
In this article, we use fio (Flexible I/O Tester), an open-source, scriptable utility that measures both throughput and IOPS. It integrates well with Linux VMs and automation pipelines and produces consistent, reproducible results.
You can install fio in your VM using one of the following methods.
On Ubuntu or Debian:
sudo apt install fio
On CentOS, Fedora, or other RHEL-based distributions:
sudo dnf install fio
# or
sudo yum install fio
You may need sudo privileges to run fio on Linux.
On Windows, download and extract the ZIP archive containing precompiled binaries from its GitHub repository. Administrator rights are required to run fio on Windows.
No. 2 Volumes in OpenStack
We assume that you know how to work with volumes under OpenStack; as a refresher, visit the section about data volumes.
The examples in this article assume you have Horizon or CLI access with rights to create and manage volumes.
No. 3 Access to DestinE Data Lake (DEDL) OpenStack environment
You must have an active DEDL project on OpenStack and be able to create, attach, detach, and resize volumes. All examples here are intended to be run inside a DEDL virtual machine.
Understanding and verifying IOPS limits
IOPS measures how many read/write operations a volume can process each second. DEDL limits HDD volumes by a maximum IOPS cap to ensure predictable performance across projects.
How storage limits impact performance
Storage limits matter when your application performs frequent read/write operations and approaches the IOPS cap. Typical examples include:
Databases (PostgreSQL, MySQL, MongoDB) performing many small random reads and writes,
Analytics workloads that scan large datasets or generate temporary files,
Virtual machines with multiple applications sharing the same attached volume, and
Logging or monitoring systems continuously writing to disk.
When the cap is reached, the backend throttles I/O. You will not lose data, but operations may become slower or less responsive.
What happens when the limit is reached
If your workload exceeds the allowed IOPS rate, OpenStack storage throttles further I/O until the average number of operations drops back under the limit.
You may notice:
Slowdowns during data-intensive operations,
Higher latency when accessing files or committing data,
Applications temporarily stalling on disk I/O, or
Benchmark tools such as fio showing a fixed ceiling (e.g., ~200 IOPS regardless of demand).
If throttling happens often, consider:
Using a larger volume – the IOPS cap scales with size,
Distributing I/O across multiple volumes, or
Contacting Support to discuss a higher-performance tier or a custom policy (see Requesting changes from Support).
Default formula for HDD volumes
Use this to estimate your IOPS cap:
IOPS = 2 × volume_size_in_GB
Volumes smaller than 60 GB are capped at 120 IOPS (minimum cap).
Examples
10 GB → 120 IOPS (raised to minimum)
100 GB → 200 IOPS
500 GB → 1000 IOPS
These limits apply automatically. There are no performance guarantees – IOPS are capped, not guaranteed.
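The default formula can be sketched as a small shell helper. This is an illustrative snippet, not DEDL tooling; the function name iops_cap is made up for the example.

```shell
# Sketch: estimate the default IOPS cap for an HDD volume of a given size (GB).
# iops_cap is an illustrative helper, not an official DEDL tool.
iops_cap() {
  local size_gb=$1
  local cap=$(( 2 * size_gb ))
  (( cap < 120 )) && cap=120   # minimum cap for volumes smaller than 60 GB
  echo "$cap"
}

iops_cap 10    # prints 120 (raised to minimum)
iops_cap 100   # prints 200
iops_cap 500   # prints 1000
```

Remember that these numbers are caps, not guarantees: the backend throttles at this rate but does not reserve it for you.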
How IOPS limits change after volume resize
Resizing a volume does not apply the new cap immediately. The limit is recalculated only after the volume is detached and reattached:
1. Unmount the volume in Linux (e.g., umount /mnt/data),
2. Detach it from the VM (openstack server remove volume <vm-id> <volume-id>),
3. Reattach it (openstack server add volume <vm-id> <volume-id>), and
4. Verify the new performance level.
Until reattached, the old cap remains in effect.
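The reattach sequence can be sketched as a short script. This is a dry-run outline, not a turnkey tool: VM_ID, VOL_ID, and the mount point are placeholders for your own values, and run() only echoes each command so you can review the sequence before executing it for real.

```shell
# Dry-run sketch of the resize-and-reattach workflow (placeholders throughout).
run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

VM_ID="my-vm-id"         # placeholder: your server ID
VOL_ID="my-volume-id"    # placeholder: your volume ID

run umount /mnt/data                                  # 1. unmount inside the VM
run openstack server remove volume "$VM_ID" "$VOL_ID" # 2. detach
run openstack server add volume "$VM_ID" "$VOL_ID"    # 3. reattach
# 4. verify the new performance level afterwards, e.g. with fio
```

The echo-guard pattern keeps the destructive steps visible and reviewable, which is useful here because a forgotten unmount can leave the filesystem in an inconsistent state.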
Verifying limits with fio
Run the following test inside the VM that uses the target volume:
sudo fio \
--filename=/dev/vdb \
--direct=1 \
--rw=randread \
--bs=4k \
--ioengine=libaio \
--iodepth=256 \
--runtime=120 \
--numjobs=4 \
--time_based \
--group_reporting \
--name=iops-test-job \
--eta-newline=1
This command measures random-read IOPS under high concurrency (4 KB block size, deep queue, cache bypassed, 2-minute average). Change --rw to randwrite or readwrite to test other patterns.
Example (100 GB volume, ~200 IOPS cap):
read: IOPS=201, BW=804KiB/s (823kB/s)
If results are consistently lower than expected, ensure the volume has been reattached after resizing and rerun the test.
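If you want to check the measured ceiling programmatically, one option is to pull the IOPS figure out of a captured fio summary line. This is a minimal sketch assuming the plain-integer IOPS format shown above; for robust automation, fio's --output-format=json combined with jq is a better fit.

```shell
# Extract the IOPS number from a captured fio summary line (illustrative).
# Assumes a plain integer (capped HDD volumes stay well below fio's "k" suffix).
line='read: IOPS=201, BW=804KiB/s (823kB/s)'
iops=$(printf '%s\n' "$line" | sed -n 's/.*IOPS=\([0-9]*\).*/\1/p')
echo "$iops"   # prints 201
```

As a sanity check, bandwidth should roughly equal IOPS × block size: 201 × 4 KiB ≈ 804 KiB/s, matching the example output.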
Requesting changes from Support
If you regularly reach the IOPS cap or your workload has evolved, open a Support request. Include:
- 1. Evidence
Latest fio results (command + output)
Relevant OS I/O indicators (short excerpts from iostat -x or dstat)
- 2. Context
Project/tenant ID, volume ID(s), current size and type (HDD)
Workload pattern (random/sequential, read/write mix, typical block size)
When throttling occurs (continuous vs. batch windows)
- 3. Desired outcome
Increase volume size to raise the cap, or
Apply a custom per-GB IOPS limit (custom QoS) to a new volume type, or
Move to a higher-performance tier (SSD/NVMe, if available), or
Distribute workloads across multiple volumes.
- 4. Operations window
Confirm you can unmount, detach, and reattach, and specify any downtime constraints.
Providing this information helps Support identify and apply the right solution quickly and safely.
Creating a new volume type
Support does not modify your existing volumes directly. Instead, they may create a new private volume type for your project, with an adjusted IOPS-per-GB formula or a different storage backend.
This new volume type (for example, hdd1iops) is then linked to your project so that any new volumes you create under it automatically follow the updated performance rules.
In practice, this gives you predictable and isolated behavior without affecting other tenants.
Depending on your request, the new configuration may:
reduce or increase the IOPS rate per gigabyte,
assign the volume type to a specific hardware tier (for example, SSD or HDD), or
enforce a fixed performance cap to ensure stability for critical workloads.
Once the change is complete, Support will let you know the name of the new volume type. You can then select it when creating new volumes to apply the custom performance profile.
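Creating a volume under the custom type can be sketched as follows. hdd1iops is the example type name used in this article; the size and volume name are placeholders, and run() only echoes the commands so the sketch is safe to review first.

```shell
# Dry-run sketch: create a volume using a custom type provided by Support.
# "hdd1iops" is the example type name from this article; size/name are placeholders.
run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

run openstack volume type list                                # confirm the new type is visible
run openstack volume create --size 100 --type hdd1iops data-custom
```

Any volume created this way inherits the custom QoS policy automatically; your existing volumes keep the default rules until recreated or migrated.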
IOPS formulas in comparison
The tables below illustrate how IOPS limits differ between the default and a custom configuration.
Default HDD volumes – follow the standard DEDL rule of 2 × volume size (GB), with a minimum cap of 120 IOPS for volumes smaller than 60 GB.
Custom QoS volumes (example: hdd1iops) – show what happens when Support defines a project-specific policy that changes the IOPS-per-GB ratio.
The second table is not a replacement but a variation of the first — it demonstrates how a custom policy can alter the IOPS growth curve per volume size.
| Volume size (GB) | Formula (2 × size) | Effective IOPS |
|---|---|---|
| 10 | 20 | 120 (minimum cap) |
| 60 | 120 | 120 |
| 100 | 200 | 200 |
| 500 | 1000 | 1000 |
| Volume size (GB) | Formula (1 × size) | Effective IOPS |
|---|---|---|
| 10 | 10 | 10 |
| 60 | 60 | 60 |
| 100 | 100 | 100 |
| 500 | 500 | 500 |
As shown above, custom QoS types like hdd1iops can intentionally set a lower performance curve for specific projects or workloads – for example, to reserve faster tiers for production systems while maintaining predictable behavior elsewhere.