Protection Group Commands
openstack dr protection-group create, list, show, delete; member-add, member-list, member-remove
This page covers the openstack dr protection-group command group, which lets you create and manage Protection Groups — the core organizational unit of Trilio Site Recovery. A Protection Group is a named collection of Nova instances that fail over together as a unit; it maps 1:1 to a Cinder Consistency Group and, through the configured replication policy, to a Pure Storage Protection Group or Pod on the underlying FlashArray. You use these commands to create a Protection Group against a pair of registered sites, add or remove VM members, inspect status, and delete a group when it is no longer needed. Mastering this command group is a prerequisite for every downstream DR operation, including failover, failback, and DR drills.
Before running any protection-group command, confirm the following are in place:
- Both sites registered — `openstack dr site list` must show both your primary and secondary sites with `status: active`. See the Site commands reference if either site is missing or unreachable.
- Replication-enabled volume type on both sites — each site must have a Cinder volume type with `replication_enabled='<is> True'` and a matching `replication_type` property (`async` or `sync`). Run `openstack dr site list-volume-types <site-name>` on both sites to confirm.
- `protectorclient` OSC plugin installed — the plugin must be present in the same Python environment as `python-openstackclient`. Verify with `openstack dr --help`.
- `clouds.yaml` configured for both sites — the CLI authenticates to both sites during metadata sync; both cloud entries must be reachable. See the deployment guide for a `clouds.yaml` template.
- Keystone credentials with appropriate project scope — Protection Groups are tenant-scoped. You must be authenticated to the project that will own the group. Admin rights are not required for Protection Group operations, but are required for Site registration.
- Network and flavor mappings identified — you will need these when you later configure resource mappings for failover. Having them ready now avoids blocking later steps.
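If you script these prerequisite checks, the volume-type requirement can be verified on both sites in one pass. A minimal sketch, assuming the `list-volume-types` output prints type names verbatim (the type name `replicated-ssd` matches the examples later on this page):

```shell
check_volume_type() {
  # $1 = site name, $2 = volume type name.
  # Greps the human-readable listing for the type name; assumes the
  # name appears verbatim in the command's output.
  openstack dr site list-volume-types "$1" | grep -qw "$2"
}

# Usage:
# for site in site-a site-b; do
#   check_volume_type "$site" replicated-ssd \
#     || echo "replicated-ssd missing on $site" >&2
# done
```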
The Protection Group commands ship as part of the protectorclient OSC plugin. If you completed the Trilio Site Recovery deployment guide, the plugin is already installed. The steps below confirm the installation and ensure your shell environment is ready.
- Verify the plugin is loaded:
openstack dr --help
The output must include a protection-group section. If the command is not found, reinstall the plugin:
pip install protectorclient
- Source your project credentials (or use `--os-cloud`):
source ~/site-a-openrc
Alternatively, if your clouds.yaml is configured:
export OS_CLOUD=site-a
- Confirm connectivity to both sites:
openstack dr site list
Expected output shows both sites with status: active:
+--------------------------------------+--------+-----------+--------+
| ID                                   | Name   | Type      | Status |
+--------------------------------------+--------+-----------+--------+
| a1b2c3d4-...                         | site-a | primary   | active |
| e5f6a7b8-...                         | site-b | secondary | active |
+--------------------------------------+--------+-----------+--------+
If either site shows unreachable or error, resolve connectivity before proceeding — Protection Group creation and all member modifications require both sites to be reachable.
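For automation, the same check can be expressed as a small shell function. This is a sketch that assumes the standard OSC `-f value -c Status` formatter is available on the plugin's `site list` command (one status per line):

```shell
check_sites_active() {
  # Reads one site status per line (e.g. from
  # "openstack dr site list -f value -c Status") and succeeds only if
  # at least two sites are listed and every one is "active".
  local count=0 status
  while read -r status; do
    [ "$status" = "active" ] || return 1
    count=$((count + 1))
  done
  [ "$count" -ge 2 ]
}

# Usage:
# openstack dr site list -f value -c Status | check_sites_active \
#   || { echo "both sites must be active before proceeding" >&2; exit 1; }
```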
Protection Groups carry a small number of properties set at creation time. Most are immutable after creation; modifying a live Protection Group is limited to adding or removing members.
| Property | Flag | Valid values | Effect |
|---|---|---|---|
| `name` | `--name` | Any string, unique per project | Human-readable identifier; also used to derive the internal Cinder Consistency Group name. Choose a name that reflects the workload, e.g. `prod-web-tier`. |
| `description` | `--description` | Any string | Optional free-text annotation. Shown in `show` output. |
| `replication-type` | `--replication-type` | `async`, `sync` | Determines how Pure Storage replicates the underlying Consistency Group. `async` uses periodic Protection Group snapshots (configurable RPO). `sync` uses ActiveCluster Pods with zero RPO but adds write latency. Must match the `replication_type` property of the chosen volume type. |
| `primary-site` | `--primary-site` | Registered site name or ID | The site where workloads run initially. This designation is fixed at creation; after a failover, the separate `current_primary_site` field tracks where workloads are actually running. |
| `secondary-site` | `--secondary-site` | Registered site name or ID | The DR site. Must differ from `--primary-site`. |
| `volume-type` | `--volume-type` | Volume type name or ID | The Cinder volume type used for all volumes in this group. All volumes belonging to member VMs must use this type. The type must have `replication_enabled='<is> True'` on both sites. This cannot be changed after creation. |
Immutability note: replication-type, primary-site, secondary-site, and volume-type are fixed at creation time. To change any of these you must delete the Protection Group and recreate it. Deleting a Protection Group cascades to its Consistency Group; ensure you have confirmed no active replication operations are running before deleting.
Metadata sync behavior: Every write operation (create, member-add, member-remove, delete) blocks until the change is confirmed on both sites. If the remote site is unreachable at the time of the operation, the command returns an error and the change is not committed on either site. This strict two-phase behavior prevents metadata divergence.
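Because a failed sync commits nothing on either site, transient failures are safe to retry. A generic retry wrapper, sketched below with illustrative attempt counts and delays:

```shell
retry_dr_write() {
  # $1 = max attempts, $2 = delay between attempts (seconds),
  # remaining args = the command to run. Safe for DR write operations
  # because a failed metadata sync commits nothing on either site.
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    [ "$i" -lt "$attempts" ] && sleep "$delay"
  done
  return 1
}

# Usage:
# retry_dr_write 3 30 openstack dr protection-group member-add \
#   prod-web-app --instance-id <nova-instance-uuid>
```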
Create a Protection Group
Run openstack dr protection-group create against the site where your workloads currently live. The command registers the group on both sites atomically and provisions the backing Cinder Consistency Group.
openstack dr protection-group create \
--name prod-web-app \
--description "Production web application tier" \
--replication-type async \
--primary-site site-a \
--secondary-site site-b \
--volume-type replicated-ssd
The service validates that the volume type exists on both sites and carries the required replication properties before creating any resources. The group starts in creating status and transitions to active once both Consistency Groups are provisioned.
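In scripts you can poll until the group leaves `creating`. A sketch, assuming the standard OSC `-f value -c status` formatter and the status values described on this page:

```shell
wait_for_group_active() {
  # $1 = group name or ID, $2 = timeout (s), $3 = poll interval (s)
  local group=$1 timeout=$2 interval=$3 elapsed=0 status
  while [ "$elapsed" -lt "$timeout" ]; do
    status=$(openstack dr protection-group show "$group" -f value -c status)
    case "$status" in
      active) return 0 ;;
      error)  echo "$group entered error state" >&2; return 1 ;;
    esac
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out waiting for $group to become active" >&2
  return 1
}

# Usage:
# wait_for_group_active prod-web-app 600 15
```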
List Protection Groups
openstack dr protection-group list
This returns all Protection Groups owned by your current project. To filter by status:
openstack dr protection-group list --status active
Inspect a Protection Group
openstack dr protection-group show prod-web-app
The output includes the current primary site (which reflects failover state), consistency group ID, failover count, and sync status. Pay attention to current_primary_site — after a failover this will differ from primary_site.
Add a VM member
When you add a VM, the service queries Nova for all attached volumes, validates they use the group's volume type, and adds them to the Cinder Consistency Group automatically. The VM and its volumes must already exist on the primary site.
openstack dr protection-group member-add prod-web-app \
--instance-id <nova-instance-uuid>
Repeat this command for each VM you want to protect as a unit. All members fail over together.
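Batch additions are straightforward to script. The sketch below adds instances one at a time and stops at the first failure, since each addition triggers its own metadata sync to the peer site:

```shell
add_members() {
  # $1 = group name; remaining args = Nova instance UUIDs
  local group=$1 instance
  shift
  for instance in "$@"; do
    openstack dr protection-group member-add "$group" \
      --instance-id "$instance" || return 1
  done
}

# Usage (UUIDs are placeholders):
# add_members prod-web-app \
#   aabbccdd-1111-2222-3333-aabbccddeeff \
#   bbccddee-1111-2222-3333-bbccddeeff00
```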
List members
openstack dr protection-group member-list prod-web-app
Remove a VM member
Removing a member detaches the VM's volumes from the Consistency Group but does not delete the VM or its volumes.
openstack dr protection-group member-remove prod-web-app \
--member-id <member-uuid>
Delete a Protection Group
Deletion cascades to the Consistency Group on both sites. The group must not be in a transitional state (failing_over, failing_back, deleting).
openstack dr protection-group delete prod-web-app
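To avoid racing an in-flight DR operation, a wrapper can check the status first. A sketch assuming the `-f value -c status` formatter and the transitional status names above:

```shell
safe_delete_group() {
  # Refuses to delete while the group is in a transitional state.
  local group=$1 status
  status=$(openstack dr protection-group show "$group" -f value -c status)
  case "$status" in
    failing_over|failing_back|deleting)
      echo "refusing to delete $group: status is $status" >&2
      return 1 ;;
  esac
  openstack dr protection-group delete "$group"
}

# Usage:
# safe_delete_group prod-web-app
```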
Example 1 — Create an async Protection Group
Create a Protection Group for a three-tier production application using async replication between two registered sites.
openstack dr protection-group create \
--name prod-three-tier \
--description "Web, app, and DB tiers for prod" \
--replication-type async \
--primary-site site-a \
--secondary-site site-b \
--volume-type replicated-ssd
Expected output:
+----------------------+-----------------------------------------+
| Field                | Value                                   |
+----------------------+-----------------------------------------+
| id                   | pg-12345678-1234-1234-1234-123456789abc |
| name                 | prod-three-tier                         |
| description          | Web, app, and DB tiers for prod         |
| status               | active                                  |
| replication_type     | async                                   |
| primary_site         | site-a                                  |
| secondary_site       | site-b                                  |
| current_primary_site | site-a                                  |
| consistency_group_id | cg-87654321-4321-4321-4321-87654321     |
| failover_count       | 0                                       |
| last_failover_at     | None                                    |
| created_at           | 2025-03-15T09:00:00Z                    |
+----------------------+-----------------------------------------+
Example 2 — Add multiple VM members
Add the web server, application server, and database server to the group. Run each member-add call separately; the service syncs metadata to both sites after each addition.
openstack dr protection-group member-add prod-three-tier \
--instance-id aabbccdd-1111-2222-3333-aabbccddeeff
openstack dr protection-group member-add prod-three-tier \
--instance-id bbccddee-1111-2222-3333-bbccddeeff00
openstack dr protection-group member-add prod-three-tier \
--instance-id ccddee11-1111-2222-3333-ccddeeff0011
Output for each call:
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| id                   | member-xxxxxxxx-xxxx-xxxx-xxxx-xxxx  |
| instance_id          | aabbccdd-1111-2222-3333-aabbccddeeff |
| instance_name        | web-server-1                         |
| status               | protected                            |
| volumes_added        | 2                                    |
+----------------------+--------------------------------------+
List all members to confirm:
openstack dr protection-group member-list prod-three-tier
+-------------+--------------+---------------+-----------+
| Member ID | Instance ID | Instance Name | Status |
+-------------+--------------+---------------+-----------+
| member-aa.. | aabbccdd-... | web-server-1 | protected |
| member-bb.. | bbccddee-... | app-server-1 | protected |
| member-cc.. | ccddee11-... | db-server-1 | protected |
+-------------+--------------+---------------+-----------+
Example 3 — Inspect a group after failover
After a failover to site-b, inspect the Protection Group to confirm current_primary_site has swapped and the group shows failed_over status.
openstack dr protection-group show prod-three-tier
+----------------------+-----------------------------------------+
| Field                | Value                                   |
+----------------------+-----------------------------------------+
| id                   | pg-12345678-1234-1234-1234-123456789abc |
| name                 | prod-three-tier                         |
| status               | failed_over                             |
| replication_type     | async                                   |
| primary_site         | site-a                                  |
| secondary_site       | site-b                                  |
| current_primary_site | site-b                                  |
| failover_count       | 1                                       |
| last_failover_at     | 2025-03-15T14:22:10Z                    |
+----------------------+-----------------------------------------+
Note that primary_site retains the original designation while current_primary_site reflects where workloads are currently running.
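That comparison is scriptable. A sketch, assuming the field names shown above are exposed through the standard `-f value -c <column>` formatter:

```shell
is_failed_over() {
  # Succeeds when the group is running away from its original site.
  local group=$1 orig cur
  orig=$(openstack dr protection-group show "$group" \
           -f value -c primary_site)
  cur=$(openstack dr protection-group show "$group" \
           -f value -c current_primary_site)
  [ "$orig" != "$cur" ]
}

# Usage:
# is_failed_over prod-three-tier && echo "running at the DR site"
```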
Example 4 — Remove a member and delete the group
Remove the database server from the group, then delete the group entirely.
# Get the member ID
openstack dr protection-group member-list prod-three-tier
# Remove the specific member
openstack dr protection-group member-remove prod-three-tier \
--member-id member-cc-xxxxxxxx-xxxx-xxxx-xxxx
Member member-cc-xxxxxxxx removed from prod-three-tier.
Consistency group updated. Metadata synced to both sites.
# Delete the Protection Group
openstack dr protection-group delete prod-three-tier
Deletion of protection group 'prod-three-tier' initiated.
Consistency group cg-87654321-... will be removed from both sites.
Protection group deleted.
Issue: protection-group create fails with "volume type not found on secondary site"
Symptom: The create command returns an error such as:
ERROR: Volume type 'replicated-ssd' not found on site 'site-b' or does not support replication.
Likely cause: The volume type either does not exist on the secondary site, or it exists but is missing the required replication_enabled='<is> True' property.
Fix:
- Run `openstack dr site list-volume-types site-b` and confirm the volume type appears.
- If it is missing, create a matching volume type on the secondary site with the same name and properties:

  openstack volume type create replicated-ssd \
    --property replication_enabled='<is> True' \
    --property replication_type='<in> async' \
    --property volume_backend_name=pure@backend-b

- Retry the `protection-group create` command.
Issue: member-add fails with "volume type mismatch"
Symptom:
ERROR: Volume 'vol-xxxxxxxx' uses type 'standard-ssd', expected 'replicated-ssd'.
Likely cause: One or more of the VM's attached volumes use a different Cinder volume type than the one specified when the Protection Group was created. All volumes belonging to member VMs must use the Protection Group's designated volume type.
Fix: Retype the non-conforming volumes (`openstack volume set --type`, which performs a Cinder retype) before adding the VM to the group. If retyping is not feasible, create a new VM with volumes of the correct type.
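In python-openstackclient the retype operation is exposed through `openstack volume set --type` (the legacy Cinder client calls it `cinder retype`). The sketch below retypes every attached volume of one server; it assumes a client recent enough to provide `openstack server volume list`, and the names are placeholders:

```shell
retype_vm_volumes() {
  # $1 = server name or ID, $2 = target volume type
  local server=$1 target_type=$2 vol
  # "-f value -c ID" prints one attached volume ID per line
  for vol in $(openstack server volume list "$server" -f value -c ID); do
    # --retype-policy on-demand allows migration between backends
    openstack volume set --type "$target_type" \
      --retype-policy on-demand "$vol" || return 1
  done
}

# Usage:
# retype_vm_volumes web-server-1 replicated-ssd
```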
Issue: member-add (or any write operation) fails with "remote site unreachable"
Symptom:
ERROR: Cannot modify protection group - remote site 'site-b' is unreachable.
Metadata sync is required for all modifications.
Likely cause: The protector-api or Keystone endpoint on the peer site is down, or network connectivity between the management plane and the peer site has been interrupted. This is by design — metadata modifications are blocked when the peer is unreachable to prevent divergence.
Fix:
- Confirm the remote site's API is responsive: `openstack dr site validate site-b`.
- If the site is temporarily unreachable, wait for it to recover and then retry the operation.
- If the remote site is confirmed down for an extended period and you need to perform an unplanned failover, use `openstack dr protection-group failover --type unplanned` instead. Do not attempt to bypass the metadata sync block for normal modifications.
- Once the remote site recovers, run `openstack dr protection-group sync-force <group>` to reconcile any version gap before making further changes.
Issue: protection-group delete fails with "group is in transitional state"
Symptom:
ERROR: Cannot delete protection group 'prod-three-tier': current status is 'failing_over'.
Likely cause: A DR operation (failover, failback, or test failover) is currently in progress or has stalled on this group.
Fix:
- Check the active operation: `openstack dr operation list --protection-group prod-three-tier`.
- If the operation is still running (`status: running`), wait for it to complete.
- If the operation has stalled in an intermediate state (`status: failed`, `status: rolling_back`), review the operation error message: `openstack dr operation show <operation-id>`.
- Resolve the underlying issue, allow the operation to reach a terminal state (`completed` or `failed`), and then retry the delete.
Issue: protection-group show displays current_primary_site different from primary_site
Symptom: After a failover the primary_site field still shows site-a but current_primary_site shows site-b, and the group status is failed_over.
Likely cause: This is expected and correct behavior, not an error. primary_site records the original designation; current_primary_site tracks where workloads are actually running after one or more failovers.
Action: No fix required. Use current_primary_site to determine where VMs are running. When you execute a failback, current_primary_site will revert to site-a and the group status will return to active.