CLI Reference (openstack dr)
Complete reference for the protectorclient OpenStack CLI plugin. All DR operations can be performed via 'openstack dr ...' commands.
The openstack dr command group is the primary interface for managing disaster recovery operations in Trilio Site Recovery for OpenStack. It is provided by the protectorclient OSC plugin, which extends the standard OpenStack CLI with DR-specific subcommands. Because DR operations span two independent OpenStack clouds, the plugin authenticates against both your primary and secondary site credentials and acts as the coordination layer for metadata synchronization between them — no direct service-to-service communication between sites is required. Use openstack dr commands to register sites, manage Protection Groups and their VM members, configure replication policies, execute and monitor failover and failback operations, and check system health.
openstack dr <command-group> <subcommand> [options] [arguments]
Top-level command groups:
openstack dr site <create|list|show|delete|validate>
openstack dr protection-group <create|list|show|delete>
openstack dr member <add|remove>
openstack dr policy <create|show>
openstack dr failover <protection-group>
openstack dr failback <protection-group>
openstack dr test-failover <protection-group>
openstack dr test-cleanup <protection-group>
openstack dr operation <list|show>
openstack dr health show
Global Flags
The following global flags apply to every openstack dr command. Command-specific flags are documented under each subcommand.
| Flag | Short | Default | Description |
|---|---|---|---|
| --os-cloud | | OS_CLOUD env var | Named cloud entry from clouds.yaml to use for the primary site. For cross-site operations, use --os-cloud for the primary site and --secondary-cloud for the secondary site. |
| --secondary-cloud | | OS_SECONDARY_CLOUD env var | Named cloud entry from clouds.yaml to use for the secondary site. Required for all cross-site operations (site registration, Protection Group create/delete, failover, failback, test failover, member add/remove). |
| --os-auth-url | | OS_AUTH_URL env var | Keystone endpoint for the primary site. Alternative to --os-cloud when not using clouds.yaml. |
| --os-project-name | | OS_PROJECT_NAME env var | Project scope for the primary site. |
| --os-username | | OS_USERNAME env var | Username for the primary site. |
| --os-password | | OS_PASSWORD env var | Password for the primary site. |
| --os-user-domain-name | | OS_USER_DOMAIN_NAME env var | User domain for the primary site. |
| --os-project-domain-name | | OS_PROJECT_DOMAIN_NAME env var | Project domain for the primary site. |
| --format | -f | table | Output format. Accepted values: table, json, yaml. Use -f json for scripting and automation. |
| --column | -c | (all columns) | Select specific output columns. Can be repeated. |
| --noindent | | false | Disable JSON/YAML indentation (useful for log pipelines). |
| --max-width | | 0 (no limit) | Maximum display width for table output. |
| --fit-width | | false | Fit table to terminal width. |
| --print-empty | | false | Print empty table if there are no results. |
Site Management
Site commands register and manage the two OpenStack cloud endpoints that Trilio Site Recovery coordinates between. Sites are admin-scoped resources. The site-type designation is a starting-point label — primary and secondary roles swap dynamically when a failover occurs.
| Subcommand | Description |
|---|---|
| openstack dr site create | Register a new site (primary or secondary) with its Keystone endpoint and optional region. Requires admin credentials. |
| openstack dr site list | List all registered sites and their current status (active, unreachable, error). |
| openstack dr site show <site> | Display full details for a named or UUID-identified site, including capabilities and reachability status. |
| openstack dr site delete <site> | Remove a site registration. Blocked if any Protection Groups reference this site. |
| openstack dr site validate <site> | Test connectivity to a site's Keystone, Nova, Cinder, and Neutron endpoints and report capabilities. |
Protection Group Management
Protection Groups are the core unit of DR. Each group maps 1:1 to a Cinder Consistency Group and a Pure Storage Protection Group (or Pod for sync replication). Creating a Protection Group automatically provisions the Consistency Groups on both sites.
| Subcommand | Description |
|---|---|
| openstack dr protection-group create | Create a Protection Group, specifying primary and secondary sites, replication type (async or sync), and the Cinder volume type to use. Automatically creates matching Consistency Groups on both sites. |
| openstack dr protection-group list | List all Protection Groups in the current project, with status and current-primary-site. |
| openstack dr protection-group show <pg> | Show full details for a Protection Group, including its Consistency Group ID, member count, current failover state, replication policy summary, and metadata sync version. |
| openstack dr protection-group delete <pg> | Delete a Protection Group and its associated Consistency Groups. Blocked if a DR operation is in progress or if the peer site is unreachable (to prevent metadata divergence). |
Member Management
Members are Nova VM instances added to a Protection Group. When you add a member, the plugin discovers all Cinder volumes attached to that instance and adds them to the Consistency Group automatically. All volumes must use the volume type configured for the Protection Group and must have replication_enabled='<is> True'.
| Subcommand | Description |
|---|---|
| openstack dr member add <pg> --instance-id <uuid> | Add a VM to the Protection Group. Discovers and enrolls all attached volumes into the Consistency Group. Syncs metadata to the secondary site immediately; blocked if the secondary site is unreachable. |
| openstack dr member remove <pg> --member-id <uuid> | Remove a VM member from the Protection Group. Removes its volumes from the Consistency Group and syncs the change to the secondary site. |
Replication Policy
The replication policy binds a Protection Group to the underlying Pure Storage FlashArray arrays on both sites and sets RPO parameters.
| Subcommand | Description |
|---|---|
| openstack dr policy create <pg> | Create or replace the replication policy for a Protection Group, supplying FlashArray connection details, the Pure Storage Protection Group name, replication interval (async), and RPO target. |
| openstack dr policy show <pg> | Display the current replication policy. API tokens are masked in output. |
DR Operations
These commands execute the core DR workflows. Each invocation creates a DR Operation record that you can track with openstack dr operation show. Because primary/secondary designations are workload-relative and swap on failover, the plugin requires credentials for both sites (--os-cloud and --secondary-cloud) so it can authenticate to whichever site is currently active.
| Subcommand | Description |
|---|---|
| openstack dr failover <pg> | Fail workloads over from the current primary site to the secondary site. Supports both planned (graceful) and unplanned (immediate) modes. Prompts for network and flavor mappings unless provided as flags. |
| openstack dr failback <pg> | Return workloads to the original primary site after a failover. Supports optional reverse-replication to resync data from the current active site back to the original primary before cutting over. |
| openstack dr test-failover <pg> | Execute a non-disruptive DR drill. Instantiates VMs on the secondary site from the latest replicated snapshot without stopping or altering the primary workloads. |
| openstack dr test-cleanup <pg> | Tear down test instances and temporary resources created by openstack dr test-failover. Must be run before executing a real failover on a group that has an active test. |
Operation Monitoring
Every DR operation (failover, failback, test failover, test cleanup) is tracked asynchronously. Use these commands to monitor progress, inspect step-level detail, and retrieve error messages.
| Subcommand | Description |
|---|---|
| openstack dr operation list | List all DR operations for the current project, with type, status, progress percentage, and timestamps. |
| openstack dr operation show <operation-id> | Show full detail for a single operation, including per-step completion status, error messages, and result data. |
Health
| Subcommand | Description |
|---|---|
| openstack dr health show | Report the health of the local Protector service (protector-api, protector-engine), database connectivity, and the reachability of all registered remote sites. |
Environment Variables
The protectorclient plugin respects all standard OS_* OpenStack environment variables for the primary site. The following variables are specific to cross-site DR operations or have DR-relevant behavior:
| Variable | Default | Description |
|---|---|---|
| OS_CLOUD | (none) | Name of the primary-site cloud entry in clouds.yaml. If set, individual OS_* credential variables for the primary site are ignored. |
| OS_SECONDARY_CLOUD | (none) | Name of the secondary-site cloud entry in clouds.yaml. When set, you do not need to pass --secondary-cloud on each command. Required for all cross-site operations. |
| OS_AUTH_URL | (none) | Keystone v3 endpoint for the primary site. Used when OS_CLOUD is not set. |
| OS_PROJECT_NAME | (none) | Project name for the primary site authentication scope. |
| OS_USERNAME | (none) | Username for the primary site. |
| OS_PASSWORD | (none) | Password for the primary site. |
| OS_USER_DOMAIN_NAME | Default | User domain for the primary site. |
| OS_PROJECT_DOMAIN_NAME | Default | Project domain for the primary site. |
| OS_REGION_NAME | (none) | Region name for the primary site endpoint lookup. |
| PROTECTOR_API_VERSION | 1.2 | Override the OpenStack-API-Version: protector microversion header sent to the Protector API. Useful when testing against an older deployment. |
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Command completed successfully. For async DR operations (failover, failback, test-failover, test-cleanup), exit code 0 means the operation was accepted and queued, not necessarily completed. Use openstack dr operation show to confirm completion. |
| 1 | General error. Includes authentication failures, malformed arguments, missing required flags, and unexpected API errors. The error message is written to stderr. |
| 2 | Invalid usage — unrecognized command, missing positional argument, or mutually exclusive flags used together. |
| 3 | Remote site unreachable. Returned when a command that requires both sites (member add/remove, protection-group create/delete, failover, failback) cannot reach the peer site. No changes are made; the Protection Group is not modified, to prevent metadata divergence. |
| 4 | Operation conflict. Returned when a DR operation is already running on the specified Protection Group, or when test-failover is attempted on a group that already has an active test. |
| 5 | Precondition failed. Returned when validation checks fail before an operation starts — for example, a volume type that does not have replication_enabled='<is> True', a Protection Group with no replication policy configured, or a Protection Group with zero members. |
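Automation scripts should branch on these exit codes rather than scrape stderr. A minimal sketch in POSIX shell; the dr_exit_msg helper name is our own, only the codes themselves come from the table above:

```shell
# Map protectorclient exit codes to short human-readable outcomes.
# dr_exit_msg is an illustrative helper, not part of the plugin.
dr_exit_msg() {
    case "$1" in
        0) echo "accepted (for async DR operations, confirm completion with operation show)" ;;
        1) echo "general error (authentication, arguments, or API)" ;;
        2) echo "invalid usage" ;;
        3) echo "remote site unreachable; no changes made" ;;
        4) echo "operation conflict; a DR operation is already running" ;;
        5) echo "precondition failed; check policy, members, and volume type" ;;
        *) echo "unknown exit code $1" ;;
    esac
}

# Example (requires a live deployment):
# openstack dr failover prod-web-app --type planned
# echo "failover: $(dr_exit_msg $?)"
```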
Authenticate to both sites using clouds.yaml
All cross-site commands require credentials for both sites. Define both in ~/.config/openstack/clouds.yaml and reference them with --os-cloud and --secondary-cloud.
# ~/.config/openstack/clouds.yaml
clouds:
  site-a:
    auth:
      auth_url: http://site-a-controller:5000/v3
      project_name: production
      username: dr-admin
      password: s3cr3t
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
  site-b:
    auth:
      auth_url: http://site-b-controller:5000/v3
      project_name: production
      username: dr-admin
      password: s3cr3t
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
Set environment variables so you don't need to repeat them on every command:
export OS_CLOUD=site-a
export OS_SECONDARY_CLOUD=site-b
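Before running any cross-site command, it can be worth confirming that both entries actually authenticate. A small sketch using the standard openstack token issue command, which needs no DR plugin; the check_clouds helper name is our own:

```shell
# Verify that each named clouds.yaml entry can obtain a Keystone token.
# check_clouds is an illustrative helper, not part of the plugin.
check_clouds() {
    for cloud in "$@"; do
        if ! openstack --os-cloud "$cloud" token issue -f value -c expires >/dev/null; then
            echo "authentication failed for cloud '$cloud'" >&2
            return 1
        fi
    done
}

# Example:
# check_clouds "$OS_CLOUD" "$OS_SECONDARY_CLOUD" && echo "both sites authenticated"
```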
Register sites
Register both OpenStack clouds with the Protector service. This is a one-time admin operation.
openstack dr site create \
--name site-a \
--description "Primary datacenter - Boston" \
--site-type primary \
--auth-url http://site-a-controller:5000/v3 \
--region-name RegionOne \
--os-cloud site-a
Expected output:
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| id          | 3a1b2c3d-0001-0001-0001-aabbccdd0001 |
| name        | site-a                               |
| site_type   | primary                              |
| auth_url    | http://site-a-controller:5000/v3     |
| region_name | RegionOne                            |
| status      | active                               |
+-------------+--------------------------------------+
openstack dr site create \
--name site-b \
--description "Secondary datacenter - Seattle" \
--site-type secondary \
--auth-url http://site-b-controller:5000/v3 \
--region-name RegionOne \
--os-cloud site-a
Validate that both sites are reachable and their capabilities are compatible:
openstack dr site validate site-a
openstack dr site validate site-b
List registered sites
openstack dr site list
Expected output:
+--------------------------------------+--------+-------------+--------+
| ID                                   | Name   | Site Type   | Status |
+--------------------------------------+--------+-------------+--------+
| 3a1b2c3d-0001-0001-0001-aabbccdd0001 | site-a | primary     | active |
| 4b2c3d4e-0002-0002-0002-bbccdd0002bb | site-b | secondary   | active |
+--------------------------------------+--------+-------------+--------+
Create a Protection Group
Create a Protection Group for async replication. The volume type you specify must have replication_enabled='<is> True' and replication_type='<in> async' on both sites. The Consistency Group is created automatically.
openstack dr protection-group create \
--name prod-web-app \
--description "Production web application" \
--replication-type async \
--primary-site site-a \
--secondary-site site-b \
--volume-type replicated-ssd
Expected output:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| id                     | pg-12345678-1234-1234-1234-123456789 |
| name                   | prod-web-app                         |
| status                 | active                               |
| replication_type       | async                                |
| primary_site           | site-a                               |
| secondary_site         | site-b                               |
| current_primary_site   | site-a                               |
| consistency_group_id   | cg-87654321-4321-4321-4321-876543210 |
| failover_count         | 0                                    |
| last_failover_at       | None                                 |
+------------------------+--------------------------------------+
Configure a replication policy
Bind the Protection Group to the FlashArray arrays on both sites. Store the API tokens securely — they are masked in all command output.
openstack dr policy create prod-web-app \
--primary-fa-url https://flasharray-a.example.com \
--primary-fa-token "T-12345678-abcd-efgh-ijkl-mnopqrstuvwx" \
--secondary-fa-url https://flasharray-b.example.com \
--secondary-fa-token "T-87654321-dcba-hgfe-lkji-xwvutsrqponm" \
--pure-pg-name "pg-prod-web-app" \
--replication-interval 300 \
--rpo-minutes 15
Expected output:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| protection_group_id    | pg-12345678-1234-1234-1234-123456789 |
| primary_fa_url         | https://flasharray-a.example.com     |
| primary_fa_api_token   | *** (masked)                         |
| secondary_fa_url       | https://flasharray-b.example.com     |
| secondary_fa_api_token | *** (masked)                         |
| pure_pg_name           | pg-prod-web-app                      |
| replication_interval   | 300                                  |
| rpo_minutes            | 15                                   |
+------------------------+--------------------------------------+
Add VMs to a Protection Group
Add each VM by its Nova instance UUID. The plugin discovers and enrolls all attached volumes automatically. All volumes must use the volume type configured for the group.
openstack dr member add prod-web-app \
--instance-id a1b2c3d4-0001-0001-0001-aabbccdd1111
Expected output:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| id                | mem-aaaabbbb-1111-1111-1111-ccccdddd |
| instance_id       | a1b2c3d4-0001-0001-0001-aabbccdd1111 |
| instance_name     | web-server-1                         |
| status            | protected                            |
| volumes_added     | 2                                    |
+-------------------+--------------------------------------+
Add additional VMs:
openstack dr member add prod-web-app \
--instance-id b2c3d4e5-0002-0002-0002-bbccdd2222bb
openstack dr member add prod-web-app \
--instance-id c3d4e5f6-0003-0003-0003-ccdd3333cccc
Note: If the secondary site is unreachable when you run dr member add, the command returns exit code 3 and no changes are made. Metadata must be kept in sync across both sites at all times.
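Because exit code 3 guarantees that no changes were made, member operations are safe to retry after a transient WAN blip. A sketch of a retry wrapper; retry_dr and its RETRY_DR_* tuning variables are our own, not part of the plugin:

```shell
# Retry a command while it exits with code 3 (remote site unreachable).
# Safe to retry because on exit 3 the CLI makes no changes at all.
# retry_dr is an illustrative helper, not part of the plugin.
retry_dr() {
    max="${RETRY_DR_MAX:-5}"
    delay="${RETRY_DR_DELAY:-10}"
    attempt=0
    while :; do
        if "$@"; then rc=0; else rc=$?; fi
        if [ "$rc" -ne 3 ]; then
            return "$rc"   # success, or a non-retryable error
        fi
        attempt=$((attempt + 1))
        if [ "$attempt" -ge "$max" ]; then
            return 3       # give up; surface the unreachable status
        fi
        echo "secondary site unreachable; retry ${attempt}/${max} in ${delay}s" >&2
        sleep "$delay"
    done
}

# Example (requires a live deployment):
# retry_dr openstack dr member add prod-web-app \
#     --instance-id b2c3d4e5-0002-0002-0002-bbccdd2222bb
```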
Show a Protection Group
openstack dr protection-group show prod-web-app
Expected output:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| id                     | pg-12345678-1234-1234-1234-123456789 |
| name                   | prod-web-app                         |
| status                 | active                               |
| replication_type       | async                                |
| primary_site           | site-a                               |
| secondary_site         | site-b                               |
| current_primary_site   | site-a                               |
| consistency_group_id   | cg-87654321-4321-4321-4321-876543210 |
| member_count           | 3                                    |
| failover_count         | 0                                    |
| last_failover_at       | None                                 |
| metadata_version       | 7                                    |
| remote_sync_status     | SYNCED                               |
| remote_sync_version    | 7                                    |
+------------------------+--------------------------------------+
Execute a planned failover
A planned failover gracefully shuts down VMs on the primary site before activating them on the secondary site. Provide network and flavor mappings so the plugin can reconstruct VMs on the target site.
openstack dr failover prod-web-app \
--type planned \
--network-mapping net-primary-web=net-secondary-web \
--network-mapping net-primary-db=net-secondary-db \
--flavor-mapping m1.large=m2.large
Expected output:
+----------------+---------------------------------------+
| Field          | Value                                 |
+----------------+---------------------------------------+
| operation_id   | op-abcdef12-1234-5678-abcd-ef12345678 |
| operation_type | failover                              |
| status         | running                               |
| progress       | 10                                    |
+----------------+---------------------------------------+
Monitor progress until completion:
watch -n 5 openstack dr operation show op-abcdef12-1234-5678-abcd-ef12345678
Expected final output:
+------------------+---------------------------------------+
| Field            | Value                                 |
+------------------+---------------------------------------+
| id               | op-abcdef12-1234-5678-abcd-ef12345678 |
| operation_type   | failover                              |
| status           | completed                             |
| progress         | 100                                   |
| started_at       | 2025-06-01T10:00:00Z                  |
| completed_at     | 2025-06-01T10:07:43Z                  |
| error_message    | None                                  |
+------------------+---------------------------------------+
Execute an unplanned failover
Use --type unplanned when the primary site is unavailable. The plugin does not attempt to gracefully shut down primary VMs. Omit --os-cloud for the primary site or set it to the last-known credentials — the operation proceeds using only the secondary site credentials.
openstack dr failover prod-web-app \
--type unplanned \
--network-mapping net-primary-web=net-secondary-web \
--network-mapping net-primary-db=net-secondary-db
After an unplanned failover, the metadata sync status for the original primary site will show UNREACHABLE. Modifications to the Protection Group remain blocked until the original primary site recovers and you run openstack dr protection-group sync-force.
Execute a test failover (DR drill)
A test failover instantiates VMs on the secondary site from the latest replicated snapshot without affecting primary workloads.
openstack dr test-failover prod-web-app \
--network-mapping net-primary-web=net-dr-test-web
Expected output:
+----------------+---------------------------------------+
| Field          | Value                                 |
+----------------+---------------------------------------+
| operation_id   | op-ddddeeee-5678-5678-5678-ddddeeee56 |
| operation_type | test_failover                         |
| status         | running                               |
| progress       | 0                                     |
+----------------+---------------------------------------+
After validating your test VMs, clean up:
openstack dr test-cleanup prod-web-app
You must run test-cleanup before running another test failover or a real failover on the same Protection Group.
Execute a failback
Once the original primary site has recovered and data has been resynced, fail workloads back. Use --reverse-replication to resync any writes that occurred on the secondary during the failover period before cutting over.
openstack dr failback prod-web-app \
--type planned \
--reverse-replication \
--network-mapping net-secondary-web=net-primary-web \
--network-mapping net-secondary-db=net-primary-db
List all DR operations
openstack dr operation list
Expected output:
+---------------------------------------+---------------+--------------+-----------+----------+
| ID                                    | Type          | PG Name      | Status    | Progress |
+---------------------------------------+---------------+--------------+-----------+----------+
| op-abcdef12-1234-5678-abcd-ef12345678 | failover      | prod-web-app | completed | 100      |
| op-ddddeeee-5678-5678-5678-ddddeeee56 | test_failover | prod-web-app | completed | 100      |
+---------------------------------------+---------------+--------------+-----------+----------+
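When scripting against this listing, -f value output is easier to filter than the table. A sketch that extracts only failed operation IDs, assuming the ID and Status columns shown above (-f value emits space-separated columns, one row per line); failed_ids is our own helper name:

```shell
# Keep only the IDs of operations whose status is "failed".
# failed_ids is an illustrative helper, not part of the plugin.
failed_ids() {
    awk '$2 == "failed" { print $1 }'
}

# Usage against a live deployment:
# openstack dr operation list -f value -c ID -c Status | failed_ids
```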
Check service health
openstack dr health show
Expected output:
+------------------+---------+-------------------------------------------+
| Component        | Status  | Detail                                    |
+------------------+---------+-------------------------------------------+
| protector-api    | ok      | Responding on port 8788                   |
| protector-engine | ok      | Running                                   |
| database         | ok      | Connected (MariaDB)                       |
| site-a           | ok      | Reachable, version compatible             |
| site-b           | ok      | Reachable, version compatible             |
+------------------+---------+-------------------------------------------+
Use JSON output for scripting
When integrating with automation pipelines, use -f json to get machine-readable output:
openstack dr operation show op-abcdef12-1234-5678-abcd-ef12345678 -f json
Expected output:
{
  "id": "op-abcdef12-1234-5678-abcd-ef12345678",
  "protection_group_id": "pg-12345678-1234-1234-1234-123456789",
  "operation_type": "failover",
  "status": "completed",
  "source_site_id": "3a1b2c3d-0001-0001-0001-aabbccdd0001",
  "target_site_id": "4b2c3d4e-0002-0002-0002-bbccdd0002bb",
  "progress": 100,
  "started_at": "2025-06-01T10:00:00Z",
  "completed_at": "2025-06-01T10:07:43Z",
  "error_message": null,
  "steps_completed": [
    "validate_sites",
    "snapshot_consistency_group",
    "promote_volumes",
    "recreate_instances",
    "update_protection_group"
  ],
  "steps_failed": []
}
Extract just the status field with --column:
openstack dr operation show op-abcdef12-1234-5678-abcd-ef12345678 \
-f value -c status
Expected output:
completed
Poll an operation in a shell script until it finishes:
OP_ID="op-abcdef12-1234-5678-abcd-ef12345678"
while true; do
    STATUS=$(openstack dr operation show "$OP_ID" -f value -c status)
    echo "$(date -u +%H:%M:%S) status=${STATUS}"
    case "$STATUS" in
        completed) echo "Operation succeeded."; exit 0 ;;
        failed) echo "Operation failed. Check 'openstack dr operation show $OP_ID'."; exit 1 ;;
    esac
    sleep 10
done