Site Recovery for OpenStack
CLI reference

CLI Reference (openstack dr)

Complete reference for the protectorclient OpenStack CLI plugin. All DR operations can be performed via 'openstack dr ...' commands.


Description

The openstack dr command group is the primary interface for managing disaster recovery operations in Trilio Site Recovery for OpenStack. It is provided by the protectorclient OSC plugin, which extends the standard OpenStack CLI with DR-specific subcommands. Because DR operations span two independent OpenStack clouds, the plugin authenticates against both your primary and secondary site credentials and acts as the coordination layer for metadata synchronization between them — no direct service-to-service communication between sites is required. Use openstack dr commands to register sites, manage Protection Groups and their VM members, configure replication policies, execute and monitor failover and failback operations, and check system health.


Synopsis
openstack dr <command-group> <subcommand> [options] [arguments]

Top-level command groups:

openstack dr site          <create|list|show|delete|validate>
openstack dr protection-group <create|list|show|delete>
openstack dr member        <add|remove>
openstack dr policy        <create|show>
openstack dr failover      <protection-group>
openstack dr failback      <protection-group>
openstack dr test-failover <protection-group>
openstack dr test-cleanup  <protection-group>
openstack dr operation     <list|show>
openstack dr health        show

Flags

The following global flags apply to every openstack dr command. Command-specific flags are documented under each subcommand.

--os-cloud (default: OS_CLOUD env var)
  Named cloud entry from clouds.yaml to use for the primary site. For cross-site operations, use --os-cloud for the primary site and --secondary-cloud for the secondary site.

--secondary-cloud (default: OS_SECONDARY_CLOUD env var)
  Named cloud entry from clouds.yaml to use for the secondary site. Required for all cross-site operations (site registration, Protection Group create/delete, failover, failback, test failover, member add/remove).

--os-auth-url (default: OS_AUTH_URL)
  Keystone endpoint for the primary site. Alternative to --os-cloud when not using clouds.yaml.

--os-project-name (default: OS_PROJECT_NAME)
  Project scope for the primary site.

--os-username (default: OS_USERNAME)
  Username for the primary site.

--os-password (default: OS_PASSWORD)
  Password for the primary site.

--os-user-domain-name (default: OS_USER_DOMAIN_NAME)
  User domain for the primary site.

--os-project-domain-name (default: OS_PROJECT_DOMAIN_NAME)
  Project domain for the primary site.

-f, --format (default: table)
  Output format. Accepted values: table, json, yaml. Use -f json for scripting and automation.

-c, --column (default: all columns)
  Select specific output columns. Can be repeated.

--noindent (default: false)
  Disable JSON/YAML indentation (useful for log pipelines).

--max-width (default: 0, no limit)
  Maximum display width for table output.

--fit-width (default: false)
  Fit table to terminal width.

--print-empty (default: false)
  Print empty table if there are no results.

Subcommands

Site Management

Site commands register and manage the two OpenStack cloud endpoints that Trilio Site Recovery coordinates between. Sites are admin-scoped resources. The site-type designation is a starting-point label — primary and secondary roles swap dynamically when a failover occurs.

openstack dr site create
  Register a new site (primary or secondary) with its Keystone endpoint and optional region. Requires admin credentials.

openstack dr site list
  List all registered sites and their current status (active, unreachable, error).

openstack dr site show <site>
  Display full details for a named or UUID-identified site, including capabilities and reachability status.

openstack dr site delete <site>
  Remove a site registration. Blocked if any Protection Groups reference this site.

openstack dr site validate <site>
  Test connectivity to a site's Keystone, Nova, Cinder, and Neutron endpoints and report capabilities.

Protection Group Management

Protection Groups are the core unit of DR. Each group maps 1:1 to a Cinder Consistency Group and a Pure Storage Protection Group (or Pod for sync replication). Creating a Protection Group automatically provisions the Consistency Groups on both sites.

openstack dr protection-group create
  Create a Protection Group, specifying primary and secondary sites, replication type (async or sync), and the Cinder volume type to use. Automatically creates matching Consistency Groups on both sites.

openstack dr protection-group list
  List all Protection Groups in the current project, with status and current-primary-site.

openstack dr protection-group show <pg>
  Show full details for a Protection Group, including its Consistency Group ID, member count, current failover state, replication policy summary, and metadata sync version.

openstack dr protection-group delete <pg>
  Delete a Protection Group and its associated Consistency Groups. Blocked if a DR operation is in progress or if the peer site is unreachable (to prevent metadata divergence).

Member Management

Members are Nova VM instances added to a Protection Group. When you add a member, the plugin discovers all Cinder volumes attached to that instance and adds them to the Consistency Group automatically. All volumes must use the volume type configured for the Protection Group and must have replication_enabled='<is> True'.

openstack dr member add <pg> --instance-id <uuid>
  Add a VM to the Protection Group. Discovers and enrolls all attached volumes into the Consistency Group. Syncs metadata to the secondary site immediately; blocked if the secondary site is unreachable.

openstack dr member remove <pg> --member-id <uuid>
  Remove a VM member from the Protection Group. Removes its volumes from the Consistency Group and syncs the change to the secondary site.
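Before adding members, it can help to confirm that the volume type actually carries the required extra spec. The following sketch is a hypothetical pre-check helper (not part of the plugin); it inspects the properties string that openstack volume type show <type> -f value -c properties prints:

```shell
#!/bin/sh
# Hypothetical pre-check helper, not part of protectorclient.
# Pass it the properties string from:
#   openstack volume type show <type> -f value -c properties
is_replication_enabled() {
  case "$1" in
    *"replication_enabled='<is> True'"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example properties string as Cinder prints it for a replicated type:
props="replication_enabled='<is> True', replication_type='<in> async'"
if is_replication_enabled "$props"; then
  echo "volume type is replication-enabled"
else
  echo "volume type is NOT replication-enabled" >&2
fi
```

Running this check before openstack dr member add avoids hitting exit code 5 (precondition failed) later.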

Replication Policy

The replication policy binds a Protection Group to the underlying Pure Storage FlashArray arrays on both sites and sets RPO parameters.

openstack dr policy create <pg>
  Create or replace the replication policy for a Protection Group, supplying FlashArray connection details, the Pure Storage Protection Group name, replication interval (async), and RPO target.

openstack dr policy show <pg>
  Display the current replication policy. API tokens are masked in output.

DR Operations

These commands execute the core DR workflows. Each invocation creates a DR Operation record that you can track with openstack dr operation show. Because primary/secondary designations are workload-relative and swap on failover, the plugin requires credentials for both sites (--os-cloud and --secondary-cloud) so it can authenticate to whichever site is currently active.

openstack dr failover <pg>
  Fail workloads over from the current primary site to the secondary site. Supports both planned (graceful) and unplanned (immediate) modes. Prompts for network and flavor mappings unless provided as flags.

openstack dr failback <pg>
  Return workloads to the original primary site after a failover. Supports optional reverse-replication to resync data from the current active site back to the original primary before cutting over.

openstack dr test-failover <pg>
  Execute a non-disruptive DR drill. Instantiates VMs on the secondary site from the latest replicated snapshot without stopping or altering the primary workloads.

openstack dr test-cleanup <pg>
  Tear down test instances and temporary resources created by openstack dr test-failover. Must be run before executing a real failover on a group that has an active test.

Operation Monitoring

Every DR operation (failover, failback, test failover, test cleanup) is tracked asynchronously. Use these commands to monitor progress, inspect step-level detail, and retrieve error messages.

openstack dr operation list
  List all DR operations for the current project, with type, status, progress percentage, and timestamps.

openstack dr operation show <operation-id>
  Show full detail for a single operation, including per-step completion status, error messages, and result data.

Health

openstack dr health show
  Report the health of the local Protector service (protector-api, protector-engine), database connectivity, and the reachability of all registered remote sites.

Environment variables

The protectorclient plugin respects all standard OS_* OpenStack environment variables for the primary site. The following variables are specific to cross-site DR operations or have DR-relevant behavior:

OS_CLOUD (default: none)
  Name of the primary-site cloud entry in clouds.yaml. If set, individual OS_* credential variables for the primary site are ignored.

OS_SECONDARY_CLOUD (default: none)
  Name of the secondary-site cloud entry in clouds.yaml. When set, you do not need to pass --secondary-cloud on each command. Required for all cross-site operations.

OS_AUTH_URL (default: none)
  Keystone v3 endpoint for the primary site. Used when OS_CLOUD is not set.

OS_PROJECT_NAME (default: none)
  Project name for the primary site authentication scope.

OS_USERNAME (default: none)
  Username for the primary site.

OS_PASSWORD (default: none)
  Password for the primary site.

OS_USER_DOMAIN_NAME (default: Default)
  User domain for the primary site.

OS_PROJECT_DOMAIN_NAME (default: Default)
  Project domain for the primary site.

OS_REGION_NAME (default: none)
  Region name for the primary site endpoint lookup.

PROTECTOR_API_VERSION (default: 1.2)
  Override the OpenStack-API-Version: protector microversion header sent to the Protector API. Useful when testing against an older deployment.

Exit codes
0
  Command completed successfully. For async DR operations (failover, failback, test-failover, test-cleanup), exit code 0 means the operation was accepted and queued, not necessarily completed. Use openstack dr operation show to confirm completion.

1
  General error. Includes authentication failures, malformed arguments, missing required flags, and unexpected API errors. The error message is written to stderr.

2
  Invalid usage: unrecognized command, missing positional argument, or mutually exclusive flags used together.

3
  Remote site unreachable. Returned when a command that requires both sites (member add/remove, protection-group create/delete, failover, failback) cannot reach the peer site. No changes are made; the Protection Group is not modified to prevent metadata divergence.

4
  Operation conflict. Returned when a DR operation is already running on the specified Protection Group, or when test-failover is attempted on a group that already has an active test.

5
  Precondition failed. Returned when validation checks fail before an operation starts, for example a volume type that does not have replication_enabled='<is> True', a Protection Group with no replication policy configured, or a Protection Group with zero members.
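In automation these codes are worth branching on, since 3 and 4 are retryable while 5 needs operator intervention. A minimal sketch (the helper name is illustrative, not part of the plugin):

```shell
#!/bin/sh
# Illustrative helper: map a protectorclient exit code to a next action.
dr_next_action() {
  case "$1" in
    0) echo proceed ;;            # accepted or succeeded
    3) echo retry ;;              # peer site unreachable, no changes made
    4) echo wait ;;               # another DR operation already running
    5) echo fix-preconditions ;;  # validation failed before starting
    *) echo abort ;;              # 1, 2, or anything unexpected
  esac
}

# Typical use after a cross-site command:
#   openstack dr member add prod-web-app --instance-id "$UUID"
#   action=$(dr_next_action $?)
dr_next_action 3
# prints: retry
```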

Examples

Authenticate to both sites using clouds.yaml

All cross-site commands require credentials for both sites. Define both in ~/.config/openstack/clouds.yaml and reference them with --os-cloud and --secondary-cloud.

# ~/.config/openstack/clouds.yaml
clouds:
  site-a:
    auth:
      auth_url: http://site-a-controller:5000/v3
      project_name: production
      username: dr-admin
      password: s3cr3t
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne

  site-b:
    auth:
      auth_url: http://site-b-controller:5000/v3
      project_name: production
      username: dr-admin
      password: s3cr3t
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne

Set environment variables so you don't need to repeat them on every command:

export OS_CLOUD=site-a
export OS_SECONDARY_CLOUD=site-b

Register sites

Register both OpenStack clouds with the Protector service. This is a one-time admin operation.

openstack dr site create \
  --name site-a \
  --description "Primary datacenter - Boston" \
  --site-type primary \
  --auth-url http://site-a-controller:5000/v3 \
  --region-name RegionOne \
  --os-cloud site-a

Expected output:

+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| id          | 3a1b2c3d-0001-0001-0001-aabbccdd0001 |
| name        | site-a                               |
| site_type   | primary                              |
| auth_url    | http://site-a-controller:5000/v3     |
| region_name | RegionOne                            |
| status      | active                               |
+-------------+--------------------------------------+

openstack dr site create \
  --name site-b \
  --description "Secondary datacenter - Seattle" \
  --site-type secondary \
  --auth-url http://site-b-controller:5000/v3 \
  --region-name RegionOne \
  --os-cloud site-a

Validate that both sites are reachable and their capabilities are compatible:

openstack dr site validate site-a
openstack dr site validate site-b

List registered sites

openstack dr site list

Expected output:

+--------------------------------------+--------+-------------+--------+
| ID                                   | Name   | Site Type   | Status |
+--------------------------------------+--------+-------------+--------+
| 3a1b2c3d-0001-0001-0001-aabbccdd0001 | site-a | primary     | active |
| 4b2c3d4e-0002-0002-0002-bbccdd0002bb | site-b | secondary   | active |
+--------------------------------------+--------+-------------+--------+

Create a Protection Group

Create a Protection Group for async replication. The volume type you specify must have replication_enabled='<is> True' and replication_type='<in> async' on both sites. The Consistency Group is created automatically.

openstack dr protection-group create \
  --name prod-web-app \
  --description "Production web application" \
  --replication-type async \
  --primary-site site-a \
  --secondary-site site-b \
  --volume-type replicated-ssd

Expected output:

+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| id                     | pg-12345678-1234-1234-1234-123456789 |
| name                   | prod-web-app                         |
| status                 | active                               |
| replication_type       | async                                |
| primary_site           | site-a                               |
| secondary_site         | site-b                               |
| current_primary_site   | site-a                               |
| consistency_group_id   | cg-87654321-4321-4321-4321-876543210 |
| failover_count         | 0                                    |
| last_failover_at       | None                                 |
+------------------------+--------------------------------------+

Configure a replication policy

Bind the Protection Group to the FlashArray arrays on both sites. Store the API tokens securely — they are masked in all command output.

openstack dr policy create prod-web-app \
  --primary-fa-url https://flasharray-a.example.com \
  --primary-fa-token "T-12345678-abcd-efgh-ijkl-mnopqrstuvwx" \
  --secondary-fa-url https://flasharray-b.example.com \
  --secondary-fa-token "T-87654321-dcba-hgfe-lkji-xwvutsrqponm" \
  --pure-pg-name "pg-prod-web-app" \
  --replication-interval 300 \
  --rpo-minutes 15

Expected output:

+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| protection_group_id    | pg-12345678-1234-1234-1234-123456789 |
| primary_fa_url         | https://flasharray-a.example.com     |
| primary_fa_api_token   | *** (masked)                         |
| secondary_fa_url       | https://flasharray-b.example.com     |
| secondary_fa_api_token | *** (masked)                         |
| pure_pg_name           | pg-prod-web-app                      |
| replication_interval   | 300                                  |
| rpo_minutes            | 15                                   |
+------------------------+--------------------------------------+

Add VMs to a Protection Group

Add each VM by its Nova instance UUID. The plugin discovers and enrolls all attached volumes automatically. All volumes must use the volume type configured for the group.

openstack dr member add prod-web-app \
  --instance-id a1b2c3d4-0001-0001-0001-aabbccdd1111

Expected output:

+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| id                | mem-aaaabbbb-1111-1111-1111-ccccdddd |
| instance_id       | a1b2c3d4-0001-0001-0001-aabbccdd1111 |
| instance_name     | web-server-1                         |
| status            | protected                            |
| volumes_added     | 2                                    |
+-------------------+--------------------------------------+

Add additional VMs:

openstack dr member add prod-web-app \
  --instance-id b2c3d4e5-0002-0002-0002-bbccdd2222bb

openstack dr member add prod-web-app \
  --instance-id c3d4e5f6-0003-0003-0003-ccdd3333cccc

Note: If the secondary site is unreachable when you run openstack dr member add, the command returns exit code 3 and no changes are made. Metadata must be kept in sync across both sites at all times.


Show a Protection Group

openstack dr protection-group show prod-web-app

Expected output:

+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| id                     | pg-12345678-1234-1234-1234-123456789 |
| name                   | prod-web-app                         |
| status                 | active                               |
| replication_type       | async                                |
| primary_site           | site-a                               |
| secondary_site         | site-b                               |
| current_primary_site   | site-a                               |
| consistency_group_id   | cg-87654321-4321-4321-4321-876543210 |
| member_count           | 3                                    |
| failover_count         | 0                                    |
| last_failover_at       | None                                 |
| metadata_version       | 7                                    |
| remote_sync_status     | SYNCED                               |
| remote_sync_version    | 7                                    |
+------------------------+--------------------------------------+

Execute a planned failover

A planned failover gracefully shuts down VMs on the primary site before activating them on the secondary site. Provide network and flavor mappings so the plugin can reconstruct VMs on the target site.

openstack dr failover prod-web-app \
  --type planned \
  --network-mapping net-primary-web=net-secondary-web \
  --network-mapping net-primary-db=net-secondary-db \
  --flavor-mapping m1.large=m2.large

Expected output:

+----------------+---------------------------------------+
| Field          | Value                                 |
+----------------+---------------------------------------+
| operation_id   | op-abcdef12-1234-5678-abcd-ef12345678 |
| operation_type | failover                              |
| status         | running                               |
| progress       | 10                                    |
+----------------+---------------------------------------+

Monitor progress until completion:

watch -n 5 openstack dr operation show op-abcdef12-1234-5678-abcd-ef12345678

Expected final output:

+------------------+---------------------------------------+
| Field            | Value                                 |
+------------------+---------------------------------------+
| id               | op-abcdef12-1234-5678-abcd-ef12345678 |
| operation_type   | failover                              |
| status           | completed                             |
| progress         | 100                                   |
| started_at       | 2025-06-01T10:00:00Z                  |
| completed_at     | 2025-06-01T10:07:43Z                  |
| error_message    | None                                  |
+------------------+---------------------------------------+
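For groups that span many networks, the repeated --network-mapping flags can be generated from a simple primary=secondary list instead of being typed by hand. The mapping-file format below is an assumption for illustration, not a plugin feature:

```shell
#!/bin/bash
# Build repeated --network-mapping arguments from "primary=secondary" pairs.
# The pair list is inlined here; in practice it could come from a file.
args=()
while IFS= read -r pair; do
  [ -n "$pair" ] && args+=(--network-mapping "$pair")
done <<'EOF'
net-primary-web=net-secondary-web
net-primary-db=net-secondary-db
EOF

echo "${args[@]}"
# Then invoke: openstack dr failover prod-web-app --type planned "${args[@]}"
```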

Execute an unplanned failover

Use --type unplanned when the primary site is unavailable. The plugin does not attempt to gracefully shut down primary VMs. Omit --os-cloud for the primary site or set it to the last-known credentials — the operation proceeds using only the secondary site credentials.

openstack dr failover prod-web-app \
  --type unplanned \
  --network-mapping net-primary-web=net-secondary-web \
  --network-mapping net-primary-db=net-secondary-db

After an unplanned failover, the metadata sync status for the original primary site will show UNREACHABLE. Modifications to the Protection Group remain blocked until the original primary site recovers and you run openstack dr protection-group sync-force.


Execute a test failover (DR drill)

A test failover instantiates VMs on the secondary site from the latest replicated snapshot without affecting primary workloads.

openstack dr test-failover prod-web-app \
  --network-mapping net-primary-web=net-dr-test-web

Expected output:

+----------------+---------------------------------------+
| Field          | Value                                 |
+----------------+---------------------------------------+
| operation_id   | op-ddddeeee-5678-5678-5678-ddddeeee56 |
| operation_type | test_failover                         |
| status         | running                               |
| progress       | 0                                     |
+----------------+---------------------------------------+

After validating your test VMs, clean up:

openstack dr test-cleanup prod-web-app

You must run test-cleanup before running another test failover or a real failover on the same Protection Group.


Execute a failback

Once the original primary site has recovered and data has been resynced, fail workloads back. Use --reverse-replication to resync any writes that occurred on the secondary during the failover period before cutting over.

openstack dr failback prod-web-app \
  --type planned \
  --reverse-replication \
  --network-mapping net-secondary-web=net-primary-web \
  --network-mapping net-secondary-db=net-primary-db

List all DR operations

openstack dr operation list

Expected output:

+---------------------------------------+---------------+--------------+-----------+----------+
| ID                                    | Type          | PG Name      | Status    | Progress |
+---------------------------------------+---------------+--------------+-----------+----------+
| op-abcdef12-1234-5678-abcd-ef12345678 | failover      | prod-web-app | completed | 100      |
| op-ddddeeee-5678-5678-5678-ddddeeee56 | test_failover | prod-web-app | completed | 100      |
+---------------------------------------+---------------+--------------+-----------+----------+

Check service health

openstack dr health show

Expected output:

+------------------+---------+-------------------------------------------+
| Component        | Status  | Detail                                    |
+------------------+---------+-------------------------------------------+
| protector-api    | ok      | Responding on port 8788                   |
| protector-engine | ok      | Running                                   |
| database         | ok      | Connected (MariaDB)                       |
| site-a           | ok      | Reachable, version compatible             |
| site-b           | ok      | Reachable, version compatible             |
+------------------+---------+-------------------------------------------+
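For periodic monitoring, the table output can be scanned for any component whose Status column is not ok. A small sketch (the function name is illustrative; pipe in the output of openstack dr health show):

```shell
#!/bin/sh
# Illustrative: print "component: status" for each non-ok row of the
# table produced by `openstack dr health show`.
check_health() {
  awk -F'|' 'NF >= 4 {
    comp = $2; status = $3
    gsub(/^ +| +$/, "", comp); gsub(/^ +| +$/, "", status)
    if (status != "ok" && status != "Status") print comp ": " status
  }'
}

# Example with a degraded engine (sample rows, not live output):
check_health <<'EOF'
| protector-api    | ok      | Responding on port 8788 |
| protector-engine | error   | Not running             |
EOF
# prints: protector-engine: error
```

An empty result means every component and site reported ok, which makes the function easy to wire into a cron job or alerting hook.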

Use JSON output for scripting

When integrating with automation pipelines, use -f json to get machine-readable output:

openstack dr operation show op-abcdef12-1234-5678-abcd-ef12345678 -f json

Expected output:

{
  "id": "op-abcdef12-1234-5678-abcd-ef12345678",
  "protection_group_id": "pg-12345678-1234-1234-1234-123456789",
  "operation_type": "failover",
  "status": "completed",
  "source_site_id": "3a1b2c3d-0001-0001-0001-aabbccdd0001",
  "target_site_id": "4b2c3d4e-0002-0002-0002-bbccdd0002bb",
  "progress": 100,
  "started_at": "2025-06-01T10:00:00Z",
  "completed_at": "2025-06-01T10:07:43Z",
  "error_message": null,
  "steps_completed": [
    "validate_sites",
    "snapshot_consistency_group",
    "promote_volumes",
    "recreate_instances",
    "update_protection_group"
  ],
  "steps_failed": []
}

Extract just the status field with --column:

openstack dr operation show op-abcdef12-1234-5678-abcd-ef12345678 \
  -f value -c status

Expected output:

completed

Poll an operation in a shell script until it finishes, with a deadline so a stuck operation cannot hang the script indefinitely:

OP_ID="op-abcdef12-1234-5678-abcd-ef12345678"
DEADLINE=$(( $(date +%s) + 1800 ))   # give up after 30 minutes
while [ "$(date +%s)" -lt "$DEADLINE" ]; do
  STATUS=$(openstack dr operation show "$OP_ID" -f value -c status)
  echo "$(date -u +%H:%M:%S) status=${STATUS}"
  case "$STATUS" in
    completed) echo "Operation succeeded."; exit 0 ;;
    failed)    echo "Operation failed. Check 'openstack dr operation show $OP_ID'."; exit 1 ;;
  esac
  sleep 10
done
echo "Timed out waiting for operation ${OP_ID}." >&2
exit 1