Installing the OSC Plugin
Installing python-protectorclient and verifying 'openstack dr' commands are available
The python-protectorclient package provides the openstack dr command group — the primary coordination layer for Trilio Site Recovery operations. Because the protector-api and protector-engine services on each site never communicate directly with each other, the CLI plugin acts as the bridge: it authenticates to both your primary and secondary OpenStack sites simultaneously and orchestrates metadata synchronization across them. Installing this plugin gives you access to all 21 openstack dr subcommands needed to register sites, manage Protection Groups, execute failovers, and monitor DR operations from a single workstation or management server.
Before installing the plugin, ensure the following are in place:
- Python: 3.8 or later
- OpenStack client: python-openstackclient installed and functional (the plugin registers itself into the openstack command; it cannot operate standalone)
- pip: Available and up to date (pip install --upgrade pip)
- Network access: Your workstation or management server must be able to reach the Keystone endpoints on both the primary and secondary OpenStack sites
- Credentials: Valid Keystone credentials for both sites — the plugin authenticates to each site independently using standard OS_* environment variables or clouds.yaml entries
- Permissions: The OpenStack user you authenticate with must have at minimum the member role on the target project; the dr_site_admin role is additionally required for site-level failover operations
- Protector services: protector-api and protector-engine must already be deployed and running on each site before any openstack dr commands can succeed
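The local tooling requirements above can be sanity-checked from the shell before proceeding. This is a sketch that covers only the Python, pip, and CLI checks; it cannot verify network reachability, credentials, or that the Protector services are running:

```shell
# Check Python >= 3.8
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version' \
  && echo "python: ok"

# Check pip is available
python3 -m pip --version >/dev/null 2>&1 \
  && echo "pip: ok" || echo "pip: MISSING"

# Check the openstack CLI is on PATH
command -v openstack >/dev/null 2>&1 \
  && echo "openstack CLI: ok" || echo "openstack CLI: MISSING"
```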
Step 1 — Install the package
Install python-protectorclient from PyPI:
pip install python-protectorclient
If you are working from a source checkout (for example, a pre-release build provided by Trilio), install in editable mode instead:
git clone https://github.com/openstack/python-protectorclient
cd python-protectorclient
pip install -e .
Tip: Using a Python virtual environment is recommended to avoid conflicts with system-managed OpenStack packages:
python3 -m venv ~/.venvs/protectorclient
source ~/.venvs/protectorclient/bin/activate
pip install python-openstackclient python-protectorclient
Step 2 — Verify the entry point is registered
The package registers itself via a setup.cfg entry point (openstack.cli.extension = protector = protectorclient.osc.plugin). Confirm the OpenStack CLI has picked it up:
openstack --help | grep -E '^ dr'
You should see one or more lines beginning with dr in the output. If the grep returns nothing, the plugin is not loaded — see the Troubleshooting section.
Step 3 — List all available DR commands
Confirm the full command set is available:
openstack help dr
The output should list all 21 commands across the dr site, dr protection group, and dr operation namespaces.
Step 4 — Confirm a specific command loads without error
openstack dr site list --help
A successful response prints the command's usage and option flags. Any ImportError or EntryPointLoadError at this stage indicates a packaging problem — see Troubleshooting.
The plugin itself requires minimal configuration beyond standard OpenStack authentication. The key settings are:
Authentication environment variables
The plugin uses the same OS_* environment variables as every other OpenStack CLI command. Because DR operations span two sites, you will typically maintain two separate credential files and source whichever site you want to treat as the "local" site for a given session:
# ~/site-a-openrc (primary site)
export OS_AUTH_URL=http://site-a-controller:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_PROJECT_NAME=production
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
# ~/site-b-openrc (secondary / DR site)
export OS_AUTH_URL=http://site-b-controller:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_PROJECT_NAME=production
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
Source one file before running commands against that site:
source ~/site-a-openrc
openstack dr protection group list
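As an alternative to maintaining two openrc files, both sets of credentials can live in a single clouds.yaml, which the prerequisites above already mention as a supported option. A sketch — the cloud names site-a and site-b and all values are illustrative placeholders matching the openrc examples above:

```yaml
# ~/.config/openstack/clouds.yaml
clouds:
  site-a:
    auth:
      auth_url: http://site-a-controller:5000/v3
      username: admin
      password: password
      project_name: production
      user_domain_name: Default
      project_domain_name: Default
  site-b:
    auth:
      auth_url: http://site-b-controller:5000/v3
      username: admin
      password: password
      project_name: production
      user_domain_name: Default
      project_domain_name: Default
```

With this in place, select a site per command with --os-cloud instead of sourcing a file, for example: openstack --os-cloud site-a dr site list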
API version pin (optional)
If you need to pin the plugin to a specific Protector API version, set:
export OS_PROTECTOR_API_VERSION=1
This variable is optional. When unset, the plugin negotiates the highest mutually supported version with the protector-api endpoint.
Protector API endpoint discovery
The plugin discovers the protector-api endpoint through the Keystone service catalog. Ensure that protector-api is registered in the catalog on each site during service deployment. No plugin-side configuration is required once the catalog entry exists.
Debug output
Enable verbose output for any command using the global --debug flag or -vvv:
openstack --debug dr site list
openstack -vvv dr failover <pg-id> --failover-type planned
This is useful when diagnosing authentication failures or endpoint resolution problems.
After installation, all DR commands follow the pattern openstack dr <resource> <action>. The plugin integrates fully with standard OpenStack CLI output controls, so you can use -f json, -f yaml, -f value, -f csv, and -c <column> on any command.
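The structured formats are what make scripting practical. The snippet below is a sketch that parses a stand-in for openstack dr site list -f json; the JSON field names mirror the example site table later in this document:

```shell
# Stand-in for: openstack dr site list -f json
SITES_JSON='[{"Name": "cluster1", "Type": "PRIMARY"}, {"Name": "cluster2", "Type": "SECONDARY"}]'

# Print one "name type" pair per line
echo "$SITES_JSON" | python3 -c '
import json, sys

for site in json.load(sys.stdin):
    print(site["Name"], site["Type"])
'
# Output:
#   cluster1 PRIMARY
#   cluster2 SECONDARY
```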
Authenticating to both sites
Source the credentials for the site you are operating against before running commands. For operations that read or write Protection Group metadata — which must stay synchronized across both sites — the plugin internally authenticates to the peer site using the credentials embedded in the Protection Group's site registration records. You do not need to switch credentials mid-command for normal DR operations.
Typical session flow
# 1. Work against the primary site
source ~/site-a-openrc
# 2. Confirm both sites are registered
openstack dr site list
# 3. Check the state of your protection groups
openstack dr protection group list
# 4. Review members of a specific group
openstack dr protection group member list <pg-id-or-name>
# 5. Trigger a planned failover and capture the operation ID
OP_ID=$(openstack dr failover <pg-id> --failover-type planned -f value -c id)
# 6. Poll the operation until it completes
watch -n 5 "openstack dr operation show $OP_ID"
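For unattended scripts, the watch invocation above can be replaced with a loop that exits on a terminal state. This is a sketch: get_status is a mock standing in for openstack dr operation show "$OP_ID" -f value -c status, and the terminal state names (completed, failed, cancelled) are assumptions, not confirmed Protector states:

```shell
#!/bin/sh
# Poll until the operation reaches a terminal state.
# Replace get_status with:
#   openstack dr operation show "$OP_ID" -f value -c status
get_status() { echo "completed"; }   # mock for illustration

while :; do
  status=$(get_status)
  case "$status" in
    completed|failed|cancelled) break ;;
  esac
  sleep 5
done
echo "operation finished with status: $status"
```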
Scripting with -f value
The -f value -c <column> pattern extracts a single field, making the plugin straightforward to use in automation scripts:
PG_ID=$(openstack dr protection group create prod-web \
--primary-site cluster1 \
--secondary-site cluster2 \
-f value -c id)
openstack dr protection group member add $PG_ID --instance <vm-uuid>
Metadata synchronization awareness
The Protector service blocks modifications to a Protection Group when the peer site is unreachable, to prevent metadata divergence. If a command fails with a sync-related error, use openstack dr metadata sync <pg> to check and repair the sync state before retrying the operation.
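When a modification fails only because sync is momentarily catching up, a small retry wrapper avoids babysitting the command. A sketch — the retry helper is hypothetical shell, not part of the plugin:

```shell
#!/bin/sh
# retry N CMD...: run CMD up to N times, pausing between attempts
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Hypothetical usage after repairing sync state:
#   openstack dr metadata sync "$PG_ID"
#   retry 3 openstack dr protection group member add "$PG_ID" --instance "$VM_UUID"
retry 2 true && echo "command succeeded"
```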
Example 1 — Confirm the plugin is installed and list commands
openstack --help | grep -E '^ dr'
Expected output (abbreviated):
dr failback Execute failback for a protection group
dr failover Execute failover for a protection group
dr metadata sync Synchronize metadata between sites
dr operation cancel Cancel a running DR operation
dr operation list List DR operations
dr operation show Show DR operation progress
dr protection group ... (multiple subcommands)
dr site ... (multiple subcommands)
dr test failover Execute a non-disruptive DR test
dr test failover cleanup Cleanup resources from a test failover
Example 2 — Verify the plugin loads and show command help
openstack dr site list --help
Expected output:
usage: openstack dr site list [-h] [-f {csv,json,table,value,yaml}]
[-c COLUMN] ...
List all DR sites
...
If you see this help output, the plugin is correctly installed and registered.
Example 3 — Register both sites after sourcing primary credentials
source ~/site-a-openrc
openstack dr site create cluster1 \
--site-type PRIMARY \
--auth-url http://site-a-controller:5000/v3 \
--region USEAST
openstack dr site create cluster2 \
--site-type SECONDARY \
--auth-url http://site-b-controller:5000/v3 \
--region USEAST
openstack dr site list
Expected output:
+--------------------------------------+----------+-------------+---------+
| ID | Name | Type | Status |
+--------------------------------------+----------+-------------+---------+
| a1b2c3d4-... | cluster1 | PRIMARY | active |
| e5f6a7b8-... | cluster2 | SECONDARY | active |
+--------------------------------------+----------+-------------+---------+
Example 4 — Confirm a command reaches the protector-api endpoint
After registering sites, list protection groups to confirm end-to-end connectivity from the CLI to the running protector-api service:
openstack dr protection group list
A healthy response (empty table is fine if no groups exist yet):
+----+------+--------+------------------+
| ID | Name | Status | Current Site |
+----+------+--------+------------------+
+----+------+--------+------------------+
An HTTP error or connection refused at this point indicates either that protector-api is not running on the target site, or that the Keystone service catalog entry is missing.
Issue 1 — openstack --help | grep dr returns nothing
Symptom: After installing python-protectorclient, no dr commands appear in the OpenStack CLI help output.
Likely cause: The package was installed into a different Python environment than the one providing the openstack executable, so the entry point was not registered where the CLI can find it.
Fix: Check which Python is running the openstack command, then install the plugin into the same environment:
which openstack
# Example: /usr/local/bin/openstack
# Identify the Python interpreter it uses
head -1 $(which openstack)
# Example: #!/usr/local/bin/python3
# Reinstall the plugin using that interpreter explicitly
/usr/local/bin/python3 -m pip install python-protectorclient
If you are using a virtual environment, activate it before both installing the plugin and running openstack commands.
Issue 2 — ImportError or EntryPointLoadError when running any openstack dr command
Symptom: Running openstack dr site list (or any dr command) prints a traceback with ImportError or a message like Could not load 'protector = protectorclient.osc.plugin'.
Likely cause: The python-protectorclient package installed successfully but a dependency it requires is missing or incompatible.
Fix:
# Reinstall with dependency resolution
pip install --upgrade python-protectorclient
# If installing from source in editable mode, ensure dependencies are installed
pip install -e .
# Confirm the entry point is registered
pip show python-protectorclient
Issue 3 — openstack dr commands fail with authentication or endpoint errors
Symptom: Commands return 401 Unauthorized, 503 Service Unavailable, or keystoneauth1.exceptions.catalog.EndpointNotFound.
Likely cause A (401): The OS_* environment variables are not sourced, are stale, or point to the wrong site.
Fix: Source the correct credential file and confirm the token is valid:
source ~/site-a-openrc
openstack token issue # confirms basic auth works
openstack dr site list
Likely cause B (EndpointNotFound): The protector-api service has not been registered in the Keystone service catalog on this site.
Fix: Verify the catalog entry exists (requires admin credentials):
openstack catalog list | grep protector
If no entry appears, the protector-api service registration step was skipped during deployment. Refer to the Protector deployment guide to register the service and its endpoints.
Likely cause C (503): The protector-api process is not running on the target site.
Fix: Check the service status on the controller node of the affected site. The API binds on port 8788 by default:
curl http://<site-controller>:8788/
Issue 4 — Commands succeed but metadata sync errors block Protection Group modifications
Symptom: Attempts to add members, update, or delete a Protection Group fail with an error indicating the peer site is unreachable or metadata is out of sync.
Likely cause: The Protector service enforces strict metadata consistency — modifications are blocked when the peer site cannot be reached, to prevent divergence between the two sites' databases.
Fix: Verify connectivity to the peer site, then check and repair sync state:
openstack dr metadata sync <pg-id-or-name>
If the peer site is genuinely unreachable (disaster scenario), follow the unplanned failover procedure rather than attempting to modify the Protection Group.
Issue 5 — Horizon container cannot load the plugin after manual deployment
Symptom: After copying protectorclient files into the Horizon container, a 500 Internal Server Error is returned by the dashboard.
Likely cause: Files copied with docker cp default to 600 permissions; Apache/www-data cannot read them.
Fix: Apply correct permissions after copying:
docker exec horizon chmod -R 755 /var/lib/kolla/venv/lib/python3.10/site-packages/protectorclient
docker exec horizon find /var/lib/kolla/venv/lib/python3.10/site-packages/protectorclient \
-type f -exec chmod 644 {} \;
docker restart horizon