Site Recovery for OpenStack
Guide

Admin Panel

Site registration and tenant mapping for cloud administrators


Overview

The Admin panel is the administrative interface through which cloud platform engineers register OpenStack sites and create the tenant mappings that allow the Trilio Site Recovery service to orchestrate disaster recovery across two independent clouds. Each site runs its own Keystone, Nova, Cinder, and Neutron endpoints, and the Protector service on each site has no direct service-to-service communication with its peer, so an administrator must explicitly introduce each site to the service and map the cross-site tenant identities before any tenant can create Protection Groups or execute DR operations. This guide walks you through registering your primary and secondary sites, validating connectivity between them, and creating the tenant mappings that allow the OSC plugin to synchronize protection metadata bidirectionally across the two Keystone domains.


Prerequisites

Before using the Admin panel, ensure the following conditions are met:

  • Two independent OpenStack clouds are deployed and reachable: a primary site and a secondary (DR) site, each with its own Nova, Cinder, Neutron, and Keystone endpoints.
  • Protector services are running on both sites: protector-api (port 8788) and protector-engine must be active on each site.
  • Service user exists in Keystone on each site: the protector service user must hold the admin role in the service project on both clouds.
    openstack role add --project service --user protector admin
    
  • Network reachability: each site's Keystone endpoint (port 5000) and Protector API endpoint (port 8788) must be reachable from the host where you run the OSC CLI plugin (protectorclient).
  • protectorclient OSC plugin is installed on your workstation and configured with clouds.yaml entries for both sites (see the Configuration section).
  • Admin credentials for both OpenStack clouds are available.
  • Cinder volume types with replication_enabled='<is> True' and a matching replication_type property are configured on both sites before tenants can create Protection Groups — this is a hard dependency, not a post-registration step.
  • Pure Storage FlashArray replication connections are established between the two arrays (not required to register sites, but required before Protection Groups can be created).
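The network-reachability prerequisite can be verified before touching the CLI. The following is a minimal illustrative Python probe (not part of the product; hostnames are placeholders, substitute your controller addresses) that attempts a TCP connection to each required endpoint:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused, or timed out
        return False

def check_site(host: str) -> dict:
    # Required per site: Keystone (5000) and the Protector API (8788).
    return {port: tcp_reachable(host, port) for port in (5000, 8788)}
```

Run check_site("site-a-controller") and check_site("site-b-controller") from the workstation that will host the OSC plugin; any False entry must be resolved before you proceed.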

Installation

The Admin panel capabilities are delivered through the protectorclient OSC plugin. There is no separate Admin panel binary to install; the plugin adds openstack protector subcommands, including all site and dr tenant mapping administrative subcommands.

Step 1 — Confirm the OSC plugin is installed

Verify that protectorclient is installed and that the Protector service endpoint is registered in Keystone on the site you are administering:

openstack protector --help

You should see subcommand groups including site, protection-group, and operation. If you see a No module named protectorclient error, install the plugin:

pip install python-protectorclient

Step 2 — Register the Protector service endpoint in Keystone (if not already done)

Run the following on each site, substituting the correct controller hostname:

source ~/admin-openrc

openstack service create --name protector \
  --description "OpenStack Disaster Recovery Service" protector

openstack endpoint create --region RegionOne \
  protector public http://controller:8788/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  protector internal http://controller:8788/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne \
  protector admin http://controller:8788/v1/%\(tenant_id\)s

Repeat this on the secondary site controller with its own hostname.
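Since the three endpoint-create commands differ only in the interface name, they can be generated per site. The helper below is an illustrative sketch (not a product utility); the URL template mirrors the commands above, including the shell-escaped parentheses:

```python
def protector_endpoint_commands(controller: str, region: str = "RegionOne") -> list[str]:
    """Build the three 'openstack endpoint create' commands for one site."""
    # Backslashes keep the shell from interpreting the parentheses.
    url = f"http://{controller}:8788/v1/%\\(tenant_id\\)s"
    return [
        f"openstack endpoint create --region {region} protector {iface} {url}"
        for iface in ("public", "internal", "admin")
    ]
```

For example, print("\n".join(protector_endpoint_commands("site-b-controller"))) emits the block to run on the secondary site.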

Step 3 — Configure clouds.yaml for multi-site access

The OSC plugin authenticates to both sites to coordinate metadata synchronization. Create or update ~/.config/openstack/clouds.yaml:

clouds:
  site-a:
    auth:
      auth_url: http://site-a-controller:5000/v3
      project_name: admin
      username: admin
      password: YOUR_SITE_A_ADMIN_PASSWORD
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne

  site-b:
    auth:
      auth_url: http://site-b-controller:5000/v3
      project_name: admin
      username: admin
      password: YOUR_SITE_B_ADMIN_PASSWORD
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne

Verify access to both clouds:

openstack --os-cloud site-a token issue
openstack --os-cloud site-b token issue

Both commands must succeed before you proceed.
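If token issuance fails for one cloud, first confirm that both entries actually exist in clouds.yaml. The naive scanner below is an illustrative sketch that avoids a YAML library dependency: it looks for top-level entries under clouds: by their two-space indentation.

```python
def missing_clouds(clouds_yaml_text: str, required: list[str]) -> list[str]:
    """Return the names in `required` with no two-space-indented 'name:' entry."""
    defined = {
        line.strip().rstrip(":")
        for line in clouds_yaml_text.splitlines()
        # Cloud names sit at exactly two spaces of indentation and end with ':'.
        if line.startswith("  ") and not line.startswith("   ") and line.rstrip().endswith(":")
    }
    return [name for name in required if name not in defined]
```

Feed it the contents of ~/.config/openstack/clouds.yaml with required=["site-a", "site-b"]; an empty result means both entries are present and the failure lies in the credentials or the auth_url.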


Configuration

The Admin panel relies on two configuration surfaces: the protector.conf file on each site's controller node (for service-level settings) and the clouds.yaml file on the administrator's workstation (for multi-site CLI access). The entries below are the settings most relevant to site registration and tenant mapping.

protector.conf — service configuration (on each site)

  • [api] bind_host (default: 0.0.0.0; any IP or hostname): Interface the Protector API listens on. Set to a specific management IP in production.
  • [api] bind_port (default: 8788; integer): Port the Protector API listens on. Must be open between sites.
  • [keystone_authtoken] auth_url (Keystone v3 endpoint URL): Used by the service to validate inbound tokens. Must match the site's Keystone.
  • [keystone_authtoken] project_name (default: service): The service project in which the protector service user exists.
  • [service_credentials] default_trust_roles (default: member,_member_; comma-separated role names): Roles delegated to the Protector service via Keystone trusts when acting on behalf of tenants. The member role must be sufficient for Cinder volume manage/unmanage operations (requires Cinder policy adjustment — see the Deployment Prerequisites guide).
  • [oslo_policy] policy_file (default: /etc/protector/policy.yaml; file path): RBAC policy file. The default policy restricts all site management endpoints to the admin role.

Site registration is admin-only by design. The default policy.yaml ships with:

"context_is_admin": "role:admin"
"admin_or_owner": "is_admin:True or project_id:%(project_id)s"

All /v1/admin/sites endpoints enforce context_is_admin. Do not relax this rule — site registration grants cross-site operational authority.

Site object fields

When you register a site, the following fields are stored:

  • name (required): Unique human-readable identifier used in CLI commands and Protection Group references.
  • description (optional): Free-text description (e.g., datacenter location).
  • site_type (required): primary or secondary. Note: this is an initial designation only — it swaps dynamically on failover, and the architecture treats both sites symmetrically.
  • auth_url (required): Keystone v3 endpoint for this site (e.g., http://site-a:5000/v3). Used by the engine to authenticate cross-site operations.
  • region_name (optional): OpenStack region name if the site uses regions.

Sync behavior affecting Admin operations

Site registration itself does not require peer-site reachability — you register each site independently on its own Protector instance. However, once Protection Groups exist, all metadata modifications are blocked if the peer site is unreachable. This is not configurable; it is a hard constraint to prevent metadata divergence across sites.


Usage

Registering sites

You register each site against the Protector API running on that same site. You do not register Site B's credentials on Site A's Protector — instead, both site registrations are performed through the OSC CLI plugin, which authenticates to each site independently.

Register the primary site:

source ~/site-a-admin-openrc
# or: export OS_CLOUD=site-a

openstack protector site create \
  --name site-a \
  --description "Primary datacenter - Boston" \
  --site-type primary \
  --auth-url http://10.0.1.10:5000/v3 \
  --region-name RegionOne

Register the secondary site:

source ~/site-b-admin-openrc
# or: export OS_CLOUD=site-b

openstack protector site create \
  --name site-b \
  --description "Secondary datacenter - Seattle" \
  --site-type secondary \
  --auth-url http://10.0.2.10:5000/v3 \
  --region-name RegionOne

Validating site connectivity

After registration, validate that each site can reach the other's endpoints and that the replication-capable Cinder backend is visible:

openstack protector site validate site-a
openstack protector site validate site-b

Validation checks Keystone reachability, Cinder backend availability, and whether at least one volume type with replication_enabled='<is> True' exists. A site whose validation fails will have a status of error or unreachable and cannot be used as a Protection Group target until the condition is resolved.

You can list available replication-capable volume types per site to confirm your Cinder configuration before tenants begin creating Protection Groups:

openstack protector site list-volume-types site-a
openstack protector site list-volume-types site-b

Managing tenant mappings

Because each site has an independent Keystone, the same tenant (project) has a different UUID on Site A than on Site B. You must create a tenant mapping so the Protector service can correlate the two project identities during cross-site metadata synchronization and failover operations.

Create a tenant mapping once per tenant pair:

openstack dr tenant mapping create \
  --local-tenant <local-project-id> \
  --remote-site site-b \
  --remote-tenant <remote-project-id> \
  --description "Production tenant mapping"

This mapping syncs automatically to the remote site. You do not need to create the inverse mapping manually.

To retrieve project IDs for both sites:

# On site-a
export OS_CLOUD=site-a
openstack project list

# On site-b
export OS_CLOUD=site-b
openstack project list
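When many tenants need mapping and projects share the same name on both sites, the per-tenant lookup above can be scripted. The helper below is a hypothetical sketch (not a product feature): given two name-to-UUID dicts, one per site, it emits the corresponding mapping commands and silently skips names present on only one site.

```python
def mapping_commands(local_projects: dict, remote_projects: dict, remote_site: str) -> list[str]:
    """One 'openstack dr tenant mapping create' command per name present on both sites."""
    cmds = []
    for name in sorted(set(local_projects) & set(remote_projects)):
        cmds.append(
            "openstack dr tenant mapping create"
            f" --local-tenant {local_projects[name]}"
            f" --remote-site {remote_site}"
            f" --remote-tenant {remote_projects[name]}"
            f' --description "{name} mapping"'
        )
    return cmds
```

Populate the two dicts from each site's openstack project list output, and review manually any project name that appears on only one site before onboarding that tenant.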

Listing and inspecting registered sites

# List all registered sites
openstack protector site list

# Show full details of a specific site
openstack protector site show site-a

The status field in the output reflects the site's last-known reachability state: active, unreachable, or error.

Updating and deleting sites

# Update site description or auth URL
openstack protector site set site-a \
  --description "Primary datacenter - Boston (upgraded)"

# Delete a site (only safe if no Protection Groups reference it)
openstack protector site delete site-a

Warning: Deleting a site that is referenced by active Protection Groups will break those groups' ability to fail over. Confirm no Protection Groups reference the site before deleting.


Examples

Example 1 — Full site registration for an async DR pair

This example registers a primary site in Boston and a secondary site in Seattle, then validates both.

# Authenticate to Site A (admin)
export OS_CLOUD=site-a

# Register primary site
openstack protector site create \
  --name site-a \
  --description "Primary datacenter - Boston" \
  --site-type primary \
  --auth-url http://10.0.1.10:5000/v3 \
  --region-name RegionOne

Expected output:

+---------------+--------------------------------------+
| Field         | Value                                |
+---------------+--------------------------------------+
| id            | a1b2c3d4-1234-5678-abcd-000000000001 |
| name          | site-a                               |
| description   | Primary datacenter - Boston          |
| site_type     | primary                              |
| auth_url      | http://10.0.1.10:5000/v3             |
| region_name   | RegionOne                            |
| status        | active                               |
| created_at    | 2025-06-01T10:00:00Z                 |
+---------------+--------------------------------------+

# Authenticate to Site B (admin)
export OS_CLOUD=site-b

# Register secondary site
openstack protector site create \
  --name site-b \
  --description "Secondary datacenter - Seattle" \
  --site-type secondary \
  --auth-url http://10.0.2.10:5000/v3 \
  --region-name RegionOne

Expected output:

+---------------+--------------------------------------+
| Field         | Value                                |
+---------------+--------------------------------------+
| id            | b2c3d4e5-2345-6789-bcde-000000000002 |
| name          | site-b                               |
| description   | Secondary datacenter - Seattle       |
| site_type     | secondary                            |
| auth_url      | http://10.0.2.10:5000/v3             |
| region_name   | RegionOne                            |
| status        | active                               |
| created_at    | 2025-06-01T10:01:00Z                 |
+---------------+--------------------------------------+

# Validate both sites
export OS_CLOUD=site-a
openstack protector site validate site-a

export OS_CLOUD=site-b
openstack protector site validate site-b

Expected output (for each validation):

Validating site site-a...
  ✅ Keystone reachable
  ✅ Cinder backend available
  ✅ Replication-capable volume types found: replicated-ssd
Site status: active

Example 2 — Creating a tenant mapping for a cross-site tenant pair

This example maps a production project that exists in both clouds under different UUIDs.

# Get project UUID on Site A
export OS_CLOUD=site-a
SITE_A_PROJECT=$(openstack project show production-project -f value -c id)
echo "Site A project ID: $SITE_A_PROJECT"

# Get project UUID on Site B
export OS_CLOUD=site-b
SITE_B_PROJECT=$(openstack project show production-project -f value -c id)
echo "Site B project ID: $SITE_B_PROJECT"

# Create the mapping (run against either site)
export OS_CLOUD=site-a
openstack dr tenant mapping create \
  --local-tenant $SITE_A_PROJECT \
  --remote-site site-b \
  --remote-tenant $SITE_B_PROJECT \
  --description "Production tenant mapping"

Expected output:

+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| id               | m1n2o3p4-3456-7890-cdef-000000000003 |
| local_tenant     | aaaa-1111-bbbb-2222-cccccccccccc     |
| remote_site      | site-b                               |
| remote_tenant    | dddd-3333-eeee-4444-ffffffffffff     |
| description      | Production tenant mapping            |
| created_at       | 2025-06-01T10:05:00Z                 |
+------------------+--------------------------------------+

Mapping synced to site-b ✅

Example 3 — Listing replication-capable volume types before onboarding tenants

Before asking tenants to create Protection Groups, confirm that matching replication-enabled volume types exist on both sites. A volume type must have replication_enabled='<is> True' and a replication_type property to be eligible.

export OS_CLOUD=site-a
openstack protector site list-volume-types site-a

export OS_CLOUD=site-b
openstack protector site list-volume-types site-b

Expected output (each site):

+----------------+---------------------------+------------------+-------------------+
| Name           | Backend                   | Replication      | Replication Type  |
+----------------+---------------------------+------------------+-------------------+
| replicated-ssd | pure@backend-a            | True             | async             |
| local-ssd      | pure@backend-local        | False            | N/A               |
+----------------+---------------------------+------------------+-------------------+

Only replicated-ssd is eligible for use in a Protection Group. If no eligible volume type appears on either site, do not proceed with tenant onboarding — Protection Group creation will fail at validation.
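The eligibility rule can be stated as a predicate over a volume type's extra specs. A minimal sketch, assuming the specs are available as a plain dict:

```python
def replication_eligible(extra_specs: dict) -> bool:
    """Eligible: replication_enabled is exactly '<is> True' and replication_type is set."""
    return (
        extra_specs.get("replication_enabled") == "<is> True"
        and bool(extra_specs.get("replication_type"))
    )
```

Applied to the table above, only the replicated-ssd specs satisfy both conditions.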


Troubleshooting

Site validation fails with Keystone reachable but status: error

Symptom: openstack protector site validate <site> reports Keystone as reachable but the site status remains error.

Likely cause: The Protector service user does not have the admin role in the service project on that site, or the Cinder backend is misconfigured.

Fix:

# Verify the protector service user role
export OS_CLOUD=<affected-site>
openstack role assignment list --user protector --project service --names

# If the admin role is missing, add it
openstack role add --project service --user protector admin

# Retry validation
openstack protector site validate <site-name>

openstack protector site create returns 403 Forbidden

Symptom: Site creation fails with an HTTP 403 error.

Likely cause: The credentials you are using do not have the admin role, or the policy.yaml on the Protector service has been modified to restrict site management further.

Fix: Ensure you are sourcing admin credentials for the site you are registering (not tenant-level credentials), and verify the Protector policy file:

grep context_is_admin /etc/protector/policy.yaml
# Expected: "context_is_admin": "role:admin"

# Confirm you can authenticate and that your user holds the admin role
export OS_CLOUD=site-a
openstack token issue
openstack role assignment list --user admin --project admin --names

Tenant mapping creation succeeds locally but reports sync failure to remote site

Symptom: openstack dr tenant mapping create completes but outputs a warning that the mapping could not be synced to the remote site.

Likely cause: The remote site's Protector API (port 8788) is unreachable from the local host or the remote Keystone endpoint stored in the site registration is incorrect.

Fix:

# Check reachability of the remote Protector API
curl -v http://<remote-site-controller>:8788/

# Check the auth_url recorded for the remote site
export OS_CLOUD=site-a
openstack protector site show site-b
# Verify the auth_url field is correct

# If the auth_url is wrong, update it
openstack protector site set site-b \
  --auth-url http://correct-controller:5000/v3

# Retry the sync manually after fixing connectivity
openstack dr tenant mapping sync <mapping-id>

openstack protector site list-volume-types returns no replication-eligible types

Symptom: The command returns volume types but none show Replication: True.

Likely cause: The Cinder volume types on this site are missing the replication_enabled='<is> True' extra spec, or the Pure Storage Cinder driver is not reporting replication capability.

Fix: On the affected site, set the required extra specs on the volume type:

export OS_CLOUD=<affected-site>

openstack volume type set replicated-ssd \
  --property replication_enabled='<is> True' \
  --property replication_type='<in> async' \
  --property volume_backend_name=pure@backend-a

# Verify
openstack volume type show replicated-ssd --format json | grep replication

Note that the volume type name must match on both sites for the Protection Group to be created successfully.
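To catch the name-mismatch case early, compare the eligible volume type names reported by the two sites. An illustrative sketch:

```python
def volume_type_mismatches(site_a_names, site_b_names):
    """Return (only_on_a, only_on_b): names present on one site but not the other."""
    a, b = set(site_a_names), set(site_b_names)
    return sorted(a - b), sorted(b - a)
```

Two empty lists mean every eligible type name exists on both sites, so Protection Group creation will not fail on the type-name lookup.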


Site status becomes unreachable after initial registration

Symptom: A previously active site now shows status: unreachable in openstack protector site show.

Likely cause: Network connectivity between the sites has been interrupted, or the remote site's Protector API or Keystone service has gone down.

Effect: Any attempt to modify a Protection Group that references this site will be blocked to prevent metadata divergence. This is expected behavior, not a bug.

Fix:

  1. Restore network connectivity or restart the remote Protector services:
    # On the affected remote site controller
    systemctl restart protector-api
    systemctl restart protector-engine
    systemctl status protector-api protector-engine
    
  2. Once connectivity is restored, re-validate the site:
    openstack protector site validate <site-name>
    
  3. If Protection Groups have diverged due to an outage during a failover, use the force-sync command to reconcile metadata:
    openstack protector protection-group sync-force <pg-name>
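After connectivity is restored, the site may take a moment to transition back to active. The polling sketch below is illustrative: get_status stands in for whatever fetches the status field (for example, by shelling out to openstack protector site show), and the loop retries until the status flips or the attempts are exhausted.

```python
import time

def wait_for_active(get_status, attempts: int = 10, delay: float = 30.0) -> bool:
    """Poll get_status() until it returns 'active'; give up after `attempts` tries."""
    for _ in range(attempts):
        if get_status() == "active":
            return True
        time.sleep(delay)
    return False
```

A False return after the final attempt means the site is still unreachable or in error, and the connectivity fix should be revisited before retrying any Protection Group operation.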