Site Recovery for OpenStack
Guide

Multi-tenant Considerations

Tenant isolation, RBAC, and per-tenant Protection Group ownership.


Overview

This page explains how Trilio Site Recovery for OpenStack enforces tenant isolation across Protection Groups, failover operations, and DR monitoring. Because multiple tenants may share the same OpenStack clouds at both the primary and secondary sites, the system applies strict ownership boundaries so that tenants can only see and act on their own resources. Understanding these boundaries helps you design DR workflows that are both self-service for tenants and auditable by administrators.


Prerequisites

Before working with multi-tenant Protection Groups, ensure the following are in place:

  • Two independent OpenStack clouds (primary and secondary sites), each running Nova, Cinder, Neutron, and Keystone — Victoria release or later recommended
  • Trilio Site Recovery (protector-api and protector-engine) installed and running on both sites
  • The protectorclient OSC CLI plugin (or the Horizon dashboard) installed and configured with clouds.yaml entries for both sites
  • A Keystone service user (protector) with the admin role on the service project, present on each site's Keystone
  • /etc/protector/policy.yaml deployed on each site — this file defines the RBAC rules that enforce tenant isolation
  • Site registration completed by a cloud administrator — tenants cannot self-register sites
  • Cinder volume types on both sites with replication_enabled='<is> True' and a matching replication_type property, so that tenant volumes are eligible for protection
  • Tenant projects and users provisioned in Keystone on both sites before attempting Protection Group creation

Installation

Tenant isolation is enforced at the service layer through Keystone middleware and oslo.policy — there is no separate installation step. The configuration steps below wire those mechanisms together.

1. Deploy policy.yaml on both sites

The policy file must be present at the path specified in protector.conf (/etc/protector/policy.yaml by default). Copy it to each site's controller node:

# On each site's controller node
sudo cp policy.yaml /etc/protector/policy.yaml
sudo chown protector:protector /etc/protector/policy.yaml
sudo chmod 640 /etc/protector/policy.yaml

2. Confirm the oslo_policy section in protector.conf

Verify the policy file path is declared on each site:

[oslo_policy]
policy_file = /etc/protector/policy.yaml

3. Verify Keystone middleware is active

The api-paste.ini file must route all v1 API requests through keystonemiddleware.auth_token. Confirm the relevant stanza is present on both sites:

[pipeline:protectorapi_v1]
# authtoken must appear in the pipeline before the API application
# (the application name here depends on your deployment)
pipeline = authtoken protectorapi_app

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

This middleware extracts the project_id (tenant ID) from the validated Keystone token and injects it into the request context. Every subsequent RBAC check and database query is scoped to that project_id.
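The scoping described above can be sketched in pure Python. This is an illustrative model, not the actual keystonemiddleware or Protector code, and all names in it are hypothetical:

```python
# Illustrative sketch: how a validated token becomes a scoped request context.

def build_request_context(validated_token):
    """Copy the tenant scope out of a validated Keystone token."""
    return {
        "project_id": validated_token["project"]["id"],
        "roles": validated_token["roles"],
        "is_admin": "admin" in validated_token["roles"],
    }

def scoped_query(all_rows, ctx):
    """Every database query is filtered by the context's project_id."""
    if ctx["is_admin"]:
        return all_rows
    return [r for r in all_rows if r["project_id"] == ctx["project_id"]]

rows = [{"id": 1, "project_id": "p-a"}, {"id": 2, "project_id": "p-b"}]
tenant_ctx = build_request_context(
    {"project": {"id": "p-a"}, "roles": ["member"]})
print([r["id"] for r in scoped_query(rows, tenant_ctx)])  # [1]
```

A non-admin context only ever sees rows carrying its own project_id; an admin context bypasses the filter, which is exactly the split the policy rules below formalize.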

4. Restart both services on both sites after any policy change

systemctl restart protector-api
systemctl restart protector-engine

# Confirm they are running
systemctl status protector-api
systemctl status protector-engine

Configuration

Tenant isolation is controlled by two configuration surfaces: the policy.yaml RBAC file and the Keystone endpoint URL pattern. Both must be consistent across the primary and secondary site deployments.


Keystone Endpoint URL Pattern

The Protector API endpoint is registered in Keystone with a %(tenant_id)s token in the URL path:

http://controller:8788/v1/%(tenant_id)s

Keystone expands this to the authenticated project's ID at token issuance. Every API request therefore carries the caller's project_id in the URL, which the service uses to scope all database queries. You cannot query another tenant's resources by substituting a different project ID — the Keystone middleware validates that the token's project_id matches the URL path segment.
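The expansion and the path check can be sketched as follows. This is a simplified model of the behavior described above, not the middleware's actual implementation:

```python
# Sketch of the %(tenant_id)s endpoint expansion and the URL/token match check.
ENDPOINT_TEMPLATE = "http://controller:8788/v1/%(tenant_id)s"

def expand_endpoint(project_id):
    """Keystone substitutes the authenticated project's ID into the URL."""
    return ENDPOINT_TEMPLATE % {"tenant_id": project_id}

def path_matches_token(url_path, token_project_id):
    """The first segment after /v1/ must equal the token's project_id."""
    segment = url_path.split("/v1/", 1)[1].split("/", 1)[0]
    return segment == token_project_id

print(expand_endpoint("9f8e7d6c"))
# http://controller:8788/v1/9f8e7d6c
print(path_matches_token("/v1/9f8e7d6c/protection_groups", "9f8e7d6c"))  # True
print(path_matches_token("/v1/other-project/protection_groups", "9f8e7d6c"))  # False
```

Substituting a foreign project ID into the path fails the second check, so the request is rejected before any query runs.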


policy.yaml — RBAC Rules

The policy file uses oslo.policy syntax. The defaults below enforce tenant ownership for all Protection Group and operation resources.

  • context_is_admin (role:admin): grants full cross-tenant visibility
  • admin_or_owner (is_admin:True or project_id:%(project_id)s): allows admins or the owning tenant
  • default (rule:admin_or_owner): applied to all rules that do not override it
  • protector:protection_groups:index (rule:default): list PGs; tenants see only their own, admins see all
  • protector:protection_groups:show (rule:default): show a PG; ownership enforced
  • protector:protection_groups:create (rule:default): any authenticated tenant may create a PG
  • protector:protection_groups:update (rule:default): only the owning tenant (or admin) may update
  • protector:protection_groups:delete (rule:default): only the owning tenant (or admin) may delete
  • protector:members:index (rule:default): list members; scoped to owning tenant
  • protector:members:create (rule:default): add VMs/volumes; owning tenant or admin
  • protector:members:delete (rule:default): remove members; owning tenant or admin
  • protector:operations:index (rule:default): list operations; tenants see only their own
  • protector:operations:show (rule:default): show operation detail; ownership enforced
  • protector:operations:action (rule:default): trigger failover/failback; owning tenant or admin
  • protector:policies:show (rule:default): view replication policy; owning tenant or admin
  • protector:policies:create (rule:default): create replication policy; owning tenant or admin

Site registration is admin-only. There are no policy lines above for protector:sites:* because site registration and modification are restricted to cloud administrators at the infrastructure level. Tenants interact with sites only indirectly — they reference a site name when creating a Protection Group.


Metadata Sync and Tenant Isolation

When the CLI or Horizon dashboard synchronizes Protection Group metadata between the primary and secondary site, it does so using the service account credentials stored in the site configuration — not the tenant's Keystone token. The service account has visibility into all Protection Groups, but the API layer still enforces tenant scoping: the synced metadata retains the original project_id, so tenants on the secondary site see only their own records after sync.

Because metadata sync is blocked when the peer site is unreachable, Protection Group modifications are also blocked in that state. This prevents tenant A's Protection Group state from diverging from the secondary site record while tenant B's is simultaneously being modified.
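The two invariants above (ownership survives the sync, and modifications are refused while the peer is unreachable) can be sketched as follows. The function and exception names are hypothetical, not the service's real API:

```python
# Sketch: project_id is preserved through sync, and updates are blocked
# whenever the peer site cannot be reached.

class SyncBlockedError(Exception):
    """Raised when the peer site is unreachable."""

def sync_metadata(primary_records, peer_reachable):
    if not peer_reachable:
        raise SyncBlockedError("peer site unreachable; sync blocked")
    # project_id is copied verbatim, so ownership survives the sync
    return [dict(r) for r in primary_records]

def list_protection_groups(records, ctx):
    """The API layer still scopes synced records to the caller's project."""
    return [r for r in records if r["project_id"] == ctx["project_id"]]

def update_protection_group(pg, changes, peer_reachable):
    if not peer_reachable:
        raise SyncBlockedError("modification blocked: peer unreachable")
    pg.update(changes)
    return pg

synced = sync_metadata([{"id": "pg-1", "project_id": "tenant-a"},
                        {"id": "pg-2", "project_id": "tenant-b"}],
                       peer_reachable=True)
print(list_protection_groups(synced, {"project_id": "tenant-a"}))
# [{'id': 'pg-1', 'project_id': 'tenant-a'}]
```

Refusing updates while the peer is down is a deliberate consistency-over-availability choice: a stale secondary record is worse during a real failover than a temporarily read-only Protection Group.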


Usage

How tenant ownership is established

When you create a Protection Group, the service binds it to the project_id extracted from your Keystone token. This binding is immutable — you cannot reassign a Protection Group to a different tenant. All subsequent operations (adding members, configuring policies, triggering failovers) require your token to carry the same project_id.

# Source your tenant credentials — NOT admin credentials
source ~/demo-openrc

# Create a Protection Group — it is automatically owned by your project
openstack protector protection-group create \
  --name "web-tier-pg" \
  --description "Web tier DR protection" \
  --replication-type async \
  --source-cloud site-a \
  --target-cloud site-b
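The ownership binding behind this command can be sketched as below. This is an illustrative model of the rule stated above, with hypothetical names, not the service's actual code:

```python
# Sketch: project_id is bound from the caller's token at creation time
# and can never be reassigned afterwards.
import uuid

def create_protection_group(name, token_project_id):
    """Ownership comes from the token, not from any user-supplied field."""
    return {"id": str(uuid.uuid4()), "name": name,
            "project_id": token_project_id}

def update_protection_group(pg, changes):
    if "project_id" in changes:
        raise ValueError("project_id is immutable")
    pg.update(changes)
    return pg

pg = create_protection_group("web-tier-pg", "p-demo")
print(pg["project_id"])  # p-demo
```

Any later update that tries to touch project_id is rejected outright; all other fields remain editable by the owner or an admin.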

What tenants can and cannot do

  • Create a Protection Group: Tenant ✅, Admin ✅
  • View own Protection Groups: Tenant ✅, Admin ✅
  • View other tenants' Protection Groups: Tenant ❌, Admin ✅
  • Add/remove VMs from own PG: Tenant ✅, Admin ✅
  • Trigger failover on own PG: Tenant ✅, Admin ✅
  • Trigger failover on another tenant's PG: Tenant ❌, Admin ✅
  • View own operations: Tenant ✅, Admin ✅
  • View all operations (cross-tenant): Tenant ❌, Admin ✅
  • Register or modify sites: Tenant ❌, Admin ✅
  • View site definitions: Tenant ❌ (indirect only), Admin ✅

Triggering your own failover

As a tenant, you initiate failover against your own Protection Group. The CLI authenticates to both sites using your clouds.yaml credentials and the service account for cross-site operations:

openstack protector protection-group failover <pg-id> \
  --target-cloud site-b \
  --network-mapping net-a=net-b \
  --flavor-mapping m1.large=m1.xlarge

The engine validates that <pg-id> belongs to your project_id before accepting the request. If it does not, the API returns HTTP 403.


Monitoring your own operations

You can list and inspect only the operations that belong to your project:

# List your operations
openstack protector operation list

# Inspect a specific operation
openstack protector operation show <operation-id>

Attempting to show an operation that belongs to another tenant returns HTTP 404 (not HTTP 403) to avoid leaking resource existence.
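The two status codes work together: an action on a foreign resource is denied with 403, while a read of a foreign resource is answered exactly like a read of a nonexistent one. A pure-Python sketch of that policy (hypothetical names, not the service's code):

```python
# Sketch: reads of foreign resources return 404 so existence is not leaked;
# actions on foreign resources are explicitly denied with 403.

OPS = {"op-1": {"project_id": "tenant-a"}}

def show_operation(op_id, ctx):
    op = OPS.get(op_id)
    if op is None or (not ctx["is_admin"]
                      and op["project_id"] != ctx["project_id"]):
        return 404   # identical answer for "missing" and "not yours"
    return 200

def trigger_failover(pg, ctx):
    if not ctx["is_admin"] and pg["project_id"] != ctx["project_id"]:
        return 403   # action explicitly denied
    return 202

tenant_b = {"project_id": "tenant-b", "is_admin": False}
print(show_operation("op-1", tenant_b))                        # 404
print(trigger_failover({"project_id": "tenant-a"}, tenant_b))  # 403
```

From tenant-b's point of view, op-1 is indistinguishable from an operation that was never created.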


Horizon dashboard boundaries

The Horizon plugin enforces the same boundaries visually:

  • Project dashboard: Shows only the current tenant's Protection Groups, members, policies, and operations. All create/update/delete actions operate within the tenant's project scope.
  • Admin panel: Provides a cross-tenant view of all Protection Groups and operations across all projects. Administrators use this panel for support and compliance auditing without impersonating tenants.

Examples

Example 1 — Tenant creates and manages a Protection Group

A demo project tenant sources their credentials and works entirely within their own scope:

# Source tenant (non-admin) credentials
source ~/demo-openrc

# Create Protection Group
openstack protector protection-group create \
  --name "app-pg" \
  --description "Application tier" \
  --replication-type async \
  --source-cloud site-a \
  --target-cloud site-b

Expected output:

+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| id               | a1b2c3d4-e5f6-7890-abcd-ef1234567890 |
| name             | app-pg                               |
| project_id       | 9f8e7d6c5b4a3210fedcba9876543210     |
| replication_type | async                                |
| source_cloud     | site-a                               |
| target_cloud     | site-b                               |
| status           | available                            |
+------------------+--------------------------------------+

Note that project_id is automatically set to the tenant's project — it cannot be overridden.


Example 2 — Tenant attempts to view another tenant's Protection Group

A tenant tries to inspect a Protection Group that belongs to a different project:

source ~/demo-openrc

openstack protector protection-group show bbbbb-other-tenant-pg-id

Expected output:

ERROR: No protectiongroup with a name or ID of 'bbbbb-other-tenant-pg-id' exists.

The API returns HTTP 404, not HTTP 403, to avoid confirming that the resource exists.


Example 3 — Admin views all Protection Groups across tenants

An administrator sources admin credentials and lists every Protection Group on the site:

source ~/admin-openrc

openstack protector protection-group list --all-tenants

Expected output:

+--------------------------------------+--------+----------------------------------+-----------+
| ID                                   | Name   | Project ID                       | Status    |
+--------------------------------------+--------+----------------------------------+-----------+
| a1b2c3d4-e5f6-7890-abcd-ef1234567890 | app-pg | 9f8e7d6c5b4a3210fedcba9876543210 | available |
| f0e1d2c3-b4a5-6789-cdef-012345678901 | db-pg  | 1a2b3c4d5e6f7890abcdef1234567890 | available |
+--------------------------------------+--------+----------------------------------+-----------+

Example 4 — Admin registers a site (tenant cannot)

Site registration is an admin-only action. A non-admin attempt fails immediately:

# As a non-admin tenant
source ~/demo-openrc

openstack dr site create site-b \
  --auth-url https://site-b-controller:5000 \
  --service-username protector-service \
  --service-password <password> \
  --service-project-name service \
  --type secondary

Expected output:

ERROR: Policy doesn't allow protector:sites:create to be performed. (HTTP 403)

An administrator performs the same command successfully:

source ~/admin-openrc

openstack dr site create site-b \
  --auth-url https://site-b-controller:5000 \
  --service-username protector-service \
  --service-password <password> \
  --service-project-name service \
  --type secondary

Expected output:

+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | c3d4e5f6-a7b8-9012-cdef-345678901234 |
| name                | site-b                               |
| auth_url            | https://site-b-controller:5000       |
| site_type           | secondary                            |
| status              | active                               |
+---------------------+--------------------------------------+

Example 5 — Inspecting the policy.yaml admin_or_owner rule

This rule is the foundation of all tenant isolation. You can verify it is correctly applied by reviewing the deployed file:

grep -A3 'admin_or_owner' /etc/protector/policy.yaml

Expected output:

"admin_or_owner": "is_admin:True or project_id:%(project_id)s"
"default": "rule:admin_or_owner"

The project_id:%(project_id)s condition compares the project_id in the request context (from the Keystone token) to the project_id stored on the resource. They must match for a non-admin request to succeed.
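The logic of those two rules can be written out directly. This sketch mirrors what the oslo.policy engine evaluates for admin_or_owner; it is not the engine itself:

```python
# Pure-Python equivalent of:
#   "admin_or_owner": "is_admin:True or project_id:%(project_id)s"

def admin_or_owner(ctx, target):
    """ctx comes from the Keystone token; target is the stored resource."""
    return ctx.get("is_admin", False) or ctx["project_id"] == target["project_id"]

owner = {"project_id": "p-a", "is_admin": False}
admin = {"project_id": "p-x", "is_admin": True}
print(admin_or_owner(owner, {"project_id": "p-a"}))  # True  (owning tenant)
print(admin_or_owner(owner, {"project_id": "p-b"}))  # False (foreign resource)
print(admin_or_owner(admin, {"project_id": "p-b"}))  # True  (admin bypass)
```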


Troubleshooting

HTTP 403 when creating a Protection Group as a tenant

Symptom: ERROR: Policy doesn't allow protector:protection_groups:create to be performed. (HTTP 403)

Likely cause: The credentials you sourced belong to a user who has no role on the target project, or the policy.yaml default rule has been tightened beyond rule:admin_or_owner.

Fix:

  1. Confirm you have sourced the correct openrc file for your project.
  2. Verify your user has at least the member role: openstack role assignment list --user <your-user> --project <your-project>
  3. Check that /etc/protector/policy.yaml on the primary site has "default": "rule:admin_or_owner" and that "protector:protection_groups:create" resolves to rule:default.
  4. Restart protector-api after any policy change: systemctl restart protector-api

Tenant can see another tenant's Protection Groups

Symptom: openstack protector protection-group list returns Protection Groups with project_id values that do not match the caller's project.

Likely cause: The policy.yaml default rule has been set to a permissive value (for example "" or "@"), or the Keystone middleware is not validating tokens (for example delay_auth_decision = true without proper fallback).

Fix:

  1. Inspect /etc/protector/policy.yaml and confirm "default": "rule:admin_or_owner".
  2. Inspect api-paste.ini and confirm the keystonemiddleware.auth_token filter is active for the v1 API composite.
  3. Check protector.conf for [keystone_authtoken] — confirm auth_url, username, and password are correct and the middleware can reach Keystone.
  4. Restart protector-api after corrections.

Admin --all-tenants flag returns only the admin's own Protection Groups

Symptom: Running openstack protector protection-group list --all-tenants as admin returns only Protection Groups in the admin's project.

Likely cause: The admin user does not have the admin role, or context_is_admin in policy.yaml does not match the admin user's role name.

Fix:

  1. Confirm the admin user has the admin role: openstack role assignment list --user admin --project admin
  2. Check that "context_is_admin": "role:admin" in policy.yaml matches the role name exactly (case-sensitive).
  3. If your cloud uses a different admin role name (for example cloud_admin), update the context_is_admin rule accordingly and restart protector-api.

Tenant receives HTTP 403 when triggering failover on their own Protection Group

Symptom: ERROR: Policy doesn't allow protector:operations:action to be performed. (HTTP 403) even though the Protection Group is owned by the tenant's project.

Likely cause: The Keystone token has expired, the project ID in the URL path does not match the token's project_id, or the operations:action rule has been overridden to require role:admin.

Fix:

  1. Re-source your credentials to obtain a fresh token: source ~/demo-openrc
  2. Confirm the project_id in your token matches the PG owner: openstack token issue -f value -c project_id
  3. Inspect policy.yaml for any override on protector:operations:action — it should resolve to rule:default (which resolves to rule:admin_or_owner).

Metadata sync is blocked and Protection Group modifications fail

Symptom: Any attempt to update a Protection Group (add/remove members, update policy) returns an error indicating the peer site is unreachable.

Likely cause: Network connectivity between the primary and secondary site Protector services has been lost, or the secondary site's protector-api is down. Strict metadata sync enforcement intentionally blocks modifications to prevent divergence.

Fix:

  1. Test connectivity to the secondary site's Keystone from the primary site controller: curl -k https://site-b-controller:5000/v3
  2. Verify protector-api is running on the secondary site: systemctl status protector-api
  3. Check the secondary site's service credentials are still valid: authenticate manually using the stored service_username and service_password.
  4. Review firewall rules — the required ports are 5000 (Keystone), 8774 (Nova), and 8776 (Cinder) between site controllers.
  5. Once connectivity is restored, retry the modification — the sync will proceed automatically.

Horizon Admin panel shows no Protection Groups across tenants

Symptom: The Horizon Admin panel loads without error but the cross-tenant Protection Group list is empty, even though tenants have created Protection Groups.

Likely cause: The Horizon session is authenticated with a user who does not have the admin role, so the Admin panel renders but RBAC filters out all results.

Fix:

  1. Log out of Horizon and log back in with a user that has the admin role assigned in Keystone.
  2. Confirm the role assignment: openstack role assignment list --user <horizon-user> --project admin
  3. If the Admin panel is still empty after confirming the role, check protector-api logs for incoming requests from Horizon and look for RBAC denial messages: tail -f /var/log/protector/protector-api.log