Multi-tenant Considerations
Tenant isolation, RBAC, and per-tenant Protection Group ownership.
This page explains how Trilio Site Recovery for OpenStack enforces tenant isolation across Protection Groups, failover operations, and DR monitoring. Because multiple tenants may share the same OpenStack clouds at both the primary and secondary sites, the system applies strict ownership boundaries so that tenants can only see and act on their own resources. Understanding these boundaries helps you design DR workflows that are both self-service for tenants and auditable by administrators.
Before working with multi-tenant Protection Groups, ensure the following are in place:
- Two independent OpenStack clouds (primary and secondary sites), each running Nova, Cinder, Neutron, and Keystone — Victoria release or later recommended
- Trilio Site Recovery (`protector-api` and `protector-engine`) installed and running on both sites
- The `protectorclient` OSC CLI plugin (or the Horizon dashboard) installed and configured with `clouds.yaml` entries for both sites
- A Keystone service user (`protector`) with the `admin` role on the `service` project, present on each site's Keystone
- `/etc/protector/policy.yaml` deployed on each site: this file defines the RBAC rules that enforce tenant isolation
- Site registration completed by a cloud administrator; tenants cannot self-register sites
- Cinder volume types on both sites with `replication_enabled='<is> True'` and a matching `replication_type` property, so that tenant volumes are eligible for protection
- Tenant projects and users provisioned in Keystone on both sites before attempting Protection Group creation
Tenant isolation is enforced at the service layer through Keystone middleware and oslo.policy — there is no separate installation step. The configuration steps below wire those mechanisms together.
1. Deploy policy.yaml on both sites
The policy file must be present at the path specified in protector.conf (/etc/protector/policy.yaml by default). Copy it to each site's controller node:
# On each site's controller node
sudo cp policy.yaml /etc/protector/policy.yaml
sudo chown protector:protector /etc/protector/policy.yaml
sudo chmod 640 /etc/protector/policy.yaml
2. Confirm the oslo_policy section in protector.conf
Verify the policy file path is declared on each site:
[oslo_policy]
policy_file = /etc/protector/policy.yaml
3. Verify Keystone middleware is active
The `api-paste.ini` file must route all v1 API requests through `keystonemiddleware.auth_token`. Confirm the `authtoken` filter is defined, and referenced by the v1 pipeline, on both sites:
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
This middleware extracts the project_id (tenant ID) from the validated Keystone token and injects it into the request context. Every subsequent RBAC check and database query is scoped to that project_id.
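The query scoping described above can be modeled in a few lines of plain Python. This is an illustrative sketch, not the actual protector source; the class and function names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    """Minimal stand-in for the per-request context the middleware builds."""
    project_id: str   # extracted from the validated Keystone token
    is_admin: bool    # True when the token carries the admin role

def list_protection_groups(ctx: RequestContext, db_rows: list) -> list:
    """Non-admin list queries are always filtered by the caller's project_id."""
    if ctx.is_admin:
        return db_rows
    return [row for row in db_rows if row["project_id"] == ctx.project_id]

rows = [
    {"id": "pg-1", "project_id": "tenant-a"},
    {"id": "pg-2", "project_id": "tenant-b"},
]
```

A tenant context sees only its own rows; an admin context sees everything, which is exactly the split shown in the capability table later in this page.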
4. Restart both services on both sites after any policy change
systemctl restart protector-api
systemctl restart protector-engine
# Confirm they are running
systemctl status protector-api
systemctl status protector-engine
Tenant isolation is controlled by two configuration surfaces: the policy.yaml RBAC file and the Keystone endpoint URL pattern. Both must be consistent across the primary and secondary site deployments.
Keystone Endpoint URL Pattern
The Protector API endpoint is registered in Keystone with a %(tenant_id)s token in the URL path:
http://controller:8788/v1/%(tenant_id)s
Keystone expands this to the authenticated project's ID when it renders the service catalog. Every API request therefore carries the caller's project_id in the URL, which the service uses to scope all database queries. You cannot query another tenant's resources by substituting a different project ID: the API rejects any request whose URL tenant segment does not match the validated token's project_id.
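The URL-versus-token check can be sketched as follows. The function name and return codes are illustrative (403 matching the rejection behaviour described above), not the service's actual code:

```python
def check_url_scope(token_project_id: str, url_path: str) -> int:
    """Reject requests whose URL tenant segment differs from the token's project.

    Expected URL shape: /v1/<tenant_id>/<resource>...
    """
    parts = url_path.strip("/").split("/")
    url_tenant = parts[1] if len(parts) > 1 else ""
    return 200 if url_tenant == token_project_id else 403
```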
policy.yaml — RBAC Rules
The policy file uses oslo.policy syntax. The defaults below enforce tenant ownership for all Protection Group and operation resources.
| Rule | Default | Effect |
|---|---|---|
| `context_is_admin` | `role:admin` | Grants full cross-tenant visibility |
| `admin_or_owner` | `is_admin:True or project_id:%(project_id)s` | Allows admins or the owning tenant |
| `default` | `rule:admin_or_owner` | Applied to all rules that do not override it |
| `protector:protection_groups:index` | `rule:default` | List PGs: tenants see only their own; admins see all |
| `protector:protection_groups:show` | `rule:default` | Show a PG: ownership enforced |
| `protector:protection_groups:create` | `rule:default` | Any authenticated tenant may create a PG |
| `protector:protection_groups:update` | `rule:default` | Only the owning tenant (or admin) may update |
| `protector:protection_groups:delete` | `rule:default` | Only the owning tenant (or admin) may delete |
| `protector:members:index` | `rule:default` | List members: scoped to owning tenant |
| `protector:members:create` | `rule:default` | Add VMs/volumes: owning tenant or admin |
| `protector:members:delete` | `rule:default` | Remove members: owning tenant or admin |
| `protector:operations:index` | `rule:default` | List operations: tenants see only their own |
| `protector:operations:show` | `rule:default` | Show operation detail: ownership enforced |
| `protector:operations:action` | `rule:default` | Trigger failover/failback: owning tenant or admin |
| `protector:policies:show` | `rule:default` | View replication policy: owning tenant or admin |
| `protector:policies:create` | `rule:default` | Create replication policy: owning tenant or admin |
Site registration is admin-only. There are no policy lines above for `protector:sites:*` because site registration and modification are restricted to cloud administrators at the infrastructure level. Tenants interact with sites only indirectly: they reference a site name when creating a Protection Group.
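In plain Python, the `admin_or_owner` rule from the table evaluates as follows. This is a re-implementation for illustration only; the real service delegates the check to oslo.policy:

```python
def context_is_admin(creds: dict) -> bool:
    """Mirrors the rule 'context_is_admin': 'role:admin'."""
    return "admin" in creds.get("roles", [])

def admin_or_owner(creds: dict, target: dict) -> bool:
    """Mirrors 'admin_or_owner': 'is_admin:True or project_id:%(project_id)s'.

    creds come from the request context (token); target is the stored resource.
    """
    return context_is_admin(creds) or creds.get("project_id") == target.get("project_id")
```

Because every `protector:*` rule in the table resolves to `rule:default`, and `default` resolves to `rule:admin_or_owner`, this one predicate governs all Protection Group, member, operation, and policy requests.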
Metadata Sync and Tenant Isolation
When the CLI or Horizon dashboard synchronizes Protection Group metadata between the primary and secondary site, it does so using the service account credentials stored in the site configuration — not the tenant's Keystone token. The service account has visibility into all Protection Groups, but the API layer still enforces tenant scoping: the synced metadata retains the original project_id, so tenants on the secondary site see only their own records after sync.
Because metadata sync is blocked when the peer site is unreachable, Protection Group modifications are also blocked in that state. This prevents tenant A's Protection Group state from diverging from the secondary site record while tenant B's is simultaneously being modified.
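The blocking behaviour amounts to a simple guard in front of every modification. A minimal sketch, with assumed names (the engine's actual error type and function signatures are not shown in this page):

```python
class PeerUnreachableError(RuntimeError):
    """Raised when a modification is attempted while metadata sync is blocked."""

def update_protection_group(peer_reachable: bool, pg: dict, changes: dict) -> dict:
    # Strict sync enforcement: refuse any modification while the peer site is
    # down, so primary and secondary records cannot diverge.
    if not peer_reachable:
        raise PeerUnreachableError("peer site unreachable; modification blocked")
    pg.update(changes)
    return pg
```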
How tenant ownership is established
When you create a Protection Group, the service binds it to the project_id extracted from your Keystone token. This binding is immutable — you cannot reassign a Protection Group to a different tenant. All subsequent operations (adding members, configuring policies, triggering failovers) require your token to carry the same project_id.
# Source your tenant credentials — NOT admin credentials
source ~/demo-openrc
# Create a Protection Group — it is automatically owned by your project
openstack protector protection-group create \
--name "web-tier-pg" \
--description "Web tier DR protection" \
--replication-type async \
--source-cloud site-a \
--target-cloud site-b
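Conceptually, creation binds ownership like this (a hypothetical sketch; the field and function names are assumptions, not the service's actual code):

```python
def create_protection_group(token_project_id: str, name: str) -> dict:
    # project_id is always taken from the token context. It is not a
    # caller-supplied field, and it cannot be changed after creation.
    return {"name": name, "project_id": token_project_id, "status": "available"}
```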
What tenants can and cannot do
| Action | Tenant | Admin |
|---|---|---|
| Create a Protection Group | ✅ | ✅ |
| View own Protection Groups | ✅ | ✅ |
| View other tenants' Protection Groups | ❌ | ✅ |
| Add/remove VMs from own PG | ✅ | ✅ |
| Trigger failover on own PG | ✅ | ✅ |
| Trigger failover on another tenant's PG | ❌ | ✅ |
| View own operations | ✅ | ✅ |
| View all operations (cross-tenant) | ❌ | ✅ |
| Register or modify sites | ❌ | ✅ |
| View site definitions | ❌ (indirect) | ✅ |
Triggering your own failover
As a tenant, you initiate failover against your own Protection Group. The CLI authenticates to both sites using your clouds.yaml credentials and the service account for cross-site operations:
openstack protector protection-group failover <pg-id> \
--target-cloud site-b \
--network-mapping net-a=net-b \
--flavor-mapping m1.large=m1.xlarge
The engine validates that <pg-id> belongs to your project_id before accepting the request. If it does not, the API returns HTTP 403.
Monitoring your own operations
You can list and inspect only the operations that belong to your project:
# List your operations
openstack protector operation list
# Inspect a specific operation
openstack protector operation show <operation-id>
Attempting to show an operation that belongs to another tenant returns HTTP 404 (not HTTP 403) to avoid leaking resource existence.
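The two status codes can be contrasted in a short sketch, following the behaviour described in this section (404 masks existence on reads, 403 rejects actions on foreign resources); the function names are illustrative:

```python
def show_operation(ctx_project_id: str, operations: dict, op_id: str) -> int:
    """Reads mask existence: a missing operation and another tenant's
    operation both produce 404."""
    op = operations.get(op_id)
    if op is None or op["project_id"] != ctx_project_id:
        return 404
    return 200

def trigger_failover(ctx_project_id: str, pg: dict) -> int:
    """Actions on a Protection Group owned by another project produce 403."""
    return 200 if pg["project_id"] == ctx_project_id else 403
```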
Horizon dashboard boundaries
The Horizon plugin enforces the same boundaries visually:
- Project dashboard: Shows only the current tenant's Protection Groups, members, policies, and operations. All create/update/delete actions operate within the tenant's project scope.
- Admin panel: Provides a cross-tenant view of all Protection Groups and operations across all projects. Administrators use this panel for support and compliance auditing without impersonating tenants.
Example 1 — Tenant creates and manages a Protection Group
A demo project tenant sources their credentials and works entirely within their own scope:
# Source tenant (non-admin) credentials
source ~/demo-openrc
# Create Protection Group
openstack protector protection-group create \
--name "app-pg" \
--description "Application tier" \
--replication-type async \
--source-cloud site-a \
--target-cloud site-b
Expected output:
+------------------+--------------------------------------+
| Field | Value |
+------------------+--------------------------------------+
| id | a1b2c3d4-e5f6-7890-abcd-ef1234567890 |
| name | app-pg |
| project_id | 9f8e7d6c5b4a3210fedcba9876543210 |
| replication_type | async |
| source_cloud | site-a |
| target_cloud | site-b |
| status | available |
+------------------+--------------------------------------+
Note that project_id is automatically set to the tenant's project — it cannot be overridden.
Example 2 — Tenant attempts to view another tenant's Protection Group
A tenant tries to inspect a Protection Group that belongs to a different project:
source ~/demo-openrc
openstack protector protection-group show bbbbb-other-tenant-pg-id
Expected output:
ERROR: No protectiongroup with a name or ID of 'bbbbb-other-tenant-pg-id' exists.
The API returns HTTP 404, not HTTP 403, to avoid confirming that the resource exists.
Example 3 — Admin views all Protection Groups across tenants
An administrator sources admin credentials and lists every Protection Group on the site:
source ~/admin-openrc
openstack protector protection-group list --all-tenants
Expected output:
+--------------------------------------+--------+----------------------------------+-----------+
| ID                                   | Name   | Project ID                       | Status    |
+--------------------------------------+--------+----------------------------------+-----------+
| a1b2c3d4-e5f6-7890-abcd-ef1234567890 | app-pg | 9f8e7d6c5b4a3210fedcba9876543210 | available |
| f0e1d2c3-b4a5-6789-cdef-012345678901 | db-pg  | 1a2b3c4d5e6f7890abcdef1234567890 | available |
+--------------------------------------+--------+----------------------------------+-----------+
Example 4 — Admin registers a site (tenant cannot)
Site registration is an admin-only action. A non-admin attempt fails immediately:
# As a non-admin tenant
source ~/demo-openrc
openstack dr site create site-b \
--auth-url https://site-b-controller:5000 \
--service-username protector-service \
--service-password <password> \
--service-project-name service \
--type secondary
Expected output:
ERROR: Policy doesn't allow protector:sites:create to be performed. (HTTP 403)
An administrator performs the same command successfully:
source ~/admin-openrc
openstack dr site create site-b \
--auth-url https://site-b-controller:5000 \
--service-username protector-service \
--service-password <password> \
--service-project-name service \
--type secondary
Expected output:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | c3d4e5f6-a7b8-9012-cdef-345678901234 |
| name | site-b |
| auth_url | https://site-b-controller:5000 |
| site_type | secondary |
| status | active |
+---------------------+--------------------------------------+
Example 5 — Inspecting the policy.yaml admin_or_owner rule
This rule is the foundation of all tenant isolation. You can verify it is correctly applied by reviewing the deployed file:
grep -A3 'admin_or_owner' /etc/protector/policy.yaml
Expected output:
"admin_or_owner": "is_admin:True or project_id:%(project_id)s"
"default": "rule:admin_or_owner"
The project_id:%(project_id)s condition compares the project_id in the request context (from the Keystone token) to the project_id stored on the resource. They must match for a non-admin request to succeed.
HTTP 403 when creating a Protection Group as a tenant
Symptom: ERROR: Policy doesn't allow protector:protection_groups:create to be performed. (HTTP 403)
Likely cause: The credentials you sourced belong to a user who has no role on the target project, or the policy.yaml default rule has been tightened beyond rule:admin_or_owner.
Fix:
- Confirm you have sourced the correct `openrc` file for your project.
- Verify your user has at least the `member` role: `openstack role assignment list --user <your-user> --project <your-project>`
- Check that `/etc/protector/policy.yaml` on the primary site has `"default": "rule:admin_or_owner"` and that `"protector:protection_groups:create"` resolves to `rule:default`.
- Restart `protector-api` after any policy change: `systemctl restart protector-api`
Tenant can see another tenant's Protection Groups
Symptom: openstack protector protection-group list returns Protection Groups with project_id values that do not match the caller's project.
Likely cause: The policy.yaml default rule has been set to a permissive value (for example "" or "@"), or the Keystone middleware is not validating tokens (for example delay_auth_decision = true without proper fallback).
Fix:
- Inspect `/etc/protector/policy.yaml` and confirm `"default": "rule:admin_or_owner"`.
- Inspect `api-paste.ini` and confirm the `keystonemiddleware.auth_token` filter is active for the v1 API composite.
- Check `protector.conf` for `[keystone_authtoken]`: confirm `auth_url`, `username`, and `password` are correct and that the middleware can reach Keystone.
- Restart `protector-api` after corrections.
Admin --all-tenants flag returns only the admin's own Protection Groups
Symptom: Running openstack protector protection-group list --all-tenants as admin returns only Protection Groups in the admin's project.
Likely cause: The admin user does not have the admin role, or context_is_admin in policy.yaml does not match the admin user's role name.
Fix:
- Confirm the admin user has the `admin` role: `openstack role assignment list --user admin --project admin`
- Check that `"context_is_admin": "role:admin"` in `policy.yaml` matches the role name exactly (case-sensitive).
- If your cloud uses a different admin role name (for example `cloud_admin`), update the `context_is_admin` rule accordingly and restart `protector-api`.
Tenant receives HTTP 403 when triggering failover on their own Protection Group
Symptom: ERROR: Policy doesn't allow protector:operations:action to be performed. (HTTP 403) even though the Protection Group is owned by the tenant's project.
Likely cause: The Keystone token has expired, the project ID in the URL path does not match the token's project_id, or the operations:action rule has been overridden to require role:admin.
Fix:
- Re-source your credentials to obtain a fresh token: `source ~/demo-openrc`
- Confirm the `project_id` in your token matches the PG owner: `openstack token issue -f value -c project_id`
- Inspect `policy.yaml` for any override on `protector:operations:action`; it should resolve to `rule:default` (which resolves to `rule:admin_or_owner`).
Metadata sync is blocked and Protection Group modifications fail
Symptom: Any attempt to update a Protection Group (add/remove members, update policy) returns an error indicating the peer site is unreachable.
Likely cause: Network connectivity between the primary and secondary site Protector services has been lost, or the secondary site's protector-api is down. Strict metadata sync enforcement intentionally blocks modifications to prevent divergence.
Fix:
- Test connectivity to the secondary site's Keystone from the primary site controller: `curl -k https://site-b-controller:5000/v3`
- Verify `protector-api` is running on the secondary site: `systemctl status protector-api`
- Check that the secondary site's service credentials are still valid: authenticate manually using the stored `service_username` and `service_password`.
- Review firewall rules: the required ports are 5000 (Keystone), 8774 (Nova), and 8776 (Cinder) between site controllers.
- Once connectivity is restored, retry the modification — the sync will proceed automatically.
Horizon Admin panel shows no Protection Groups across tenants
Symptom: The Horizon Admin panel loads without error but the cross-tenant Protection Group list is empty, even though tenants have created Protection Groups.
Likely cause: The Horizon session is authenticated with a user who does not have the admin role, so the Admin panel renders but RBAC filters out all results.
Fix:
- Log out of Horizon and log back in with a user that has the `admin` role assigned in Keystone.
- Confirm the role assignment: `openstack role assignment list --user <horizon-user> --project admin`
- If the Admin panel is still empty after confirming the role, check `protector-api` logs for incoming requests from Horizon and look for RBAC denial messages: `tail -f /var/log/protector/protector-api.log`