VAST Cluster's multi-tenancy feature enables you to divide a cluster's storage resources into tenants with isolated data paths, serving different clients. Multi-tenancy can help you meet security goals and control how performance and capacity are allocated among clients.
VAST Cluster has a default tenant, with which resources are associated by default. The default tenant cannot be deleted. To make use of multi-tenancy, you create additional tenants.
S3 features are supported on the default tenant only.
SMB is supported on multiple tenants as long as they are all using the same AD provider. Multiple AD providers are supported on the cluster, one of which may be configured as "SMB allowed".
SMB share names must be unique across the entire cluster, not only within a tenant.
Client listings of SMB shares include shares that are associated with all tenants.
Audit records for all tenants are stored on the default tenant.
If a tenant is associated with Active Directory, client source IPs (which can be supplied in the tenant configuration) can be used to determine which tenant serves a request only if that AD provider is configured as SMB allowed.
In order to enable clients to access tenants, you must associate each tenant with at least one of the following:
VIP pool. VIP pools can be set to serve all tenants or a specific tenant.
Client IP. You can associate a range of client IPs with each tenant. A given client IP cannot be associated with more than one tenant.
When a client requests access to a VIP that belongs to a VIP pool associated with a tenant, any client IPs also associated with that tenant are checked as well. If the tenant has associated client IPs, the request is directed to the tenant only if the requesting IP matches one of them; if it does not match, the request is rejected. If the tenant has no associated client IPs, the request is directed to the tenant based on the VIP pool alone.
When a client requests access to a VIP that belongs to a VIP pool that is not associated with a specific tenant, the client's source IP is checked against the client IPs defined within each tenant. That check alone determines which tenant, if any, serves the request.
A tenant that is not associated with any VIP pool or client IP is not accessible from any client.
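The tenant-resolution rules above can be summarized in a short sketch. This is illustrative only: the function, field names, and data structures are assumptions for the sketch, not the VAST API or internal implementation.

```python
# Illustrative sketch of tenant resolution from a requested VIP and a
# client source IP. A VIP pool dict holds its VIPs and either a dedicated
# tenant or None (meaning the pool serves all tenants); a tenant dict may
# hold a list of associated client IPs.

def resolve_tenant(requested_vip, client_ip, vip_pools, tenants):
    """Return the tenant the request is directed to, or None if rejected."""
    pool = next((p for p in vip_pools if requested_vip in p["vips"]), None)
    if pool is None:
        return None
    tenant = pool.get("tenant")  # None means the pool serves all tenants
    if tenant is not None:
        client_ips = tenant.get("client_ips", [])
        # If the tenant also defines client IPs, the source IP must match one.
        if client_ips and client_ip not in client_ips:
            return None  # request rejected
        return tenant
    # Pool serves all tenants: the source IP alone selects the tenant.
    for t in tenants:
        if client_ip in t.get("client_ips", []):
            return t
    return None
```

A tenant with neither a dedicated VIP pool nor client IPs is never returned, matching the rule that such a tenant is not accessible from any client.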
Client access to a view is authorized only if:
The view belongs to the same tenant that is established through the VIP pool and/or client IP check, and
The client host has permission to access the view according to the host based access rules defined in the view policy.
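The two-part authorization check above can be sketched as follows. The field names are illustrative assumptions, not the actual VAST data model.

```python
# Hypothetical sketch: a view is accessible only if it belongs to the
# tenant resolved for the session AND the client host passes the view
# policy's host-based access rules.

def client_may_access_view(view, session_tenant, client_host, view_policy):
    # 1. The view must belong to the tenant established via the
    #    VIP pool and/or client IP check.
    # 2. The host must be permitted by the view policy.
    return (view["tenant"] == session_tenant
            and client_host in view_policy["allowed_hosts"])
```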
When you create a tenant, an Element Store path is created for that tenant. This data path is isolated from all other tenants. It is deleted if and when the tenant is deleted.
Every view must be associated with a tenant.
A view policy can be associated with a specific tenant or it can serve all tenants. If a view policy is dedicated to a specific VIP pool, it must be a VIP pool that serves the same tenant as the view policy.
Since tenants' data paths are isolated from each other, directories, files, buckets and objects may have the same path on different tenants. Similarly, NFS aliases and S3 bucket names must be unique per tenant but need not be unique per cluster. An exception is SMB share names, which must be unique per cluster.
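The naming scopes above can be expressed as a small validation sketch: SMB share names are checked cluster-wide, while NFS aliases and S3 bucket names are checked only within the owning tenant. The function and data layout are assumptions for illustration, not the VAST implementation.

```python
# Illustrative name-scope check. `existing` is a list of dicts with
# 'kind', 'name', and 'tenant' keys describing objects already defined
# on the cluster.

def name_is_available(kind, name, tenant, existing):
    if kind == "smb_share":
        # Cluster-wide scope: ignore which tenant owns the share.
        return all(not (e["kind"] == "smb_share" and e["name"] == name)
                   for e in existing)
    # Per-tenant scope: the same name may exist on a different tenant.
    return all(not (e["kind"] == kind and e["name"] == name
                    and e["tenant"] == tenant)
               for e in existing)
```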
Users cannot access directories, files, buckets and objects from tenants other than the tenant associated with the view that they are accessing. Client listings of exports/shares/buckets via any protocol do not include any data from tenants other than the tenant used for the client session. An exception to this is that share listings by SMB clients do display shares belonging to all tenants.
Authorization providers can be separate or shared by multiple tenants. For supported combinations of provider types per tenant, see Supported Provider and Protocol Combinations. Providers in use for each tenant must be specified in the tenant configuration. Providers cannot be deleted from the cluster while in use by any tenant.
All providers must be resolvable via the DNS server configured at cluster installation.
Each tenant that has multiple external providers has its own setting for the POSIX primary provider.
Quality of Service (QoS) policies can be defined and associated with views. To provide different service levels to different tenants, define a QoS policy for each service level you want to offer and attach each policy to the views that belong to the appropriate tenant.
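The per-tenant QoS pattern can be sketched as follows. The tenant names, policy fields, and attach step are illustrative assumptions, not the VAST API; the sketch only shows the shape of mapping service levels to tenants' views.

```python
# Illustrative only: one QoS policy per service level, attached to views
# according to the tenant each view belongs to.

qos_policies = {
    "gold": {"max_bw_mbps": 2000},
    "bronze": {"max_bw_mbps": 200},
}
tenant_service_level = {"tenant-a": "gold", "tenant-b": "bronze"}

def attach_qos(views):
    for view in views:
        level = tenant_service_level.get(view["tenant"])
        if level is not None:
            view["qos_policy"] = qos_policies[level]
    return views
```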
Async replication source and target paths can be on any tenant on each cluster and are specified in the protected path configuration.
Audit logs are stored under the default tenant, with each record specifying which tenant each operation belongs to.