To enable clients to access directories via NFSv4.1, follow these steps.
Important
VAST Cluster supports NFSv4.1 client access from Linux clients, with the following limitations:
You can enable NFSv4.1 access on one view per Element Store directory.
Views enabled for NFSv4.1 can also be enabled for NFSv3 concurrently, but they cannot be enabled for SMB.
The security flavor must be set to NFS or Mixed Last Wins.
NFSv4.1 ACLs are supported only with Mixed Last Wins security flavor.
Multiple views on the same directory path must have the same security flavor.
NFSv4.1 is not supported on clusters which have the Suppressed Showmount feature enabled.
See also Feature Specific Configurations.
File and directory access for NFSv4.1 can be authorized through an Active Directory server; other external providers are not supported. Kerberos authentication and NFSv4.1 ID mapping (with or without Kerberos) both require Active Directory. The cluster and clients must join the same Active Directory domain, and the entries for any users and groups that require access via NFSv4.1 should have NFS user and group attributes.
It is possible to use the local provider, which is internal to the VAST Cluster and lets you add users with their NFS user attributes. However, in this case, Kerberos authentication and ID mapping are not supported.
The Active Directory configuration includes:
If the cluster is not yet joined to an Active Directory domain, follow this procedure to join an Active Directory domain: Joining Active Directory (AD).
Each user that will need access must have a user entry on the joined Active Directory domain with the following NFS attributes:
uidNumber, defining the user's NFS user ID as used by Linux/UNIX.
gidNumber, defining the user's default (leading) NFS group ID as used by Linux/UNIX.
Similarly, each group entry should have a gidNumber entry, to define the NFS group ID of the group.
Finally, users' memberships of any additional groups beside their default group should be defined in Active Directory.
One way you can update Active Directory user and group entries is via the Microsoft Management Console (MMC); a PowerShell alternative is sketched after this procedure:
On the Active Directory server machine, open the MMC and select Active Directory Users and Computers.
From the View menu, select Advanced Features.
For each user and group object, open the object and select the Attribute Editor tab.
Verify or fill in the uidNumber attribute for users and the gidNumber attribute for both users and groups. The following optional UNIX/Linux attributes can be overridden by the client:
loginShell. This defines the default UNIX/Linux shell and is usually /bin/sh.
unixHomeDirectory. Defines the path to the user's home directory. This is usually /home/<username>.
Note
If this procedure does not match your Microsoft operating system interface exactly, refer to Microsoft's documentation for the exact procedure.
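As an alternative to the MMC procedure above, the NFS attributes can be set from PowerShell on a host with the ActiveDirectory module (RSAT) installed. This is a minimal sketch; the user name alice, the group name engineering, and all attribute values are hypothetical examples:
Import-Module ActiveDirectory
# Set the NFS user ID, default group ID, and optional UNIX attributes for a user:
Set-ADUser -Identity alice -Replace @{uidNumber=1001; gidNumber=1001; loginShell='/bin/sh'; unixHomeDirectory='/home/alice'}
# Set the NFS group ID for a group:
Set-ADGroup -Identity engineering -Replace @{gidNumber=2001}
# Verify that the attributes were set:
Get-ADUser -Identity alice -Properties uidNumber,gidNumber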
When the cluster joins an Active Directory domain, a machine account object is added to the domain for the cluster, using the machine account name specified in the Active Directory configuration. The principal nfs/<cluster_machine_account_name> is also added automatically to the list of ServicePrincipalNames (SPNs).
The NFS SPN is needed so that the Kerberos protocol can authenticate the cluster as the NFSv4.1 server when clients mount the exported file system.
With this configuration, clients can mount views by specifying the cluster with the DNS name <cluster_machine_account_name>.<Active Directory domain name>, provided that this DNS name resolves to a VIP pool on the cluster. Note that when clients use Kerberos authentication, the mount request must specify the cluster by DNS name rather than by IP address.
Note
If you enable NFSv4.1 client access without Kerberos authentication, client mount requests can specify a VIP (an IP address), as long as they do not mount with Kerberos.
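For illustration, the following client mount commands show the difference. The cluster DNS name vastcluster.mycorp.com, the VIP 192.0.2.10, and the paths are hypothetical:
# Kerberos mount: the cluster must be specified by DNS name, and the client needs a working Kerberos setup:
sudo mount -t nfs -o vers=4.1,sec=krb5 vastcluster.mycorp.com:/product/roadmaps /mnt/roadmaps
# Non-Kerberos mount (sec=sys): a VIP address may be used instead:
sudo mount -t nfs -o vers=4.1,sec=sys 192.0.2.10:/product/roadmaps /mnt/roadmaps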
If your cluster has a VAST DNS configuration in which the FQDN that resolves to each VIP pool is not cluster_machine_account_name.Active_Directory_domain_name, then the SPN configuration alone is not sufficient to enable access to the VIPs, and you need to do one of the following:
Add additional 'nfs/' SPN attributes to the cluster machine account object to enable clients to mount views using the FQDNs that resolve to the VIP pools.
Add an A record to the central DNS server to map each IP in each VIP pool that you want to use to the specific DNS name <cluster_machine_account_name>.<Active Directory domain name>. With this solution, the central DNS server becomes responsible for round-robin load balancing of the cluster's VIP pools, and the DNS records must be edited manually whenever IPs are added or removed over the lifetime of the solution. VAST DNS is dynamic in this respect and does not require manual intervention.
For each VIP pool's DNS name, you will need to add an SPN per VIP Pool in the following format:
nfs/DOMAIN_NAME.DOMAIN_SUFFIX
Here, DOMAIN_NAME is the domain name value set in each VIP pool and DOMAIN_SUFFIX is the domain suffix configured in VAST-DNS.
For example, suppose you have the following configuration:
One delegation rule on your central DNS server forwarding all requests for cluster.mycorp.com to the VAST DNS server.
VAST-DNS enabled with domain suffix set to cluster.mycorp.com.
Three VIP pools with the following domain names:

VIP pool    Domain Name
vippool1    domain1
vippool2    domain2
vippool3    domain3
One option is to add the following SPN attributes to the cluster's machine account in Active Directory:
nfs/domain1.cluster.mycorp.com
nfs/domain2.cluster.mycorp.com
nfs/domain3.cluster.mycorp.com
An alternative solution is to add each VIP in each VIP pool as an A record to the central DNS server.
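For example, if the central DNS server is Windows-based, A records could be added with the DnsServer PowerShell module. This sketch maps two hypothetical VIPs to a hypothetical machine account name, vastcluster, in the mycorp.com zone:
# Two A records with the same name provide round-robin resolution across the VIPs:
Add-DnsServerResourceRecordA -ZoneName "mycorp.com" -Name "vastcluster" -IPv4Address "192.0.2.11"
Add-DnsServerResourceRecordA -ZoneName "mycorp.com" -Name "vastcluster" -IPv4Address "192.0.2.12"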
One way to add SPNs to an Active Directory domain is to use the Active Directory Users and Computers MMC (a command-line alternative follows this procedure):
On the Active Directory server machine, open the Active Directory Users and Computers MMC.
Under View, select Advanced Features.
In the left pane, select Computers and locate the cluster's machine account object.
Right-click the object and select Properties.
Select the Attribute Editor tab and edit the servicePrincipalName attribute.
Add the entries.
Click OK in the editor and the properties dialogs as needed to save the entries.
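Alternatively, SPNs can be added from a command prompt on a domain-joined Windows machine with the setspn utility. In this sketch, vastcluster stands in for the cluster's machine account name:
# setspn -S checks for duplicates before adding each SPN:
setspn -S nfs/domain1.cluster.mycorp.com vastcluster
setspn -S nfs/domain2.cluster.mycorp.com vastcluster
setspn -S nfs/domain3.cluster.mycorp.com vastcluster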
To create users and groups on the local provider, see Managing Local Users.
A view policy is a reusable set of configurations. Every view has a view policy. Multiple views may use the same view policy. Before creating a view that is exposed as an NFS export accessible by NFSv4, you need to make sure you have a view policy that is configured to support NFSv4. You can either modify a view policy or create a new one.
In the VAST Web UI, select Element Store from the left navigation menu and then select View Policies.
The View Policies tab displays at least one view policy, the default view policy.
To edit a view policy, open the Actions menu for the policy and select Edit. Alternatively, to create a new view policy, click Create Policy at the top right of the grid.
The Add Policy or Update Policy dialog opens with the General area expanded.
In the Name field, enter a unique name for the policy.
From the Security Flavor dropdown, select one of the following:
NFS. Supports NFSv4 without support for NFSv4 ACLs.
Mixed Last Wins. Required to enable support for NFSv4 ACLs.
Security flavors determine how file permissions are managed when views are exposed to multiple file sharing protocols.
To limit access to specific VIP pools, select those VIP pool(s) in the VIP Pools dropdown.
If no VIP pools are selected, views attached to this view policy are accessible through all VIP pools.
From the Group Membership Source dropdown, choose the source used for retrieving group memberships of NFS users for the purposes of authorizing access to files and directories. For NFSv4.1, you must choose one of the following options:
Providers. Group memberships retrieved from authorization providers are considered as the user's group memberships (as for SMB-only and multiprotocol views). Group memberships declared in the request are ignored.
Note
This option must be used if Minimal Protection Level is set to Kerberos Auth-only.
Client and Providers. Both group memberships declared in the request and group memberships retrieved from authorization providers are considered.
Note
If Kerberos authentication is used, the groups declared in the request are ignored.
For more information about the impact of this setting, see The VAST Cluster Authorization Flow.
Expand the NFS section. Here you can manage which hosts are allowed to access the view via NFSv3 and NFSv4.1 and the types of access you allow to different hosts.
Two wildcard entries initially appear in the Read/Write and Root Squash rows of the grid. These wildcards represent all IPs of all hosts, so the default configuration gives all hosts read/write access with root squashing. The root squash policy is relevant only for NFSv3 clients.
Add and remove entries in the access type grid to allow the exact host access that you want.
Click the +Add new IP button for the access type you want to add hosts to.
The IPs list for the access type becomes editable.
Add hosts using any of the following expressions in a comma-separated list (an example follows the list):
A single IP.
A subnet indicated by CIDR notation. For example: 1.1.1.1/24.
A range of IPs indicated by an IP address with '*' as a wildcard in place of any of its octets. For example: 3.3.3.* or 3.3.*.*.
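For example, a single entry combining these expressions might look like the following (all addresses are hypothetical):
1.1.1.1, 2.2.2.0/24, 3.3.3.*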
The access types comprise these categories:
Controlling read and write operations:
Read / Write. Read/write access.
Read Only. Read only access.
Controlling squash policy:
No Squash. All operations are supported. Use this option if you trust the root user not to perform operations that will corrupt data.
Note
This option is not relevant for NFSv4.1 users if Kerberos is used, since AD does not include the 'root' user principal by default and since the handling of credentials for the user with UID 0 depends on configuration of the rpc.gssd service.
Root Squash. The root user is mapped to nobody for all file and folder management operations on the export. This enables you to prevent the strongest super user from corrupting all user data on the VAST Cluster.
All Squash. All client users are mapped to nobody for all file and folder management operations on the export.
Note
The Trash Access option may appear if enabled on the Settings page. This access type is applicable for NFSv3 clients only. The Trash folder feature is not supported for NFSv4.1 clients.
You can add hosts to any and all of the types, but within each category no more than one type is applied to any given host. If a host is specified with multiple entries in mutually exclusive types, only one of the entries takes effect.
Click Add or press the ENTER key on your keyboard.
The entries are added.
To remove an entry, hover to the right of the entry until a removal button appears, then click it.
In the NFS 4.1 tab, select the Minimal Protection Level to accept from NFSv4.1 client RPCs (illustrative mount commands follow this list):
Kerberos Auth-only. Allows client mounts with Kerberos authentication only (using the RPCSEC_GSS authentication service).
System. Allows client mounts using either the AUTH_SYS RPC security flavor (the traditional default NFS authentication scheme) or Kerberos authentication.
None. Allows client mounts with the AUTH_NONE (anonymous access) or AUTH_SYS RPC security flavors, or with Kerberos authentication.
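As a rough illustration of how these levels relate to client mount options (the cluster name and paths are hypothetical):
# Accepted when Minimal Protection Level is System or None, but rejected under Kerberos Auth-only:
sudo mount -t nfs -o vers=4.1,sec=sys vastcluster.mycorp.com:/views/data /mnt/data
# Accepted at any protection level, and required under Kerberos Auth-only:
sudo mount -t nfs -o vers=4.1,sec=krb5 vastcluster.mycorp.com:/views/data /mnt/data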
Switch back to the General tab and, optionally, expand the Advanced section to change the following settings:
Atime frequency. atime is a metadata attribute of NFS files that represents the last time the file was accessed. atime is updated on read operations if the difference between the current time and the file's atime value is greater than the configured atime frequency. Consider that a very low value might have a performance impact if high numbers of files are being read.
Specify ATIME_FREQUENCY as an integer followed by a unit of time (s = seconds, m = minutes, h = hours, d = days). For example: 30s or 2h.
Posix ACL. For NFSv3 clients, this option enables full support of extended POSIX Access Control Lists (ACLs). By default, VAST Cluster supports the traditional POSIX file system object permission mode bits (minimal ACL mode), in which each file has three ACL entries defining the permissions for the owner, the owning group, and others, respectively. To learn more about POSIX ACLs, see https://linux.die.net/man/5/acl.
Note
The Posix ACL setting is relevant for NFSv3 only.
When applied to views that have both NFSv3 and NFSv4.1 enabled, POSIX ACLs are supported for NFSv3 clients, while NFSv4.1 ACLs are not supported. Support for NFSv4.1 ACLs requires the Mixed Last Wins security flavor and cannot be combined with POSIX ACL support for NFSv3.
Note
The Posix ACL setting is supported only with the NFS security flavor.
Note
The setfacl Linux command is blocked if this option is not enabled.
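For example, with Posix ACL enabled, an NFSv3 client could manage an extended ACL on the mounted export. The user name and path here are hypothetical:
# Grant user alice read/write/execute on a directory, then inspect the resulting ACL:
setfacl -m u:alice:rwx /mnt/export/shared
getfacl /mnt/export/shared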
Click Create.
The policy is created and added to the list.
Use the viewpolicy create command to create a new view policy or the viewpolicy modify command to modify the default view policy. For command syntax, see NFS Usage.
Views expose files and directories to client protocols. In order to make files at a given path accessible to NFSv4.1 clients, configure views as follows:
Make sure there is a view on the root directory, '/', and that it has NFSv4.1 enabled. By default, there is a view on this directory and it has NFSv3 enabled without NFSv4.1.
Create a view on the specific path you want to make accessible.
If there are views on any intermediate directories along the path, enable NFSv4.1 on those views as well; otherwise, NFSv4.1 clients will not be able to access the path underneath them that you want to make accessible.
For example, suppose you have a path /product/roadmaps that you want to make accessible to NFSv4.1 clients. You need to make sure that the default view on '/' is enabled for NFSv4.1, and you need to create a view on /product/roadmaps. You do not have to create a view on /product, but if there is a view on /product, it must be enabled for NFSv4.1.
In the VAST Web UI, select Element Store from the left navigation menu and then select Views.
Click Create View to add a new view.
The Add View dialog appears.
In the Path field, enter the full path from the top level of the storage system on the cluster to the location that you want to expose. The directory may exist already, such as if it was created by a client inside a mounted parent directory. It could also be a path to a new directory which you'll create now (see step 7).
Open the Protocols dropdown and select NFS4. Optionally, you can also select NFS to expose the same view to NFSv3 clients.
If you selected NFS as well as NFS4 in the Protocols dropdown, then, optionally in the NFS Alias field, you can specify an alias for the mount path of the NFS export. This can be used by NFSv3 clients. An alias must begin with a forward slash ("/") and must consist of only ASCII characters.
Note
Alias is supported for NFSv3 clients and not for NFSv4.1 clients.
From the Policy Name dropdown, select the view policy that is configured as described in the previous step. It might be the default policy or one you created for this purpose.
If the directory does not already exist in the file system, enable the Create Directory setting to create the directory.
Click Create.
The view is now created. You can see it displayed in the Views tab.
Use the view create command.
Full configuration for specific features may require additional configuration or usage on the client side. See Feature Specific Configurations.