
NFS Management

Introduction to NFS

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems in 1984. It allows a computer to access files over a network in the same way it accesses local storage. NFS enables users and programs to access files on remote systems as if they were local files, providing a seamless integration of resources across a network.

NFS Volume for the WCS DCA Servers

Overview for DCA Shared Filesystem

The White Cloud Security Data Center Appliance (DCA) uses avatar icons to represent login accounts and Security Groups. These avatar images are uploaded by login accounts and admins and stored on a Linux volume by the DCA server.

To ensure these avatar images are shared and displayed on the WCS Dashboard for any DCA server showing the Dashboard, they must be stored on a shared Linux volume that all DCAs can access.

Likewise, the cluster of DCA servers stores reports and third-party app reports on this shared volume, which can be either a shared NFS volume or an AWS Elastic File System (EFS) volume.

NFS On AWS: Elastic File System (EFS)

On AWS, the managed NFS service is called Elastic File System (EFS). The AWS EFS volume should be mounted on the /data folder on the DCA server. If the DCA detects that a volume is mounted at /data, it shows the /data volume as a pre-attached volume and does not give the user an option to provide an NFS URI.
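As a minimal sketch, an EFS file system can be mounted at /data with the standard NFSv4 client and the mount options recommended by AWS; the file system ID and region below are placeholders, so substitute your own values:

    # Mount an EFS file system at /data (fs-0123456789abcdef0 and us-east-1 are placeholders)
    sudo mkdir -p /data
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
        fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /data

    # Optional /etc/fstab entry to keep the mount across reboots:
    # fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /data nfs4 nfsvers=4.1,hard,timeo=600,retrans=2,noresvport,_netdev 0 0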

NFS: Use of Shared NFS Volume

If the DCA detects that no volume is attached to the /data folder, it requests the NFS URI for the NFS Shared Volume to store its files. This NFS URI should point to an NFS volume on a Linux NFS server.

When the DCA /set-dca-options page is opened, its form allows the admin to specify a DCA NFS File Storage Volume URI, such as:

   172.31.20.2:/mnt/data/dcas/wcs1
or
   172.31.20.11:/srv/nfs/share/wcs1

where "wcs1" would represent an NFS shared volume for a single WCS service instance. This service instance can include one or more DCAs which are all:

  • Using the same MySQL databases
  • Servicing the same groups of endpoints
  • Sharing the file storage for avatars, reports, and other data files

NFS On RHEL/CentOS

Install NFS on RHEL and CentOS

Use dnf (or yum on older releases) to install the NFS server package:

    sudo dnf update
    sudo dnf install nfs-utils
    sudo systemctl enable --now nfs-server
    # On RHEL/CentOS 7 the lock and ID-mapping services are separate units;
    # on RHEL/CentOS 8 and later they are started automatically with nfs-server.
    sudo systemctl enable --now nfs-lock
    sudo systemctl enable --now nfs-idmap
    sudo firewall-cmd --permanent --add-service=nfs
    sudo firewall-cmd --reload
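
Optionally, confirm the server is running and the firewall rules are in place. If any clients will connect with NFSv3, the rpc-bind and mountd firewalld services also need to be opened:

    # Optional checks
    sudo systemctl status nfs-server            # service should be active (running)
    sudo rpcinfo -p localhost                   # nfs should be listed on port 2049
    sudo firewall-cmd --list-services           # should include nfs
    # For NFSv3 clients, also open the helper services:
    sudo firewall-cmd --permanent --add-service={rpc-bind,mountd}
    sudo firewall-cmd --reload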

Set up NFS Shared Volume on RHEL and CentOS

In the first URI example, the IT admin has created an NFS storage volume on the Linux server at IP address 172.31.20.2 which is sharing the directory on its filesystem at: /mnt/data/dcas/wcs1.

To create the first example's shared volume, follow these steps:

  1. Create a directory to hold the shared avatar images:

    sudo mkdir -p /mnt/data/dcas/wcs1   
    
  2. Edit /etc/exports to specify the shared volume and which DCAs can access it:

    sudo vi /etc/exports     
    
    Add the line:
    /mnt/data/dcas/wcs1 172.31.20.0/24(rw,sync,no_subtree_check,no_root_squash)
    
    In this case, the entire 172.31.20.0/24 subnet is allowed to mount this file storage volume.
  3. Export the share info and restart the NFS server:

    sudo exportfs -arv 
    sudo systemctl restart nfs-server.service 
    
  4. On the DCA /set-dca-options page, include the shared volume's URI (servername:/pathname):

    172.31.20.2:/mnt/data/dcas/wcs1
    
  5. When you click on the "Setup DCA" button, the DCA will verify that the DCA WCS File Storage Volume is accessible and will make the mount point persistent on the DCA.
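
Before running the setup, you can optionally confirm from a DCA that the export is reachable. This sketch assumes the NFS client utilities (nfs-utils or nfs-common) are installed on the DCA and uses a temporary mount point, /mnt/nfs-test, purely for illustration:

    # On a DCA: confirm the export is visible and can be mounted
    showmount -e 172.31.20.2                      # should list /mnt/data/dcas/wcs1
    sudo mkdir -p /mnt/nfs-test
    sudo mount -t nfs 172.31.20.2:/mnt/data/dcas/wcs1 /mnt/nfs-test
    df -h /mnt/nfs-test                           # verify the remote volume is mounted
    sudo umount /mnt/nfs-test                     # clean up; the DCA manages the real mount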

NFS On Debian/Ubuntu

Install NFS on Debian and Ubuntu

Use apt to install the NFS server package:

    sudo apt update
    sudo apt install nfs-kernel-server
    sudo systemctl enable nfs-server              # Start the NFS server at boot
    sudo systemctl restart nfs-kernel-server      # (Re)start the NFS server now
    sudo systemctl status nfs-server              # Check the status of the NFS server

    # Check that nfs-idmapd is running
    ps aux | grep nfs-idmapd

    # Allow NFS ports from the DCA subnet
    sudo ufw allow from 172.31.20.0/24 to any port 111     # rpcbind/portmapper
    sudo ufw allow from 172.31.20.0/24 to any port 2049    # NFS
    sudo ufw allow from 172.31.20.0/24 to any port 32803   # lockd (if pinned to a fixed port)
    # Enable UFW if not already enabled
    sudo ufw enable
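
Optionally, confirm that the firewall rules are active and that the NFS daemon is listening:

    # Optional checks
    sudo ufw status numbered          # rules for ports 111, 2049, and 32803 should be listed
    sudo ss -tlnp | grep 2049         # nfsd should be listening on TCP 2049
    sudo exportfs -v                  # lists active exports (empty until /etc/exports is populated)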

Set up NFS Shared Volume on Debian and Ubuntu

In the second URI example, the IT admin has created an NFS storage volume on the Linux server at IP address 172.31.20.11 which is sharing the directory on its filesystem at: /srv/nfs/share/wcs1.

  1. Create a directory to hold the shared files and set its user permissions:

    sudo mkdir -p /srv/nfs/share/wcs1
    sudo chown 48:48 /srv/nfs/share/wcs1
    sudo chmod 0770 /srv/nfs/share/wcs1
    
  2. Edit /etc/exports to specify the shared volume and which DCAs can access it:

    sudo vi /etc/exports     
    
    Add the line:
    /srv/nfs/share/wcs1 172.31.20.0/24(rw,sync,no_subtree_check,no_root_squash)
    
    In this case, the entire 172.31.20.0/24 subnet is allowed to mount this file storage volume.

  3. Export the share info and restart the NFS server:

    sudo exportfs -ra
    sudo systemctl restart nfs-kernel-server
    
  4. On the DCA /set-dca-options page, include the shared volume's URI (servername:/pathname):

    172.31.20.11:/srv/nfs/share/wcs1
    
  5. When you click on the "Setup DCA" button, the DCA will verify that the DCA WCS File Storage Volume is accessible and will make the mount point persistent on the DCA.

When running /set-dca-options, the DCA will copy the default avatars to the shared NFS volume and create a file structure similar to this:

wcsFiles/
├── avatars
└── virustotal

Scaling and Highly Available NFS Architectures

To ensure that the NFS setup is scalable and highly available, consider the following:

  • Automounting: Utilize automounting features to manage mounting of NFS volumes dynamically based on usage. This can help balance the load across multiple NFS servers; see the autofs sketch after this list. Reference: NFS Automount Configuration
  • Load Balancing: Implement load balancing to distribute the traffic evenly across multiple NFS servers. Tools like HAProxy can be used. Reference: Load Balancing NFS with HAProxy
  • Horizontal Scaling: Increase the number of NFS servers as needed. This might involve setting up a distributed file system like GlusterFS or CephFS, which can scale out by adding more servers and data centers. Reference: GlusterFS Documentation, CephFS Documentation
  • Redundant NFS Servers: Set up multiple NFS servers with redundancy. If one server fails, another can take over. Reference: High Availability NFS using DRBD and Heartbeat
  • Shared Storage: Use shared storage solutions like AWS EFS, which inherently provides high availability and durability. Reference: AWS EFS High Availability
  • Backup and Recovery: Regularly backup NFS data and have a recovery plan in place. Tools like rsync can be used for incremental backups. Reference: Using rsync for Backups
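
As a minimal sketch of automounting, autofs can mount the shared volume on demand using a direct map. The mount point and server below follow the first URI example and are placeholders; note that the DCA normally manages its own /data mount, so this is for illustration only:

    # Install autofs (Debian/Ubuntu: sudo apt install autofs; RHEL/CentOS: sudo dnf install autofs)

    # /etc/auto.master -- add a direct map:
    /-    /etc/auto.nfs

    # /etc/auto.nfs -- mount the DCA shared volume on demand at /data:
    /data  -rw,hard  172.31.20.2:/mnt/data/dcas/wcs1

    # Reload autofs after editing the maps:
    sudo systemctl restart autofs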
The following diagram (in Mermaid flowchart syntax) illustrates a load-balanced, redundant NFS architecture:

    flowchart TD
        A["WCS DCA"] -- Round Robin Request --> E["Load Balancer"] & H["Load Balancer"]
        G["WCS DCA"] -- Round Robin Request --> E & H
        E -- Read/Write --> I["NFS Server"] & J["NFS Server"]
        H -- Read/Write --> I & J
        I -- Round Robin --> K["Shared Storage"] & L["Shared Storage"]
        J -- Round Robin --> K & L
        K -- Backup --> M["Backup Server"]
        L -- Backup --> N["Backup Server"]

GlusterFS and CephFS are both distributed storage systems, but they have different architectures and use cases; either can satisfy the NFS requirements of the WCS DCA appliances:

GlusterFS

  • Architecture: Gluster uses a scale-out architecture where storage is provided by multiple servers (or nodes) that pool their storage resources into a single namespace. It uses a stackable user-space design, meaning it's implemented mostly in user space.
  • Use Cases: Gluster is primarily used for scale-out NAS (Network-Attached Storage) and is well-suited for file-based workloads. It's designed to be simple to deploy and manage, often used for storing large volumes of unstructured data like logs, media files, and backups.
  • Data Access: Gluster provides file-level access to data, typically using protocols like NFS and SMB.
  • Ease of Use: Gluster is generally considered easier to set up and manage compared to Ceph, with a simpler configuration and fewer dependencies.
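
As a rough sketch, a replicated Gluster volume can be created across two storage nodes and mounted on a client. The hostnames (gfs1, gfs2), brick paths, and volume name below are placeholders, and NFS access to Gluster volumes is typically provided through NFS-Ganesha rather than the native client shown here:

    # On gfs1: join the second node and create a two-way replicated volume
    sudo gluster peer probe gfs2
    sudo gluster volume create wcsvol replica 2 gfs1:/bricks/wcsvol gfs2:/bricks/wcsvol
    sudo gluster volume start wcsvol

    # On a client: mount the volume with the native FUSE client
    sudo mount -t glusterfs gfs1:/wcsvol /mnt/wcsvol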

CephFS

  • Architecture: Ceph uses a more complex, distributed object storage architecture that provides interfaces for object, block, and file-level storage. It uses a combination of RADOS (Reliable Autonomic Distributed Object Store) for storing data and CRUSH (Controlled Replication Under Scalable Hashing) for data placement and replication.
  • Use Cases: Ceph is designed to provide unified storage, supporting object storage (via S3 and Swift APIs), block storage (via RBD), and file storage (via CephFS). It's often used in cloud environments and for OpenStack storage backends.
  • Data Access: Ceph provides object, block, and file-level access, making it a versatile solution for various storage needs. It's known for its high scalability and performance, particularly in environments that require high availability and redundancy.
  • Complexity: Ceph is more complex to deploy and manage compared to Gluster, requiring more expertise and a more extensive setup process. It also has a higher learning curve but offers more flexibility and robustness.
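
As an illustrative sketch, a CephFS file system can be mounted with the kernel client; the monitor address, user name, and secret file below are placeholders for an existing Ceph cluster:

    # Mount CephFS from a monitor at 172.31.20.30 using the client.admin key
    sudo mkdir -p /mnt/cephfs
    sudo mount -t ceph 172.31.20.30:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret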

Air Gapped Backups

  • Backup and Recovery: We highly recommend performing periodic air-gapped backups as part of a disaster/recovery plan.
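
As a simple sketch, rsync can copy the shared volume to backup media that is detached after the run to keep the copy air-gapped; the destination path below is a placeholder:

    # Incremental copy of the shared NFS volume to an attached backup disk
    sudo rsync -aAX --delete /srv/nfs/share/wcs1/ /mnt/backup/wcs1/
    # Detach the backup media afterwards to keep the copy air-gapped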

Monitoring and Alerts