Some operations, like backup/restore and data loading, require you to read or write large files. You can mount a NAS (network-attached storage) file system for these operations.
This procedure shows you how to mount a NAS file system for storing or accessing large files. The file system is mounted at the same location on every node in the cluster. When a node restarts, the file system is remounted automatically, provided it can be found.
When supplying a directory for writing or reading a backup, you can specify the mountpoint as the directory to use. Likewise, you can stage data there for loading.
Note that backups are written by the Linux user "admin". If that user does not have permission to write to the NAS file system, you can write the backups to local disk instead (for example /export/sdc1, /export/sdd1, /export/sde1, or /export/sdf1), and set up a cron job, running as root, that copies each night's backup to the NAS device and then deletes the local copy.
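The nightly copy-and-delete step can be sketched as a small script driven by cron. This is a minimal illustration, not ThoughtSpot tooling; the source and destination paths, the script location, and the schedule are all assumptions to adapt to your environment.

```shell
#!/bin/sh
# Hypothetical nightly job: move finished backups from a local data disk
# to the NAS mountpoint. All paths here are examples.

copy_backups() {
  src="$1"    # local directory where backups were written, e.g. /export/sdc1/backups
  dest="$2"   # NAS mountpoint, e.g. /mnt/nas/backups
  for f in "$src"/*; do
    [ -e "$f" ] || continue                # nothing staged tonight
    cp -r "$f" "$dest"/ && rm -rf "$f"     # delete local copy only if the copy succeeded
  done
}

# Production call (example paths):
#   copy_backups /export/sdc1/backups /mnt/nas/backups
#
# Example crontab entry for root, running at 01:30 every night:
#   30 1 * * * /usr/local/bin/copy_backups.sh
```

Deleting the local copy only after `cp` succeeds keeps a backup from being lost if the NAS is briefly unreachable.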
Do not write periodic backups to, or stage files on, /export/sdb1: that drive is used internally by the Hadoop Distributed File System (HDFS) NameNode, and if it fills up, it can cause serious problems. More generally, do not allow backups or data files to accumulate on ThoughtSpot. If disk space becomes limited, the system will not function normally.
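Because a full drive can disable the system, it is worth checking disk usage before letting backups accumulate. Below is a minimal sketch based on standard `df` output; the 90% threshold and the drives to check are assumptions, not ThoughtSpot defaults.

```shell
# Print the percentage of space used on the filesystem holding a path.
usage_pct() {
  # df -P guarantees one output line per filesystem; column 5 is "Use%"
  df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

# In production you would check the HDFS and data drives, e.g.
# /export/sdb1 and /export/sdc1; "/" is used here only so the
# sketch runs anywhere.
if [ "$(usage_pct /)" -ge 90 ]; then
  echo "warning: filesystem almost full; move backups off the appliance" >&2
fi
```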
Log in to the Linux shell using SSH.
Mount the file system to a directory by issuing the appropriate command:
For an NFS (Network File System) directory:
tscli nas mount-nfs
For a CIFS (Common Internet File System) directory:
tscli nas mount-cifs
Use the mounted file system as needed, referring to it by its mount point.
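Before pointing a backup or data load at the mount point, it is prudent to confirm the NAS is actually mounted there, so files do not silently land on the local disk instead. A minimal sketch follows; /mnt/nas is a hypothetical mount point.

```shell
# Return success if the given path is an active mount point, by looking
# it up in /proc/mounts (field 2 of each line is the mount point).
is_mounted() {
  awk -v p="$1" '$2 == p { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /mnt/nas; then          # /mnt/nas is an example path
  echo "NAS mounted; safe to write to /mnt/nas"
else
  echo "NAS not mounted; refusing to write" >&2
fi
```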
When you are finished, you can unmount the NAS file system:
tscli nas unmount --dir <directory>