Quota Management - calab-ntu/gpu-cluster GitHub Wiki
First, `ssh` to `tumaz`:
- Turn off quota

  ```
  quotaoff -a
  ```

- Turn on quota

  ```
  quotaon -auvg
  ```
- Create an account on eater
  - Open the DSM GUI in a web browser
  - Control Panel > User > Create user
  - Except for these two steps, follow the preset values. (The password is set to the same as root on eureka.)

  ```
  ssh OurLabID@eater
  sudo -i
  chown NewID:calab /volume1/gpucluster3/NewID
  ```
- Create a directory for the user on eater:/projectY

  Ref: https://blog.xuite.net/m740138.m740138/blog/35596395-QUOTA

  ```
  ssh OurLabID@eater
  sudo -i
  mkdir /volume1/gpucluster3/NewID
  chown NewID:calab /volume1/gpucluster3/NewID
  ```
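The directory-creation steps above can be sketched as a small dry-run helper that only prints the commands for a given account; the function name `provision_cmds` and the sample ID `alice` are illustrative assumptions, and the printed output would be run as root on eater:

```shell
# Hypothetical dry-run helper: prints the mkdir/chown commands for a
# new account under /volume1/gpucluster3 (execute the output as root on eater).
provision_cmds() {
  newid="$1"
  dir="/volume1/gpucluster3/$newid"
  printf 'mkdir %s\n' "$dir"
  printf 'chown %s:calab %s\n' "$newid" "$dir"
}

provision_cmds alice
# mkdir /volume1/gpucluster3/alice
# chown alice:calab /volume1/gpucluster3/alice
```

Printing instead of executing lets you review the exact paths before touching the NAS.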
- Set a quota for the user (4 TB is the limit)

  ```
  # A: name
  # B: quota format
  # C: block soft limit (kB)
  # D: block hard limit (kB)
  # E: inode soft limit
  # F: inode hard limit
  # G: mount point
  #          A      B  C          D          E F G
  setquota -u tseng -F vfsv0 5368709120 5368709120 0 0 /volume1
  # 1 MB = 1024 kB
  # 1 GB = 1048576 kB
  # 1 TB = 1073741824 kB
  ```

  The maximum volume size of the ext4 format is limited to 4 TB.
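Since `setquota` takes its block limits in kB, the conversion can be checked with shell arithmetic; this sketch reproduces the 5 TB value used in the `setquota` example, using the 1 TB = 1073741824 kB table above:

```shell
# Convert a quota given in TB to the kB block limit that setquota expects.
quota_tb=5
quota_kb=$(( quota_tb * 1073741824 ))
echo "$quota_kb"   # 5368709120, the soft/hard limit in the example
```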
- Report quota

  ```
  repquota -auvs
  ```
btrfs: w/ DSM (ironman:/work1, eater:/projectW, eater:/projectY, eater:/projectZ, pacific:/projectU, pacific:/projectV)

All of the following steps require logging in to DSM.
- Create a directory for the user
  - File Station > choose the folder where the new directory goes
  - Create > Create Folder > name the folder with the user's name
  - Log in to eureka:root or pacific:root and set up privileges:

    ```
    chown User:group /projectV/User
    chmod 755 /projectV/User
    ```
- Delete a directory
  - File Station > find the directory you want to delete
  - Right-click the directory and select Delete
  - Confirm the Delete dialog and key in the admin password
- Set and show quota
  - Control Panel > Domain/LDAP > LDAP Users
  - Choose the user and click Edit > Quota
  - Here you can set the quota or check the user's usage
  - After editing the quota, click Save
  - Update the page https://github.com/calab-ntu/eureka/wiki/Storage
- Set quota
  - Make sure enough total quota is left. (Check https://github.com/calab-ntu/eureka/wiki/Storage)

  ```
  sudo btrfs qgroup limit [quota] /volume2/gpucluster4/NewID
  sudo btrfs usrquota limit -u [userID_number] [quota] /volume2/gpucluster4/NewID
  ```
- Show quota

  ```
  sudo btrfs qgroup show -pcreF /volume2/gpucluster4/NewID/
  sudo btrfs usrquota show /volume2/gpucluster4/NewID
  ```