
Advanced Research Computing: DNS Names and Definitions

DNS names of ARC-TS Nodes

gl-build.arc-ts.umich.edu

  • Staging system for Great Lakes, for administrative use. Only accessible through mandarin.dsc.umich.edu or alethe.dsc.umich.edu

greatlakes.arc-ts.umich.edu

  • Great Lakes login system for users. Users must be on a campus network or using UMVPN.
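
  A minimal `~/.ssh/config` entry makes logging in shorter (a sketch; `uniqname` is a placeholder for your own U-M uniqname):

  ```
  # Hypothetical ~/.ssh/config entry for the Great Lakes login node
  Host greatlakes
      HostName greatlakes.arc-ts.umich.edu
      User uniqname
  ```

  With this in place, `ssh greatlakes` connects without retyping the full hostname.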

gl-campus-login.arc-ts.umich.edu

  • A login node for users to run software with licenses that restrict their usage to on campus. This node can only be accessed via a wired connection when physically on campus.

greatlakes-xfer.arc-ts.umich.edu

  • The Great Lakes transfer node. No login (no ssh) but still supports scp, sftp, and globus.
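
  Because the transfer node accepts scp and sftp but not interactive ssh logins, a matching `~/.ssh/config` entry (a sketch; `uniqname` is a placeholder) keeps transfer commands short:

  ```
  # Hypothetical ~/.ssh/config entry for the Great Lakes transfer node
  Host gl-xfer
      HostName greatlakes-xfer.arc-ts.umich.edu
      User uniqname
  ```

  Then, for example, `scp results.tar.gz gl-xfer:` copies a file to your home directory, and `sftp gl-xfer` opens an interactive file-transfer session; attempting `ssh gl-xfer` for a login shell will be refused.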

nyxb.arc-ts.umich.edu

  • Primary Flux administrative system, used to create or modify objects. Only accessible through mandarin.dsc.umich.edu or alethe.dsc.umich.edu

flux-admin09.arc-ts.umich.edu

  • Used to access other administration nodes like cavium-rm01.

cavium-rm01.arc-ts.umich.edu

  • Used for administrative tasks on the Cavium ThunderX Hadoop cluster.

mandarin.dsc.umich.edu

  • Used to get on the ARC-TS subnet to access nodes like nyxb without a vpn.

alethe.dsc.umich.edu

  • Oracle system. Used to get on the ARC-TS subnet to access nodes like nyxb without a vpn.
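
  Since nodes like nyxb are only reachable through mandarin or alethe, OpenSSH's `ProxyJump` directive can route the hop automatically. A sketch of the two `~/.ssh/config` entries, assuming OpenSSH 7.3 or later and `uniqname` as a placeholder:

  ```
  # Hypothetical ~/.ssh/config entries: jump through mandarin to reach nyxb
  Host mandarin
      HostName mandarin.dsc.umich.edu
      User uniqname

  Host nyxb
      HostName nyxb.arc-ts.umich.edu
      User uniqname
      ProxyJump mandarin
  ```

  With this, `ssh nyxb` first connects to mandarin.dsc.umich.edu and tunnels through it to nyxb, with no VPN required.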

flux-xfer.arc-ts.umich.edu 

  • The Flux transfer node. No login (no ssh) but still supports scp, sftp, and globus.  There are six nodes: xfer1 through xfer6. 

armis2.arc-ts.umich.edu

  • Primary login node for Armis2. 

cavium-thunderx-login01.arc-ts.umich.edu

  • A login node for users to run Hadoop jobs on the Cavium ARM cluster.

lighthouse.arc-ts.umich.edu

  • A login node for users to access their own equipment. 

lh-build.arc-ts.umich.edu

  • Staging system for Lighthouse, for administrative use. Only accessible through mandarin.dsc.umich.edu or alethe.dsc.umich.edu

lh-campus-login.arc-ts.umich.edu

  • A login node for users to run software with licenses that restrict their usage to on campus. This node can only be accessed via a wired connection when physically on campus.

ARC

High Performance Computing

  • Great Lakes - High-performance computing cluster

  • Armis2 - HIPAA-aligned high-performance computing cluster

  • Lighthouse - PI-owned (principal investigator) hosted hardware

  • Conflux - Low-latency, high-bandwidth Power8 (PowerPC) high-performance computing cluster. Primarily used by computational physics groups.

  • XSEDE - National HPC facilities; one can choose from a variety of clusters located at other institutions

  • Yottabyte Research Cloud - CUI-compliant high-performance computing cluster

Data Transfer

  • Globus - Web interface for transferring files between clusters, storage services, and personal computers 

Storage

  • ARC-TS storage - mounted only on Great Lakes, Armis2, and Lighthouse: /home, /tmp, and /scratch; suitable for short-term storage

  • Turbo - research storage; individual volumes may be mounted on clusters at /nfs/turbo; suitable for storing and working with medium-sized data for the medium term. Can be mounted on Great Lakes, Lighthouse, or Armis2. Volumes containing PHI can only be mounted on Armis2.

  • Locker - research storage; individual volumes may be mounted on Great Lakes; suitable for storing large data for the medium term

  • DataDen - tape archive; individual volumes are only mounted on Flux; uses disk caching for non-sequential access; suitable for archiving very large data for the long term

Hadoop

  • Cavium ThunderX - Free-of-cost ARMv8 (ARM) Hadoop cluster with the Hadoop Distributed File System (HDFS) and more storage
