
Introduction to the Visualisation cluster

The Visualisation cluster is a High Performance Computing resource that provides an advanced visualisation capability to the end user's desktop. Users can connect to the cluster and take advantage of its capabilities to render very large data sets interactively from their desktops without needing any special graphics hardware on their workstation. The rendered images are sent back over the network and displayed on the desktop using a lightweight visualisation client.

The Visualisation cluster consists of a login node, viz0 (viz0.canterbury.ac.nz), and five compute nodes, viz1 to viz5, each with 2 GPUs and 8 cores. Each node has access to the same shared file system as the other BlueFern HPC systems (BlueGene/P, POWER7); in other words, users can access the same home and scratch directories from any of the BlueFern HPC systems, including the visualisation cluster.

To run visualisation software interactively on the cluster you will first need to connect to the login node and request resources (e.g. access to 2 GPUs on 1 node). In simple terms, the login node will check which nodes in the cluster are available and allocate you the requested resources. You can then connect to your allocated visualisation node and start your visualisation session there.

There are two ways you can typically use visualisation software on the cluster:

  1. Using a remote visualisation session (VNC), where you can start any software on the cluster (see Remote visualisation session)
  2. Using a Client/Server configuration, where you need the software installed on your desktop and connect it to the server version of the software on the cluster (e.g. ParaView)

Virtual Network Computing (VNC) is a desktop sharing system which uses a protocol to remotely control another computer (one of the visualisation cluster nodes, for example). It transmits keyboard presses and mouse clicks from one computer to the other, relaying screen updates back in the other direction over the network.

 

Visualisation Cluster Hardware

The visualisation nodes have an Intel x86 architecture and run SLES 11 (SUSE Linux Enterprise Server) as their operating system. Each visualisation node viz1-viz5 has:

  • 96GB of RAM
  • 2 NVIDIA Tesla M2070Q GPUs
  • 8 cores

Visualisation software available

 

Available in a remote visualisation session (VNC):

  • ParaView
  • VisIt
  • EnSight
  • Fluent on the Viz Cluster
  • VMD
  • PyMOL
  • SmokeView

Available in Client/Server mode:

  • ParaView

For software available on all the BlueFern systems please check Software available on the BlueFern Systems.

 

Remote visualisation session

A remote visualisation session consists of connecting a Virtual Network Computing (VNC) client on your local machine to a VNC server on one of the visualisation cluster nodes over the network, as described in the introduction above.

To connect to the cluster via a remote visualisation session, you will first need to install a VNC client on your local machine. For example, you could use TurboVNC, which can be downloaded from this link: http://www.virtualgl.org/Downloads/TurboVNC. Once you have installed TurboVNC, you will find its executables under the path /opt/TurboVNC/bin.

Before you can request a remote visualisation session on the login node viz0 and start a session on your allocated node (viz1-viz5), you will first have to create a VNC password. Please choose one that is different from your login password.

Quick Start guide for Linux/Unix/Mac clients

  1. Log in to the visualisation head node viz0:

  2. Create a VNC password (necessary only once), see Creating a VNC password
  3. Request a node with 1 GPU (recommended): 

  4. Start a VNC session on your desktop, using the connection details given in the output of the viz-tvnc -x command.
  5. Your Gnome session will appear on one of the visualisation nodes; open a terminal there and start ParaView with vglrun paraview.
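The five steps above might look like the following transcript. The hostnames, the vncpasswd command and the viewer path are assumptions based on a standard TurboVNC setup; always use the exact commands and connection details printed in your own terminal:

```
# 1. Log in to the head node
> ssh username@viz0.canterbury.ac.nz

# 2. Create a VNC password (first time only; assumed command)
> vncpasswd

# 3. Request a node with 1 exclusive GPU
> viz-tvnc -x

# 4. On your LOCAL machine, start the TurboVNC viewer and connect
#    using the node and display printed by viz-tvnc
> /opt/TurboVNC/bin/vncviewer

# 5. Inside the remote Gnome session, start ParaView through VirtualGL
> vglrun paraview
```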

These instructions and steps are explained in more detail in the following subsections.


Note that throughout this documentation the symbol ">" in front of a command line refers to a terminal prompt (i.e. where you enter commands) and should not be typed as part of the command.

Note: vglrun forces applications to render through VirtualGL (see What is VirtualGL?) and therefore to use the cluster GPUs for rendering.

 

Creating a VNC password (1st connection only)

In order to use a remote visualisation session you need to create a TurboVNC password.

The first step is to login to the head node viz0:

Then on viz0 type the following command line, and follow the instructions:
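A sketch of these two steps, assuming the standard TurboVNC vncpasswd utility (the exact command or path on viz0 may differ):

```
# on your local machine
> ssh username@viz0.canterbury.ac.nz

# on viz0: set your VNC password when prompted
> vncpasswd
Password:
Verify:
```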

Note that you do not need to enter a view-only password. This is an extra option that allows collaborators to connect to and view your session without being able to interfere.


It is recommended that the password you choose is not the same as your login password.

Since the path /hpc/home/username is part of the shared file system, you do not need to repeat this first-time setup on the other viz nodes.

Now you are ready to request a visualisation session on the viz cluster.

Starting a remote visualisation session

There are several steps involved in the process:

  • Requesting resources on the login node;
  • Creating a ssh forward tunnel from your local machine to your allocated node;
  • Starting the TurboVNC client session on your local machine;
  • Starting a 3D application with VirtualGL (e.g. vglrun paraview).

The following describes each step in detail.

Requesting resources

First you need to log in to the head node viz0, where you will request resources via the VizStack script "viz-tvnc". There are several options you can use with viz-tvnc; the most important ones are described below:

For example, to request a session on 1 exclusive GPU, you will type:

Alternatively, to request a session on an entire node (with 2 GPUs):
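For illustration, the two requests above would look like this on viz0 (the -x and -N flags are those described in this page; check the viz-tvnc help for the exact option names on your system):

```
# request 1 exclusive GPU
> viz-tvnc -x

# request an entire node (2 GPUs)
> viz-tvnc -N
```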

To see all of the viz-tvnc options you can run this following command in a terminal:
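Assuming viz-tvnc follows the usual convention for help flags:

```
> viz-tvnc -h
```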

Let's start a session with exclusive use of a GPU (-x option)
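The output below is only indicative of what viz-tvnc prints; the exact wording, node name, display and port will differ in your terminal:

```
> viz-tvnc -x
# Allocated 1 GPU on node viz1-c
# Connection details (node name, VNC display/port) are printed here.
# Leave this terminal open; press Ctrl-C to end the session
# and free the allocated resources.
```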

As described in the terminal output, you have been allocated 1 GPU on the node viz1-c. It is important to leave this terminal open and untouched throughout your entire session. When you are ready to stop your session and free the allocated resources, you will need to enter Control-C (often referred to as ^C).

Live instructions


Follow exactly the instructions printed in YOUR terminal, not the examples in this documentation: use the node name and the port number given in your own terminal output.

Starting a VNC client, connection with ssh forward tunnel

VNC uses a random challenge-response system to provide authentication that allows you to connect to a VNC server. This is reasonably secure as the password is not sent over the network. However, once connected, the traffic between the client and the server is unencrypted. If high security is important to you, we recommend that you "tunnel" your VNC client connection through an encrypted channel like ssh.

Using ssh has another advantage: it can also compress the data. This is particularly useful if the connection between your VNC client and the VNC server is slow. To add simple compression, use the -C option. If you have a slow network connection, you can also change the image quality options of the VNC viewer; please check the section Handling the image quality versus network bandwidth problem below.

If you wish to "tunnel" your VNC session over ssh, you just need to follow the steps from the viz-tvnc output for the second set of instructions:

In other words, before starting the TurboVNC client on your local machine and connecting to the TurboVNC server on viz1 (in this example), you will need to create an "ssh forward tunnel" from your local machine to your allocated node. In a new terminal you will enter the following command:
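A typical tunnel command is sketched below, assuming display :1 (port 5901) on viz1 and tunnelling through the login node viz0; replace the username, node name and port with the values from your own viz-tvnc output:

```
# run on your LOCAL machine; -C adds compression, -L forwards the port
> ssh -C -L 5901:viz1:5901 username@viz0.canterbury.ac.nz
```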

The -L option is the forwarding option, and 5901 is the port that ssh uses to open a connection between your local machine and your allocated node (here, viz1). Depending on your particular settings and allocated node, you will have to adapt this command line.

Again you need to leave this terminal open during your entire remote visualisation session. Once your session is over you may log out.

This is the final step to access your remote visualisation session. Remember that you need to have TurboVNC (or any other VNC client) installed on your local machine first.

In a new terminal (if using TurboVNC on Unix or Mac), you can start the TurboVNC viewer as follows:
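With a default TurboVNC installation (the path is the one given earlier on this page), that is:

```
> /opt/TurboVNC/bin/vncviewer
```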

A small X window will pop up and ask you to enter the server name and window display you wish to connect to. 

Enter the following then press enter:
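With a tunnel on local port 5901 (display :1), as in the example above, the server field would be:

```
localhost:1
```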

(warning) Note that you need to use "localhost:1" when tunnelling VNC over ssh as opposed to the node name e.g "viz1:1" for a direct connection.

This small window will disappear and you will be prompted for your password: the VNC password you created when you first set up the TurboVNC server. The remote visualisation session will then start.

Note that some VNC viewers offer an ssh tunnel connection as an option; if you tick it, you may not need to do the above step manually.

Starting a VNC client, direct connection 

If you wish to connect your VNC client directly to the VNC server on the Visualisation cluster, you just need to follow the steps from the viz-tvnc output for the first set of instructions:

This is the final step to access your remote visualisation session. Remember that you need to have TurboVNC (or any other VNC client) installed on your local machine first.

In a new terminal (if using TurboVNC on Unix or Mac), you can start the TurboVNC viewer as follows:

A small X window will pop up and ask you to enter the server name and window display you wish to connect to. 

Enter the following then press enter:
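For a direct connection, the server field is the allocated node's hostname plus the display number; assuming node viz1 and display :1 from the viz-tvnc output, that would be:

```
viz1.canterbury.ac.nz:1
```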

This small window will disappear and you will be prompted for your password: the VNC password you created when you first set up the TurboVNC server. The remote visualisation session will then start.

(warning) Note that if your local desktop is on the University of Canterbury campus network, you may not need to specify the full hostname (viz1.canterbury.ac.nz) and could use the shorter name "viz1:1" for your VNC connection.

Running applications inside the remote Visualisation session

To run GPU-enabled applications (i.e. applications that use the GPUs for rendering), you will need to start your application through VirtualGL by prefixing the command with vglrun:

For example as a simple test you can try:
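Inside a terminal in your remote session, the standard OpenGL test program can be run through VirtualGL like this:

```
> vglrun glxgears
```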

You will see rotating gears.

An alternative to glxgears is glxspheres64: 
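glxspheres64 ships with VirtualGL; the path below assumes a default VirtualGL installation and may differ on the viz nodes:

```
> vglrun /opt/VirtualGL/bin/glxspheres64
```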

You will see rotating spheres rendered by the GPU.

Now for a more practical example, you can run the ParaView application (client) in your remote session as follows:
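That is, in a terminal inside the remote session:

```
> vglrun paraview
```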

In a similar way you can use Fluent, EnSight and VisIt. For a more detailed list of available visualisation software see the Visualisation software available section above; for a list of general applications and packages available on all the BlueFern systems please refer to Software available on the BlueFern Systems.


Summary

To start a TurboVNC session on the visualisation cluster follow the steps below:

  1. ssh to the login node viz0;
  2. Request resources on the login node via the VizStack command "viz-tvnc" with the appropriate options (-x for an exclusive GPU, -N for an entire node);
  3. In a new terminal, open an ssh forward tunnel from your local machine to your allocated node;
  4. Start a TurboVNC client session on your local machine;
  5. Start a 3D application with VirtualGL, e.g. vglrun paraview.

Stopping a remote visualisation (VNC) session

If you just quit your local VNC viewer without doing any of the steps below, your session will still be running on the Viz cluster, and you can reconnect to it later, from a different computer for example.

To really terminate a VNC remote session on the Viz cluster, you should do one of the following actions:

  1. When you are ready to stop your session and free the allocated resources, enter Control-C (often referred to as ^C) in the terminal where viz-tvnc is running.

  2. Alternatively, you can just "log out" from inside your VNC session; this will stop your session and free your allocation.
  3. If you are unable to do 1. or 2., you can "force quit" your allocation by using the "vs-kill" command from VizStack on the login node viz0:
    In a new terminal:

    Then to see your session ID use vs-info

    The session ID in the example above is 85, so to force-kill your session, use vs-kill 85

    To check that your session has been terminated, you can check again with vs-info:
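The whole force-quit sequence might look like the transcript below; the session ID 85 matches the example in the text, but you must use the ID reported by vs-info for your own session:

```
# in a new terminal, log in to the login node
> ssh username@viz0.canterbury.ac.nz

# list your sessions and note the session ID (85 in this example)
> vs-info

# force-kill that session
> vs-kill 85

# run vs-info again to confirm the session is gone
> vs-info
```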

Handling the image quality versus network bandwidth problem

Depending on the network your local machine is connected to (wifi, cable), you may observe various degrees of performance in terms of image quality and interactivity.

Most VNC clients do have bandwidth and quality options to choose from when you connect to the visualisation cluster. The TurboVNC viewer has the following command line options that may be useful to improve the usability of the remote desktop:

  • bandwidth <2MBit/s (slow connection): 

  • bandwidth between 2MBit/s and ~50MBit/s (cable)

  • bandwidth >50 MBit/s (LAN, e.g on UC campus network)

  • It is also possible to do some fine tuning of TurboVNC's JPEG compression by varying N with "-quality N" between 1 (lowest quality and highest compression) and 100 (best quality, lowest compression).
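As a sketch, the three bandwidth cases above could map onto TurboVNC viewer invocations like the following; the -quality values are only suggestions, and the exact option names depend on your TurboVNC version (check the viewer's built-in help for the authoritative list):

```
# slow connection (<2 Mbit/s): low-quality JPEG, high compression
> /opt/TurboVNC/bin/vncviewer -quality 30

# medium bandwidth (cable, 2-50 Mbit/s): medium-quality JPEG
> /opt/TurboVNC/bin/vncviewer -quality 80

# fast LAN (e.g. UC campus network): near-lossless quality
> /opt/TurboVNC/bin/vncviewer -quality 95
```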

Changing the geometry of your VNC viewer

You can specify the geometry (size) of the remote visualisation session when starting viz-tvnc with the -g option:

On a Unix based operating system (e.g Ubuntu, Mac) you can check the resolution of your screen by running the command line:

You can then pass this resolution to viz-tvnc:
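For example, assuming a 1920x1080 local screen (xdpyinfo is a standard X11 utility; the "dimensions" line shows your resolution):

```
# on your LOCAL machine: find your screen resolution
> xdpyinfo | grep dimensions

# on viz0: request an exclusive GPU with a matching geometry
> viz-tvnc -x -g 1920x1080
```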

You can then enter full screen once you are connected.

Applications in Client/Server mode

Some visualisation applications can be used in "Client/Server" mode. You open your local application on your desktop (the "client") and connect it to its parallel or rendering counterpart on the visualisation cluster (the "server"). All the heavy lifting of rendering, memory use and processing is done on the server side (on the visualisation cluster), and a lightweight video stream of the rendered and processed data is sent over the network to your "client" application in real time.

One good example of an application that can be used in Client/Server mode is ParaView. The ParaView client is a serial application and is always run with the "paraview" command. The server is a parallel MPI program that must be launched as a parallel job, with the "pvserver" command. You can start a ParaView server (pvserver) session on the visualisation cluster, where all the CPU and GPU rendering will take place, and visualise the results interactively from the local ParaView application on your computer (paraview).
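Schematically, a Client/Server session has this shape; the exact pvserver launch command, node name and port are described on the ParaView page, so the lines below are only an illustration (the port 11111 is ParaView's default and matches the firewall range listed later on this page):

```
# on your allocated visualisation node: start the ParaView server
> vglrun pvserver

# on your LOCAL machine: start the client, then use
# File -> Connect and point it at <allocated node>:11111
> paraview
```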

There are several steps involved in the process:

  • Requesting resources on the login node
  • Starting your client application on your local machine and connecting it to the application server on the cluster.

For specific information on how to start the ParaView Client/Server mode please go to ParaView.

Firewall configuration requirements

If your organisation (e.g. one outside the University of Canterbury) has a firewall blocking outgoing connections, ask your system administrator to:

  • open ssh port 22, for access to the login node viz0
  • open TCP port range 5901-5910, for remote visualisation sessions with VNC
  • open TCP port range 11111-11120, for ParaView in Client/Server mode

What is VirtualGL?

  • With VirtualGL, the OpenGL commands and 3D data are redirected to a 3D graphics accelerator on the application server, and only the rendered 3D images are sent to the client machine.
  • VirtualGL eliminates the workstation and the network as barriers to data size. Users can now visualize huge amounts of data in real time without needing to copy any of the data over the network or sit in front of the machine that is rendering the data.
  • VirtualGL forces the OpenGL commands to be delivered to a server-side graphics card rather than a client-side graphics card.

For more detailed information about VirtualGL check out http://www.virtualgl.org/
