GROMACS produces graphical output in the form of .xvg files. These are designed to be viewed with a classic UNIX/Linux plotting program called Grace. If you happen to be using Linux and you have Grace installed, it is very easy to plot the data with a single command, as shown below.
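A minimal sketch, assuming Grace's command-line binary is installed as xmgrace; the file name here is a hypothetical example, not one from a specific tutorial:

$ xmgrace energy.xvg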
My current employer, the STOKES Advanced Research Computing Center (STOKES ARCC), is hiring a postdoctoral research associate to conduct research in high performance computing with an emphasis on next-generation networking technologies. The ARCC has internal funding that will be used to upgrade our research network to the Internet2 Innovation Platform standard. We are also seeking external funding to extend the research network across the UCF campus. We are looking for a candidate with an interest in topics such as defining a “Science DMZ,” Internet2, GENI, perfSONAR, software-defined networks, etc. Please use the link above to apply for the position. Feel free to contact me if you have questions; my contact information is on the about page.
I have published up-to-date versions of two classic GROMACS tutorials on GitHub. The Getting Started section of the GROMACS online documentation contains some helpful tutorials. Unfortunately, these tutorials have not been updated in a while. They also don’t explain how to set up an efficient workflow to run large molecular dynamics simulations on a shared cluster using a resource manager such as Torque. I have created a set of files that implement the speptide tutorial from the GROMACS documentation. You can use my files and follow along with the explanations in the GROMACS manual; a sketch of a Torque submission script is shown below.
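As a rough sketch of such a workflow (the job name, resource requests, and the -deffnm run name are placeholders rather than values from the tutorial, and the MD binary may be named mdrun_mpi depending on how GROMACS was built), a Torque submission script might look like this:

#!/bin/bash
#PBS -N speptide
#PBS -l nodes=1:ppn=8
#PBS -l walltime=02:00:00
#PBS -j oe

# Torque starts the job in $HOME; change to the submission directory
cd $PBS_O_WORKDIR
# "full" is a placeholder for the .tpr base name produced by grompp
mpirun -np 8 mdrun -deffnm full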
Most of the time, RPM (especially in conjunction with yum) is a decent package management solution. However, I can think of two common circumstances when you don’t want to let RPM install a package (a workaround for both is sketched after the list):
- You don’t have root permissions on a system such as a shared cluster
- You are an administrator on a shared cluster and you can’t risk having a package overwrite system-critical files
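In either case, a standard trick is to unpack the package into a directory you control instead of installing it. rpm2cpio and cpio ship with every RPM-based distribution; the package path below is a placeholder:

$ mkdir -p ~/pkg/foo && cd ~/pkg/foo
$ rpm2cpio /path/to/foo-1.0-1.x86_64.rpm | cpio -idmv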
I found an interesting quirk when trying to build an OpenMPI application on a visualization node with a “stock” version of Red Hat Enterprise Linux 5.8. I used mpicc to compile the application and got the following error:
$ mpicc hello_world_mpi.c -o hello_world
/usr/bin/ld: cannot find -lnuma
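The linker is looking for the libnuma.so development symlink, which on RHEL comes from the numactl-devel package. A sketch of the two usual fixes (the library path below is the stock RHEL 5 x86_64 location; verify it on your system):

$ # with root: install the development package that provides libnuma.so
$ yum install numactl-devel
$ # without root: symlink the runtime library and point the linker at it
$ mkdir -p ~/lib && ln -s /usr/lib64/libnuma.so.1 ~/lib/libnuma.so
$ mpicc hello_world_mpi.c -o hello_world -L$HOME/lib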
Since Python is widely used as a high-productivity language for scientific computing, Intel has created a page showing how to build NumPy with Intel compilers and the Math Kernel Library (MKL). I would like to clarify a few items regarding building NumPy on a 64-bit Red Hat Enterprise Linux 5.4 system. Since this is a production system, I don’t want to replace the Python 2.4 binary that ships with RHEL 5.4. Instead, I created a directory called python-2.7.3-intel-composer-2013 to hold a separate Python build.
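As a rough sketch of Intel’s recipe (the compilervars.sh path depends on where Composer XE is installed, and NumPy’s site.cfg must already point at the MKL libraries), the build boils down to sourcing the compiler environment and passing --compiler=intelem to each step:

$ source /opt/intel/bin/compilervars.sh intel64
$ python setup.py config --compiler=intelem build_clib --compiler=intelem \
      build_ext --compiler=intelem install --prefix=$HOME/python-2.7.3-intel-composer-2013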
This post will take you through the installation and configuration of an InfiniBand card on a server running Red Hat Enterprise Linux 5.4. These steps are applicable to any version of Red Hat Enterprise Linux 5, and will probably work with version 6 as well. It has been surprisingly hard to find all of these steps in one document.
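As a sketch of the starting point (the yum group and service names below are the stock RHEL 5 ones; verify them on your release):

$ yum groupinstall "OpenFabrics Enterprise Distribution"
$ chkconfig openibd on
$ service openibd start
$ ibstat   # confirm the HCA is detected and check the link state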
xCAT is the eXtreme Cloud Administration Toolkit from IBM. It’s a suite of tools that IBM has developed to manage large groups of servers, such as a cloud infrastructure or a high-performance computing cluster (HPCC). I have only used xCAT to administer a mid-sized compute cluster (about 140 compute nodes totaling about 1400 cores running RHEL 5). Overall, I have not found xCAT to be particularly effective for managing a mid-sized cluster. In many ways, xCAT is a brilliant piece of software, but like many “brilliant” solutions, it’s just too complex for its own good. There might be a cluster that is so large and complex that only a tool like xCAT can effectively manage it (especially if you have an administrative staff and you can pay someone to be a full-time xCAT guru). If you have a smaller cluster with limited administrative resources, you’re better off finding a simpler management solution.
OpenFOAM is a notoriously difficult piece of software to compile, install, and run. OpenCFD (the authors of OpenFOAM) have chosen to require recent versions of gcc that are not available on most stable enterprise-class systems (i.e., Red Hat Enterprise Linux). To make things worse, until recently OpenCFD also bundled a large number of libraries and helper applications (like VTK and ParaView) with the OpenFOAM source instead of using libraries and tools that are already on the system. Fortunately, OpenCFD has now moved the extra tools to a separate tarball, and the wizards at Gentoo have managed to create an ebuild for OpenFOAM. This is why I run Gentoo on my desktop workstation!
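On Gentoo, that reduces the whole ordeal to a single command; the package atom below is my assumption, so check it first with emerge --search openfoam:

$ emerge --ask sci-libs/openfoam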
CFD-ACE+ is a multiphysics and computational fluid dynamics (CFD) simulation tool that was originally developed by CFD Research Corp. and is now distributed by ESI Software. The only platforms officially supported by CFD-ACE+ are Red Hat Enterprise Linux, SUSE Linux, and Windows. Fortunately, it seems that ACE+ runs on other Linux distributions with only a little hacking. I just installed and tested CFD-ACE+ successfully (albeit not very thoroughly) on an up-to-date Gentoo Linux system. The process requires hacking some config files and building an external library from source.