LVM device-mapper: reload ioctl failed: Invalid argument

LVM2 (the Linux Logical Volume Manager) is pretty amazing, but when something goes wrong, it’s not easy to troubleshoot. This is not the fault of the tools, but a reflection of the fact that LVM is relatively new in Linux and not widely understood.

What I Tried to Do

I tried to increase the size of a logical volume with the lvextend command:

lvextend --extents 100%FREE /dev/VolumeGroup1/var

This form of the command is supposed to use all of the free space in the volume group.

What Failed

The command responded with an error message that contained this text:

device-mapper: reload ioctl failed: Invalid argument

Running lvdisplay showed the following status for the volume:

LV Status    suspended
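A suspended device blocks all I/O until it is resumed, so the first step is to inspect and resume the device-mapper entry. A sketch of the commands to reach for (run as root; the device names follow LVM’s VG-LV naming from the example above):

```
# Show the device-mapper state for the volume (look at the "State" field)
dmsetup info VolumeGroup1-var

# Resume the suspended device so I/O can proceed again
dmsetup resume VolumeGroup1-var

# Ask LVM to re-read the volume's metadata and reload its table
lvchange --refresh VolumeGroup1/var
```

Worth noting, per lvextend(8): `--extents +100%FREE` (with a leading `+`) adds all remaining free space to the volume, while `--extents 100%FREE` (without it) is interpreted as an absolute size.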

Continue reading

Is it worthwhile to embed structured data in Web content?

Short answer: not really!

A few years ago, I was fascinated with the idea of microformats. The concept is to add semantic class names to common HTML tags so that certain types of data on a web page can be clearly identified. Adding this structural information makes it easier to automate the parsing and indexing of web content. The original microformats covered common, well-defined data types, such as contact information or a calendar event. In the intervening years, microformats were replaced by microformats2, with additional draft standards for more items such as recipes, resumes, etc.

I decided to use my resume as a “use case” for structured HTML data. My original plan was to use the h-resume draft specification from microformats2. I modified my plan when I realized that the major search engines (Bing, Google, Yahoo! and Yandex) have decided to parse another semantic technology, microdata (see the W3C microdata specification). There is no point in creating semantic web pages if nobody is going to parse them, so I used the microdata vocabularies available at schema.org and placed them into an h-resume framework. I used the event class to represent each position I held and each educational achievement, because the event class is the only one with start and end dates.
Continue reading
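As a sketch of what this combination looks like, here is one position marked up as a schema.org Event inside an h-resume experience entry (the name, employer, and dates are invented for illustration):

```html
<div class="h-resume">
  <p class="p-name">Jane Doe — Resume</p>
  <!-- One position: microformats2 h-event classes plus schema.org Event microdata -->
  <section class="p-experience h-event" itemscope itemtype="http://schema.org/Event">
    <span class="p-name" itemprop="name">Systems Administrator, Example University</span>
    <time class="dt-start" itemprop="startDate" datetime="2008-06-01">June 2008</time> to
    <time class="dt-end" itemprop="endDate" datetime="2012-08-31">August 2012</time>
  </section>
</div>
```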

Monitoring with SNMP, Part 3: Automate active monitoring with Nagios

My last post showed how to monitor networked devices with SNMP. You could try to remember to manually check the status of things periodically, but that would be missing the point of computers. Instead, automate your monitoring with Nagios, a web-based monitoring tool for Linux that automates the process of actively querying devices and doing something with the information. Nagios is available as free open source software (Nagios Core), and the company offers additional non-free products with premium features. The open-source version is fine for getting started and setting up basic monitoring. Nagios does a lot more than just SNMP monitoring. I’ll refer you to the Nagios Core documentation to get Nagios up and running, and I’ll focus on how to set up Nagios to actively monitor devices with SNMP.
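As a sketch, actively monitoring an SNMP device from Nagios comes down to a pair of object definitions like the following (the host name, address, and community string are placeholders, and this assumes the stock check_snmp command definition that ships with the Nagios plugins):

```
define host {
    use        generic-host
    host_name  ups1
    alias      Server room UPS
    address    192.0.2.10
}

define service {
    use                 generic-service
    host_name           ups1
    service_description UPS uptime
    check_command       check_snmp!-C public -o 1.3.6.1.2.1.1.3.0
}
```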
Continue reading

Monitoring with SNMP, Part 2: Command-line tools for active SNMP

In Part 1, I summarized the basic concepts of SNMP and defined the terms and acronyms used in this post. Now, I will show how to use SNMP to monitor actual devices. As an example, I will monitor an enterprise-grade uninterruptible power supply (UPS) and a power distribution unit (PDU) from Tripp-Lite. These devices have an SNMPWEBCARD installed to support communication over Ethernet.

Command-line tools for SNMP communication should be available for any Linux distribution (or any other UNIX-derived OS). Documentation for the basic SNMP tools is available online. The challenge with SNMP is figuring out what parameters are supported by a particular device. Most devices support a set of standard OIDs that return basic information such as device name, uptime, etc.
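For example, Net-SNMP’s snmpget reads individual OIDs, and snmpwalk dumps an entire subtree to discover what a device exposes (the IP address and community string below are placeholders):

```
# Read the device name (sysName) and uptime (sysUpTime) with SNMP v2c
snmpget -v 2c -c public 192.0.2.10 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.1.3.0

# Walk the standard mib-2 subtree to see which objects the device supports
snmpwalk -v 2c -c public 192.0.2.10 1.3.6.1.2.1
```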
Continue reading

Monitoring with SNMP, Part 1: Fundamentals of SNMP

SNMP is a protocol for conveying information and controlling devices over a network. SNMP can be used in two ways:

  • Active: a device sends a command to another device to set a parameter or request information.
  • Passive: a device sends an alert (called a trap) to another device that is configured to receive traps and act on the information.

The “payload” of an SNMP message is called an Object Identifier, or OID. An OID is an ordered list of non-negative numbers, such as:

1.3.6.1.2.1.1.3.0

The sequence is hierarchical, starting with the highest-level object and progressing to lower-level objects. The above sequence corresponds to:

iso(1) org(3) dod(6) internet(1) mgmt(2) mib-2(1) system(1) sysUpTime(3) 0

When a GET request for this OID is sent to a device, the device returns its uptime.

The translation between the numerical sequence and the human-readable form is stored in a text file called a Management Information Base, or MIB. The format of the MIB is defined in RFC 2578. Some MIB files are standard and contain object IDs that are recognized by almost all devices. Device manufacturers also provide custom MIB files in which they define specialized object IDs for a particular device. Unfortunately, some devices don’t have MIB files, and you will have to query the device to see what objects it supports and decipher what they mean.
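With Net-SNMP installed, the snmptranslate tool performs this translation in both directions, assuming the relevant MIB files are on its search path (the vendor MIB file and object name in the last line are placeholders):

```
# Human-readable name to numeric OID
snmptranslate -On SNMPv2-MIB::sysUpTime.0

# Numeric OID back to its name, showing the whole hierarchy
snmptranslate -Of 1.3.6.1.2.1.1.3.0

# Add a vendor-supplied MIB file and look up one of its objects
snmptranslate -m +./VENDOR-MIB.txt -IR vendorObjectName
```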

In Part 2 of this series, I will use active SNMP to monitor infrastructure.

“Exporting” a project from a Git repository

What do you do when you want to distribute or release source code that is stored in a Git repository? Obviously, if your target audience is using Git, you can just compress the directory that contains the repository and distribute the copies, or give the users a way to clone your repository (such as GitHub). However, your audience may not be Git users, or the hidden .git directory may be very large and you don’t want to distribute it. The solution is the git archive command, which packs the files from a tree-ish into an archive (ZIP or TAR). By “tree-ish”, they mean that you can specify a branch, commit, HEAD, etc. git archive is somewhat analogous to the svn export command. I find the most useful form of this command to be:
cd example
git archive --output ~/example.zip --format=zip --prefix=example/ HEAD

Do not forget the trailing slash after the directory that you specify with the --prefix flag!
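If your audience expects a tarball instead of a ZIP, the same command can write a tar stream and pipe it through gzip (the tag name v1.0 is just an example of a tree-ish):

```shell
cd example
git archive --format=tar --prefix=example-1.0/ v1.0 | gzip > ../example-1.0.tar.gz
```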

REFERENCE: How to do a “git export” (like svn export)

Running network experiments on the GENI project

The GENI Project is a networking testbed that is used by researchers studying novel networking technologies. While the technology is fascinating, the web site is, unfortunately, a confusing mess. Here are some pointers to get you started (or refresh your memory). This post will be updated as I learn more.

Key GENI Links

GENI Portal

This is where you log into the GENI Project. Your institution must have Shibboleth enabled and be part of the InCommon Federation. Click on the “Use GENI” button, enter the name of your institution into the search box, and you will be redirected to your institution’s login page.

GENI Documentation

Tutorials, How-Tos, and other documentation can be found at the GENI Experimenter Page.
Continue reading

Collaborative Git workflow: Shared Repository on a File Server

GitHub is a great tool for collaborating on projects. However, sometimes it is necessary to mimic the “GitHub workflow” using a shared repository on a local Linux server. The following example shows how I shared an example repository with multiple users.  We are also using the Git flow model for branching, aided by the handy git flow plugin.

On my workstation

I started by creating a repo on my local workstation and setting it up to use the git flow plugin.

git init .
git flow init
git flow feature start first_feature
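On the server side, the shared repository itself is just a bare repository with group-write permissions; a sketch, with a hypothetical path, group, and host name:

```shell
# On the file server: create a bare, group-writable repository
git init --bare --shared=group /srv/git/example.git
chgrp -R developers /srv/git/example.git

# On each collaborator's workstation: clone it over SSH
git clone ssh://fileserver/srv/git/example.git
```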
Continue reading

Linux configuration management roundup

Our high performance compute cluster (HPCC) has fairly primitive tools for managing the deployment of the operating system on the compute nodes. Our current tools are “aspencopy,” which takes an “image” of the filesystem of a running server and saves it as a .tar.gz file (NOT a disk image), and its counterpart “aspenrestore,” which deploys an “image” to another server. The utility is smart enough to update things like the host name, IP address, host SSH keys, etc. However, the images are essentially “black boxes”: there is no system for keeping track of which configuration changes have been applied to which image, and no way to know which image is running on each server. The next cluster that I am responsible for purchasing must include a configuration management/data center automation system.

On a related note, Vagrant is a system for managing virtual machines.  You can define a virtual machine configuration in a specification file, and Vagrant will automate the startup and shutdown of arbitrary numbers of virtual machines.
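For example, a minimal Vagrantfile using the v2 configuration API might look like this (the box name and IP address are placeholders):

```ruby
Vagrant.configure("2") do |config|
  # Base box to build the VM from
  config.vm.box = "ubuntu-precise64"
  # Private IP so the host and other VMs can reach this machine
  config.vm.network "private_network", ip: "192.168.50.10"
end
```

Running vagrant up in the same directory then creates and boots the machine, and vagrant destroy tears it down.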


Configuring GRUB2 on Ubuntu to boot from another Linux partition

My recent Ubuntu installation was my first experience with the new GRUB 2.x series of bootloaders. Unfortunately, the process of manually configuring GRUB2 on Ubuntu is not well documented for the cases where everything doesn’t work “automagically.” I had to solve two problems: the blank screen at boot, and getting GRUB to boot an existing partition with CentOS 5 installed.
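For the second problem, GRUB2 on Ubuntu takes custom entries in /etc/grub.d/40_custom; after editing the file, run sudo update-grub to regenerate grub.cfg. A sketch of such an entry (the partition, kernel, and initrd names are placeholders that must match the actual CentOS installation):

```
#!/bin/sh
exec tail -n +3 $0
# Custom entry for the existing CentOS 5 partition.
# In GRUB2 notation, (hd0,1) is the FIRST partition of the first disk.
menuentry "CentOS 5" {
    set root='(hd0,1)'
    linux /boot/vmlinuz-<version> ro root=/dev/sda1
    initrd /boot/initrd-<version>.img
}
```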
Continue reading