Microsoft Baseline Security Analyzer (MBSA) is a useful tool for auditing the configuration and update status of Windows computers. Most of the time, its reports are useful and easy to understand. However, some of its responses are baffling, and some of its suggested solutions haven’t been updated since Server 2003. Here is my collection of odd MBSA reports, and how to resolve them.
Report: The use of Internet Explorer is not restricted for administrators on this server.
Resolution: Enable IE Enhanced Security Configuration in Server Manager.
SSSD (the System Security Services Daemon) allows Linux systems (specifically, Red Hat, CentOS, and Fedora) to verify identity and authenticate against remote resources. If you have a CentOS or Red Hat Enterprise Linux system and you need to authenticate against a domain controller such as FreeIPA or Active Directory, SSSD is the way to go. I use SSSD on CentOS 7 systems, but it is now available on CentOS 6 as well. A few years ago, adclient (from Centrify) was your only option for making a CentOS 6 server authenticate against Active Directory. adclient seems to have reached end of life, so SSSD is definitely the path forward.
I won’t repeat the procedure for using Active Directory as an identity provider on a Red Hat 7 system. Instead, I want to provide a few troubleshooting tips, since limited information is available on SSSD and related tools.
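One tip that applies to almost every SSSD problem: turn up the logging before anything else. The fragment below is a sketch, and "example.com" stands in for whatever domain name appears in your own sssd.conf:

```ini
# /etc/sssd/sssd.conf (fragment) -- "example.com" is a placeholder domain
[domain/example.com]
debug_level = 6

[nss]
debug_level = 6

[pam]
debug_level = 6
```

Restart SSSD (systemctl restart sssd) and watch the per-domain log under /var/log/sssd/. Raising debug_level to 9 produces even more detail if 6 is not enough.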
LVM2 (the second generation of the Linux Logical Volume Manager) is pretty amazing, but when something goes wrong, it’s not easy to troubleshoot. This is not the fault of the tools; it is a reflection of the fact that LVM is still not widely understood.
What I Tried to Do
I tried to increase the size of a logical volume with the lvextend command:
lvextend --extents 100%FREE /dev/VolumeGroup1/var
This form of the command is supposed to use all of the free space in the volume group.
The command responded with an error message that contained this text:
device-mapper: reload ioctl failed: Invalid argument
Running lvdisplay showed the following status for the volume:
LV Status suspended
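For anyone hitting the same wall, here is one recovery path worth trying, sketched with the volume names from the failed command above. To be clear, this is my suggestion for this situation, not a procedure taken from elsewhere in the post; run it as root and make sure you have backups:

```shell
# Resume the suspended device-mapper target so the LV is usable again
dmsetup resume /dev/mapper/VolumeGroup1-var

# Or ask LVM to reload the volume's metadata
lvchange --refresh VolumeGroup1/var

# The usual way to grow an LV into all remaining free space is the
# relative form of --extents, with a leading "+":
lvextend -l +100%FREE /dev/VolumeGroup1/var
```

Without the leading "+", 100%FREE is interpreted as an absolute size rather than an increment, which can produce surprising results.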
Short answer: not really!
A few years ago, I was fascinated with the idea of microformats. The concept is to add structured data to common HTML tags so that certain types of data on a web page can be clearly identified. This structure makes it easier to automate the parsing and indexing of web content. The original microformats covered common data sets with well-defined fields, such as contact information or calendar events. In the intervening years, microformats were replaced by microformats2, with additional draft standards for more items such as recipes, resumes, etc.
I decided to use my resume as a “use case” for structured HTML data. My original plan was to use the h-resume draft specification from microformats2. I modified my plan when I realized that major search engines (Bing, Google, Yahoo! and Yandex) have decided to parse another semantic technology, microdata (microdata W3C spec). There is no point in creating semantic web pages if nobody is going to parse them, so I used the microdata specifications that are available at schema.org and placed them into an h-resume framework. I used the event class to represent each position I held or educational achievement, because the event class is the only one with start and end dates.
My last post showed how to monitor networked devices with SNMP. You could try to remember to manually check the status of things periodically, but that would be missing the point of computers. Instead, automate your monitoring with Nagios, a web-based monitoring tool for Linux that automates the process of actively querying devices and doing something with the information. Nagios is available as free open source software (Nagios Core), and the company offers additional non-free products with premium features. The open-source version is fine for getting started and setting up basic monitoring. Nagios does a lot more than just SNMP monitoring. I’ll refer you to the Nagios Core documentation to get Nagios up and running, and I’ll focus on how to set up Nagios to actively monitor devices with SNMP.
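To give a taste of what the configuration looks like, here is a rough sketch of a Nagios object definition for an SNMP check. The host name, address, community string, and the generic-host/generic-service templates are all placeholders that depend on your installation:

```cfg
# Hypothetical Nagios object configuration for an SNMP uptime check
define host {
    use        generic-host
    host_name  ups1
    address    192.0.2.10
}

define service {
    use                 generic-service
    host_name           ups1
    service_description Uptime
    check_command       check_snmp!-C public -o sysUpTime.0
}
```

The stock check_snmp command definition shipped with Nagios Core passes the arguments after the "!" to the check_snmp plugin along with the host address.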
In Part 1, I summarized the basic concepts of SNMP and defined the terms and acronyms used in this post. Now, I will show how to use SNMP to monitor actual devices. As an example, I will monitor an enterprise-grade uninterruptible power supply (UPS) and power distribution units (PDUs) from Tripp-Lite. These devices have an SNMPWEBCARD installed to support communication over Ethernet.
Command-line tools for SNMP communication should be available for any Linux distribution (or any other UNIX-derived OS). Documentation for the basic SNMP tools is available online. The challenge with SNMP is figuring out what parameters are supported by a particular device. Most devices support a set of standard OIDs that return basic information such as device name, uptime, etc.
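For example, walking the standard system subtree with the Net-SNMP tools is a good first probe of an unfamiliar device. The address and community string below are placeholders for your own device:

```shell
# Walk the standard "system" subtree (1.3.6.1.2.1.1) of a device
snmpwalk -v 2c -c public 192.0.2.10 1.3.6.1.2.1.1
```

A device that speaks SNMP at all will usually answer this with its description, uptime, contact, name, and location, even before you load any vendor MIBs.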
SNMP is a protocol for conveying information and controlling devices over a network. SNMP can be used in two ways:
- Active: a device sends a command to set a parameter on, or request information from, another device
- Passive: a device sends an alert (called a trap) to another device, which is configured to receive traps and do something with the information.
The “payload” of an SNMP message is called an Object Identifier, or OID. An OID is an ordered list of non-negative numbers, such as:
1.3.6.1.2.1.1.3.0
The sequence is hierarchical, starting with the highest-level object and progressing to lower-level objects. The above sequence corresponds to:
iso(1) org(3) dod(6) internet(1) mgmt(2) mib-2(1) system(1) sysUpTime(3) 0
When this OID is sent to a device in a get request, the device returns its uptime.
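With the Net-SNMP command-line tools, that request looks like the following. The address 192.0.2.10 and the public community string are placeholders for your own device:

```shell
# Ask the device at 192.0.2.10 for its uptime (sysUpTime.0)
snmpget -v 2c -c public 192.0.2.10 1.3.6.1.2.1.1.3.0
```

If the device's MIB files are installed, the same query can be written symbolically as SNMPv2-MIB::sysUpTime.0.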
The translation between the numerical sequence and the human-readable form is stored in a text file called a Management Information Base, or MIB. The format of the MIB is defined in RFC 2578. Some MIB files are standard and contain object IDs that are recognized by almost all devices. Device manufacturers also provide custom MIB files in which they define specialized object IDs for a particular device. Unfortunately, some devices don’t have MIB files, and you will have to query the device to see what objects it supports and decipher what they mean.
In Part 2 of this series, I will use active SNMP to monitor infrastructure.
What do you do when you want to distribute or release source code that is stored in a Git repository? Obviously, if your target audience is using Git, you can just compress the directory that contains the repository and distribute copies, or give the users a way to clone your repository (such as GitHub). However, your audience may not be Git users, or the hidden .git directory may be very large and you don’t want to distribute it. The solution is the git archive command, which packs the files from a tree-ish into an archive (ZIP or TAR). By “tree-ish”, the Git documentation means that you can specify a branch, commit, HEAD, etc.
git archive is somewhat analogous to the svn export command. I find the most useful form of this command to be:
git archive --output ~/example.zip --format=zip --prefix=example/ HEAD
Do not forget the trailing slash after the directory that you specify with the --prefix option.
REFERENCE: How to do a “git export” (like svn export)
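To see the command end to end, here is a self-contained run against a throwaway repository. The /tmp paths are arbitrary, and I use the TAR format here for easy listing; the ZIP form works the same way:

```shell
# Build a throwaway repository with one committed file
rm -rf /tmp/archive-demo /tmp/example.tar
mkdir -p /tmp/archive-demo
cd /tmp/archive-demo
git init -q .
git config user.email demo@example.com
git config user.name "Demo User"
echo hello > README
git add README
git commit -q -m "initial commit"

# Export HEAD as an archive whose contents live under example/
git archive --output /tmp/example.tar --format=tar --prefix=example/ HEAD

# List the archived paths
tar -tf /tmp/example.tar
```

Note that the archive contains only the committed files under the example/ prefix; the .git directory is not included.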
The GENI Project is a networking testbed that is used by researchers studying novel networking technologies. While the technology is fascinating, the web site is, unfortunately, a confusing mess. Here are some pointers to get you started (or refresh your memory). This post will be updated as I learn more.
Key GENI Links
This is where you log into the GENI Project. Your institution must have Shibboleth enabled and be part of the InCommon Federation. Click on the “Use GENI” button, enter the name of your institution into the search box, and you will be redirected to your institution’s login page.
Tutorials, How-Tos, and other documentation can be found at the GENI Experimenter Page.
GitHub is a great tool for collaborating on projects. However, sometimes it is necessary to mimic the “GitHub workflow” using a shared repository on a local Linux server. The following example shows how I shared an example repository with multiple users. We are also using the Git flow model for branching, aided by the handy git flow plugin.
On my workstation
I started by creating a repo on my local workstation and setting it up to use the git flow plugin.
git init .
git flow init
git flow feature start first_feature
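For completeness, the server side of this arrangement is a bare repository that the whole group can write to. A minimal sketch follows; the /tmp paths stand in for a real path on your server, and in practice you would also chgrp the directory to your developers' Unix group:

```shell
# "Server" side: create a bare, group-writable repository
rm -rf /tmp/shared-example.git /tmp/shared-clone
git init -q --bare --shared=group /tmp/shared-example.git

# "Workstation" side: clone the shared repository and work in it
git clone -q /tmp/shared-example.git /tmp/shared-clone
```

The --shared=group option makes Git keep the repository's files group-writable, so every user in the group can push over SSH to the same path.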