Talking with a Lisp

Technical z0ltanspeak

Enabling and configuring Tracing/Logging for the SFCB CIMOM on ESXi 5.x machines


It has been my constant experience that VMware creates excellent software and deplorable documentation. Ever since my first tryst with VMware around 2008 or so when I was given the task of working with the VMware VI-SDK (for the ESX platform, and later on for the ESXi platform as well), it has been a wonderful experience seeing those APIs work beautifully, simply, and powerfully. Working on the SDK also led me to explore the internals of the ESX platform itself, and it was quite an enriching experience in and of itself. However, that was also the first time that I experienced the horrendous cesspool of detritus that VMware calls its documentation.  Finding anything of use on its website is an exercise in futility, and the forums don’t fare much better. The best way I found to work my way through this mess was to read the SDK code itself, explore the ESXi console, and try out lots of small prototype programs to see if the data was indeed correct. One saving grace was the availability of the MOB (Managed Object Browser) which arguably taught me more than anything else.

Recently I had a need to set up logging/tracing on the SFCB CIMOM used by the ESXi 4.x and 5.x series, and it was déjà vu all over again. Granted, SFCB is not a VMware product, but since it comes pre-installed on every ESXi box, I would have expected some basic guides explaining the configuration and workings of the CIMOM vis-à-vis the ESXi kernel. Absolutely none whatsoever. So, to spare others the same fruitless searching, let me share my experience in setting up the logging mechanism for SFCB, specifically in reference to ESXi.

SFCB (Small Footprint CIM Broker) is an excellent CIMOM that is lightweight, and has a very nice pluggable interface for third-party CIM Providers. An added benefit is that a Provider crash is isolated, and is not allowed to crash the CIMOM itself. No wonder then that it is already the de facto CIMOM for a wide variety of platforms – various Linux flavors, and ESXi of course (which used to depend on the unwieldy gargoyle that is OpenPegasus till Common Sense won out). Now, with ESXi in mind, here are the three simple steps needed to configure the SFCB CIMOM for logging/tracing (especially useful when an errant CIM Provider needs to be checked for anomalous behavior):

1. Log on to the ESXi shell (you need to have SSH enabled through the vSphere Client before you can open a console through a tool such as PuTTY) and check the contents of the /etc/sfcb/sfcb.cfg file (showing the default configuration here):

~ # cat /etc/sfcb/sfcb.cfg
# Do not modify this header
# VMware ESXi 5.5.0 build-1106514
#
# set logLevel using advanced config: CIMLogLevel
httpPort:       5988
enableHttp:     true
httpProcs:      2
httpsPort:      5989
enableHttps:    true
httpsProcs:     4
provProcs:      16
httpLocalOnly:  true
doBasicAuth:    true
basicAuthLib:   sfcBasicPAMAuthentication
useChunking:    true
keepaliveTimeout: 1
keepaliveMaxRequest: 10
providerTimeoutInterval: 120
sslKeyFilePath: /etc/vmware/ssl/rui.key
sslCertificateFilePath: /etc/vmware/ssl/rui.crt
sslClientTrustStore: /etc/sfcb/client.pem
sslClientCertificate: ignore
certificateAuthLib:   sfcCertificateAuthentication
registrationDir: /var/lib/sfcb/registration
providerDirs: /usr/lib /usr/lib/cmpi /usr/lib/cim
enableInterOp:  true
threadStackSize:     524288
rcvSocketTimeOut: 0
requestQueueSize: 10
threadPoolSize: 5
intSockTimeout: 600
maxSemInitRetries: 5
maxFailureThreshold: 3
cimXmlFdSoftLimit: 512
cimXmlFdHardLimit: 1024

You can see above the default values for the various parameters on my local ESXi 5.5 machine. The contents should not differ much, if at all, on other versions of ESXi. This file, /etc/sfcb/sfcb.cfg, is the main configuration file for the SFCB CIMOM. For the meaning of the individual parameters, consult the general SFCB documentation written for other operating systems.
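For quick inspection, a single parameter can be pulled out of the file with awk. A minimal sketch, run here against a scratch copy so it can be tried anywhere (on ESXi you would point it at /etc/sfcb/sfcb.cfg itself):

```shell
# Sketch: extract one parameter from an sfcb.cfg-style file.
# A scratch copy is used here; the file name is only for illustration.
cfg=/tmp/sfcb-params.sample
printf 'httpProcs:      2\nprovProcs:      16\n' > "$cfg"
val=`awk -F: '/^provProcs/ { gsub(/ /, "", $2); print $2 }' "$cfg"`
echo "provProcs = $val"
# prints: provProcs = 16
```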

2. Now, add the following lines to the /etc/sfcb/sfcb.cfg file:

traceLevel: 1
traceMask: 0x0000103
traceFile: /vmfs/volumes/50cb7c7d-30e72dbe-a165-ac162d8be508/timmy/z0ltan.log
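If you prefer to script the edit, the three lines can be appended with a here-document. A sketch against a scratch copy (on ESXi the real file is /etc/sfcb/sfcb.cfg, and the traceFile path below is only an example placeholder, not a real datastore path):

```shell
# Sketch: append the trace settings to a scratch copy of the config.
cfg=/tmp/sfcb.cfg.sample
printf 'httpPort:       5988\n' > "$cfg"
cat >> "$cfg" <<'EOF'
traceLevel: 1
traceMask: 0x0000103
traceFile: /vmfs/volumes/datastore1/sfcb-trace.log
EOF
count=`grep -c '^trace' "$cfg"`
echo "$count trace settings added"
# prints: 3 trace settings added
```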

So your new file might look something like the following:

~ # cat /etc/sfcb/sfcb.cfg
# Do not modify this header
# VMware ESXi 5.5.0 build-1106514
#
# set logLevel using advanced config: CIMLogLevel
httpPort:       5988
enableHttp:     true
httpProcs:      2
httpsPort:      5989
enableHttps:    true
httpsProcs:     4
provProcs:      16
httpLocalOnly:  true
doBasicAuth:    true
basicAuthLib:   sfcBasicPAMAuthentication
useChunking:    true
keepaliveTimeout: 1
keepaliveMaxRequest: 10
providerTimeoutInterval: 120
sslKeyFilePath: /etc/vmware/ssl/rui.key
sslCertificateFilePath: /etc/vmware/ssl/rui.crt
sslClientTrustStore: /etc/sfcb/client.pem
sslClientCertificate: ignore
certificateAuthLib:   sfcCertificateAuthentication
registrationDir: /var/lib/sfcb/registration
providerDirs: /usr/lib /usr/lib/cmpi /usr/lib/cim
enableInterOp:  true
threadStackSize:     524288
rcvSocketTimeOut: 0
requestQueueSize: 10
threadPoolSize: 5
intSockTimeout: 600
maxSemInitRetries: 5
maxFailureThreshold: 3
cimXmlFdSoftLimit: 512
cimXmlFdHardLimit: 1024
traceLevel: 1
traceMask: 0x0000103
traceFile: /vmfs/volumes/50cb7c7d-30e72dbe-a165-ac162d8be508/timmy/z0ltan.log

Explanation:

traceLevel dictates the level of logging that you wish to generate. In my experience, a level of ‘1’ suffices for most cases, but levels 2, 3, or even 4 can be tried depending on your requirements; the higher the level, the finer the logging. Beware, though, that increasing the logging level also increases the memory and CPU overhead on the ESXi box, so choose the level with a discriminating approach.

traceMask is a bitmask that allows SFCB to enable logging for specific components (a very useful feature that produces smaller and more relevant logs). The various components are listed below along with their bitmasks. Either the int or the hex mask can be used. Also, in order to generate logs for multiple components, their bitmasks may be ORed together into a single bitmask to be set as the traceMask. (For instance, I have my bitmask set to 0x0000103, which is providerMgr | providerDrv | providers.)

      Traceable Components:        Int          Hex
               providerMgr:          1    0x0000001
               providerDrv:          2    0x0000002
                cimxmlProc:          4    0x0000004
                httpDaemon:          8    0x0000008
                   upCalls:         16    0x0000010
                  encCalls:         32    0x0000020
           ProviderInstMgr:         64    0x0000040
          providerAssocMgr:        128    0x0000080
                 providers:        256    0x0000100
               indProvider:        512    0x0000200
          internalProvider:       1024    0x0000400
                objectImpl:       2048    0x0000800
                     xmlIn:       4096    0x0001000
                    xmlOut:       8192    0x0002000
                   sockets:      16384    0x0004000
                 memoryMgr:      32768    0x0008000
                  msgQueue:      65536    0x0010000
                xmlParsing:     131072    0x0020000
            responseTiming:     262144    0x0040000
                 dbpdaemon:     524288    0x0080000
                       slp:    1048576    0x0100000
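The ORing described above can be sanity-checked with ordinary shell arithmetic. For example, the mask used earlier in this post:

```shell
# providerMgr (0x1) | providerDrv (0x2) | providers (0x100)
mask=`printf '0x%07x' $(( 0x1 | 0x2 | 0x100 ))`
echo "traceMask: $mask"
# prints: traceMask: 0x0000103
```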

traceFile, as the name suggests, refers to the location where you want the trace output to be logged. By default this is stderr (the console), but it can be made to point to a file location (as seen in the sample config file shown previously). I would suggest placing this file in a persistent location with enough available space (such as on an available datastore). The reason is that if you choose a location under the root folder (say, /mylogs/test.log), the logs can quickly overwhelm your ESXi machine. Remember that everything under the root folder in ESXi necessarily lives in volatile memory with size restrictions, and in my experience these logs can quickly grow to hundreds of MBs in size.

3. Restart the SFCB CIMOM in order to reflect the changes to the config file:

~# /etc/init.d/sfcbd-watchdog restart

Note: If you want to go about this in a cleaner way, I would recommend stopping the SFCB CIMOM as the first step (before modifying the config file):

~# /etc/init.d/sfcbd-watchdog stop

Confirm that the SFCB CIMOM has indeed shut down:

~# /etc/init.d/sfcbd-watchdog status

And then proceed with the steps mentioned before, and when the config file has been updated with the changes, start the SFCB CIMOM again:

~# /etc/init.d/sfcbd-watchdog start

Followed by a final confirmation that the SFCB CIMOM is up and running:

~# /etc/init.d/sfcbd-watchdog status

And that’s all there is to it! Now you should be able to see the log file being populated with log messages as the SFCB CIMOM starts running, and you can then trigger your own CIM operations (such as querying for specific CIM classes on your CIM provider), and those operations should be logged in the log file as well.

Written by Timmy Jose

February 14, 2014 at 3:22 pm

Adding a CIM Provider VIB file to the SFCB CIMOM on ESXi 5.0/5.1 using esxcli


Background

The ESXi 5.x series of VMware ESX servers is a heavily updated platform compared to the ESX/ESXi 4.x series. Aside from a ton of updates and improvements, one major change in the 5.x series is that the Service Console (which was basically a Linux-based shell around the VMkernel) has been completely removed. In its place there is an optional stripped-down shell with a few basic Unix-like commands (based on the BusyBox package) and minimal shell command-line support.

The removal of the Service Console essentially means that installing customized software on the ESXi server itself is substantially restricted. No longer can we merely bundle our own code/libraries and expect them to work on the ESXi server. Instead, the new VIB file format needs to be conformed to. Under the hood, the VIB format is simply a zipped-up package (allegedly based on the Debian packaging format) that contains the binaries we want to install, as well as descriptor XML files listing out dependencies, the paths where the binaries need to go, and so on. In addition, a signed VIB file contains a certificate identity as well as a unique hash identifying the package. Also, the esxcli command is the best and recommended way of installing VIB files and checking various hardware and software information on the platform. While it takes some getting used to, it is infinitely more powerful and convenient than earlier avatars of the same command.

Lastly, one big change in the ESXi 5.x series is that SFCB (Small Footprint CIM Broker) is the standard CIMOM that comes pre-installed on the platform. This means that if we want to plug in some CIM providers, it is easiest to plug the SFCB-compliant version of the CIM provider into the SFCB CIMOM. That is the problem that will be solved in this post, using a sample CIM provider mundanely entitled “my-cim-provider”.

The script


#!/bin/sh

PROVIDER_VIB=my-cim-provider
CFG_FILE=/etc/sfcb/sfcb.cfg
CFG_BACKUP_FILE=/etc/sfcb/sfcb.cfg_bk

#Check if the hostd daemon is running.
#This is required for the esxcli command.
check_hostd()
{
    echo
    echo "[Checking for hostd daemon]"

    HOSTD_STATUS=`/etc/init.d/hostd status`

    if [ "$HOSTD_STATUS" = "hostd is not running." ]; then
        echo "hostd is not currently running."
        echo "Starting hostd as it is required for the installation"

        HOSTD_START_STATUS=`/etc/init.d/hostd start`
        if [ "$HOSTD_START_STATUS" = "hostd started" ]; then
            echo "hostd started successfully"
        fi
    else
        echo "hostd daemon is currently running on the machine"
    fi

    echo "[Finished checking for hostd daemon]"
    echo
}

#Check if the VIB file is already installed on the machine.
check_if_vib_already_installed()
{
    echo
    echo "[Checking if the CIM Provider is already installed on the machine]"

    esxcli software vib list | grep -i $PROVIDER_VIB >/dev/null

    if [ "$?" = "0" ]; then
        echo "The CIM Provider is already installed."
        echo "Would you like to uninstall the VIB file? Enter 'y' or 'n'"
        read option
        if [ "$option" = "y" ]; then
            uninstall_vib_file
        else
            echo "Exiting installation"
            exit 0
        fi
    else
        echo "The CIM Provider is currently not installed on the machine"
    fi

    echo "[Finished checking if the CIM Provider is already installed on the machine]"
    echo
}

#Uninstall the existing VIB file, if present.
uninstall_vib_file()
{
    echo
    echo "[Uninstalling the VIB file: $PROVIDER_VIB]"

    /etc/init.d/sfcbd-watchdog stop >/dev/null
    esxcli software vib remove --vibname=$PROVIDER_VIB --maintenance-mode -f

    if [ "$?" = "0" ]; then
        echo "VIB file: $PROVIDER_VIB uninstalled successfully."
        /etc/init.d/sfcbd-watchdog start >/dev/null
        echo "Rebooting machine as it is required by the uninstallation"
        reboot -f
    else
        echo "Failed to uninstall the VIB file: $PROVIDER_VIB"
        /etc/init.d/sfcbd-watchdog start >/dev/null
        exit 1
    fi
}

#Edit the SFCB config file with desired values for
#CIMOM parameters.
modify_sfcb_cfg_file()
{
    echo
    echo "[Updating the file: $CFG_FILE]"

    echo "Backing up the existing config file first..."
    #Backup the original sfcb.cfg file
    cp -f $CFG_FILE $CFG_BACKUP_FILE
    echo "Finished backing up the config file to $CFG_BACKUP_FILE"

    #Values to be changed
    doBasicAuth=false
    enableHttp=true
    httpLocalOnly=false
    sslClientCertificate=ignore
    httpProcs=10

    #Set the values in the config file
    sed -i "s/doBasicAuth:.*/doBasicAuth:   $doBasicAuth/g" $CFG_FILE
    sed -i "s/enableHttp:.*/enableHttp:   $enableHttp/g" $CFG_FILE
    sed -i "s/sslClientCertificate:.*/sslClientCertificate:   $sslClientCertificate/g" $CFG_FILE
    sed -i "s/httpLocalOnly:.*/httpLocalOnly:   $httpLocalOnly/g" $CFG_FILE
    sed -i "s/httpProcs:.*/httpProcs:   $httpProcs/g" $CFG_FILE

    #Restart the sfcb service
    /etc/init.d/sfcbd-watchdog restart >/dev/null

    echo "[Finished updating the config file: $CFG_FILE]"
    echo
}

#In the case the user wants to reboot the machine later.
reboot_canceled()
{
    echo "You have decided to cancel the machine reboot. Please reboot the machine to complete the installation"
    echo "[Installation of CIM Provider complete]"
    exit 0
}

#The main installation logic.
install_vib_file()
{
    echo
    VIB_FILE=`pwd`/$PROVIDER_VIB.vib

    echo "[Installing the CIM Provider VIB file: $VIB_FILE]"
    esxcli software vib install -v file://$VIB_FILE -f --maintenance-mode --no-sig-check
    echo "[Finished installing the CIM Provider VIB file: $VIB_FILE]"

    #Update the SFCB config file with the specific values the provider requires
    modify_sfcb_cfg_file

    #Reboot the machine - required after installation
    trap 'reboot_canceled' INT
    echo "Rebooting the machine to complete installation. Press <Ctrl+C> to cancel reboot in "

    for i in 10 9 8 7 6 5 4 3 2 1
    do
        echo $i seconds...
        sleep 1
    done

    echo "[Rebooting machine NOW. Installation of CIM Provider is complete]"
    reboot -f
}

#Main script starts here
echo "[Starting installation of CIM Provider]"

check_hostd
check_if_vib_already_installed
install_vib_file

Explanation

The code is pretty straightforward. Thankfully, basic shell scripting is still allowed on the ESXi 5.x console. However, please note that in order to use the command line, you need to enable the SSH service on the ESXi 5.x server using the vSphere Client (Configuration->Security Profile).

The first thing we need to do is to check if the hostd daemon is running. This is required for the esxcli command to work. I found this out the hard way, since it had been some time since I had exposure to the ESXi platform (the last one I worked with being ESXi 4.1), and documentation for the ESXi platform has been meager at best, and it’s even worse for the 5.x series. In case the hostd daemon is not running, we start it up.

The second thing we do is to check if the VIB file (given by the variable, PROVIDER_VIB) is already installed on the machine. In this specific case, we assume that update is not possible, and we need to uninstall the existing package before we can proceed with the installation of a possibly newer version of the same package. If this is not true, then this check can be skipped, and an update command invoked instead of the normal installation command, later on. One additional check that might possibly be done here is to check for the package version, if that is relevant to your specific needs. In this case, if the VIB file is already installed, we need to uninstall it first, and so we provide the user with that option.
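If a version check is needed, the version column can be read from the `esxcli software vib list` output. The line below is a made-up illustration of that output's general shape (name, version, vendor, acceptance level, date), not captured from a real system:

```shell
# Hypothetical line in the shape "Name  Version  Vendor  Acceptance  Date"
line='my-cim-provider   1.0.0-1   MyVendor   VMwareCertified   2013-05-12'
set -- $line
installed_version=$2
echo "installed version: $installed_version"
# prints: installed version: 1.0.0-1
```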

If the user has chosen to proceed with the uninstallation of the existing VIB file, we need to stop the SFCB service (via its watchdog), and then invoke the command to uninstall the VIB file:


esxcli software vib remove --vibname=$PROVIDER_VIB --maintenance-mode -f

Different VIB files have different requirements when it comes to uninstallation or installation. For our CIM provider, we need to put the ESXi machine into maintenance mode, and we also need to forcefully uninstall it, if need be (using the -f flag). Also, in this case, we need to reboot the machine after the uninstallation. This need not be the case for other VIB files.

After the uninstallation is done (or if the VIB file was not present on the machine in the first place), we proceed with the actual installation of the VIB file. For this, we set up the variable VIB_FILE to contain the absolute path to the CIM Provider VIB file. In this case, we assume that the VIB file is in the same directory as the installer script. If this is not the case, you can set up the path to the VIB file accordingly; the only requirement is that it be an absolute path visible to the esxcli command (i.e., from the ESXi 5.x console). The command used for the installation of the package is:


esxcli software vib install -v file://$VIB_FILE -f --maintenance-mode --no-sig-check

Again, we put the machine into maintenance mode using the --maintenance-mode flag, and then additionally we request the installation to forgo the signature check on the package using the --no-sig-check flag (if the package is signed). This is not a good practice, but it will work in case there are problems with the signature and we still want to proceed with the installation. Finally, we force the installation using the -f flag.
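As an aside, the VIB_FILE construction in the script is just the working directory plus the file name. A sketch, with /tmp standing in for whatever directory holds the installer (the file name is the sample provider's, and no file needs to exist for the path to be built):

```shell
# Sketch: build the absolute path the way the installer script does.
cd /tmp
VIB_FILE=`pwd`/my-cim-provider.vib
echo "$VIB_FILE"
# prints: /tmp/my-cim-provider.vib
```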

Now comes the interesting part. For our CIM Provider, my-cim-provider, we need to modify some of the default values of the SFCB CIMOM. This configuration is located in the /etc/sfcb/sfcb.cfg file (given by the variable CFG_FILE). The specific parameters we want to modify are: disable basic authentication (doBasicAuth=false), enable the HTTP port (enableHttp=true), enable the HTTP port for non-local connections (httpLocalOnly=false), ignore the SSL client certificate (sslClientCertificate=ignore) since we don’t want to use SSL, and finally increase the number of HTTP processes used by the SFCB CIMOM from the default 4 to a healthy 10 (httpProcs=10). For your specific needs, different parameters might need to be modified in different ways; the same approach applies. Note that any time there is a change to the SFCB configuration, we need to restart the SFCB daemon.

First off, we back up the existing SFCB configuration file, so that the user can restore the original settings in case of any issues. Then we use sed to update the required parameter values to the new values. A sample command is:


sed -i "s/doBasicAuth:.*/doBasicAuth:   $doBasicAuth/g" $CFG_FILE

What this line means is: replace the string matched by the regex (doBasicAuth:.*) with the new string (doBasicAuth:   $doBasicAuth). For cosmetic consistency, we include as many spaces before the value $doBasicAuth (which is “false”) as were in the original SFCB configuration file. The g flag simply instructs sed to perform the replacement for every match of the regex in the file. Multiple instances of the same parameter will not occur on most machines; this is more of a safety measure to ensure that even if there are duplicates, the updates to the values stay consistent. sed is a powerful tool that is often overlooked in favor of tools such as awk and Perl, but for in-place string replacement in files, nothing really comes close to its power and versatility. Finally, we restart the SFCB service. Note that I consistently redirect the output of the commands to /dev/null (not just standard error, but all output). While seeing the verbose output of the commands might be useful for debugging during development, it is hardly fair to overload the customer with such extraneous messages. Customize this as per your own needs.
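The substitution can be seen in isolation on a scratch file (GNU sed's -i in-place flag assumed, as on a typical Linux box):

```shell
# Sketch: the doBasicAuth substitution, run against a scratch file.
cfg=/tmp/sfcb.cfg.sed-demo
printf 'doBasicAuth:    true\nenableHttp:     true\n' > "$cfg"
doBasicAuth=false
sed -i "s/doBasicAuth:.*/doBasicAuth:   $doBasicAuth/g" "$cfg"
result=`grep '^doBasicAuth' "$cfg"`
echo "$result"
# prints: doBasicAuth:   false
```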

Finally, we need to reboot the machine after the installation of the VIB file (again, this may not be the case for your own VIB file). I again provide the user the option to reboot the machine at a later stage. For this, I make use of a nifty and often under-appreciated feature of various shells: traps. The general form of the trap command is:


trap '<your logic/function call>' <SIGNAL, such as SIGINT or simply, INT>

For this specific script, I instruct the user to press <Ctrl+C> within 10 seconds to abort the reboot. This sends a SIGINT (or INT for short) trap, which I then redirect to the reboot_canceled function, which informs the user appropriately, and exits the installer script normally. In case the trap is not received within 10 seconds, the machine is rebooted.
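The trap mechanics can be demonstrated without rebooting anything. In this sketch the script sends INT to itself, standing in for the user pressing Ctrl+C, and the handler records that the "reboot" was canceled:

```shell
# Sketch: the trap pattern, with a self-sent INT standing in for Ctrl+C.
canceled=no
reboot_canceled() { canceled=yes; echo "reboot canceled"; }
trap reboot_canceled INT
kill -INT $$      # simulates the user's Ctrl+C
echo "canceled=$canceled"
```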

After the reboot, the user can then check the status of the VIB file to ensure that it has been installed successfully. It can be done with the following command (which, arguably, can be put in its own script and then executed by the user to check the status of the VIB installation):


esxcli software vib list | grep -i my-cim-provider

So that’s it – as simple as it can get on the new ESXi 5.x platform!

Written by Timmy Jose

May 12, 2013 at 9:39 pm

Creating a service and a service watchdog using simple shell scripts in Linux


Recently at work I was given a feature to support the customization and installation of OpenPegasus CIMOM (CIM Server) on Linux machines in binary mode. What this means is that instead of building from source code on the Linux machines (as would be the sane thing to do in view of the huge compatibility issues), it was decided to create the binaries on my development box, and then bundle only the required portions as part of an installation script. The main reason for this was the fact that we had a dependency on an external CIM Provider (QLogic), who obviously provided us only with the binaries built on a base Linux machine (specifically, RHEL 5.8).

There were many interesting problems that arose due to library dependencies, OS/ABI incompatibilities, and GCC/GLIBC dependencies. I also learned a lot about the whole process of working with third-party vendors. I plan to cover all of them in a series of upcoming blog posts. For now, however, I would like to post some useful information about how I helped the installer team enhance their installation scripts by creating a service and a service watchdog for the OpenPegasus CIMOM bundled with the QLogic provider. For representative purposes, I will use the term “My Service” to refer to the hypothetical service. I will also provide the main logic of the relevant scripts that I wrote for the purpose, without violating any NDA restrictions of my workplace! So let’s get right on to it then.

Creating a service in Linux using a shell script

Creating a service in Linux is a pretty simple task. You really just add execution privileges to the shell script, drop it into the /etc/init.d folder, and then invoke a series of commands. The code for the service that manages the OpenPegasus (version 2.11.0) CIMOM with the bundled QLogic CIM Provider binaries is listed as follows:

#!/bin/sh

# chkconfig: 2345 55 10
# description: My service
# processname: myservice

usage() {
        echo "service myservice {start|stop|status}"
        exit 0
}

export PEGASUS_ROOT=/opt/pegasus2.11.0
export PEGASUS_HOME=$PEGASUS_ROOT
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PEGASUS_ROOT/lib
export PEGASUS_PLATFORM=LINUX_IX86_GNU
export PATH=$PATH:$PEGASUS_ROOT/bin
export PEGASUS_HAS_SSL=yes

case $1 in

	start) $PEGASUS_ROOT/bin/cimserver
		;;
	stop) $PEGASUS_ROOT/bin/cimserver -s
		;;
	status) if pidof $PEGASUS_ROOT/bin/cimserver >/dev/null; then
			echo "Running"
		else
			echo "Not running"
		fi
		;;
	*) usage
		;;
esac

Explanation:

We start off with the usual shebang, giving the path to the “sh” executable (#!/bin/sh). The following lines are quite interesting and worth explaining in a bit more detail. The # chkconfig: 2345 55 10 line merely informs the OS that we want this service script to be activated for Linux run levels 2, 3, 4, and 5. Check out “Linux Run Levels” for more information on run levels in Linux. The parameter 55 refers to the priority assigned for service startup (we usually want this to be a moderately high value), while the last parameter 10 refers to the service stop priority (this can be a moderately low value). The specific values for these parameters will depend on your service’s usage patterns. The # description line is optional, and is used to give a descriptive name to the service. The # processname line is the name that you will use for your service, and is usually the same as your script name.

The rest of the logic is pretty simple: I want to support three options – start, stop, and status. For this purpose, I export the relevant environment variables in this script itself so that it does not pollute any other namespace (you could export them in ~/.profile, or ~/.bash_profile, or ~/.bashrc for instance if you want them to be globally available). Then I merely put the logic to start/stop/query the cimserver executable, which is the executable that actually represents the OpenPegasus CIMOM. The core logic of this service script is the command pidof $PEGASUS_ROOT/bin/cimserver, which returns the PID of the specified executable in the current environment.
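The status logic can be illustrated outside ESX/OpenPegasus. Since no cimserver binary is available here, this sketch substitutes kill -0 on this shell's own PID for the pidof lookup (kill -0 sends no signal; it only tests that the process exists):

```shell
# Sketch: the status branch, checked against a PID known to exist.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
        status="Running"
else
        status="Not running"
fi
echo "$status"
# prints: Running
```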

To install this script as a service, the following commands are performed:

#cp myservice /etc/init.d
#chmod +x myservice
#chkconfig --add myservice
#chkconfig --level 2345 myservice on

The #chkconfig --add myservice command is what actually adds your script as a Linux service. For this, the script must be executable (chmod +x might be too permissive; feel free to choose a lower level of execution permission), and must be present in /etc/init.d (or at least have a soft link created to it in this directory). Then, finally, the #chkconfig --level 2345 myservice on command makes your service start automatically at system boot-up. This ensures that your service is always on so long as your Linux box is up. Neat!

But what happens if the service crashes while the machine is still up? It certainly will not restart itself. For this purpose, I decided to add a service watchdog for “myservice”, as shown in the following section.

Creating a service watchdog in Linux using a shell script

The service watchdog’s responsibility is to monitor the main service (say, every minute or so), check its status, and then restart it if it is not running. This ensures a maximum downtime of a minute (or whatever value you chose) for your service. It is quite a nifty feature indeed. This is similar to the scenario where, in Windows, you would set the service properties to “Automatically Restart”. The code for the watchdog for “myservice” is given below:

#!/bin/sh

#chkconfig: 2345 90 10
#description: watchdog for myservice
#processname: myservice-watchdog

MYSERVICE_PID=`pidof /opt/pegasus2.11.0/bin/cimserver`

check_myservice() {
	if [ -z "$MYSERVICE_PID" ]; then
		service myservice start
	fi
}

check_myservice

usage() {
	echo "myservice-watchdog {start|stop|status}"
	exit 0
}

case $1 in
	start) if [ -z "$MYSERVICE_PID" ]; then
			service myservice start
		else
			echo "myservice is already running"
		fi
		;;
	stop) if [ -n "$MYSERVICE_PID" ]; then
			service myservice stop
		else
			echo "myservice is already stopped"
		fi
		;;
	status) if [ -z "$MYSERVICE_PID" ]; then
			echo "myservice is not running"
		else
			echo "myservice is running"
		fi
		;;
	*) usage
		;;
esac

Explanation:

The logic for the watchdog might seem curiously similar to that of the service itself, and that is right. There were a number of reasons why I chose this approach:

  • The idea is to always monitor the state of the executable itself, and not the service. This ensures that if, for some reason, the service script returns spurious data, the watchdog can avoid spawning multiple instances of the executable, which would most likely fail anyway.
  • The watchdog is also installed as a service. This is not usually required, but in this case it needs to support the following options: start, stop, and status. In addition, the check_myservice function is the one used to monitor the service itself (actually the executable).
  • The watchdog is triggered to be run every minute using crontab. This will only run the check_myservice function, whereas any direct invocation of the watchdog will have to supply any one of the following options: start/stop/status.
  • The idea is to always handle the executable indirectly via the watchdog (start/stop/status) rather than directly through the service itself, even if that is also possible. This is more of a best practice than a strict requirement.

The watchdog is installed as a service using the following commands:

#cp myservice-watchdog /etc/init.d
#chmod +x myservice-watchdog
#chkconfig --add myservice-watchdog
#chkconfig --level 2345 myservice-watchdog on

The explanation for these steps is the same as for the installation of the main service itself. It is also worth noting that the watchdog is itself installed as a service.

Then we need to create a cron job that will trigger the check_myservice function of the watchdog every minute. For this, the best option (since we are triggering the whole process through an installation script) is to create the cron job in a text file, place that file in the /etc/cron.d directory (where user cron jobs can be placed), and then restart the crond daemon process to make the new cron job visible to the OS, as follows:

#echo "* * * * * /etc/init.d/myservice-watchdog" > my.cron
#echo "" >> my.cron
#cp my.cron /etc/cron.d
#service crond restart

And that’s it! The most important bit to remember here is that the #echo "" >> my.cron line is required because of a quirk in the way cron behaves: it expects a newline (or an empty line) after the last cron job in the file. If it is missing, crontab will not fail or throw an error, but will silently avoid triggering the job! Trust me, this is mental agony that you definitely do not want to experience. The cron job itself is pretty simple: call the watchdog every minute (read up on the syntax and semantics of cron jobs in Linux if you are confused by that line).
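The trailing-newline requirement can be checked mechanically. A sketch: write the cron entry with an explicit newline, then confirm the last byte of the file really is a newline (command substitution strips it, so an empty result means the newline is there):

```shell
# Sketch: write the cron entry and verify the trailing newline that
# cron requires after the last job line.
printf '* * * * * /etc/init.d/myservice-watchdog\n' > my.cron
last=`tail -c 1 my.cron`
if [ -z "$last" ]; then
        echo "trailing newline present - cron will pick up the job"
fi
```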

I hope that this serves a useful purpose for anyone that is planning to explore creating services and watchdogs using shell scripts in Linux.

Written by Timmy Jose

May 5, 2013 at 10:11 pm

Handling the back button click action using JavaScript (in pages with fragment identifiers) to route the request to another page


So there is this new issue at work where the browser back button is not behaving as desired. The idea is that when the user clicks the browser back button on any page, the user should be routed to the default login page. As much as that sounds like a symptom of an inherently fundamental fault, I feel it is much better than what most other sites do – disable the back button altogether (which can be circumvented of course).

The product in question is a standard JSP-based web application with the usual JS+CSS+HTML client-side stack. The layout is pretty simple – there is a main Login Page, after which the user is presented with a navigation pane on the left-hand side and a main pane displaying the results of the various options in the navigation pane. The idea is that whenever a user clicks the back button from any page, he or she should be presented with the Login Page. Easier said than done!

First of all, JavaScript exposes no real event for the browser back button click (or for a lot of other browser actions, for that matter). There are plenty of hacks that cover most use cases. In this specific case, though, the problem was a particular option in the navigation pane – the ‘Inventory’ option, which simply displays various objects as hyperlinks in the main pane. Upon clicking any of these objects, the URL uses a fragment identifier (of the format http://<host>:<port>/<main-url>#detailedInfo). This practically defeats the usual hacks (iframes, polling for hash changes, onbeforeunload, onunload, etc.). Since the browser's history contains the same base URL for both the main pane and the fragment identifier, options such as iframes and the onbeforeunload/onunload events fail completely. Using hash changes (either by polling or via the basic window.onhashchange event) does not solve the problem by itself either, since the URL hash changes both on clicking the anchor for the fragment identifier (which is not desirable) and on clicking the back button (or the Alt+Left Arrow combination). However, unlike the many trolls that abound in the various forums online, I present to you an actual solution to the problem at hand, which may be customized to suit any specific situation as the case may be.

For my demo, I have created a base page – Page1.html – which simulates the main page. Page2.html contains the logic for detecting back-button clicks (note that since the forward button is not enabled, that case does not pose a problem; an added advantage of this method is that refreshing the page does not cause any unexpected or anomalous behavior either). I have also created another page, Login.html, which simply simulates the default Login Page.

1. Page1.html

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8"/>
		<title>Page 1</title>
	</head>
	<body>
		<h1>Welcome to the Main page</h1>
		<a href="Page2.html">Page 2</a>
	</body>
</html>

2. Page2.html

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8"/>
		<title>Page 2</title>
		<script type="text/javascript">
			var origURL = window.document.location.href;
			var origFileName = origURL.substring(origURL.lastIndexOf("/") + 1, origURL.length);
	        </script>
	</head>
	<body>
		<h1>Welcome to the Details page</h1>
		<a href="#detailedInfo">Details</a>
		<p id="details">Details here!</p>
		<script type="text/javascript">
			window.onhashchange = function() {
				var url = window.document.location.href;
				var fileName = url.substring(url.lastIndexOf("/") + 1, url.length);

				if (fileName.search(origFileName) != -1
						&& fileName.search("#detailedInfo") != -1) {
					// Do nothing - still on the same page.
				} else {
					// This would be a proper URL when deployed on a Web Server.
					window.location.replace("Login.html");
				}
			};
		</script>
	</body>
</html>

3. Login.html

<!DOCTYPE html>
<html lang="en">
	<head>
		<meta charset="utf-8"/>
		<title>Login Page</title>
	</head>
	<body>
		<h1>Welcome to the login page</h1>
	</body>
</html>

Explanation: The logic here hinges upon the fact that most modern browsers (this code has been tested on Firefox 18.0.1, Chrome 24.0.1312.57, Opera 12.00, and Internet Explorer 9) support the window.onhashchange event. This event is basically generated when the location hash (in simple terms, the hash of the URL of the current window) changes. Thankfully, this also includes the case where the fragment id of the anchor target is appended to the main URL.

Thus, in Page2.html, I simply get a handle to the original page name when I enter the page, and then check whether the new page name (which includes the fragment identifier) contains both the original page name and the fragment identifier. If so, it means that we are still on the same page, and so we do nothing. Otherwise, it means that the back button has been clicked (or invoked through history.back(), or through the keyboard shortcut Alt+Left Arrow). In this case, we simply change the location of the current window to the Login Page. And that’s it!

P.S.: The fragment identifier #detailedInfo has been hardcoded in this snippet, but it need not be. We can simply check for the main page plus any fragment identifier to ascertain whether we are still on the same page or not. Note that this is a very specific case – any other situations beyond it have to be handled in their own right.
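To make that generic check concrete, here is a minimal sketch (not from the original code – the helper name is hypothetical) that treats "same page name present and some fragment present" as in-page navigation, and anything else as a back-button exit:

```javascript
// Hypothetical helper (illustration only): decide whether a hash change
// still represents in-page navigation on the original page. Any fragment
// counts, not just the hardcoded #detailedInfo.
function isInPageNavigation(url, origFileName) {
    var fileName = url.substring(url.lastIndexOf("/") + 1, url.length);
    return fileName.indexOf(origFileName) !== -1 && fileName.indexOf("#") !== -1;
}

// The browser wiring would then look like this (browser-only, so shown
// as a comment):
// window.onhashchange = function() {
//     if (!isInPageNavigation(window.document.location.href, origFileName)) {
//         window.location.replace("Login.html");
//     }
// };
```

The decision logic lives in a plain function so it can be exercised outside the browser, with the window plumbing kept to a thin wrapper.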

P.P.S.: While Firefox supports the “contains” method (in the style of Java) for substring matches on string objects, the “search” method is supported by all the browsers, including Firefox itself! So this snippet works uniformly across all browsers.
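One caveat worth knowing (an aside, not from the original post): search interprets its argument as a regular expression, while indexOf does a literal substring match, so for fragments that might contain regex metacharacters, indexOf is the safer choice:

```javascript
var url = "http://host/Page2.html#detailedInfo";

// search() compiles its argument into a RegExp; '#' happens to be a
// literal character in a regex, so this works here...
var viaSearch = url.search("#detailedInfo") !== -1;

// ...but indexOf() always matches the string literally, which is what
// we actually want for fragment identifiers.
var viaIndexOf = url.indexOf("#detailedInfo") !== -1;

console.log(viaSearch, viaIndexOf); // both true for this URL
```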

Written by Timmy Jose

February 1, 2013 at 9:22 pm

Posted in JavaScript

Configuring Eclipse to run standalone JavaScript files (Using Node.js/Google V8 Engine)

with 5 comments

I recently started studying JavaScript in greater detail so that I could work on some side projects of my own. My newfound interest was sparked in large part by the wonderful server-side JavaScript framework, Node.js. However, while working through the various tutorials that I had collected, it became painfully evident that creating anything non-trivial in standalone JavaScript was a pain in the proper place. This is hardly surprising, since the entire life-cycle of JavaScript has played out primarily within the confines of the browser. However, having swiftly tired of embedding snippets of code within <script> tags in HTML pages to test out various concepts of the language, I began looking for alternatives.

The first obvious choice was the excellent Firefox Scratchpad (Tools->Web Developer->Scratchpad; I am using Firefox 16.0.2, but this has always been the location of Scratchpad for as long as I can remember). This is a wonderful piece of software that works for most scenarios while learning JavaScript, but falls short on useful features such as debugging support, or linking script files together in a modular fashion.

The next option that I evaluated was the eval support provided by Firebug. This is a far more advanced tool than Scratchpad, but again, when the size and complexity of the code goes beyond a certain point, it is essentially doing something that it was not designed to do.

What I really wanted in this specific case was complete IDE support for executing JavaScript projects. Ideally, I wanted to use Eclipse as the IDE, with a JavaScript perspective for all the formatting and validation bits, and an external tool linked in to execute the script files. Getting the JavaScript perspective to work on the version of Eclipse that I am using (Juno) was a breeze. The latter part – finding a suitable engine to run standalone JavaScript code and getting it to work with Eclipse – was the harder bit. Having begun tinkering with Node.js, I knew that its engine is basically a wrapper around Google’s V8 JavaScript Engine. So I had two options: follow the elaborate set of steps listed on the V8 site and generate binaries using Visual Studio (while hoping for the best), or simply use Node.js’s own executable! A little bit of Googling turned up the following site – http://www.epic-ide.org/running_perl_scripts_within_eclipse/entry.htm – which made life much easier for me. The example given there is for Perl, but the steps work perfectly for JavaScript as well.

Steps to configure Eclipse to work with Node.js’s JavaScript engine

1. Open the ‘External Tools’ window (Run->External Tools->External Tools Configuration)

2. In the ‘Name’ field, enter a name for the new configuration (such as ‘JavaScript_Configuration’)

3. In the ‘Location’ field, enter the path to the windows executable (C:\WINDOWS\system32\cmd.exe on my Windows 7 machine)

4. In the ‘Working Directory’ field, enter ‘C:\WINDOWS\system32’. This is because we are referring to the executable in the ‘Location’ field as ‘cmd.exe’, for which this is the working directory.

5. In the ‘Arguments’ field, we need to add the following string:

/C "cd ${container_loc} && node ${resource_name}"

Obviously, the ‘/C’ at the beginning of the line is the flag that tells cmd.exe to execute the supplied string and then terminate. The ${container_loc} variable refers to the absolute path of the currently selected resource’s parent (the folder containing the JavaScript file, in this case), and the ${resource_name} variable corresponds to the name of the currently selected resource (the JavaScript file itself). So for a selected file C:\work\test.js, for instance, the expanded command would be cmd.exe /C "cd C:\work && node test.js". Check out this site for more variables associated with the External Tools configuration – http://help.eclipse.org/juno/index.jsp?topic=%2Forg.eclipse.platform.doc.user%2Fconcepts%2Fconcepts-exttools.htm.

Of course, we assume here that the “node” executable is available through Windows’ PATH environment variable.
And that’s it, we’re done! To check that everything is working as expected, I create a sample file, test.js, in a new JavaScript project, containing the following simple code snippet:

(function() {
	console.log("Hello, World!");
})();

When we execute this file (using Run->External Tools->JavaScript_Configuration), we see that it works perfectly!

And of course, this approach can be applied to various other languages that are not supported by default by Eclipse, or for which there is no suitable Eclipse plugin available.

Written by Timmy Jose

December 4, 2012 at 1:58 pm

Posted in JavaScript

A couple of projects upcoming!

leave a comment »

So it has been a busy last few weeks. Well, not busy with work as such but a semblance of work. Just one of those periods where the time passes and one feels stressed out but in the final reckoning, nothing of substantial productivity stands forth. It has been a rather boring last few weeks and I am itching for some real action!

I got my copy of Allen Holub’s ‘Compiler Design in C’ the other day and I could not be more thrilled! It was just the book that I needed for a long-pending pet project of mine, though what shape this specific project will take over the course of the next few months remains to be seen. I am thinking it is going to be a rather interesting experience all the same. The first project that I am undertaking is a compiler design and implementation one – I had originally planned for the Arduino platform for two reasons: 1. there seems to be a severe paucity of mid-level languages to program in on the Arduino family of platforms, and 2. it would be a much simpler exercise in some ways (minimal functionality set, procedural language) and greatly complex in others (strict memory management, high-level syntax with low-level functionality). I feel it would be really useful and educational at the same time – a perfectly sound use of my precious time! However, until I get down to the actual design phase, it remains open to new thoughts. First, I have to plow through the hundreds of man-hours of theory and hands-on work. I simply relish the mere thought of it. I also plan to keep this blog updated as I progress through the project. I would give the theory around a month of effort and the actual project perhaps around three months, tentatively. After all, I am not simply planning to create a crude and low-performance C compiler for whatever the platform may be, but a full-fledged programming language.

The second project is on a much higher level. I purchased a couple of domain names a couple of months back and have been waiting for some free time to finally get down to this project! I want to create my own website for my URL – http://www.timmyjose.com – using Python, Django, JavaScript and CSS. The backend, templating engine and other details are still to be decided. This should serve me well on two fronts – refreshing my knowledge of Python and Django while giving me a chance to implement these technologies in a real-world project, and getting me to sit my behind down at last and actually learn JavaScript and CSS for good! Plus, what better way to showcase your own talents than on your own website, right? I would give this project anywhere between three to six months, since it would run in parallel with the other project. I will keep this blog updated with my progress on this one as well – my learnings, my failures and my successes!

And of course, there is a ton of stuff that I have to blog about other than these two projects, including, but not restricted to, the topics that I had promised to tackle in earlier posts. I should be able to maintain a more or less consistent tempo with my blog posts from now on. Fingers crossed!

Written by Timmy Jose

March 23, 2012 at 8:54 pm

Posted in Uncategorized

Accessing the Windows Registry via pure Java and the moral and technical failings of charlatans – Part 2

with one comment

As mentioned in the last blog post, here is the actual source code of the migration tool that I wrote to migrate the folders from the version 1 format to the version 2 format (with enough changes to preserve product anonymity, of course).

The tool consists of two files – RegistryReader.java and Main.java. RegistryReader.java accesses the Windows Registry in read-only mode and extracts the installation path of the product, so that the absolute path of the reports folder(s) can be constructed using this install location as the base. The other file, Main.java, is the entry point for the tool and contains the logic to perform the actual migration (update and rollback). Both files are explained separately below.

package com.z0ltan.reports.enhancedmigrate;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.logging.Logger;
import java.util.prefs.Preferences;

/**
 * This class will access the Windows Registry to obtain the value of a
 * specified 'key'.
 * 
 * @author z0ltan
 * 
 */
public class RegistryReader {
	private static final Logger logger = Logger.getLogger(RegistryReader.class
			.getName());
	// For Windows, the instance will be a WindowsPreferences instance
	private static final Preferences userRoot = Preferences.userRoot();
	private static final Preferences systemRoot = Preferences.systemRoot();
	private static final Class<? extends Preferences> userClass = userRoot
			.getClass();
	private static final int HKEY_LOCAL_MACHINE = 0x80000002;
	private static final int READ_INSTRUCTION = 0x20019;
	private static final int SUCCESS = 0;
	private static final int NOT_FOUND = 2;
	private static final int ACCESS_DENIED = 5;

	// Define the methods 
	private static Method openRegKey = null;
	private static Method readRegKey = null;
	private static Method closeRegKey = null;

	/**
	 * Initialize the required methods statically.
	 */
	static {
		try {
			openRegKey = userClass.getDeclaredMethod("WindowsRegOpenKey",
					new Class[] { int.class, byte[].class, int.class });
			openRegKey.setAccessible(true);

			readRegKey = userClass.getDeclaredMethod("WindowsRegQueryValueEx",
					new Class[] { int.class, byte[].class });
			readRegKey.setAccessible(true);

			closeRegKey = userClass.getDeclaredMethod("WindowsRegCloseKey",
					new Class[] { int.class });
			closeRegKey.setAccessible(true);
		} catch (SecurityException ex) {
			logger.severe("Access denied to the Windows Registry methods "
					+ ex.getLocalizedMessage());
		} catch (NoSuchMethodException ex) {
			logger.severe("Unable to find the Windows Registry methods "
					+ ex.getLocalizedMessage());
		}
	}

	/**
	 * This will read the specified key and return its value from
	 * HKEY_LOCAL_MACHINE.
	 * 
	 * @param installPath
	 * @param installKey
	 * @return the value of the key, or null if it could not be read
	 */
	public static String getValue(String installPath, String installKey)
			throws Exception {
		String value = null;

		try {
			int[] openVal = (int[]) openRegKey.invoke(systemRoot,
					new Object[] { new Integer(HKEY_LOCAL_MACHINE),
							getStringBytes(installPath),
							new Integer(READ_INSTRUCTION) });

			if (openVal != null && openVal.length == 2) {
				if (openVal[1] == SUCCESS) {
					logger.info("Path " + installPath
							+ " found on HKEY_LOCAL_MACHINE");

					byte[] keyValue = (byte[]) readRegKey.invoke(systemRoot,
							new Object[] { new Integer(openVal[0]),
									getStringBytes(installKey) });
					logger.info("Found the value for the key " + installKey
							+ " on HKEY_LOCAL_MACHINE");

					closeRegKey.invoke(systemRoot, new Object[] { new Integer(
							openVal[0]) });
					logger.info("Closed handle for Path " + installPath
							+ "  and Key " + installKey
							+ " on HKEY_LOCAL_MACHINE");

					if (keyValue != null) {
						value = new String(keyValue).trim();
					}
				} else if (openVal[1] == NOT_FOUND) {
					logger.severe("Path " + installPath
							+ " not found on HKEY_LOCAL_MACHINE");
					return null;
				} else if (openVal[1] == ACCESS_DENIED) {
					logger.severe("Access denied while trying to access Path "
							+ installPath + " on HKEY_LOCAL_MACHINE");
					return null;
				} else {
					logger
							.severe("Unknown return code while trying to open Path "
									+ installPath + " on HKEY_LOCAL_MACHINE");
					return null;
				}
			} else {
				logger.severe("Unable to obtain the value of Path "
						+ installPath + " from HKEY_LOCAL_MACHINE");
				return null;
			}
		} catch (IllegalArgumentException ex) {
			logger.severe("Unable to obtain the value of Path " + installPath
					+ " and Key " + installKey
					+ " from HKEY_LOCAL_MACHINE. Reason = "
					+ ex.getLocalizedMessage());
			return null;
		} catch (IllegalAccessException ex) {
			logger.severe("Unable to obtain the value of Path " + installPath
					+ " and Key " + installKey
					+ " from HKEY_LOCAL_MACHINE. Reason = "
					+ ex.getLocalizedMessage());
			return null;
		} catch (InvocationTargetException ex) {
			logger.severe("Unable to obtain the value of Path " + installPath
					+ " and Key " + installKey
					+ " from HKEY_LOCAL_MACHINE. Reason = "
					+ ex.getLocalizedMessage());
			return null;
		}

		return value;
	} // getValue

	/**
	 * Convert the String value to bytes format taking care to ensure the '0' at
	 * the end.
	 * 
	 * @param stringValue
	 * @return
	 */
	private static byte[] getStringBytes(String stringValue) {
		byte[] bytes = new byte[stringValue.length() + 1]; // Add '0' at the end

		for (int i = 0; i < stringValue.length(); i++) {
			bytes[i] = (byte) stringValue.charAt(i);
		}

		bytes[stringValue.length()] = 0;

		return bytes;
	}
} // RegistryReader

Brief Explanation

The Preferences class in the JDK provides access to a hierarchical collection of “preference” data backed by a platform-specific store. An instance of the Preferences class is simply a node in this hierarchy. The store can take several forms depending on the actual OS at run-time – the Registry, flat files, directory servers or SQL databases. In the case of Windows, the concrete instance of this class is a WindowsPreferences object, which allows us to access the Windows Registry with consummate ease. As can be seen in the code above, the Preferences class is used to obtain a handle to the Windows Registry (there are two types of preference objects – user preferences and system preferences; for more information, read up on the official JDK documentation for the Preferences class). Once this handle has been obtained, we can use the Reflection API to invoke methods on the registry handle. In this particular case, only the query and retrieval APIs for the Windows Registry are used, but the same approach can be taken to modify the state of the Windows Registry as well. The logic is pretty straightforward, and the methods invoked simply mirror the officially documented Windows Registry APIs.

Now onto the main migration logic.

package com.z0ltan.reports.enhancedmigrate;

import java.io.File;
import java.io.FileFilter;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Iterator;
import java.util.Map;
import java.util.Properties;
import java.util.logging.Logger;

/**
 * This class uses the RegistryReader to retrieve the product installation
 * location and the processes the Reports folders for that installation.
 * 
 * @author z0ltan
 * 
 */
public class Main {
	private static final Logger logger = Logger.getLogger(Main.class.getName());

	private static final String PRODUCT_INSTALL_PATH = "SOFTWARE\\Company-Name\\ProductInstaller";
	private static final String PRODUCT_INSTALL_KEY = "InstallDir";
	private static final String REPORTS_SUB_PATH = "\\ProductTools\\tomcat\\32bit\\apache-tomcat-6.0.24\\webapps\\product\\reports";

	// For upgrade/rollback
	private static final String REPORTS_TEMP_FILE_PREFIX = "Product_Reports";
	private static final String REPORTS_TEMP_FILE_SUFFIX = ".XML";
	// Leading separator so that the name can be appended directly to the
	// reports folder path (which has no trailing separator).
	private static final String REPORTS_FILE_NAME = File.separator
			+ "Product_Reports_Migration.XML";

	/**
	 * The Main Man.
	 * 
	 * @param args
	 */
	public static void main(String[] args) {
		String installationDirectory = null;

		if (args.length != 1) {
			logger
					.severe("Please enter the option - \"UPDATE\" or \"ROLLBACK\"");
			System.exit(-1);
		}

		// Common part 
		try {
			installationDirectory = RegistryReader.getValue(
					PRODUCT_INSTALL_PATH, PRODUCT_INSTALL_KEY);

			logger.info("Product Installation Path = " + installationDirectory);

			if (installationDirectory == null) {
				installationDirectory = "C:/Program Files/Company-Name/Product";
			}
		} catch (Exception e) {
			logger
					.severe("Unable to obtain the Product Installation Path. Proceeding with default Installation Path");
			installationDirectory = "C:/Program Files/Company-Name/Product";
		}

		logger.info("Starting processing Reports Sub-Folders");
		String reportsDirectory = installationDirectory + REPORTS_SUB_PATH;

		if ("UPDATE".equalsIgnoreCase(args[0])) {
			try {
				Main.upgradeReportsSubFolders(reportsDirectory);
				Main.deleteReportsMappingFile(new File(reportsDirectory
						+ REPORTS_FILE_NAME));
			} catch (Exception ex) {
				Main.rollbackReportsSubFolders(reportsDirectory);
			}
		} else if ("ROLLBACK".equalsIgnoreCase(args[0])) {
			Main.rollbackReportsSubFolders(reportsDirectory);
		} else {
			logger
					.severe("Incorrect option supplied. Option can only be \"UPDATE\" or \"ROLLBACK\"");
			System.exit(-1);
		}

		logger.info("Finished processing Reports Sub-Folders");
	}

	/**
	 * Process all the existing Reports folders in the format 'uSerName@HOSTNAME'
	 * to lower-case in compliance with Product 2.0 requirements.
	 * 
	 * @throws Exception
	 */
	private static void upgradeReportsSubFolders(String reportsFolder)
			throws Exception {
		logger.info("Performing Reports Sub-Folders Upgrade");

		// Set up the properties file environment
		Properties writeProps = new Properties();
		OutputStream os = null;
		File tempFile = null;
		String tempFileName = null;

		try {
			tempFile = File.createTempFile(REPORTS_TEMP_FILE_PREFIX,
					REPORTS_TEMP_FILE_SUFFIX, null);
			tempFileName = tempFile.getAbsolutePath();

			// Actual processing starts here
			File dir = new File(reportsFolder);

			if (dir.exists()) {
				if (dir.isDirectory()) {
					File[] directories = dir.listFiles(new DirectoryFilter());

					for (File directory : directories) {
						File[] subdirectories = directory
								.listFiles(new DirectoryFilter());

						for (File subdirectory : subdirectories) {
							String oldName = subdirectory.getName();
							String keyName = directory.getName() + ":"
									+ oldName;
							writeProps.setProperty(keyName, oldName);

							String newName = subdirectory.getName()
									.toLowerCase();
							File newFile = new File(subdirectory
									.getAbsolutePath()
									.replace(oldName, newName));
							if (subdirectory.renameTo(newFile)) {
								logger.info("Renamed " + oldName + " to "
										+ newName);
							} else {
								logger.severe("Failed to rename " + oldName
										+ " to " + newName);
							}
						}
					}
				} else {
					logger.severe(reportsFolder + " is not a directory!");
				}
			} else {
				logger.severe("Could not find directory " + reportsFolder);
			}

			//Now that all the sub-directories have been processed, store the
			//original names into the XML file
			os = new FileOutputStream(tempFile);
			writeProps.storeToXML(os, "Original Reports Folders Mapping");
			os.close();
			File reportsFile = new File(reportsFolder + REPORTS_FILE_NAME);
			tempFile.renameTo(reportsFile);
		} catch (Exception ex) {
			logger
					.severe("A Fatal Error occurred while performing upgrade of Reports Sub-Folders. Reason = "
							+ ex.getLocalizedMessage());
			throw ex;
		} finally {
			// tempFile may be null if createTempFile itself failed
			if (tempFile != null && tempFile.exists()) {
				tempFile.delete();
				logger.info("Temporary XML File " + tempFileName + " deleted");
			}
		}

		logger.info("Performed Reports Sub-Folders Upgrade");
	} // upgradeReportsSubFolders

	/**
	 * Process all the updated Reports folder in the lower case format back to
	 * the original case form for instance, 'uSeRName@HOSTNAME', in compliance
	 * with Product 1.0 requirements.
	 */
	private static void rollbackReportsSubFolders(String reportsFolder) {
		logger.info("Performing Reports Sub-Folders Rollback");

		InputStream in = null;
		File reportsFile = null;

		try {
			// Set up the properties file environment
			Properties readProps = new Properties();
			reportsFile = new File(reportsFolder + REPORTS_FILE_NAME);

			if (!reportsFile.exists()) {
				logger
						.severe("Could not find the file storing the Original Reports Folders Mapping ("
								+ REPORTS_FILE_NAME + "). Exiting!");
				System.exit(-1);
			}

		       // Actual processing starts here 
			in = new FileInputStream(reportsFile);
			readProps.loadFromXML(in);

			Iterator<Map.Entry<Object, Object>> mappingMapIterator = readProps
					.entrySet().iterator();

			while (mappingMapIterator.hasNext()) {
				Map.Entry<Object, Object> entry = mappingMapIterator.next();
				String key = (String) entry.getKey();
				String arrayDirectoryName = key.substring(0, key.indexOf(":"));
				String keyName = key.substring(key.indexOf(":") + 1, key
						.length());
				String folderFileName = reportsFolder + File.separator
						+ arrayDirectoryName;
				String value = (String) entry.getValue();

				File folder = new File(folderFileName); // this will be
				// in lower-case

				if (!folder.exists()) {
					logger.info("Reports Sub-Folder " + folderFileName
							+ " does not exist! Skipping...");
					continue;
				}

				// The Reports Sub-Folder exists, revert it to its original name
				File currentReportSubFolder = new File(folderFileName
						+ File.separator + keyName);
				String currentReportSubFolderName = currentReportSubFolder
						.getAbsolutePath();

				File newReportSubFolder = new File(folderFileName
						+ File.separator + value);

				currentReportSubFolder.renameTo(newReportSubFolder);
				logger.info("Renamed " + currentReportSubFolderName
						+ " to its original name "
						+ newReportSubFolder.getAbsolutePath());
			} // while

		} catch (Exception ex) {
			logger
					.severe("A Fatal Error occurred while performing rollback of Reports Sub-Folders. Reason = "
							+ ex.getLocalizedMessage());
		} finally {
			if (in != null) {
				try {
					in.close();
				} catch (Exception exc) {
					logger
							.severe("Error while closing the InputStream stream while performing Reports Sub-Folder rollback");
				}
			}
			Main.deleteReportsMappingFile(reportsFile);
		}
		logger.info("Performed Reports Sub-Folders Rollback");
	} // rollbackReportsSubFolders

	private static void deleteReportsMappingFile(File reportsFile) {
		// Delete the Product_Reports_Migration.XML file
		if (reportsFile != null && reportsFile.exists()) {
			reportsFile.delete();
			logger.info("Deleted the " + reportsFile.getAbsolutePath()
					+ " file");
		}
	}
}

/**
 * Simple Filter class to extract only the directories from the given list of
 * Files.
 * 
 * @author z0ltan
 * 
 */
class DirectoryFilter implements FileFilter {
	public boolean accept(File file) {
		if (file.exists() && file.isDirectory()) {
			return true;
		}
		return false;
	}
}

Brief Explanation

The Main class contains three methods – upgradeReportsSubFolders, rollbackReportsSubFolders and deleteReportsMappingFile. The names are self-explanatory.

The basic flow is as follows – First off, the installation location of the product is extracted from the Windows Registry (since this location is user-configurable). The default installation location is used as a fallback in case the Windows Registry cannot be read or the key in the Windows Registry contains no usable value. From this base path, the absolute path of the Reports Sub-Folders is constructed for further processing.

If the user has chosen the ‘UPDATE’ option, the upgradeReportsSubFolders method is invoked and all sub-folders in the main Reports folder are processed, i.e., their names are converted to lower-case to be compliant with the Product Version 2 format. For safety as well as rollback, the original folder names are recorded in an XML mapping file. Upon successful upgrade, this XML file is deleted as part of the clean-up. One point to note here is the use of a temporary file during this processing stage – it ensures the integrity of the XML mapping file. In other words, the atomicity of the file population is guaranteed: if we were to write to the final file directly, any exception along the way would leave it in an indeterminate state! Thus, only once the temporary file has been fully populated with the mappings is it renamed to the actual XML file name. If the upgrade fails for any reason, the rollback method is called immediately.

If the user has chosen the ‘ROLLBACK’ option, the rollbackReportsSubFolders method is invoked. This method uses the XML mapping file to rename the folders back to their original case. After this, the XML file is deleted, since it is of no further use.

There you go – simple, elegant and efficient!

Written by Timmy Jose

February 13, 2012 at 3:38 pm
