Talking with a Lisp

Technical z0ltanspeak

Path to Common Lisp


I have been intrigued by Common Lisp for as long as I can remember. The sheer simplicity of the language's basic concepts (it is essentially Lambda Calculus in disguise) is what drew me to it, and for a long time it remained just that – an enigma that attracted me every so often and yet left me exasperated by the paucity of good materials and an active community. Of late, however, I have been giving it a real, serious go. The immediate impetus was the excellent collection of interviews by Vsevolod Dyomkin (available here – Lisp Hackers). It is a veritable treasure trove of information about the latest generation of Lisp hackers. The common theme running through it, though, is that Common Lisp itself is not used as much as it is worked upon.

My current undertaking of Lisp is a rather determined one, and I daresay I have made good progress. Another advantage is that Scheme and Clojure (which I had tried before but didn't much like) are but a small leap from here. My main interest, however, is in mastering Lisp concepts well enough to use the language for my own personal projects to begin with, and then see where that leads me (I really do envy Zach Beane in this regard – the man hacks on Lisp full time at Clozure Associates!).

In this introductory post (after a long, long hiatus from blogging), I would like to begin by describing my own path to Common Lisp (which should also serve as a rough guide for any beginner embarking upon the Lisp journey!):

  • Start off with Land of Lisp – arguably the most user-friendly (and yet powerful) introduction to Common Lisp today. It is extremely well-structured and fun to work with. In my opinion, most of the “games” in the later chapters can safely be skipped to begin with; they can be taken up after a solid understanding of Lisp fundamentals is in place.
  • Next up is Practical Common Lisp. Never mind what the reviews say – this is definitely not a great first book for absolute beginners in Common Lisp, and it makes a lot more sense after finishing Land of Lisp. In my experience, some of the chapters are well-written, while quite a few are slipshod and read like a series of crammed, cryptic notes (especially the chapter on CLOS), but the practical chapters are absolutely necessary for the hands-on experience that makes one a well-rounded programmer. Overall, a great book.
  • Now one should be in a position to tackle Paul Graham’s venerable (if opinionated) anthology – ANSI Common Lisp and On Lisp (in that order). This is where I currently am on my own journey in Common Lisp.
  • One must-read book is Let Over Lambda by Doug Hoyte. This is a good follow-up book to Paul Graham’s classics (and even on its own, a great way to expand your perspective – the first few chapters made me really “get” Closures and true Lexical Scoping at long last).
  • Tons of hands-on work (this is really orthogonal to this list – practical sessions are what truly teach one… well, anything really). A few resources to get started with are exercism.io and 99 Problems in Lisp. The latter is especially useful for practising Functional Programming in Common Lisp.
  • Just a quick note on development environments. I personally use Emacs + SLIME + SBCL. SBCL is a very efficient implementation of Common Lisp. CLISP works fine too, but it is rather slow (understandably so, since it is not compiled to machine code the way SBCL is). Some other flavours are Clozure CL, LispWorks, and Allegro CL. The latter two are commercial distributions, but they offer personal editions which work just fine for most purposes. The fun bit is that SLIME can connect to any of these flavours, so your development environment can remain consistent irrespective of which Lisp flavour you choose to work with.

    I suppose that should about do it for an introductory post on Common Lisp! Happy Hacking!

    Written by Timmy Jose

    July 27, 2016 at 11:26 pm


    Enabling and configuring Tracing/Logging for the SFCB CIMOM on ESXi 5.x machines


    It has been my constant experience that VMware creates excellent software and deplorable documentation. Ever since my first tryst with VMware around 2008, when I was given the task of working with the VMware VI-SDK (for the ESX platform, and later for the ESXi platform as well), it has been a wonderful experience seeing those APIs work beautifully, simply, and powerfully. Working on the SDK also led me to explore the internals of the ESX platform itself, which was quite enriching in its own right. However, that was also when I first experienced the horrendous cesspool of detritus that VMware calls its documentation. Finding anything of use on its website is an exercise in futility, and the forums don’t fare much better. The best way I found to work through this mess was to read the SDK code itself, explore the ESXi console, and try out lots of small prototype programs to see whether the data was indeed correct. One saving grace was the availability of the MOB (Managed Object Browser), which arguably taught me more than anything else.

    Recently I needed to set up logging/tracing for the SFCB CIMOM used by the ESXi 4.x and 5.x series, and it was déjà vu all over again. Granted, SFCB is not a VMware product, but since it comes pre-installed on every ESXi box, I would have expected some basic guides explaining the configuration and workings of the CIMOM vis-à-vis the ESXi kernel. Absolutely none whatsoever. So, to spare others the trouble of searching for results that don’t exist, let me share my experience in setting up the logging mechanism for SFCB, specifically in reference to ESXi.

    SFCB (Small Footprint CIM Broker) is an excellent CIMOM that is lightweight, and has a very nice pluggable interface for third-party CIM Providers. An added benefit is that a Provider crash is isolated, and is not allowed to crash the CIMOM itself. No wonder then that it is already the de facto CIMOM for a wide variety of platforms – various Linux flavors, and ESXi of course (which used to depend on the unwieldy gargoyle that is OpenPegasus till Common Sense won out). Now, with ESXi in mind, here are the three simple steps needed to configure the SFCB CIMOM for logging/tracing (especially useful when an errant CIM Provider needs to be checked for anomalous behavior):

    1. Log on to the ESXi shell (you need to have SSH enabled through the vSphere Client before you can open a console through a tool such as PuTTY) and check the contents of the /etc/sfcb/sfcb.cfg file (showing the default configuration here):

    ~ # cat /etc/sfcb/sfcb.cfg
    # Do not modify this header
    # VMware ESXi 5.5.0 build-1106514
    #
    # set logLevel using advanced config: CIMLogLevel
    httpPort:       5988
    enableHttp:     true
    httpProcs:      2
    httpsPort:      5989
    enableHttps:    true
    httpsProcs:     4
    provProcs:      16
    httpLocalOnly:  true
    doBasicAuth:    true
    basicAuthLib:   sfcBasicPAMAuthentication
    useChunking:    true
    keepaliveTimeout: 1
    keepaliveMaxRequest: 10
    providerTimeoutInterval: 120
    sslKeyFilePath: /etc/vmware/ssl/rui.key
    sslCertificateFilePath: /etc/vmware/ssl/rui.crt
    sslClientTrustStore: /etc/sfcb/client.pem
    sslClientCertificate: ignore
    certificateAuthLib:   sfcCertificateAuthentication
    registrationDir: /var/lib/sfcb/registration
    providerDirs: /usr/lib /usr/lib/cmpi /usr/lib/cim
    enableInterOp:  true
    threadStackSize:     524288
    rcvSocketTimeOut: 0
    requestQueueSize: 10
    threadPoolSize: 5
    intSockTimeout: 600
    maxSemInitRetries: 5
    maxFailureThreshold: 3
    cimXmlFdSoftLimit: 512
    cimXmlFdHardLimit: 1024
    

    You can see above the default values for the various parameters as present on my local ESXi 5.5 machine; the contents should not differ much, if at all, for other versions of ESXi. This file, /etc/sfcb/sfcb.cfg, is the main configuration file for the SFCB CIMOM. The general SFCB documentation (written for other Operating Systems) describes what the various parameters do and how they can be changed.

    2. Now, add the following lines to the /etc/sfcb/sfcb.cfg file:

    traceLevel: 1
    traceMask: 0x0000103
    traceFile: /vmfs/volumes/50cb7c7d-30e72dbe-a165-ac162d8be508/timmy/z0ltan.log
    

    So your new file might look something like the following:

    ~ # cat /etc/sfcb/sfcb.cfg
    # Do not modify this header
    # VMware ESXi 5.5.0 build-1106514
    #
    # set logLevel using advanced config: CIMLogLevel
    httpPort:       5988
    enableHttp:     true
    httpProcs:      2
    httpsPort:      5989
    enableHttps:    true
    httpsProcs:     4
    provProcs:      16
    httpLocalOnly:  true
    doBasicAuth:    true
    basicAuthLib:   sfcBasicPAMAuthentication
    useChunking:    true
    keepaliveTimeout: 1
    keepaliveMaxRequest: 10
    providerTimeoutInterval: 120
    sslKeyFilePath: /etc/vmware/ssl/rui.key
    sslCertificateFilePath: /etc/vmware/ssl/rui.crt
    sslClientTrustStore: /etc/sfcb/client.pem
    sslClientCertificate: ignore
    certificateAuthLib:   sfcCertificateAuthentication
    registrationDir: /var/lib/sfcb/registration
    providerDirs: /usr/lib /usr/lib/cmpi /usr/lib/cim
    enableInterOp:  true
    threadStackSize:     524288
    rcvSocketTimeOut: 0
    requestQueueSize: 10
    threadPoolSize: 5
    intSockTimeout: 600
    maxSemInitRetries: 5
    maxFailureThreshold: 3
    cimXmlFdSoftLimit: 512
    cimXmlFdHardLimit: 1024
    traceLevel: 1
    traceMask: 0x0000103
    traceFile: /vmfs/volumes/50cb7c7d-30e72dbe-a165-ac162d8be508/timmy/z0ltan.log
    

    Explanation:

    traceLevel dictates the level of logging you wish to generate. In my experience, a level of ‘1’ should suffice for most cases, but levels 2, 3, or even 4 can be tried depending on your requirements (the higher the level, the finer the logging). Beware, however, that increasing the logging level also increases the memory and CPU overheads on the ESXi box, so set it with a discriminating approach.

    traceMask is a bitmask that allows SFCB to enable logging for specific components (a very useful feature that produces smaller and more relevant logs). The various components are listed below along with their bitmasks; either the int or the hex value can be used. To generate logs for multiple components, their bitmasks may be ORed together into a single bitmask to be set as the traceMask. For instance, I have my bitmask set to 0x0000103 (providerMgr | providerDrv | providers).

          Traceable Components:     Int        Hex
     	       providerMgr:          1	0x0000001
     	       providerDrv:          2	0x0000002
     	        cimxmlProc:          4	0x0000004
     	        httpDaemon:          8	0x0000008
     	           upCalls:         16	0x0000010
     	          encCalls:         32	0x0000020
     	   ProviderInstMgr:         64	0x0000040
     	  providerAssocMgr:        128	0x0000080
     	         providers:        256	0x0000100
     	       indProvider:        512	0x0000200
     	  internalProvider:       1024	0x0000400
     	        objectImpl:       2048	0x0000800
     	             xmlIn:       4096	0x0001000
     	            xmlOut:       8192	0x0002000
     	           sockets:      16384	0x0004000
     	         memoryMgr:      32768	0x0008000
     	          msgQueue:      65536	0x0010000
     	        xmlParsing:     131072	0x0020000
     	    responseTiming:     262144	0x0040000
     	         dbpdaemon:     524288	0x0080000
     	               slp:    1048576	0x0100000
    
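The ORing above can be done right in the ESXi Busybox shell (or any POSIX shell). A minimal sketch, using the component values from the table – the variable names are just local aliases for readability:

```shell
# Combine the masks for the components to be traced (values from the
# table of traceable components). $(( )) arithmetic handles hex literals.
providerMgr=$(( 0x0000001 ))
providerDrv=$(( 0x0000002 ))
providers=$(( 0x0000100 ))
mask=$(( providerMgr | providerDrv | providers ))
printf 'traceMask: 0x%07x\n' "$mask"
# prints: traceMask: 0x0000103
```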

    traceFile, as the name suggests, refers to the location where you want the trace output to be logged. By default this is stderr (the console), but it can be made to point to a file (as seen in the sample config file shown previously). I would suggest placing this file in a persistent location with enough free space (such as on an available datastore). The reason is that if you choose a location under the root folder (say, /mylogs/test.log), it can quickly overwhelm your ESXi machine: everything under the root folder in ESXi lives in size-restricted volatile memory, and in my experience these logs can quickly grow to hundreds of MBs.
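To check how much free space a candidate location has before pointing traceFile at it, df works on the ESXi console as well. The sketch below just queries the current directory so it runs anywhere; on a real box you would pass your datastore path under /vmfs/volumes/ instead:

```shell
# Show free space for the filesystem backing the current directory.
# On ESXi, substitute the datastore path you plan to log to.
df -h .
```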

    3. Restart the SFCB CIMOM in order to reflect the changes to the config file:

    ~# /etc/init.d/sfcbd-watchdog restart
    

    Note: If you want to go about in a cleaner way, I would recommend that you stop the SFCB CIMOM as the first step (before modifying the config file):

    ~# /etc/init.d/sfcbd-watchdog stop
    

    Confirm that the SFCB CIMOM has indeed shut down:

    ~# /etc/init.d/sfcbd-watchdog status
    

    And then proceed with the steps mentioned before, and when the config file has been updated with the changes, start the SFCB CIMOM again:

    ~# /etc/init.d/sfcbd-watchdog start
    

    Followed by a final confirmation that the SFCB CIMOM is up and running:

    ~# /etc/init.d/sfcbd-watchdog status
    

    And that’s all there is to it! You should now see the log file being populated as the SFCB CIMOM runs. You can then trigger your own CIM operations (such as querying for specific CIM classes on your CIM provider), and those operations will be logged in the file as well.

    Written by Timmy Jose

    February 14, 2014 at 3:22 pm

    Adding a CIM Provider VIB file to the SFCB CIMOM on ESXi 5.0/5.1 using esxcli


    Background

    The ESXi 5.x series of VMware ESX servers is a substantially updated platform compared with the ESX/ESXi 4.x series. Aside from a ton of updates and improvements, one major change in the 5.x series is that the Service Console (which was basically a Linux-based shell around the VMkernel) has been completely removed. In its place there is an optional stripped-down shell offering a few basic Unix-like commands (based on the Busybox package) and minimal shell command-line support.

    The removal of the Service Console essentially means that installing customized software on the ESXi server itself is substantially restricted. No longer can we merely bundle our own code/libraries and expect them to work on the ESXi server. Instead, we must conform to the new VIB file format. Under the hood, a VIB is simply a zipped-up package (allegedly based on the Debian packaging format) that contains the binaries we want to install, as well as descriptor XML files listing dependencies, the paths where the binaries need to go, and so on. In addition, a signed VIB file contains a certificate identity and a unique hash identifying the package. The esxcli command is the recommended way of installing VIB files and checking various hardware and software information on the platform. While it takes some getting used to, it is infinitely more powerful and convenient than earlier avatars of the same command.

    Lastly, one big change in the ESXi 5.x series is that SFCB (Small Footprint CIM Broker) is the standard CIMOM that comes pre-installed on the platform. This means that if we want to plug in CIM providers, it is easiest to plug the SFCB-compliant version of the CIM provider into the SFCB CIMOM. That is the problem that will be solved in this post, using a sample CIM provider mundanely entitled “my-cim-provider”.

    The script

    
    #!/bin/sh

    PROVIDER_VIB=my-cim-provider
    CFG_FILE=/etc/sfcb/sfcb.cfg
    CFG_BACKUP_FILE=/etc/sfcb/sfcb.cfg_bk

    #Check if the hostd daemon is running.
    #This is required for the esxcli command.
    check_hostd()
    {
        echo
        echo "[Checking for hostd daemon]"

        HOSTD_STATUS=`/etc/init.d/hostd status`

        if [ "$HOSTD_STATUS" = "hostd is not running." ]; then
            echo "hostd is not currently running."
            echo "Starting hostd as it is required for the installation"

            HOSTD_START_STATUS=`/etc/init.d/hostd start`
            if [ "$HOSTD_START_STATUS" = "hostd started" ]; then
                echo "hostd started successfully"
            fi
        else
            echo "hostd daemon is currently running on the machine"
        fi

        echo "[Finished checking for hostd daemon]"
        echo
    }
    
    #Check if the VIB file is already installed on the machine.
    check_if_vib_already_installed()
    {
        echo
        echo "[Checking if the CIM Provider is already installed on the machine]"

        esxcli software vib list | grep -i $PROVIDER_VIB >/dev/null

        if [ "$?" = "0" ]; then
            echo "The CIM Provider is already installed."
            echo "Would you like to uninstall the VIB file? Enter 'y' or 'n'"
            read option
            if [ "$option" = "y" ]; then
                uninstall_vib_file
            else
                echo "Exiting installation"
                exit 0
            fi
        else
            echo "The CIM Provider is currently not installed on the machine"
        fi

        echo "[Finished checking if the CIM Provider is already installed on the machine]"
        echo
    }
    
    #Uninstall the existing VIB file, if present.
    uninstall_vib_file()
    {
        echo
        echo "[Uninstalling the VIB file: $PROVIDER_VIB]"

        /etc/init.d/sfcbd-watchdog stop >/dev/null
        esxcli software vib remove --vibname=$PROVIDER_VIB --maintenance-mode -f

        if [ "$?" = "0" ]; then
            echo "VIB file: $PROVIDER_VIB uninstalled successfully."
            /etc/init.d/sfcbd-watchdog start >/dev/null
            echo "Rebooting machine as it is required by the uninstallation"
            reboot -f
        else
            echo "Failed to uninstall the VIB file: $PROVIDER_VIB"
            /etc/init.d/sfcbd-watchdog start >/dev/null
            exit 1
        fi
    }
    
    #Edit the SFCB config file with desired values for
    #CIMOM parameters.
    modify_sfcb_cfg_file()
    {
        echo
        echo "[Updating the file: $CFG_FILE]"

        echo "Backing up the existing config file first..."
        #Backup the original sfcb.cfg file
        cp -f $CFG_FILE $CFG_BACKUP_FILE
        echo "Finished backing up the config file to $CFG_BACKUP_FILE"

        #Values to be changed
        doBasicAuth=false
        enableHttp=true
        httpLocalOnly=false
        sslClientCertificate=ignore
        httpProcs=10

        #Set the values in the config file
        sed -i "s/doBasicAuth:.*/doBasicAuth:   $doBasicAuth/g" $CFG_FILE
        sed -i "s/enableHttp:.*/enableHttp:   $enableHttp/g" $CFG_FILE
        sed -i "s/sslClientCertificate:.*/sslClientCertificate:   $sslClientCertificate/g" $CFG_FILE
        sed -i "s/httpLocalOnly:.*/httpLocalOnly:   $httpLocalOnly/g" $CFG_FILE
        sed -i "s/httpProcs:.*/httpProcs:   $httpProcs/g" $CFG_FILE

        #Restart the sfcb service
        /etc/init.d/sfcbd-watchdog restart >/dev/null

        echo "[Finished updating the config file: $CFG_FILE]"
        echo
    }
    
    #In case the user wants to reboot the machine later.
    reboot_canceled()
    {
        echo "You have decided to cancel the machine reboot. Please reboot the machine to complete the installation"
        echo "[Installation of CIM Provider complete]"
        exit 0
    }

    #The main installation logic.
    install_vib_file()
    {
        echo
        VIB_FILE=`pwd`/my-cim-provider.vib

        echo "[Installing the CIM Provider VIB file: $VIB_FILE]"
        esxcli software vib install -v file://$VIB_FILE -f --maintenance-mode --no-sig-check
        echo "[Finished installing the CIM Provider VIB file: $VIB_FILE]"

        #Update the SFCB config file with the specific values required by our provider
        modify_sfcb_cfg_file

        #Reboot the machine - required after installation
        trap 'reboot_canceled' INT
        echo "Rebooting the machine to complete installation. Press <Ctrl+C> to cancel reboot in "

        for i in 10 9 8 7 6 5 4 3 2 1
        do
            echo $i seconds...
            sleep 1
        done

        echo "[Rebooting machine NOW. Installation of CIM Provider is complete]"
        reboot -f
    }
    
    #Main script starts here
    echo "[Starting installation of CIM Provider]"
    
    check_hostd
    check_if_vib_already_installed
    install_vib_file
    

    Explanation

    The code is pretty straightforward. Thankfully, basic shell scripting is still allowed on the ESXi 5.x console. However, please note that in order to use the command line, you need to enable the SSH service on the ESXi 5.x server using the vSphere Client (Configuration->Security Profile).

    The first thing we need to do is check whether the hostd daemon is running, since the esxcli command requires it. I found this out the hard way: it had been some time since I last had exposure to the ESXi platform (the last one I worked with being ESXi 4.1), and documentation for the platform has been meager at best – even worse for the 5.x series. If the hostd daemon is not running, we start it up.

    The second thing we do is to check if the VIB file (given by the variable, PROVIDER_VIB) is already installed on the machine. In this specific case, we assume that update is not possible, and we need to uninstall the existing package before we can proceed with the installation of a possibly newer version of the same package. If this is not true, then this check can be skipped, and an update command invoked instead of the normal installation command, later on. One additional check that might possibly be done here is to check for the package version, if that is relevant to your specific needs. In this case, if the VIB file is already installed, we need to uninstall it first, and so we provide the user with that option.
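The presence check boils down to grep's exit status. Here is a self-contained sketch of that logic – the echoed line (version string and all) is an illustrative stand-in for the output of `esxcli software vib list`, and the provider name is the sample one used throughout this post:

```shell
PROVIDER_VIB=my-cim-provider

# Stand-in for `esxcli software vib list`; grep succeeds (exit 0) only
# if the provider name appears in the listing.
echo "my-cim-provider  1.0.0-1  VMW  PartnerSupported" | grep -i $PROVIDER_VIB >/dev/null

if [ "$?" = "0" ]; then
    echo "The CIM Provider is already installed."
else
    echo "The CIM Provider is currently not installed on the machine"
fi
# prints: The CIM Provider is already installed.
```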

    If the user has chosen to proceed with the uninstallation of the existing VIB file, we need to stop the SFCB service (via its watchdog), and then invoke the command to uninstall the VIB file:

    
    esxcli software vib remove --vibname=$PROVIDER_VIB --maintenance-mode -f
    
    

    Different VIB files have different requirements when it comes to uninstallation or installation. For our CIM provider, we need to put the ESXi machine into maintenance mode, and we also need to forcefully uninstall it, if need be (using the -f flag). Also, in this case, we need to reboot the machine after the uninstallation. This need not be the case for other VIB files.

    After the uninstallation is done (or if the VIB file was not present on the machine in the first place), we proceed with the actual installation of the VIB file. For this, we set up the variable, VIB_FILE, to contain the absolute path to the CIM Provider VIB file. In this case, we assume that the VIB file is in the same directory as the installer script. If this is not the case, you can set up the path to the VIB file accordingly, the only requirement being that it must be the absolute path to the VIB file, anywhere visible to the esxcli command (i.e., the ESXi 5.x console). The command used for the installation of the package is:

    
    esxcli software vib install -v file://$VIB_FILE -f --maintenance-mode --no-sig-check
    
    

    Again, we put the machine into maintenance mode using the --maintenance-mode flag, and additionally we ask the installation to forgo the signature check on the package using the --no-sig-check flag (relevant if the package is signed). This is not good practice, but it lets the installation proceed even if there are problems with the signature. Finally, we force the installation using the -f flag.

    Now comes the interesting part. For our CIM Provider, my-cim-provider, we need to modify some of the default values of the SFCB CIMOM. This configuration is located in the /etc/sfcb/sfcb.cfg file (given by the variable CFG_FILE). The specific parameters we want to modify are: disable basic authentication (doBasicAuth=false), enable the HTTP port (enableHttp=true), allow non-local HTTP connections (httpLocalOnly=false), ignore the SSL Client Certificate (sslClientCertificate=ignore) since we don’t want to use SSL, and finally increase the number of HTTP processes used by the SFCB CIMOM from the default 4 to a healthy 10 (httpProcs=10). For your specific needs, different parameters may need different values, but the same approach applies. Note that any time the SFCB configuration changes, the SFCB daemon must be restarted.

    First off, we back up the existing SFCB configuration file, so that the user can restore the original settings in case of any issues. Then we use sed to update the required parameters to their new values. A sample command is:

    
    sed -i "s/doBasicAuth:.*/doBasicAuth:   $doBasicAuth/g" $CFG_FILE
    
    

    What this line means is: replace the string matched by the regex (doBasicAuth:.*) with the new string (doBasicAuth: $doBasicAuth). Purely for readability, we include as many spaces before the value ($doBasicAuth, which is “false”) as were in the original SFCB configuration file. The /g switch instructs sed to perform the replacement for every match of the regex in the whole file; there will usually be only one match, so this is more of a safety measure ensuring that even if there are multiple instances of the same parameter in the file, the updates are consistent. sed is a powerful tool that is often overlooked in favor of awk or Perl, but for in-place string replacement in files, nothing really comes close to its power and versatility.

    Finally, we restart the SFCB service. Note that I consistently redirect the output of the commands (not just standard error, but all output) to /dev/null. While the verbose output of the commands can be useful for debugging during development, it is hardly fair to overload the customer with such extraneous messages; customize this as per your own needs.
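To see the replacement at work without touching a real ESXi box, the same sed invocations can be run against a scratch copy of the config (the temp file and the two parameters below are purely illustrative):

```shell
# Build a two-line scratch "config" and apply the same style of sed edits.
CFG=$(mktemp)
printf 'doBasicAuth:    true\nhttpProcs:      2\n' > "$CFG"

sed -i "s/doBasicAuth:.*/doBasicAuth:   false/g" "$CFG"
sed -i "s/httpProcs:.*/httpProcs:   10/g" "$CFG"

cat "$CFG"
# prints:
# doBasicAuth:   false
# httpProcs:   10
rm -f "$CFG"
```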

    Finally, we need to reboot the machine after the installation of the VIB file (again, this may not be the case for your own VIB file). I provide the user the option to reboot the machine at a later stage instead. For this, I make use of a nifty feature of various shells that is often under-appreciated: traps. The general form of the trap command is:

    
    trap '<your logic/function call>' <SIGNAL, such as SIGINT or simply, INT>
    
    

    For this specific script, I instruct the user to press <Ctrl+C> within 10 seconds to abort the reboot. This sends a SIGINT (or INT, for short), which the trap redirects to the reboot_canceled function; that function informs the user appropriately and exits the installer script normally. If the signal is not received within 10 seconds, the machine is rebooted.
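The trap mechanism is easy to demonstrate in isolation. In this sketch the script delivers INT to itself with kill (standing in for an actual Ctrl+C), and execution continues past the handler:

```shell
# Install a handler for INT, deliver the signal to ourselves, and show
# that the script survives the signal and continues past it.
on_int() {
    echo "reboot canceled by user"
}

trap 'on_int' INT
kill -INT $$        # stands in for the user pressing <Ctrl+C>
trap - INT          # restore default INT behavior

echo "continuing normally"
# prints "reboot canceled by user", then "continuing normally"
```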

    After the reboot, the user can then check the status of the VIB file to ensure that it has been installed successfully. It can be done with the following command (which, arguably, can be put in its own script and then executed by the user to check the status of the VIB installation):

    
    esxcli software vib list | grep -i my-cim-provider
    
    

    So that’s it – as simple as it can get on the new ESXi 5.x platform!

    Written by Timmy Jose

    May 12, 2013 at 9:39 pm

    Creating a service and a service watchdog using simple shell scripts in Linux


    Recently at work I was given a feature to support the customization and installation of OpenPegasus CIMOM (CIM Server) on Linux machines in binary mode. What this means is that instead of building from source code on the Linux machines (as would be the sane thing to do in view of the huge compatibility issues), it was decided to create the binaries on my development box, and then bundle only the required portions as part of an installation script. The main reason for this was the fact that we had a dependency on an external CIM Provider (QLogic), who obviously provided us only with the binaries built on a base Linux machine (specifically, RHEL 5.8).

    There were many interesting problems that arose due to library dependencies, OS/ABI incompatibilities, and GCC/GLIBC dependencies. I also learned a lot about the whole process of working with third-party vendors. I plan to cover all of that in a series of upcoming blog posts. For now, however, I would like to share some useful information about how I helped the installer team enhance their installation scripts by creating a service and a service watchdog for the OpenPegasus CIMOM bundled with the QLogic provider. For representative purposes, I will use the term “My Service” to refer to the hypothetical service. I will also provide the main logic of the relevant scripts that I wrote for the purpose, without violating any NDA restrictions of my workplace! So let’s get right to it then.

    Creating a service in Linux using a shell script

    Creating a service in Linux is a pretty simple task. You really just add execution privileges to the shell script, drop it into the /etc/init.d folder, and then invoke a series of commands. The code for the service script that manages the OpenPegasus (version 2.11.0) CIMOM bundled with the QLogic CIM Provider binaries is listed as follows:

    #!/bin/sh

    # chkconfig: 2345 90 10
    # description: My service
    # processname: myservice

    usage() {
            echo "Usage: service myservice {start|stop|status}"
            exit 0
    }
    
    export PEGASUS_ROOT=/opt/pegasus2.11.0
    export PEGASUS_HOME=$PEGASUS_ROOT
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PEGASUS_ROOT/lib
    export PEGASUS_PLATFORM=LINUX_IX86_GNU
    export PATH=$PATH:$PEGASUS_ROOT/bin
    export PEGASUS_HAS_SSL=yes
    
    case $1 in
    
    	start) $PEGASUS_ROOT/bin/cimserver
    		;;
    	stop) $PEGASUS_ROOT/bin/cimserver -s
    		;;
    	status) if [ `pidof $PEGASUS_ROOT/bin/cimserver` ]; then
    			echo "Running"
    		else
    			echo "Not running"
    		fi
    		;;
    	*) usage
    		;;
    esac
    
    

    Explanation:

    We start off with the usual shebang line pointing at the “sh” executable (#!/bin/sh). The following lines are quite interesting and worth explaining in a bit more detail. The # chkconfig: 2345 90 10 line informs the OS that we want this service script activated for Linux Run Levels 2, 3, 4, and 5 (check out “Linux Run Levels” for more information). The parameter 90 is the priority assigned to service startup (we usually want this to be a moderately high value), while the last parameter, 10, is the service stop priority (this can be a moderately low value). The specific values will depend on your service’s usage patterns. The # description line is optional, and gives the service a descriptive name. The # processname line is the name you will use for your service, and is usually the same as the script name.

    The rest of the logic is pretty simple: I want to support three options – start, stop, and status. For this purpose, I export the relevant environment variables in this script itself so that it does not pollute any other namespace (you could export them in ~/.profile, or ~/.bash_profile, or ~/.bashrc for instance if you want them to be globally available). Then I merely put the logic to start/stop/query the cimserver executable, which is the executable that actually represents the OpenPegasus CIMOM. The core logic of this service script is the command pidof $PEGASUS_ROOT/bin/cimserver, which returns the PID of the specified executable in the current environment.
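The pidof check can be tried standalone. The sketch below uses a deliberately non-existent process name, so it prints “Not running” wherever you run it; on the real box you would substitute $PEGASUS_ROOT/bin/cimserver:

```shell
# pidof prints the PID(s) of the named process and exits 0 if found;
# an empty result means the process is not running.
PROC=this-process-does-not-exist    # illustrative; use your real binary
if [ "$(pidof $PROC 2>/dev/null)" ]; then
    echo "Running"
else
    echo "Not running"
fi
# prints: Not running
```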

    To install this script as a service, the following commands are performed:

    #cp myservice /etc/init.d
    #chmod +x myservice
    #chkconfig --add myservice
    #chkconfig --level 2345 myservice on
    

    The #chkconfig --add myservice command is the one that actually adds your script as a Linux service. For this, the script must be executable (chmod +x might be too permissive; feel free to choose a more restrictive execution permission), and must be present in /etc/init.d (or at least have a soft link to it created in this directory). Then, finally, the #chkconfig --level 2345 myservice on command makes your service start automatically at system boot-up. This ensures that your service is always on so long as your Linux box is up. Neat!

    But what happens if the service crashes while the machine is still up? It certainly will not restart itself. For this purpose, I decided to add a service watchdog for “myservice”, as shown in the following section.

    Creating a service watchdog in Linux using a shell script

    The service watchdog’s responsibility is to monitor the main service (say, every minute or so), check its status, and then restart it if it is not running. This ensures a maximum downtime of a minute (or whatever value you chose) for your service. It is quite a nifty feature indeed. This is similar to the scenario where, in Windows, you would set the service properties to “Automatically Restart”. The code for the watchdog for “myservice” is given below:

    #!/bin/sh
    
    #chkconfig: 2345 90 10
    #description: watchdog for myservice
    #processname: myservice-watchdog
    
    # capture the cimserver PID up front (empty if the process is not running)
    MYSERVICE_PID=`pidof /opt/pegasus2.11.0/bin/cimserver`
    
    check_myservice() {
            # restart the service if the PID lookup came back empty
            if [ -z "$MYSERVICE_PID" ]; then
                    service myservice start
            fi
    }
    
    check_myservice
    
    usage() {
    	echo "myservice-watchdog {start|stop|status}"
    	exit 0
    }
    
    case $1 in
    	start ) if [ -z "$MYSERVICE_PID" ]; then
    			service myservice start
    		else
    			echo "myservice is already running"
    		fi
    		;;
    	stop ) if [ -n "$MYSERVICE_PID" ]; then
    			service myservice stop
    		else
    			echo "myservice is already stopped"
    		fi
    		;;
    	status) if [ -z "$MYSERVICE_PID" ]; then
    			echo "myservice is not running"
    		else
    			echo "myservice is running"
    		fi
    		;;
    	*) usage
    		;;
    esac
    

    Explanation:

    The logic for the watchdog might seem curiously similar to that of the service itself, and that is no accident. There were a number of reasons why I chose this approach:

    • The idea is to always monitor the state of the executable itself, and not the service. This ensures that if, for some reason, the service script returns spurious data, the watchdog can avoid spawning multiple instances of the executable, which would most likely fail anyway.
    • The watchdog is also installed as a service. This is not usually required, but in this case it needs to support the following options: start, stop, and status. In addition, the check_myservice function is the one used to monitor the service itself (actually, the executable).
    • The watchdog is triggered to be run every minute using crontab. This will only run the check_myservice function, whereas any direct invocation of the watchdog will have to supply any one of the following options: start/stop/status.
    • The idea is to always handle the executable indirectly via the watchdog (start/stop/status) rather than directly through the service itself, even if that is also possible. This is more of a best practice than a strict requirement.
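    The core of the watchdog boils down to “if the PID lookup comes back empty, restart”. That check can be sketched as a standalone function (restart_if_dead is an illustrative name, and echo stands in for service myservice start):

```shell
#!/bin/sh
# restart_if_dead takes the output of a PID lookup (e.g. `pidof cimserver`)
# as its argument; an empty value means the process is down.
restart_if_dead() {
	if [ -z "$1" ]; then
		echo "restarting myservice"
	else
		echo "myservice is alive (pid $1)"
	fi
}

restart_if_dead ""       # simulates a dead service
restart_if_dead "12345"  # simulates a live service
```

Note the quoting around "$1": with an unquoted empty variable, the test would silently misbehave, which is exactly the class of bug worth avoiding in a script that runs unattended every minute.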

    The watchdog is installed as a service using the following commands:

    #cp myservice-watchdog /etc/init.d
    #chmod +x myservice-watchdog
    #chkconfig --add myservice-watchdog
    #chkconfig --level 2345 myservice-watchdog on
    

    The explanation for these steps is the same as that for the installation of the main service itself. Note that the watchdog, too, runs as a daemon once installed.

    Then we need to create a cron job that will trigger the check_myservice function of the watchdog every minute. For this, the best option (since we are triggering the whole process through an installation script) is to create the cron job in a text file, place that file in the /etc/cron.d directory (where user cron jobs can be placed), and then restart the crond daemon process to make the new cron job visible to the OS, as follows:

    #echo "* * * * * root /etc/init.d/myservice-watchdog" > my.cron
    #echo "" >> my.cron
    #cp my.cron /etc/cron.d
    #service crond restart
    

    And that’s it! The most important bit to remember here is that the #echo "" >> my.cron line is required because of a quirk in the way crontab behaves: it expects a newline (or an empty line) after the last cron job in the file. If it is missing, crontab will not fail or throw an error, but will silently avoid triggering the job! Trust me, this is mental agony that you definitely do not want to experience. The cron job itself is pretty simple: call the watchdog every minute (read up on the syntax and semantics of cron jobs in Linux if you are confused by that line).
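    As an alternative to the two echo commands, the cron file can be written in one go with printf, which makes the trailing newline that crond needs explicit. (Note that entries placed in /etc/cron.d take an extra user field, root below; the /tmp path is just for illustration.)

```shell
#!/bin/sh
# Write the cron entry with an explicit trailing newline, then verify it.
printf '* * * * * root /etc/init.d/myservice-watchdog\n' > /tmp/my.cron

# tail -c 1 prints the last byte of the file; command substitution strips
# a trailing newline, so an empty result means the newline is present.
if [ -z "$(tail -c 1 /tmp/my.cron)" ]; then
	echo "trailing newline present"
fi
```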

    I hope that this serves a useful purpose for anyone that is planning to explore creating services and watchdogs using shell scripts in Linux.

    Written by Timmy Jose

    May 5, 2013 at 10:11 pm

    Handling the back button click action using JavaScript (in pages with fragment identifiers) to route the request to another page

    leave a comment »

    So there is this new issue at work where the browser back button is not behaving as desired. The idea is that when the user clicks the browser back button on any page, the user should be routed to the default login page. As much as that sounds like a symptom of a fundamentally flawed design, I feel it is much better than what most other sites do – disable the back button altogether (which can be circumvented, of course).

    The product in question is a standard JSP-based web application with the normal JS+CSS+HTML client-side stack. The layout is pretty simple – there is a main Login Page, after which the user is presented with a navigation pane on the left-hand side and a main pane displaying the results of the various options in the navigation pane. The idea is that whenever a user clicks the back button from any page, he or she should be presented with the Login Page. Easier said than done! First of all, there is no real event associated with the browser back button click (or a lot of other browser actions, for that matter) in JavaScript.

    There are plenty of hacks that can achieve most use cases. In this specific case though, the problem was a specific option on the navigation pane – the ‘Inventory’ option, which simply displays various objects as hyperlinks in the main pane. Upon clicking any of these objects, the URL uses fragment identifiers (of the format http://<host>:<port>/<main-url>#detailedInfo). This practically defeats the usual hacks (iframes, polling for hash changes, onbeforeunload, onunload, etc.). Since the browser’s history object contains the same base URL for both the main pane and the fragment identifier, options such as iframes and the onbeforeunload/onunload events fail completely. Watching hash changes (either by polling or via the basic window.onhashchange event) does not solve the problem by itself either, since the URL hash changes both on clicking the anchor for the fragment identifier (which is not what we want to catch) and on clicking the back button (or the alt+left-arrow combination). However, unlike the many trolls that abound on the various forums online, I present to you an actual solution to the problem at hand, which may be customized to suit any specific situation as the case may be.

    For my demo, I have created a base page – Page1.html which simulates the main page. Page2.html contains the logic for detecting back-button clicks (note that since the forward button is not enabled, that case does not pose a problem. Also, the advantage with this method is that refreshing the page does not cause any unexpected or anomalous behavior either). I have also created another page, Login.html which simply simulates the default Login Page.

    1. Page1.html

    <!DOCTYPE html>
    <html lang="en">
    	<head>
    		<meta charset="utf-8"/>
    		<title>Page 1</title>
    	</head>
    	<body>
    		<h1>Welcome to the Main page</h1>
    		<a href="Page2.html">Page 2</a>
    	</body>
    </html>
    

    2. Page2.html

    <!DOCTYPE html>
    <html lang="en">
    	<head>
    		<meta charset="utf-8"/>
    		<title>Page 2</title>
    		<script type="text/javascript">
    			var origURL = window.document.location.href;
    			var origFileName = origURL.substring(origURL.lastIndexOf("/") + 1, origURL.length);
    		</script>
    	</head>
    	<body>
    		<h1>Welcome to the Details page</h1>
    		<a href="#detailedInfo">Details</a>
    		<p id="details">Details here!</p>
    		<script type="text/javascript">
    			window.onhashchange = function() {
    				var url = window.document.location.href;
    				var fileName = url.substring(url.lastIndexOf("/") + 1, url.length);
    
    				if (fileName.search(origFileName) != -1
    						&& fileName.search("#detailedInfo") != -1) {
    					// Still on the same page - do nothing.
    				} else {
    					// This would be a proper URL when deployed on a web server.
    					window.location.replace("Login.html");
    				}
    			};
    		</script>
    	</body>
    </html>
    
    

    3. Login.html

    <!DOCTYPE html>
    <html lang="en">
    	<head>
    		<meta charset="utf-8"/>
    		<title>Login Page</title>
    	</head>
    	<body>
    		<h1>Welcome to the login page</h1>
    	</body>
    </html>
    
    

    Explanation: The logic here hinges upon the fact that most modern browsers (this code has been tested on Firefox 18.0.1, Chrome 24.0.1312.57, Opera 12.00, and Internet Explorer 9) support the window.onhashchange event. This event is basically generated when the location hash (in simple terms, the hash of the URL of the current window) changes. Thankfully, this also includes the case where the fragment id of the anchor target is appended to the main URL.

    Thus, in Page2.html, I simply get a handle to the original page name when I enter the page, and then check if the new page name (which includes the fragment identifier) contains both the original page name and the fragment identifier. If so, it means that we are still on the same page, and so we do nothing. Otherwise, it means that the back button has been clicked (or invoked through history.back(), or through the keyboard action, alt+left arrow). In this case, we simply change the location of the current window to the Login Page. And that’s it!
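    The page-name comparison at the heart of Page2.html can be pulled out into plain functions and exercised with bare strings, no browser required (fileNameOf and samePage are names I have made up for this sketch; the check itself mirrors the onhashchange handler above):

```javascript
// Extract the file-name portion of a URL (everything after the last '/').
function fileNameOf(url) {
	return url.substring(url.lastIndexOf("/") + 1, url.length);
}

// We are "still on the same page" only if the new name contains both the
// original page name and the fragment identifier.
function samePage(origFileName, newUrl) {
	var fileName = fileNameOf(newUrl);
	return fileName.search(origFileName) != -1
		&& fileName.search("#detailedInfo") != -1;
}

var orig = fileNameOf("http://localhost/Page2.html");
console.log(samePage(orig, "http://localhost/Page2.html#detailedInfo")); // true
console.log(samePage(orig, "http://localhost/Login.html"));              // false
```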

    P.S: The fragment identifier, #detailedInfo has been hardcoded in this snippet, but it need not be so. We can simply check for the main page, and any fragment identifier, to ascertain if we are still on the same page or not. Note that this is a very specific case – any other situations beyond this case have to be handled in their own right.

    P.P.S: While Firefox supports the “contains” method (in the style of Java) for substring matching on string objects, the other browsers do not; the “search” method, however, is supported by all of them, including Firefox itself. So this snippet works uniformly across all browsers.
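    One caveat worth remembering about “search”: it treats its argument as a regular expression, whereas “indexOf” does a purely literal substring match. For this page-name check the difference is mostly harmless, but it is easy to demonstrate:

```javascript
// '.' is a wildcard for search (regex semantics), but a literal for indexOf.
var tricky = "Page2_html";
console.log(tricky.search("Page2.html"));  // 0  (the '.' matched the '_')
console.log(tricky.indexOf("Page2.html")); // -1 (no literal match)
```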

    Written by Timmy Jose

    February 1, 2013 at 9:22 pm

    Posted in JavaScript


    Configuring Eclipse to run standalone JavaScript files (Using Node.js/Google V8 Engine)

    with 10 comments

    I recently started studying JavaScript in greater detail so that I could work on some side projects of my own. My newfound interest was sparked in large part by the wonderful server-side JavaScript framework, Node.js. However, while working through the various tutorials that I had collected for it, it became painfully evident that creating anything non-trivial in standalone JavaScript was a pain in the proper place. This is hardly surprising, since the entire life-cycle of JavaScript has primarily been within the confines of the browser. However, quickly tiring of embedding snippets of code within the <script> tags in HTML pages to test out various concepts of the language, I began looking for alternatives.

    The first obvious choice was the excellent Firefox Scratchpad (Tools->Web Developer->Scratchpad. I am using Firefox 16.0.2, but this has been always the location of Scratchpad ever since I can remember). This is a wonderful piece of software that works for most of the scenarios while learning JavaScript, but falls short in terms of useful options such as debug options, or linking script files together in a modular fashion.

    The next option that I evaluated was the eval support provided by Firebug. This is a far more advanced tool than Scratchpad, but again, when the size and complexity of the code goes beyond a certain point, it is essentially doing something that it was not designed to do.

    What I really wanted in this specific case was complete IDE support for executing JavaScript projects. Ideally, I would use Eclipse as the IDE, with a JavaScript perspective for all the formatting and validation bits, and link an external tool to execute the script files. Getting the JavaScript perspective to work on the version of Eclipse that I am using, Juno, was a breeze. The latter part – getting a suitable engine to run standalone JavaScript code, and getting it to work with Eclipse – was the harder bit. Having begun tinkering with Node.js, I saw that its engine is basically a wrapper around Google’s V8 JavaScript Engine. So now I had two options: follow the elaborate set of steps listed on that site and generate binaries using Visual Studio (while hoping for the best), or simply use Node.js’s own executable wrapper! A little bit of Googling, and I found the following site – http://www.epic-ide.org/running_perl_scripts_within_eclipse/entry.htm – which made life much easier for me. The example given is for Perl, but the steps work perfectly for JavaScript as well.

    Steps to configure Eclipse to work with Node.js’s JavaScript engine

    1. Open the ‘External Tools’ window (Run->External Tools->External Tools Configuration)


    2. In the ‘Name’ field, enter a name for the new configuration (such as ‘JavaScript_Configuration’)


    3. In the ‘Location’ field, enter the path to the windows executable (C:\WINDOWS\system32\cmd.exe on my Windows 7 machine)


    4. In the ‘Working Directory’ field, enter ‘C:\WINDOWS\system32’. This is because we are referring to the executable in the ‘Location’ field as ‘cmd.exe’, for which this is the working directory.


    5. In the ‘Arguments’ field, we need to add the following string:

    /C "cd ${container_loc} && node ${resource_name}"


    Obviously, the ‘/C’ at the beginning of the line is the flag that requests the cmd.exe tool to execute the supplied string, and then terminate. The ${container_loc} field refers to the absolute path of the currently selected resource’s (JavaScript script file in this case) parent, and the ${resource_name} variable corresponds to the name of the currently selected resource (the JavaScript script file). Check out this site for more variables associated with the External Tools configuration – http://help.eclipse.org/juno/index.jsp?topic=%2Forg.eclipse.platform.doc.user%2Fconcepts%2Fconcepts-exttools.htm.

    Of course, we assume here that the “node” executable is available through Windows’ PATH environment variable.
    And that’s it, we’re done! To check that everything is working as expected, I create a sample file, test.js, in a new JavaScript project, containing the following simple code snippet:

    (function() {
    	console.log("Hello, World!");
    })();
    

    When we execute this file (using Run->External Tools->JavaScript_Configuration), we see that it works perfectly!


    And of course, this approach can be applied to various other languages that are not supported by default by Eclipse, or for which there is no suitable Eclipse plugin available.

    Written by Timmy Jose

    December 4, 2012 at 1:58 pm

    Posted in JavaScript


    A couple of projects upcoming!

    leave a comment »

    So it has been a busy last few weeks. Well, not busy with work as such but a semblance of work. Just one of those periods where the time passes and one feels stressed out but in the final reckoning, nothing of substantial productivity stands forth. It has been a rather boring last few weeks and I am itching for some real action!

    I got my copy of Allen Holub’s ‘Compiler Design in C’ the other day and I could not be more thrilled! It was just the book that I needed for a long-pending pet project of mine, though what shape this specific project will take over the course of the next few months remains to be seen. I am thinking it is going to be a rather interesting experience all the same. The first project that I am undertaking is a compiler design and implementation one – I had originally planned for the Arduino platform for two reasons: 1. There seems to be a severe paucity of mid-level languages to program in on the Arduino family of platforms, and 2. It would be a much simpler exercise in some ways (minimal functionality set, procedural language) and greatly complex in others (strict memory management, high-level syntax and low-level functionality). I feel it would be really useful and educational at the same time – a perfectly sound use of my precious time! However, until I get down to the actual design phase, it remains open to new thoughts. First, I have to plow through the hundreds of man-hours of theory and hands-on work. I relish the mere thought of it. I also plan to keep this blog updated as I progress through the project. I would give the theory around a month of effort and the actual project perhaps around three months, tentatively. After all, I am not planning to create a crude, low-performance C compiler for whatever the target platform may be, but a full-fledged programming language.

    The second project is on a much higher level. I purchased a couple of domain names a couple of months back and I have been waiting for some free time to finally get down to this project! I want to create my own website for my URL – http://www.timmyjose.com – using Python, Django, JavaScript and CSS. The backend, templating engine, and other details are still to be decided. This should serve me well on two fronts – refresh my knowledge of Python and Django, and give me a chance to implement these technologies in a real-world project. And get me to sit my behind down at last and actually learn JavaScript and CSS for good! Plus, what better way to showcase your own talents than on your own website, right? I would give this project anywhere between three to six months since it would run in parallel with the other project. I will keep this blog updated with my progress on this one as well – my learnings, my failures, and my successes!

    And of course, there are tons of stuff that I have to blog about other than these two projects, including, but not restricted to, the topics that I had promised in my earlier posts to tackle. I should be able to maintain a more or less consistent tempo with my blog posts from now on. Fingers crossed!

    Written by Timmy Jose

    March 23, 2012 at 8:54 pm

    Posted in Uncategorized
