Friday, May 6, 2016

Learning (Part 2)

I recently started studying and experimenting with a few of the current features of Swagger.

My next goal was to create a docker container to hold a few Swagger tools.

Since it had been a long time since I last used docker, I refreshed my skills by creating a tiny little Dockerfile which built a container that holds Ubuntu plus one package: iputils-ping.  I then published it to the public Docker Hub registry.

This simple exercise would remind me how to build and publish an image, and allow me to ensure the running container's network interface and DNS lookup were working. More importantly, it would also be a starting point for adding the Swagger tools.

Perhaps I should have called this article Docker 101 :-)

Here's what I did...

Create Local Dockerfile and Docker Container

On a linux machine with the Docker Client installed, I created a subdirectory named swagger0/ and created this file named Dockerfile inside it.  You are welcome to copy/paste it.

# Dockerfile to create a simple sandbox with
# ping installed.
# Build an image
#    docker build --tag swagger0 .
# Run an instance
#    docker run -t -i swagger0
# Inspect
#    docker images
#    docker ps -a

# Start with the latest Ubuntu OS
FROM ubuntu

# Apply latest OS updates
RUN apt-get -y update
RUN apt-get -y dist-upgrade
RUN apt-get -y autoremove 

# Install something small 
RUN apt-get -y install iputils-ping  

Next, I cd'd into the directory containing the Dockerfile, and built a docker image from it:

        docker build --tag swagger0 .

I confirmed by showing the list of images:

        docker images

  swagger0    latest  8b00a84ba944  8 minutes ago  161.2 MB
  ubuntu      latest  686477c12982  38 hours ago   120.8 MB

Then I started an instance of the image and opened an interactive shell inside it with this command:

        docker run -t -i swagger0

I was able to ping Google, and confirmed this local container and network were alive.

Save Dockerfile

I copied my Dockerfile to a remote/backup location, since I planned to delete the current working directory where it resided.

Publish to Docker Hub

I created a new account, named btfsplk, at Docker Hub.

To prepare my local image for publishing to Docker Hub, I tagged it with my Docker Hub account name.

        docker tag 8b00a84ba944 btfsplk/swagger0:latest

The resulting new image appeared in my list:

        docker images

  btfsplk/swagger0  latest  8b00a84ba944  10 minutes ago  161.2 MB
  swagger0          latest  8b00a84ba944  10 minutes ago  161.2 MB
  ubuntu            latest  686477c12982  38 hours ago    120.8 MB

I logged into my new Docker Hub account:

        docker login

For future reference, it told me that it saved my credentials in a local file.


I uploaded my image to Docker Hub:

        docker push btfsplk/swagger0

To confirm the upload, I deleted all local docker artifacts:

    docker ps -a
    docker rm  9d8s849t0

    docker images
    docker rmi ab23nskr9 etc etc

And deleted the directory on my disk (confirm you have backed up the Dockerfile from this location first!):

    rm -rf swagger0

Download from Docker Hub

Finally, the acid test:  I pulled my image from the Docker Hub registry into a clean sandbox directory.

    docker run -i -t btfsplk/swagger0

It found the image, downloaded it, and started it.  I got dropped into an interactive shell.  I confirmed ping was installed as expected:

root@4146b58b8956:/# ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=35 time=48.1 ms
64 bytes from icmp_seq=2 ttl=35 time=48.1 ms


Monday, May 2, 2016

Learning (Part 1)

I recently studied and experimented with a few of the current features of Swagger.

In the past, I heard different people talking about using Swagger for vastly different purposes. I was confused. Some folks were annotating their source code and using Swagger to produce documentation for their REST services. Others were using Swagger docs to automatically generate server-side product code. Others said they were generating mobile client-side code from a Swagger doc. What was going on?

It turns out they were all right.

The basic concept of Swagger remains the same.  It's a simple standardized way to represent a REST interface in a document. However, there are now a significant number of tool packages available to produce and consume Swagger docs for various purposes.


First, I found a few blog articles from PointSource which caught my interest.

I also read a good portion of the Swagger spec:

Name change

Swagger has been donated to the open source community and renamed to Open API Specification.

Getting started

I decided my first goal was to use the online Swagger Editor to define a simple hello world API, and then auto-generate a server-side instance and test it...

Using Swagger Editor

The Swagger Editor is an online tool:

I poked around with it a little while.  It's pretty straightforward.  The left panel is an editor where you enter the representation of the API.  The right panel is a live rendering of the Swagger docs created from the left panel.

Create the simplest API you can in the left panel.  Scroll the right panel up and down until you like everything.

Export your definition to a local file for safe-keeping by clicking the upper left tab: File-> Download YAML.

Here is my YAML file.  You are welcome to use it.

# This is an example Swagger API spec in YAML.
# Note: the indentation, the /healthcheck path name, and the 200-response
# property names were reconstructed; everything else is as originally written.
swagger: '2.0'
info:
  title: Hello World API
  description: This is the exciting Hello World API
  version: "1.2.3"
schemes:
  - https
basePath: /v001
produces:
  - application/json
paths:
  /healthcheck:
    get:
      summary: Returns a known response to indicate service status.
      description: The healthcheck endpoint returns status of the running service.
      tags:
        - User
      responses:
        200:
          description: General information about the running service.
          schema:
            type: object
            properties:
              code:
                type: integer
                format: int32
              message:
                type: string
        default:
          description: Unexpected error
          schema:
            $ref: '#/definitions/Error'
definitions:
  Error:
    type: object
    properties:
      code:
        type: integer
        format: int32
        description: Error code.
      message:
        type: string
        description: Human-readable error message.
Auto-generating a server-side instance

My next step was to download a node.js server-side instance which implemented my API.

From the Swagger Editor, click Generate Server-> Node.js.  It produces a zip file.  Download it to your machine.

Unzip the file.  Download the dependencies with 'npm install'.  Start the server with 'node index.js'.  Note that the server starts on port 8080 by default.

Confirm it works

The easiest way to confirm the endpoint works is to issue a REST request to it with cURL.  Notice it renders the variable names specified in the YAML, and adds dummy values to them.


This completes the easy part:
  • Manually defining a REST API using Swagger Editor
  • Auto-generating a server-side instance in Node.js
  • Verifying the server-side instance delivers the endpoint
I hope I get time to keep studying...

Friday, October 9, 2015

How to deploy a Hello World node.js app from your laptop to IBM BlueMix


I recently had the opportunity to begin learning IBM BlueMix.

To get started, I wanted to deploy a trivial Hello World program written in javascript/node.js.  And I wanted to do as many tasks as possible from the command line instead of the BlueMix web interface, because the web interface is useless for scripting and automation.

It wasn't easy.  I couldn't find complete and simple docs that worked.  Many docs I found were out-of-date, or focused on selling services.  It took me two days, working sporadically, to figure this out.  Thankfully, I found answers from bloggers (links included below).

The steps below worked on my Linux laptop (Ubuntu 12.04).  Mac should be similar.

I obfuscated my real email address in the listings below for privacy.

Required:  Get yourself an account at IBM Bluemix.

Required:  Install the Cloud Foundry command line interface tool, cf, on your laptop.  Fetch it (as of this writing, I installed CF version v6.12.4, Linux 32-bit binary), un-tar it, and add it to your path.   IBM Doc: 

Recommended:  Install node.js on your laptop and add it to your path.  Reason:  It can be used to test the hello world locally on your laptop.

Part 1.  Prepare node.js app on your laptop

First create a node.js app and package.json file in the same folder on your laptop...

Create a new folder on your laptop for the app.

Create a file in that folder named hellohttp.js.  I found a trivial HTTP server app written by Tim Caswell.  Browse here and search for 'Hello HTTP':     Paste the contents into the file and save it.

Important: To support port mapping within BlueMix, edit the app to use environment variable VCAP_APP_PORT as shown below.  This code will listen on port 8000 on your laptop (where the env var is NOT defined), or on the BlueMix-defined port when deployed to BlueMix (where the env var IS defined).

    // Source:  Hello HTTP by Tim Caswell

    // Load the http module to create an http server.
    var http = require('http');

    // Configure our HTTP server to respond with Hello World to all requests.
    var server = http.createServer(function (request, response) {
      response.writeHead(200, {"Content-Type": "text/plain"});
      response.end("Hello World\n");
    });

    // Listen on port VCAP_APP_PORT (if defined) or 8000 (default).
    var port = process.env.VCAP_APP_PORT || 8000;
    server.listen(port);

    // Put a friendly message on the terminal
    console.log("Server running at http://localhost:" + port + "/");

Next create a file named package.json in the same folder as the hello http app.  Copy/paste the following content into the file.  Edit your personal info.

Note: The most important statement in this file is the scripts.start definition.  BlueMix issues this command to start your app.

I found this solution in a blog post by Brian Innes.  Brian documented several ways to start apps in BlueMix.
Of course, as soon as I discovered Brian's solution, I also found the package.json technique in the official docs.  Ugh.

    {
      "name": "hellohttp",
      "version": "0.0.1",
      "description": "hellohttp",
      "author": "",
      "contributors": [
            {    "name": "myname",
                "email": "" }
      ],
      "scripts": {
        "start": "node hellohttp.js"
      }
    }

Local test

    If you installed node.js on your laptop, test the hellohttp.js app by running it locally.
            node hellohttp.js
        See response
            Server running at

            netstat -na | grep LIST | grep 8000
            tcp   0   0*   LISTEN

        Verify again
            Browse to http://localhost:8000, see response 'Hello World'.

Part 2.  Upload node.js app to BlueMix

From the same folder, issue these Cloud Foundry cf commands...

Set the API

    cf api

    Get response

        Setting api endpoint to
        API endpoint: (API version: 2.27.0)  
        Not logged in. Use 'cf login' to log in.

Log in

    cf login -u -o -s dev
        where -o is org, -u is username (both are the same when you first start using BlueMix), and -s is space dev.
    and enter your password when prompted

        Password> ********

    Get response

        Targeted org
        Targeted space dev
        API endpoint: (API version: 2.27.0)  
        Space:          dev

Deploy the app.  The command 'cf push' will upload all files in your folder to BlueMix (recursively).

    cf push hellohttp

    Wait a few minutes to receive this entire response...

    Creating app hellohttp in org / space dev as
    Creating route
    Binding to hellohttp...
    Uploading hellohttp...
    Uploading app files from: /home/myname/sandbox/nodejs/hellohttp
    Uploading 1.1K, 2 files
    Done uploading              
    Starting app hellohttp in org / space dev as
    -----> Downloaded app package (4.0K)
    -----> IBM SDK for Node.js Buildpack v2.5-20150902-1526
           Based on Cloud Foundry Node.js Buildpack v1.5.0
    -----> Creating runtime environment
    -----> Installing binaries
           engines.node (package.json):  unspecified
           engines.npm (package.json):   unspecified (use default)
           Resolving node version (latest stable) via 'node-version-resolver'
           Installing IBM SDK for Node.js (0.12.7) from cache
           Using default npm version: 2.11.3
    -----> Restoring cache
           Loading 1 from cacheDirectories (default):
           - node_modules (not cached - skipping)
    -----> Checking and configuring service extensions before installing dependencies
    -----> Building dependencies
           Pruning any extraneous modules
           Installing node modules (package.json)
    -----> Checking and configuring service extensions after installing dependencies
    -----> Installing App Management
    -----> Caching build
           Clearing previous node cache
           Saving 1 cacheDirectories (default):
           - node_modules (nothing to cache)
    -----> Build succeeded!
           └── (empty)
    -----> Uploading droplet (14M)
    0 of 1 instances running, 1 starting
    1 of 1 instances running
    App started
    App hellohttp was started using this command `./vendor/initial_startup.rb`
    Showing health and status for app hellohttp in org / space dev as

    requested state: started
    instances: 1/1
    usage: 1G x 1 instances
    last uploaded: Wed Oct 7 15:11:49 UTC 2015
    stack: cflinuxfs2
    buildpack: SDK for Node.js(TM) (ibm-node.js-0.12.7)
         state     since                    cpu    memory        disk          details  
    #0   running   2015-10-07 11:12:56 AM   0.0%   55.2M of 1G   48.2M of 1G

    Determine the URL for your app.  Browse to the BlueMix web interface, look for your new app, and see the URL (in blue):

    Browse to the URL and see 'Hello World'.  For example, mine was:


It worked.  Claim success.

Sunday, September 28, 2014

Ello For Dummies

A new social network is gaining traction.  It's called Ello.

It's still in beta, so you need an invitation.  Get one and sign-in.

Here are a dozen basic instructions on how to use ello.  It's enough to get started. (Click photos to enlarge.)   And there are more references listed at the bottom.


Go to main page

Set profile photo

Set header photo

Post a comment

Post a photo

Post comment or photo to another person

Delete a post or photo

Find people by search

Find people by follows

Find people by posts

Follow a person

Change classification as friend or noise

View posts by all people classified as friends

View posts by all people classified as noise

Block spam

Hide and expose the left side panel

Log out

For more information, here are more detailed tutorials I like...

Thursday, August 14, 2014

BMIR on Mobile Devices

Updated for 2015...

BMIR (Burning Man Information Radio) is the official radio station of the annual Burning Man event.  BMIR broadcasts at the event on 94.5 FM, and streams over the internet year-round to listeners worldwide.

There are several ways to listen, on mobile devices and computers...

BMIR's Android App

Since 2011:

BMIR's iPhone/iPad App

Since 2014:

iHeartRadio for all mobile devices

Coming soon... 

Computer browsers

You can also listen on a computer:
The Flowplayer should start playing automatically, or you can click on the ListenNow link.

Tuesday, May 27, 2014

Determining web service health by inspecting JSON response data in an Uptime Plugin.


This article demonstrates how to write a Plugin for Uptime which:
- analyzes the contents of the JSON response payload from a target web service, and
- uses the JSON information to determine and report health of the service to Uptime.

A sample plugin is provided.  The sample queries the health of the well-known Google Geocoding REST service by inspecting the contents of its JSON response.


Uptime is an application which continuously evaluates the health of web services.

To determine the health of a service, Uptime periodically issues web requests to the target service.

In its simplest form, Uptime declares the service is up if a successful response is received from the service.  Otherwise, it declares the service is down.

Uptime also provides a plugin interface which allows Uptime to be extended to perform custom operations.  This article exploits the plugin interface to inspect the contents of the JSON response from the target web service.


I wanted to use Uptime to report the health of a custom web service based upon information received in a JSON response provided by the service.

Instead of relying exclusively on receipt of a 200 response, I also wanted Uptime to analyze the contents of the JSON response payload for my custom web services.  Basically, I wanted to read the contents of the JSON response and look for specific key words and associated values "pass" or "fail".

The sample plugin described below shows how to inspect the response JSON.
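In pseudocode terms, the evaluation I wanted boils down to parsing the payload and confirming that every reported check passed.  Here is a minimal sketch of that logic in Python (the payload shape and the "checks" key are hypothetical stand-ins for whatever your own service returns; the real plugin below is written in javascript):

```python
import json

def service_is_healthy(body):
    # Parse the JSON payload and declare the service healthy only if
    # every check it reports has the value "pass".
    # NOTE: the "checks" key and payload shape are made up for
    # illustration; substitute your own service's key words.
    data = json.loads(body)
    checks = data.get("checks", {})
    return bool(checks) and all(v == "pass" for v in checks.values())

# Example payloads, as this sketch assumes them:
healthy   = '{"checks": {"database": "pass", "queue": "pass"}}'
unhealthy = '{"checks": {"database": "pass", "queue": "fail"}}'
```

An empty or missing "checks" section counts as unhealthy, which is usually the safer default for a monitor.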

Uptime setup

Ubuntu:  I set up a 32-bit machine with Ubuntu Linux 12.04.

As root, I installed and set up the following:

Mongo DB:

Mongo User/Password: Set a database user name and password into Mongo:

/root> mongo
> use uptime
> db.addUser('myUser','myPassword');
> exit

Git:  apt-get -y install git

G++:  apt-get -y install g++  (note: g++ was required for 'npm install')

As user, 

Node.js:  I fetched and unzipped the latest Node JS, and added it to user and root paths.


docs:   Scroll down to "Installing Uptime"

You will get to the important command:  git clone git://

In my case, uptime was installed to directory /home/user/uptime/...

Set the mongo database user name and password into Uptime:

/home/user> vi uptime/config/default.yaml

mongodb:
  user:     myUser
  password: myPassword

Start uptime

/home/user> cd uptime
/home/user/uptime> node app.js


Browse http://<your_hostname>:8082/  Verify "welcome to uptime"

Click to create your first check.  Verify Uptime correctly monitors the target.

Uptime is now set up properly.  It is ready to install the sample plugin.

Note:  Henceforth, all commands are issued as user.

Hello World Plugin

I created a hello world plugin to get started.

As a sample, the plugin is designed to evaluate the health of a well known REST service, the Google Geocoding Service.

Uptime sends a web request to the service.  You specify the Google URL in the Uptime Check configuration screen (see below).

When a response comes back from Google, Uptime passes each JSON response to the plugin.  The plugin reads and evaluates the contents of the response, and reports results back to Uptime.  In this sample, it passes if it sees "status":"OK".
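The plugin itself is javascript, but its pass/fail decision is simple enough to restate in a few lines of Python (the function name and sample bodies here are illustrative, not copied from index.js):

```python
import json

def check_google_geocode(body):
    # The plugin's health test, restated: the service is considered
    # "up" only when the parsed response contains "status": "OK".
    response = json.loads(body)
    return response.get("status") == "OK"
```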

Install the plugin

Fetch the sample hello world plugin here:

Unzip it to create


Edit the Uptime config file, and add the plugin name:

/home/user/uptime> vi config/default.yaml

 - ./plugins/helloworld   <--- add this line
 - ./plugins/console
 - ./plugins/patternMatcher

Inspect the plugin

There are three files in directory plugins/helloworld/...   That's all it takes (though you can add more if you like).


This was copied unchanged from the httpOptions plugin.

It assists with presentation of the user configuration screen in Uptime.


This file is copied from httpOptions and slightly modified.

It also assists with presentation of the user configuration screen.


This file was adapted from a sample plugin developed by Alexander O'Donovan-Jones on github (see acknowledgements)

It contains two interesting sections.

exports.initWebApp.  This section assists with presentation of the user config screen.

exports.initMonitor.  This section parses the JSON response object according to user specified options.  (This is most likely the section you will change heavily to evaluate your own JSON response)

How to run it

Restart Uptime if it was still running.

Browse to your uptime.

Click tab Checks-> Create Check.  Type the following values.  Accept defaults for the rest.  Click 'Save' when done.

Type:  http
Name:  Google Geocoding
Polling interval: 15 s

Edit the Check again.  Click tab Checks-> Click Google Geocoding-> Click button Edit.  A new text box will appear: Hello World Options.  Type the following values on two lines, with no quotes (YAML format).  Click 'Save' when done.

Hello World Options:

geocode: google
trace: true

Restart Uptime again.

I like to pipe results to a log file, so that I can review it easily.

/home/user/uptime> node app.js > /tmp/uptime.log

I also like to monitor live progress in another command-prompt terminal

/home/user/uptime> tail -f /tmp/uptime.log


Every 15 seconds or so, the log should display messages from the JSON results analysis code in index.js.  Examples:

Evidence that user options are properly set in the config, and properly presented to the plugin:

on.PollerPolled: Entry.
on.PollerPolled: options: { trace: true, geocode: 'google' }
on.PollerPolled: t=true
on.PollerPolled: geocode=google
on.PollerPolled: url=,+Mountain+View,+CA&sensor=false

Evidence that the JSON response from Google Geocoding has been properly received and parsed.

checkGoogleGeocode: Entry. body: { results:
  [ { address_components: [Object],
      formatted_address: '1600 Amphitheatre Parkway, Mountain View, CA 94043, USA',
      geometry: [Object],
      types: [Object] } ],
 status: 'OK' }
checkGoogleGeocode: status: 'OK'
checkGoogleGeocode: Exit. Success.

And, drumroll please, Uptime should display a green indication that the service is up.


Add more javascript to checkGoogleGeocode() in index.js.  Inspect other values in the JSON.  Restart Uptime.  Verify.

Edit the plugin configuration.  Disable verbose tracing (trace: false).  Save.  Restart Uptime.  Verify fewer log messages.

Play with it.  Learn how it works.


The sample hello world plugin demonstrates how to query a REST service and evaluate the contents of the JSON response, using a well-known service provided by Google, Inc.

Once this works, you have all the secrets you need to query your own REST services and evaluate their JSON responses.  Change the URL in the Uptime Check configuration.  Then modify index.js to evaluate your own JSON responses.

Be nice

Don't bash the Google Geocoding service continuously.  Stop uptime or delete the plugin when you are not studying it.


Many thanks to Francois Zaninotto for creating, publishing, and supporting Uptime.

Many thanks to Alexander O'Donovan-Jones for creating and sharing a plugin named jsonValidator.

And thanks to Google, Inc for providing the Google Geocoding service used in this sample.

Saturday, March 8, 2014

Chart.js As A Service


This article shows how to quickly and easily present continuous test results in a bar chart using ChartJS.


I recently helped our IT department debug and fix a networking problem in a set of lab test machines.  Over several weeks, I wrote test scripts which continuously checked the status of the networking problem.

As I steadily grew the complexity of the scripts, I found myself spending more and more time answering questions from the IT team, manually interpreting the results in my log files.

When it hit my threshold of pain, I searched and found an easy way to present the results visually.  This way, the IT team could browse and view results themselves, on-demand, without bothering me.  Score!

Technology Search

Fixing the networking problem was an ad-hoc effort, so I did not have a formal continuous-test framework with fancy dashboard where I could post the data.  I needed to set up my own.

I researched a few data warehouse and data mining products and services, but they were too complicated for my simple needs.

Eventually, I happened upon Chart.js.  I hadn't known about it or used it before.  It's great.

What is Chart.js?

Chart.js is a small javascript library file.  It is available as a free opensource project under the MIT license at

To use it, you write some HTML and javascript, provide your data in JSON format, and ChartJS will render your data in a bar chart or other type of graph which you specify.

All you need is a little DIY scripting, an apache web server, and a browser.


I learned everything I needed to know from the official docs:

And I found a perfect tutorial for my needs:

Getting started

I recommend you download Chart.js and replicate the examples in the tutorial.  Figure out how it works, and create a chart you like in an HTML file with canned data.   Then you can move on to updating the data automatically...


Here is how it all works...

Now let's put the pieces together...

JSON test results

The first thing I did was enhance my test scripting to output its results in JSON format.  It was already logging these numbers; the change was to also write them to a new JSON file.

For my purposes, I stored data in this format (only three samples shown for simplicity, and sanitized for privacy):

 { "time":"2014-0306-1801", "rc":{ "ok":11, "error0":0, "error1":0 } },
 { "time":"2014-0306-1935", "rc":{ "ok":8,  "error0":2, "error1":1 } },
 { "time":"2014-0306-2029", "rc":{ "ok":11, "error0":0, "error1":0 } },

To bootstrap things, I manually created a flat file named results.json.  I typed the opening square bracket for a JSON list, and saved the file.

Each time my test scripting finished running a test cycle, it appended a new line to the bottom of the file.  My test scripts were written in the bash language, so my code looked like this.

JSON_STATS="  { \"time\":\"${DATE_NOW}\", \"rc\":{ \"ok\":${OK}, \"error0\":${ERROR0},  \"error1\":${ERROR1} } },"

    echo "${JSON_STATS}" >> ${JSON_FILE}

You can do this with whatever scripting language you like, and I'm sure it's easier than bash.

Note that I *appended* the results to the file.  This preserves history of earlier runs.

Also, JSON aficionados will note that each line ends with a comma, and there is no closing square bracket to terminate the JSON list. In its present form, this means the file contents are not correctly-parseable JSON.  Not to worry; this was intentional, to allow easy appending.  It is handled gracefully in the next step...

Conversion from JSON to HTML

Each time my test script appends new results to the JSON file, it calls a python program I wrote to convert the data from JSON to HTML.

There are two pieces of this: swizzling the results data for use by Chart.js, and creation of the HTML file...

Swizzle the JSON data

The output from my testcase scripting is organized according to time.  That is, each line contains a timestamp along with results for that period of time.

However, for a bar chart, Chart.js requires the data to be organized differently.  It requires a list of labels to be shown along the bottom of the chart, along with lists of data for each bar on the chart.

To do this conversion, the python program opened and read file results.json and swizzled the data.

To accommodate the fact that my results.json file does not contain correctly-parseable JSON, the python script first read in all the lines of the file in string form, deleted the trailing comma, and appended a closing square bracket.  Voila, instant parseable JSON.

It handed the string to simplejson which parsed it and converted it to a list of objects.

jsonList = simplejson.loads( jsonString )
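Put together, the repair-and-parse step might look like this sketch (using the stdlib json module in place of simplejson; the loads() call is interchangeable here, and the function name is my own):

```python
import json

def load_results(raw):
    # The results file intentionally ends with a trailing comma and no
    # closing square bracket, so repair it before parsing: strip
    # trailing whitespace, drop the final comma, and append the
    # missing ']' to make it valid JSON.
    text = raw.rstrip()
    if text.endswith(","):
        text = text[:-1]
    return json.loads(text + "]")

# Two sample lines in the append-friendly on-disk format:
raw = (
    '[\n'
    ' { "time":"2014-0306-1801", "rc":{ "ok":11, "error0":0, "error1":0 } },\n'
    ' { "time":"2014-0306-1935", "rc":{ "ok":8,  "error0":2, "error1":1 } },\n'
)
```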

Then it swizzled the data organization as required by Chart.js.  Using the data in my previous example, it looked like this after conversion:

timeList = [ "2014-0306-1801", "2014-0306-1935", "2014-0306-2029" ]
okList = [ 11, 8, 11 ]
error0List = [  0, 2, 0 ]
error1List = [ 0, 1, 0 ]

Also, since my application was designed to present the most recent results of continuously-running tests, I chose to find and convert only the most recent data points, rather than everything in the history.  (This example shows three; my real program showed 32.)
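The swizzle itself is a simple pivot from a list of time-ordered records to parallel lists.  A sketch of how it could be written (the function name and the keep-most-recent parameter are my own, not from the original program):

```python
def swizzle(json_list, keep=32):
    # Keep only the most recent `keep` samples, then pivot the
    # time-ordered records into the parallel lists Chart.js wants.
    recent = json_list[-keep:]
    time_list   = [r["time"] for r in recent]
    ok_list     = [r["rc"]["ok"]     for r in recent]
    error0_list = [r["rc"]["error0"] for r in recent]
    error1_list = [r["rc"]["error1"] for r in recent]
    return time_list, ok_list, error0_list, error1_list

# The three sample records from above:
samples = [
    {"time": "2014-0306-1801", "rc": {"ok": 11, "error0": 0, "error1": 0}},
    {"time": "2014-0306-1935", "rc": {"ok": 8,  "error0": 2, "error1": 1}},
    {"time": "2014-0306-2029", "rc": {"ok": 11, "error0": 0, "error1": 0}},
]
```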

The same python program then created an HTML file named results.html.

Create HTML file

After a bit of experimenting with Chart.js using a manually-created HTML file, I settled on a design for a bar chart which I liked.  I then converted the HTML file to be used as a template by replacing each list of data with a unique uppercase string.

That is, I removed the label list data and inserted the string TIME_LIST.  I removed each data list in the datasets section, and replaced them with OK_LIST, ERROR_0_LIST, and ERROR_1_LIST.

The data portion of my HTML template file looked like this:

var data = {
  "labels": TIME_LIST,
  "datasets": [
    {
      "fillColor": "rgba(0,255,0,1)",
      "strokeColor": "rgba(0,255,0,1)",
      "data": OK_LIST
    },
    {
      "fillColor": "rgba(255,0,0,1)",
      "strokeColor": "rgba(255,0,0,1)",
      "data": ERROR_0_LIST
    },
    {
      "fillColor": "rgba(128,128,256,1)",
      "strokeColor": "rgba(128,128,256,1)",
      "data": ERROR_1_LIST
    }
  ]
};

To convert the uppercase strings, the python program read the entire template file to a string (named templateString) and replaced each uppercase string with the real data.  My python code looked like this:

    templateString = templateString.replace("TIME_LIST",    simplejson.dumps(timeList))
    templateString = templateString.replace("OK_LIST",      simplejson.dumps(okList))
    templateString = templateString.replace("ERROR_0_LIST", simplejson.dumps(error0List))
    templateString = templateString.replace("ERROR_1_LIST", simplejson.dumps(error1List))

The new HTML file was ready. The python program saved the new file as results.html.

Publishing on apache web server

After calling the python program to convert results.json to results.html, the last step for my bash test script was to transfer the HTML file to an apache web server.  Each time it ran, it replaced the previous file on the web server.

Because this was on a private internal network, I transferred the file using linux utilities sshpass and scp.

sshpass -p${PASSWORD} scp results.html ${USER}@${HOSTNAME}:/var/www/

Monitoring results

My partners in the IT team were now able to browse to the results.html file and see current results whenever they liked.  They could tell how well any fixes they applied overnight were working throughout the next day, and I was not required to lose sleep to support them.  Yay.

Here is a colorful example showing lots of errors (note: I changed the contents of the chart frequently during the debug cycle; this version of the chart only contained two bar columns, red and green).

Bells and whistles

After everything was working, I added a good old 'refresh' tag to the HTML head section.  With this, the observer could leave the web page up in his browser, and it would refresh itself every two minutes without having to manually click refresh.  Bonus!

<META http-equiv="refresh" content="120">


That's all there is.

It may seem complicated described in so many words, but it's really not.  After several false starts with other technologies, developing all of this took me less than four hours, all told.

My IT team was delighted with it.  I saved myself time by developing it.  And I can use the system for other applications in the future.  Success!

Take a look at Chart.js.  You'll like it.

PS: Many thanks to the developers of Chart.js, and to the tutorial author at