Performance monitoring using Node.js, socket.io and MarkLogic - Historical reporting

In a previous article we discussed how you can use socket.io and a Node.js library to collect performance metrics and system data about your environment and plot the values on a chart.

In this article we will discuss how you can extend the application with historical reporting. By historical reporting I mean a feature where you select a date range and a chart gets populated with the data points for that period.

The updated source code for the app can be found here: https://github.com/tpiros/system-information/tree/historical - note it's in a branch called 'historical'


MarkLogic also ships with a built-in history monitoring tool for the server, which could be a great addition to this project. If you're interested in reading more about it, please visit this site: https://docs.marklogic.com/guide/monitoring/history

Requirements revisited

Let's think about this for a moment - what is it that we want to achieve? Ideally we'd like to have an interface that has a date picker, sends a request to the backend, the backend retrieves documents from the database and pushes the results back to the client (browser) where the chart can be drawn.

This time around we will start our discussion with the front-end by adding some visual elements and JavaScript code.

Updating the template

Just for visual aesthetics the template was updated with various Bootstrap elements; one of these is a tab panel that has a 'Current' and a 'Historical' tab. The 'Historical' tab simply contains a date picker and a button that allows users to submit their request:

<div role="tabpanel" class="tab-pane" id="historical">
  <div class="container">
    <div class="col-sm-6">
      <form class="form-inline">
        <div class="form-group">
          <div class="input-group date" id="datetimepicker1">
            <input type="text" id="from" class="form-control" placeholder="YYYY-MM-DD HH:mm:ss">
            <span class="input-group-addon">
              <span class="glyphicon glyphicon-calendar"></span>
            </span>
          </div>
        </div>
        <div class="form-group">
          <div class="input-group date" id="datetimepicker2">
            <input type="text" id="to" class="form-control" placeholder="YYYY-MM-DD HH:mm:ss">
            <span class="input-group-addon">
              <span class="glyphicon glyphicon-calendar"></span>
            </span>
          </div>
        </div>
        <button type="button" class="btn btn-info" id="apply">Apply</button>
        <div id="curve_chart_historical" style="width: 900px; height: 500px"></div>
      </form>
    </div>
  </div>
</div>

My choice fell on Eonasdan's Bootstrap date picker - there are other Bootstrap-compatible date pickers out there, so pick the one that best suits your requirements.

Please also notice that right under the 'Apply' button there is another <div> element with a different id from the other chart that appears in the template. This is crucial, as otherwise the data would be loaded into the wrong chart.

Request from the client to the server

Now that the chart is in the template, there has to be a method that adds data to it. To achieve this, a new historical.js file was created containing the required logic.

This script makes use of jQuery and it of course also uses the Google Charting API. The process is relatively simple:

  • Get the 'from' and 'to' dates from the date picker and convert them to epochs
  • Make a request to the server
  • Process the data returned and produce the chart

The code snippet below does exactly the points outlined above.

$('#apply').click(function() {
  var from = new Date($('#from').val()).getTime();
  var to = new Date($('#to').val()).getTime();

  $.post('/api/historical', { from: from, to: to} )
  .done(function(data) {
    parseData(data, function(historicalDataArray) {
      var options = {
        title: 'System Utilisation',
        curveType: 'function',
        legend: { position: 'bottom' },
        pointSize: 3,
        width: 900,
        height: 400
      };

      var historicalData = google.visualization.arrayToDataTable(
        historicalDataArray
      );

      var chart = new google.visualization.LineChart(document.getElementById('curve_chart_historical'));
      chart.draw(historicalData, options);
    });
  });
});

A separate function was created to deal with the data that comes back from the server, as otherwise the .done() method of the Ajax call would have been a tad too convoluted for my liking. parseData() does nothing more than prepare the data in the format that the Google Charting API expects:

function parseData(dataFromServer, cb) {
  var historicalDataArray = [
    ['Time', 'CPU Average (%)', 'Used Memory (GB)']
  ];
  dataFromServer.forEach(function(document) {
    historicalDataArray.push([new Date(document.content.recorded), parseFloat(document.content.cpu), parseFloat(document.content.memory)]);
  });

  cb(historicalDataArray);
}
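To make the transformation concrete, here is parseData() run against two fabricated documents (the values below are made up purely for illustration):

```javascript
// Fabricated sample of the shape returned by the /api/historical endpoint
var sampleResponse = [
  { content: { recorded: 1457000000000, cpu: '12.5', memory: '3.2' } },
  { content: { recorded: 1457000005000, cpu: '14.0', memory: '3.3' } }
];

// Same logic as parseData() above, reproduced so the snippet is self-contained
function parseData(dataFromServer, cb) {
  var historicalDataArray = [
    ['Time', 'CPU Average (%)', 'Used Memory (GB)']
  ];
  dataFromServer.forEach(function(document) {
    historicalDataArray.push([
      new Date(document.content.recorded),
      parseFloat(document.content.cpu),
      parseFloat(document.content.memory)
    ]);
  });
  cb(historicalDataArray);
}

var result;
parseData(sampleResponse, function(rows) {
  result = rows;
});
// result now holds the header row plus one row per document, ready to be
// passed straight to google.visualization.arrayToDataTable()
```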

The client side is now all set up; time to work on the backend. In the Ajax call above there was already a hint at an endpoint, which needs to be created now.

The backend and database query

At the backend an appropriate endpoint needs to be created which will be responsible for querying the database and returning the data for the front-end. Because the application uses Node.js in conjunction with Express a special package also needs to be added which will be able to handle the request body coming from the browser - the package of my choice is body-parser:

const bodyParser = require('body-parser');
// ...
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
// ...

A bit on the database and indexes

Before the discussion on the endpoint continues there's something that needs to be mentioned. In the previous article the documents saved in the database had the format { cpu: 'value', memory: 'value' }. For the purposes of historical reporting this data structure needs to be slightly updated to the format { cpu: 'value', memory: 'value', recorded: 'epoch' }. This will greatly help in retrieving the appropriate documents.
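The code that persists each sample could be extended along these lines - note that buildSample is a hypothetical helper name used here for illustration, not a function from the original app:

```javascript
// Hypothetical helper: builds the MarkLogic document descriptor for one
// sample, stamping the current epoch both into the URI and into a
// 'recorded' property inside the document itself.
function buildSample(cpu, memory, now) {
  var epoch = now || Date.now();
  return {
    uri: '/data/' + epoch + '.json',
    content: {
      cpu: cpu,
      memory: memory,
      recorded: epoch
    }
  };
}

// The descriptor would then be passed to the MarkLogic Node.js client, e.g.:
// db.documents.write(buildSample(cpuAverage, usedMemory));
```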

Remember that the documents in the MarkLogic database are stored with the URI format of /data/EPOCH.json therefore there exist other ways to retrieve the documents from the database for historical reporting - a mapping between the timestamps arriving from the client and the URIs could be created but it'd involve some more work.

MarkLogic also allows the definition of indexes, and this time a range index on the recorded JSON property needs to be added, enabling range queries to be executed with ease.

Indexes will also enable the retrieval of documents in a sorted order based on the values stored inside the index.
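The index can be created via the Admin UI, or by updating the database properties through the Management REST API. The fragment below is a sketch of what such a payload could look like - the exact scalar type and settings depend on your setup, so treat this as an assumption to verify against your MarkLogic version rather than a drop-in configuration:

```json
{
  "range-element-index": [
    {
      "scalar-type": "unsignedLong",
      "namespace-uri": "",
      "localname": "recorded",
      "range-value-positions": false,
      "invalid-values": "reject"
    }
  ]
}
```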

To learn more about Range Indexes and their requirements in MarkLogic please take a look at this article.

Retrieving the documents

Time to create the endpoint. In Express defining endpoints is rather easy. Remember that in the client side an HTTP POST request was specified against the /api/historical endpoint.

let historicalRoute = (req, res) => {
  let from = req.body.from;
  let to = req.body.to;
  db.documents.query(
    qb.where(
      qb.and([
        qb.range('recorded', '>=', from),
        qb.range('recorded', '<=', to)
      ])
    )
    .orderBy(qb.sort('recorded'))
    .slice(0,500)
  )
  .result().then((response) => {
    res.json(response);
  }).catch((error) => {
    console.log(error);
    // respond with an error so the client-side Ajax call does not hang
    res.status(500).json({ error: error.message });
  });
};

router.route('/api/historical').post(historicalRoute);

The code snippet above gets both the from and to values via the req.body property sent from the client and executes a query. In plain English the query says: 'show me up to 500 documents where the recorded JSON property is greater than or equal to the from value and less than or equal to the to value'.

The limit of 500 data points was added for good measure; it can be changed at any time, or it could even be sent as a parameter from the client.
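If the limit is sent from the client, it is worth clamping it on the server so a misbehaving request cannot ask for an unbounded number of documents. A minimal sketch - clampLimit is a hypothetical helper, and the default and cap values are arbitrary choices:

```javascript
// Hypothetical helper: coerces a client-supplied limit to a sane integer,
// falling back to 500 and capping the result at 5000.
function clampLimit(raw) {
  var limit = parseInt(raw, 10);
  if (isNaN(limit) || limit < 1) {
    return 500; // default used by the endpoint
  }
  return Math.min(limit, 5000);
}

// Inside the route handler it would then be used as:
// .slice(0, clampLimit(req.body.limit))
```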

The query then returns a response in a JSON format and that is the data that arrives to the client.

Conclusion


The performance monitoring application is now complete - it has real-time reporting on CPU and memory utilisation, and it also has a reporting interface that lets users go back in time and see past performance metrics.

The application tries to showcase how socket.io can be used to create real-time apps - feel free to modify it and add your own metrics. I will be interested to see what you do with it.
