Real-Time Data Logging for Embedded Systems and IoT – Tutorial


Overview

Data logging abounds in the world of embedded systems. Between the burgeoning field of IoT and the broader spectrum of embedded devices as a whole, the use cases where logging is useful, if not downright necessary, are as numerous as the devices in the cloud. Whether you’re trying to track variations in sensor data for data acquisition (DAQ), evaluate system resources, or simply debug your latest killer application, if you’re a developer at any level, chances are high that you’ve needed to track changing data at some point.

In this article, we review an example that showcases a modular, efficient real-time data logging system designed to run on a NANO54415, though it can easily be ported to other NetBurner modules. The example uses the data logging system to instantiate a logging object with some dummy values that are modified over time. These values are then written to a log file stored in volatile memory (the log is lost when you power off the module), which can be transmitted to a computer via FTP.

As written, the data logging system writes out integers or floating-point values of various sizes in their raw binary encoding. Unless you happen to be Neo staring into the Matrix, this sort of raw data is rarely useful. However, logging data in this format can provide a substantial benefit, as it minimizes the runtime computational load on your device. To turn the resulting mess into something your typical engineer can make sense of, a separate command-line tool, Read, parses the data and can present it in several different ways. This small, handy utility provides a host of useful functionality when trying to dig into the overwhelming dataset typical of most log files. In addition to reviewing the code in the example and explaining the usage of the logging system, we will cover how to use Read to get exactly what you're looking for from your logged data, and how to turn it into a format you can easily use: a CSV file.

Both the Read application and the example code used throughout this article can be found on our GitHub repo.

Note: One precaution with this example is to ensure that the log file size, defined by LOG_SIZE in the source code, is smaller than the amount of available memory on your board. For systems that need to log an exceptional amount of information, or that want to safeguard the logged data against power outages, this example can be modified to write the log data to an SD card using the EFFS-FAT file system library that is included with the NNDK (NetBurner Network Development Kit).

The Code

Our Example

Let’s take a moment to get an overview of the code and see exactly what’s going on here. The example has two core components, which correspond to the different source files. These are:

main.cpp:  This contains the definition of our logging object, as well as the entry point of our application, UserMain(). In UserMain(), we manipulate our logged values once per second, update the object, and send the data on its way.

introspect.cpp/h: This is where the family of Introspection classes is declared and defined, along with the loggable data types. It drives the magic that enables us to create the logging objects and have them generate output.

We won’t dive too far into the mechanics of how the logging works behind the scenes, other than to show you how to use the objects and some additional useful functions that are available. To get started, let’s open main.cpp to see what’s going on and show how it can be tailored to suit your particular needs.

After the standard header file includes, the application name, and the definition of the FTP task priority, you will run into the following macro:

LOGFILEINFO   // Logs the version of this file

Including this snippet at the top of any file in your project will allow you to spit out that file’s last build time with the Read utility. This can be incredibly useful when trying to remember the last time a particular piece of a project was modified.

Immediately under that, we see the following chunk of code:

// Our logging object
START_INTRO_OBJ(MainLogObject, "Log")
public:
    int_element m_time{"time"};
    float_element m_floatElem{"FloatElement"};
    uint16_element m_intElem{"IntElement"};
    char GetCharElem() { return m_charElem; }
    void SetCharElem(char c) { m_charElem = c; }
private:
    char_element m_charElem{"CharElement"};
END_INTRO_OBJ;

MainLogObject mainLog;

The macro START_INTRO_OBJ indicates the beginning of our logging object. It is called with two parameters: the first is the name of the class that will be created by the macro during preprocessing, and the second is the name that will be displayed in the generated log file.

What follows is the class definition. As you might notice, it follows the same structure and syntax as any other C++ class, because that is exactly what it will become when the code is compiled. Note that the member variables are of types ending in “_element”. These types are defined in introspect.h and are the only types that can currently be logged.

At the end of our class definition, we place the trailing macro, END_INTRO_OBJ. Finally, we declare our global logging object, mainLog, as an instance of the logging class that we just defined, MainLogObject.

extern "C" void UserMain(void *pd)
{
    // Basic network initialization
    init();
    int secs = 0;

    // FTP server for log transmission
    InitLogFtp(FTP_PRIO);

    // Start logging
    bLog = true;
    LogFileVersions();

    while (1)
    {
        printf("Updating Main\n");

        // Update data values that will get logged for 20 seconds
        if (secs++ > 20)
        {
            bLog = false;
        }
        else
        {
            printf("Logging at %d secs\n", secs);
        }

        mainLog.m_time = secs;
        mainLog.m_floatElem = mainLog.m_floatElem / 2;
        mainLog.m_intElem = mainLog.m_intElem * 2;
        mainLog.SetCharElem(mainLog.GetCharElem() + 1);
        mainLog.Log();

        OSTimeDly(TICKS_PER_SECOND);   // Update main every second
    }
}

After our class definition comes the meat of our file and the entry point of our application, UserMain(). All NetBurner applications use this function as an entry point, and the first thing every application needs to do is get its ducks in a row by calling init(). We won’t get into the details here, but this function does things like set up your network stack, get an IP address through DHCP (if needed), etc.

We now define the variable that will be used to track time in this example. The NetBurner system libraries have more precise timers available, but for our use case in this demonstration, a simple integer will suffice.

The call to InitLogFtp() starts the FTP task that will be used to access the log file. We will give a bit more detail on how to get to that beautiful pile of data below.

The next bit of code is where our actual use of the logger begins. To indicate that we want to start actually logging data, we need to set the boolean, bLog, to true. This will allow calls to Log() from our object to write out the data. If Log() is called when bLog is set to false, the function will simply return. Having this toggle is incredibly useful for systems that are resource constrained, as it allows you to use conditional logic to dictate precisely when your logging takes place. More on this in a bit.

The function LogFileVersions() will output the last build date for files that have the LOGFILEINFO macro included at the top.

Finally, we start our main application loop. For this example, we are only going to log data for 20 seconds. If we have gone through our loop more than 20 times, we set bLog to false, which will prevent any future calls to our logging functions from doing anything. You might be thinking it would be just as easy to put the call to Log() itself inside the if() statement. You would be right. However, for applications where logging is widely distributed throughout the code base, having a single value that can toggle logging on and off can save a lot of time and energy.

Below our check is where we modify the values of the object itself, which is exactly the same way you would modify the private or public member variables of any other class in C++. After the elements have been updated, we make our call to Log(), which, assuming bLog is set to true, writes the data to our log file.

One point to note: if we happen to write enough data that we run out of space, the logging system will simply wrap around to the beginning of the log file and overwrite what was previously there. There’s no need to worry about the system failing, but it’s a good idea to have a sense of how much you are writing versus how much space you have available. Again, bLog is an excellent way to help regulate this.

Finally, we have the delay at the bottom of our while loop that allows other tasks to run and ensures we are logging data at the intervals we want to.

Additional Functions

In addition to the functions LogFileVersions(), InitLogFtp(), and IntrospecObject::Log() that were covered above in our example, there are a few other handy functions provided by introspect.h that should not be overlooked. Below is the list and a short description of each:

  • void LogMessage(const char *cp, bool force = false) – Writes a message directly to the log file. The first parameter, cp, is the message to write, and the second parameter, force, is whether or not the message should be written if the value of the global variable bLog is false.
  • void LogEvent(bool force = false) – Marks a single event in the logging stream. The parameter force dictates whether or not the event should be written if the value of the global variable bLog is false.
  • int GetLogPercent() – Returns what percentage of the current log space is used.
  • int GetLogSize() – Returns the current log size.
  • void LogAppRecords() – Logs the configuration system’s app data section (NNDK 3.0 only, comment out for 2.x).
  • void IntrospecObject::LogStructure() – This class function logs the structure of the logging object. It is automatically run the first time Log() is called on any object.

Getting to the Log File

After the application is up and running, the log file can be easily accessed by pointing an FTP client, such as WinSCP, at the device. The following session options should be set when trying to connect:

File Protocol: FTP
Encryption: No encryption
Port Number: 21
Host Name: The IP address of your device
User Name: Anything
Password: Anything

If you are using version 3.x of our NNDK, you can find the IP address of your device by going to discover.netburner.com and looking at the entries. If you only have one device running on your local network, it should be the only one listed. If there are several, you will need to reference the MAC address (listed on a white sticker on your module) to determine which one is yours. For earlier versions of our NNDK (2.x and prior), you will want to use our IPSetup tool, which can be freely downloaded from our site.

The default port for unencrypted FTP is 21. With these values filled out, you should be able to connect to your device and see a file named Log.bin. This will be the file that contains all of your data, and is what will be used with the Read application in the steps below. Download it to your PC, and proceed to the next section.

Security Considerations

By default, any username and password entered should allow you to access the device. Needless to say, in a production environment you may want to be a bit less cavalier. The same can be said for the encryption type, which for simplicity we opted not to use in our example. In a real world scenario, we highly recommend using an encrypted connection, which is fully supported by our NNDK.

Using Read

Once the application is up and running, and you have downloaded your log file, you will be ready to dive in with the Read utility to figure out exactly what treasures are buried in the data. It is a command line tool that will be run from the command prompt, and is included as a part of the repository with the example code.

To get a feel for all of the options available with the tool, type read -? into the command prompt. You should see the following list:

Usage is: read <options> file
Options:
-A emit All
-L list names
-C count elements
-D make CSV file
-V verbose diagnostic
-M show messages
-E show events
-H hold last valid value
-Ooptionsfile   list of data fields to display in same format as list


To get started, it might be a good idea to see what data elements are present in the log file. While our example is fairly simple, I’ve seen how one of our engineers used this in his autonomous car when preparing for the AVC race, and trust me, it got heavy in a hurry. To pop the lid off of this data dump and peer inside, use the -L option. When used in our example, the following is displayed:

> read -L Log.bin
Reading from Log.bin
messages
MainLogObject.CharElement
MainLogObject.IntElement
MainLogObject.FloatElement
MainLogObject.time

From this, we see our object’s elements listed by the names we gave them in our example from the previous section. We also see a category, messages, which contains the build information of our files due to including the macro, LOGFILEINFO. To get more info on this, we can use the -M flag and see the following:

Reading from Log.bin
Msg[Source src/main.cpp build on Feb 27 2019 at 17:35:16.]
Msg[Source src/introspec.cpp build on Feb 27 2019 at 17:32:37.]

If you just want to see a list of all the data, you would use the -V flag, and see something like the following (but with more entries):

> read Log.bin -V
Reading from Log.bin
MainLogObject:[
time:+1
FloatElement:0.000000
IntElement:0
CharElement:+1
]
MainLogObject:[
time:+2
FloatElement:0.000000
IntElement:0
CharElement:+2
]
...

This isn’t much better than just watching it scroll by on a serial terminal, however, and certainly isn’t the highlight of this example. What we really want to do is select which attributes to look at and put it in a format that is easy to digest. Fortunately, Read is able to write out selected attributes to a .csv file.

To accomplish this feat, we first need to tell the utility which attributes we are interested in. We will output all of the options into a text file and designate which ones we want to peruse. At the command prompt, type read -L Log.bin > attrList. Open the file attrList in an editor, and you should see the same list of attributes that we saw previously.

Go ahead and add the word “emit” after the elements you want saved to the .csv file. Values that aren’t marked this way will appear as if they aren’t changing. This is not the case for messages, which are displayed regardless. For this article, we will look at everything but the messages (which we removed manually from the top of attrList in the editor), so the file now looks like the following:

MainLogObject.CharElement emit
MainLogObject.IntElement emit
MainLogObject.FloatElement emit
MainLogObject.time emit

To generate the .csv file, we will run read -O attrList -D Log.bin > Log.csv. When we open up Log.csv to verify our results, we get the following:

"MainLogObject.CharElement","MainLogObject.IntElement","MainLogObject.FloatElement","MainLogObject.time"
+1,1,1.000000,+1
+2,3,1.500000,+2
+3,7,1.750000,+3
+4,15,1.875000,+4
+5,31,1.937500,+5
+6,63,1.968750,+6
+7,127,1.984375,+7
+8,255,1.992188,+8
+9,511,1.996094,+9
+10,1023,1.998047,+10
+11,2047,1.999023,+11
+12,4095,1.999512,+12
+13,8191,1.999756,+13
+14,16383,1.999878,+14
+15,32767,1.999939,+15
+16,65535,1.999969,+16
+17,65535,1.999985,+17
+18,65535,1.999992,+18
+19,65535,1.999996,+19           

Here we can see that the first row provides the names of the values. Following this are rows of our data, each row corresponding to a call to Log() on our logging object.

With that, our data logging journey is at an end. That said, we would love to hear how you put this to use. If you have any questions, comments, or stories, please let us know in the comments below, or directly at sales@netburner.com.
