Visualizing Sensor Data with WebGL and WebSockets


The saying goes that a picture is worth a thousand words. By that reasoning, a web-based 3D scene complete with models, textures, lighting, and animations must be worth at least a million. It can’t be denied that data visualization can be extremely beneficial when it comes to understanding complex and frequently multi-dimensional data sets. What better place to take advantage of these informative and rich 3D or 2D visualizations than in the field of embedded systems? It is a realm where sensors and measurements abound, and analyzing the data flowing from them can often feel like taking a drink from a firehose. If you are still displaying your data in CSV files or boring tables, this tutorial will open your eyes to some truly amazing possibilities!

Unfortunately, these visualizations often come at a hefty price, with the currency being memory and processing time. In systems that are typically constrained when it comes to memory, power, peripherals, and processing capabilities, the dream of doing anything with data other than passing it along or using it immediately in some other onboard system is often just that. However, with the advent of browser-based graphics technologies such as WebGL (Web Graphics Library), there is hope for those of us who yearn for a more visually stimulating data viewing experience.

In this article, we are going to combine a few software libraries available from NetBurner’s Development Kit in order to build an application that takes derived (and faked) positional and rotation data from an accelerometer and a gyroscope, and passes that data to a web page. This page will use it to animate and display a 3D model to visually represent the data. To ease us in, here is a list of the libraries and their role in the application:

Embedded Flash File System (EFFS) – We are going to be using some pretty big (from an embedded perspective) files. There’s just no way around that. The 3D models, their textures, and other resources needed for a visualization can get massive in a hurry. Being able to store those resources, as well as the .html and JavaScript files that will make use of them, on an SD card will go a long way to alleviate the memory constraints that continue to plague embedded system programmers everywhere. Thankfully, almost all of NetBurner’s Systems on a Module (SOMs) come equipped with this capability (sorry, MOD5213 fans, it is the exception here).

Web Server – In addition to reading and processing sensor data, NetBurner devices can function as a web server and are able to respond to HTTP and HTTPS requests. This behavior is fully customizable by the enterprising developer, and there are several mechanisms in place to assist with dynamic web content creation.

WebSockets – We have a few articles already on using WebSockets in your IoT projects, so I won’t elaborate on them too much here. Essentially, they will provide us with a way to maintain a persistent data connection between the server and the browser so that the server can continue to send data updates without having to be polled by the client. Suffice it to say, WebSockets will be instrumental in making sure that everything runs efficiently and responsively.

File Transfer Protocol (FTP) – We could get away with not using FTP here, but it is so much easier to update our files on the SD card when we can just open an FTP client, connect to the NetBurner device, and drag everything over. The alternative involves pulling out the SD card from the development board, putting it into your computer, moving over your changes, putting it back into your development board and then repeating. It’s only fun the first 100 times or so.

Additionally, we will be taking advantage of some pretty impressive (and free!) web-based technologies. WebGL provides a 2D and 3D graphics API that can be used with just about all modern browsers. For our example, we are actually going to take it one step further and use a higher-level graphics library called Three.js. This library uses WebGL and provides an intuitive approach to creating complicated 3D scenes. It also provides some extremely convenient features, such as file loaders for a multitude of data formats (FBX, Collada, glTF, etc.).

The hardware used for this application was a NetBurner MOD54415 seated in a MOD-DEV-70CR development board. I also used a micro SD card, seated in the SD card slot on the development board via a micro SD card adapter. If you need all of this gear, you can purchase it as a single kit (minus the SD card and adapter) from our store.

The example itself, including all of the NetBurner application code, JavaScript libraries, and resources needed to build and run the application can be found on our GitHub repo. If you are interested in really understanding the nooks and crannies of how the example works, it is highly recommended that you have the source code available while going through the article. It will help tremendously to see how these pieces are organized and fit together.

Now, with the pleasantries out of the way, let’s start to dig in. First, we will walk through the directory structure of the application, explain what is what, and what goes where. Next, we will briefly discuss how to build the application and get it loaded onto your NetBurner device. Then we will break down the NetBurner application into its main components, get an idea of how they work, and how they interact with one another. Finally, we will look at the web page and the code needed to get our 3D sensor model on the screen and moving around.

Example Directory Structure

I know what you’re thinking. What could possibly be so special about the application’s directory structure that it warrants its own section? You’re right, this isn’t something that usually gets a tremendous amount of attention, especially for a small project. However, because the example contains some files that will get compiled into the application, and some files that need to be copied to the SD card, it felt best to just bring up a few things explicitly.

Below you will see an image that gives an outline of our application’s directory structure. This doesn’t show all of the files present (it doesn’t show the JavaScript libraries used, for example), but it does show the ones we discuss specifically in this article.

Figure 1. WebGL example directory structure

In the main folder, you will find all of the .cpp and .h files, as well as the makefile that can be used to build the application from the command line (alternatively, you can import this code directly into an NBEclipse project).

In addition to these files, you will also notice two directories: html and SdCardFiles. The former contains an index.html file that will be sent from the web server if there is no index.html file on the SD card. This serves as an effective backup just in case something goes awry when using the EFFS.

The last directory contains all of the resources that will be used by our web page. This includes the primary index.html file, all of the model and texture files, as well as the WebGL and Three.js libraries. The contents of this folder (not the folder itself) will need to be copied directly to your SD card for this application to work as is.

Building And Loading

Before we will be able to see any stunning 3D scenes, we will need to build and load the application on a NetBurner device. Once you have cloned the repo, and assuming you have NetBurner’s development tools installed, you should be able to navigate to it from a command prompt, type in make, and hit enter. This will build the application and put the resulting executable, WebGL_APP.s19, in the bin directory. From here, the application can be loaded using the AutoUpdate utility, which can be found in \nburn\pcbin.

Alternatively, if you prefer IDEs to the command line, you can use NBEclipse to build a new project, import the code from the example, and load the freshly built application with the click of a button. Both the “MOD54415 Quick Start Guide” and the “NBEclipse Getting Started Guide” (located in \nburn\docs\Eclipse) cover this information in great detail.

Additionally, the quick start guide covers the initial setup and configuration for the device, which, unless you’re planning on doing something fancy, is as follows:

  1. Connect your device to the network through the RJ-45 port on the module. Some folks like to connect their device directly to their computer with a crossover cable, but it is much easier just to connect it directly to the network through a switch or router. If you have this option, and unless you have a reason not to, it’s definitely the way to go.

  2. Power your device and connect it to your computer through the micro USB port on the development board.

For additional configuration options, or for more details, please see the guides referenced previously.

NetBurner Application

Before we get into the meat of the NetBurner application, we will need to make one change to the system. EFFS will need to be configured for long file name support. This is required so that we can correctly reference our web resources on the SD card. To make the switch, complete the following steps (also found in the “EFFS Programmer’s Guide”, located in \nburn\docs\EFFS):

  1. Edit \nburn\include\constants.h and increase the user task stack size to a minimum of 8096:

     #define USER_TASK_STK_SIZE (8096)

  2. From your command line, navigate to \nburn\pcbin and run the batch file, longfilenames.bat, by typing in the batch file’s name and hitting enter. This will switch the EFFS library to the long file name version.

  3. Rebuild the system libraries.
       • From NBEclipse, select “NBEclipse -> Rebuild System Files”.
       • For command line users, go to \nburn\system and run “make clean” followed by “make”.

The NetBurner application has four major components to it. These are:

  • The program initialization and main loop
  • The web server and WebSockets interface
  • The filesystem (EFFS) interface
  • The FTP interface

The application’s interactions with EFFS and the FTP server library are all done from the first two components. Because EFFS and FTP are documented heavily, and knowledge of their internals is unnecessary for understanding their role in the application, we will focus the discussion on the first two components. For further information on EFFS and the FTP server library, please see the “EFFS Programmer’s Guide” and the “NetBurner Runtime Libraries Guide”, both of which can be found in \nburn\docs.

Program Initialization

As with all NetBurner applications, ours begins by running UserMain(), found in main.cpp. Well, technically, there is some initialization code and setup that runs on the device prior to this, but for our purposes, this is where the magic starts to happen. If you’re already familiar with NetBurner’s setup, a lot of this will look familiar to you. For those who aren’t, however, let’s take a minute to walk through what is happening.


void UserMain( void *pd )
{
    // Initialize the stack, get a DHCP address if necessary, etc.
    init();

    // EFFS Setup - f_enterFS() must be called for every task priority
    // that will use the filesystem
    OSChangePrio( HTTP_PRIO );
    f_enterFS();

    OSChangePrio( FTP_PRIO );
    f_enterFS();

    OSChangePrio( MAIN_PRIO );
    f_enterFS();

    // Initialize the CFC or SD/MMC external flash drive
    InitExtFlash();

    // Start the web server
    StartHTTP();

    // Setup our callbacks for HTTP GET and WebSockets
    RegisterWebFuncs();

    // Start FTP server with task priority higher than UserMain()
    int status = FTPDStart( 21, FTP_PRIO );
    if ( status == FTPD_OK )
    {
        iprintf( "Started FTP Server\r\n" );
        if ( F_LONGFILENAME == 1 )
        {
            iprintf( "Long file names are supported\r\n" );
        }
        else
        {
            iprintf( "Long file names are not supported- only 8.3 format\r\n" );
        }
    }
    else
    {
        iprintf( "** Error: %d. Could not start FTP Server\r\n", status );
    }

    iprintf( "Starting WebGL Example\r\n" );

    // Dump the contents of the current EFFS directory to see what you have
    DumpDir();

    while( 1 )
    {
        // Update the position and rotation of our simulation
        UpdatePosAndRot();

        // If we have a valid WebSocket file descriptor
        if ( ws_fd > 0 )
        {
            // Send our fake position data
            SendWebSocketData();
        }

        // Small time delay, just so we don't swamp anything
        OSTimeDly( 5 );
    }
}

Here’s a brief explanation of the above code – skip ahead if you don’t need the deeper dive. First, we need to initialize the stack, set the main task priority (the task currently running), get a DHCP address if necessary, etc. It’s a lot of work. Thankfully, we are able to take care of all of this with our call to init().

Next, we need to lay the groundwork for our filesystem. As mentioned previously, this is heavily documented in the “EFFS Programmer’s Guide”, but we will go ahead and cover a few basics here. We have to call f_enterFS() for every task priority that will be using the filesystem. For our purposes, this includes the main task, the task running the web server, and the task running the FTP server. However, because we haven’t started the web server or the FTP server, we fudge this by temporarily changing the main task’s priority to those of the other tasks. It’s okay (and even necessary) that we aren’t actually calling f_enterFS() from the tasks themselves. Just don’t forget to change the main task’s priority back! Finally, we call InitExtFlash(), which goes through the legwork of verifying an SD card is inserted and mounting it.

After the EFFS is ready to go, we start our web server with a call to StartHTTP(), and register callback functions that will handle our GET requests and WebSocket connection (more on them later) with a call to RegisterWebFuncs().

Our last setup step is to start the FTP server, which is done with a call to FTPDStart(). As mentioned, you can find a lot more information on the FTP server library in the “NetBurner Runtime Libraries” documentation, located in \nburn\docs\NetBurnerRuntimeLibrary.

Finally, we start our main loop, where we run our simulation and look to see if we have a valid WebSocket connection. This is done by checking the value of the WebSocket file descriptor, ws_fd. File descriptors are handles to a network connection, a serial port, or some other peripheral, and are extremely useful. Any value over 0 is a greenlight for us. If our connection is good, we then start sending our simulated (i.e. faked) data.

Now that we have covered the steps involved in getting our application up and running, let’s look at what else is going on in main.cpp. We can skip the details of our “simulation” (found in UpdatePosAndRot()), but let’s take a moment to understand how this data is passed from our NetBurner device to the web client. In SendWebSocketData(), we have the following:

void SendWebSocketData()
{
    ParsedJsonDataSet jsonOutObj;

    // Build a JSON object with nested position and rotation updates
    jsonOutObj.StartBuilding();
    jsonOutObj.AddObjectStart("PosUpdate");
    jsonOutObj.Add("x", Pos[0]);
    jsonOutObj.Add("y", Pos[1]);
    jsonOutObj.Add("z", Pos[2]);
    jsonOutObj.EndObject();
    jsonOutObj.AddObjectStart("RotUpdate");
    jsonOutObj.Add("x", Rot[0]);
    jsonOutObj.Add("y", Rot[1]);
    jsonOutObj.Add("z", Rot[2]);
    jsonOutObj.EndObject();
    jsonOutObj.DoneBuilding();

    // Print JSON object to a buffer and write the buffer to the WebSocket file descriptor
    int dataLen = jsonOutObj.PrintObjectToBuffer(ReportBuffer, ReportBufSize);
    writeall(ws_fd, ReportBuffer, dataLen);
}

Here we are creating a ParsedJsonDataSet object that we then proceed to stuff with our data. This object builds a valid JSON blob that we then stick into a buffer and send using our WebSockets file descriptor with the call to writeall(). With this knowledge now in your toolbox, let’s switch gears and take a look at how we actually set up that WebSockets connection and serve up the HTML pages to web clients.
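To make the payload concrete, here is a sketch (with made-up values) of the kind of blob that crosses the WebSocket. The nested PosUpdate/RotUpdate layout matches the paths the browser-side message handler reads; the values themselves are purely illustrative:

```javascript
// Illustrative payload (values made up) shaped the way the browser-side
// handler expects it: nested PosUpdate/RotUpdate objects, angles in radians.
var sample = JSON.stringify({
    PosUpdate: { x: 1.5, y: 0.0, z: -2.25 },
    RotUpdate: { x: 0.0, y: 1.57, z: 0.0 }
});

// Mirrors what the client does with evt.data when a message arrives:
var updateData = JSON.parse(sample);
var posX = updateData.PosUpdate.x;
var rotY = updateData.RotUpdate.y;
```

Because both sides agree on this structure up front, the client can assign fields directly without any schema negotiation.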

Web Server and WebSocket Processing

You might recall when we initialized everything way back in our main.cpp file with UserMain(), that we also called a function RegisterWebFuncs(). As mentioned previously, this function registers two callback functions that we use to handle HTTP GET requests, and the WebSocket upgrade request. These are MyDoGet() and MyDoWSUpgrade(), respectively, and they are both located in web.cpp.

Let’s begin with MyDoGet(). We won’t include the entirety of this function here, as there is a lot of string parsing and file shenanigans going on, but we will take a minute to highlight a few key features. We start off with parsing the URL, and separating the file requested, the extension of that file, and the directory the file should be located in. We change the directory to what is specified in the URL with a call to f_chdir(), and if successful, we try to load the file with a call to f_open(). Finally, we send the file. If the file is not found, then the compiled application image will be checked for a matching file to return.
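To give a feel for that parsing without wading through the C string handling and EFFS calls, here is a rough JavaScript equivalent. The splitUrl() helper below is our own illustration, not a function from the example:

```javascript
// Illustrative sketch: split a request URL into the directory, file name,
// and extension that the file lookup needs.
function splitUrl(url) {
    var lastSlash = url.lastIndexOf("/");
    var dir = lastSlash > 0 ? url.substring(0, lastSlash) : "/";
    var file = url.substring(lastSlash + 1);
    var lastDot = file.lastIndexOf(".");
    var ext = lastDot >= 0 ? file.substring(lastDot + 1) : "";
    return { dir: dir, file: file, ext: ext };
}

// e.g. splitUrl("/models/gadget.gltf")
//      -> { dir: "/models", file: "gadget.gltf", ext: "gltf" }
```

In the real application, the equivalent of dir feeds the f_chdir() call and the equivalent of file feeds f_open().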

The WebSockets function MyDoWSUpgrade() is a bit more straightforward:

int MyDoWSUpgrade(HTTP_Request *req, int sock, PSTR url, PSTR rxBuffer)
{
    iprintf("Trying WebSocket Upgrade!\r\n");
    if (httpstricmp(url, "INDEX"))
    {
        // If we already have a connection, close it before accepting a new one
        if (ws_fd > 0)
        {
            iprintf("Closing prior websocket connection.\r\n");
            close(ws_fd);
            ws_fd = -1;
        }
        int rv = WSUpgrade(req, sock);
        if (rv >= 0)
        {
            iprintf("WebSocket Upgrade Successful!\r\n");
            ws_fd = rv;
            NB::WebSocket::ws_setoption(ws_fd, WS_SO_TEXT);
            return 2;
        }
        else
        {
            return 0;
        }
    }

    NotFoundResponse(sock, url);
    return 0;
}

In the above code, we first check to ensure we are receiving the request from the proper URL (for us, that’s INDEX.HTML). We then look to see if we have an open WebSocket connection already, and if we do, we shut it down. There are more appropriate, not to mention polite, ways of handling this case, but for now, this will do the job nicely. We proceed with attempting the WebSocket upgrade by calling… you guessed it, WSUpgrade(). If successful, we save the WebSockets file descriptor (i.e. ws_fd) so that we can use it when sending out our data.

From the NetBurner application perspective, that’s pretty much it! We pride ourselves on ease of use, and strive to make things as quick and straightforward as possible. There is a lot of code that you can dig into with this example in terms of the various file system utilities we’ve included, as well as the FTP functions, but we’ve covered the basics of what you need in order to make use of it. Okay, shameless self promotion over. Let’s move on to the webpage and resources used.

The Web Page

If you crack open the INDEX.HTML found in the SdCardFiles directory, you will see a tiny amount of HTML, and a good chunk of JavaScript. The JavaScript functions and code in this example have been added directly to the HTML file for ease of use, but it would be just as easy to move this to a separate JavaScript file and reference it the way we do with the other APIs. The JavaScript can essentially be broken up into two pieces: the section that handles the WebSocket connection and data, and the section that manages the 3D scene. The next two sections will break down both of these parts and give a better idea of what’s going on.

WebSocket Connection

Since we finished off our discussion of the NetBurner application with the details of the WebSocket connection, let’s pick that up here and see what the other side of the coin looks like.

        function CreateWebSocket() {
            if ("WebSocket" in window) {
                if ((ws == null) || (ws.readyState == WebSocket.CLOSED)) {
                    ws = new WebSocket("ws://" + window.location.hostname + "/INDEX");
                    ws.onopen = function () { };
                    ws.onmessage = function (evt) {
                        var updateData = JSON.parse(evt.data);
                        posX = updateData.PosUpdate.x;
                        posY = updateData.PosUpdate.y;
                        posZ = updateData.PosUpdate.z;
                        rotX = updateData.RotUpdate.x;
                        rotY = updateData.RotUpdate.y;
                        rotZ = updateData.RotUpdate.z;
                    };
                    ws.onclose = function () { };
                }
            }
        }

The first thing the above code does is to ensure that the browser being used actually supports WebSockets. If not, well, that’s pretty much a deal breaker. If it does (and really in this day and age, it should) we go ahead and check to see if we have already created a connection. If we haven’t, then we go ahead and attempt to do so with a call to WebSocket(). We then set our callback functions for when the connection is opened (in which case we do exactly… nothing) and when we receive a message from the server.

When we do receive a message, you will notice we expect to get a JSON object and attempt to parse it. We then abuse the fact that we already know the structure of the data, since we are the ones who sent it, to directly assign the position and rotation values (in radians) we want to update to our global variables that will be used by our visualization. That’s all there is to it. Easy peasy lemon squeezy.

The Visualization with Three.js and WebGL

Data visualizations, and computer graphics on the whole, can be a lot of fun. They can also be a bit overwhelming when first starting out. Fortunately, Three.js alleviates a lot of the headache with an API that is both easy to use and flexible enough to do some really impressive things (see their examples page for a small sampling of what I’m alluding to).

They also have some really impressive documentation and tutorials which I highly recommend. Again, here we will cover the basics of what is needed in order to get the scene up and rendering. First, all scenes will need a few key components. These are:

  • A scene object (THREE.Scene()) – This corrals all of the other objects that will be used in the rendering.
  • A camera (THREE.PerspectiveCamera()) – This dictates what is actually being viewed in the scene.
  • A renderer (THREE.WebGLRenderer()) – This does the actual magic of drawing everything with WebGL.

Technically speaking, you could get away with just the above items, but you would end up with a scene that contains nothing but a soul-eating darkness. To add a little life and help us visualize our sensor, we are going to also add the following items:

  1. Two light objects – These will provide the lighting we need in order to actually see anything. We are using both a HemisphereLight, which is positioned directly above the scene, and a PointLight, which will create a light source that originates from a specific point in space. The latter provides some nice lighting effects, and will keep the scene from looking too flat.
  2. Controls (THREE.OrbitControls()) – These allow us to move around the scene, much like you would in a 3rd person style video game.
  3. A ground plane (THREE.Mesh()) – This will give our visualization a frame of reference, and is actually built from a few other Three.js objects. That said, the mesh is the one we need to hang on to.
  4. A Model/Texture Loader (THREE.GLTFLoader()) – This is used to load our 3D model and the various texture/lighting/normal maps that are associated with it. There are way more file formats than we have time to cover, and they each have their own pros and cons. Fortunately, Three.js really went the distance here and has provided several loaders for many of them. I chose GLTF here because… well, for one, it was recommended by Three.js, and also because the structure of the data is saved in a JSON file that you can then go and modify manually if needed. In the case of this example, it turned out that it was needed, so this choice really paid off.

  5. The model and texture data – This is obtained from the loader mentioned previously. When the load() function is called on the loader, we grab the scene object from the passed-in parameter, gltf, and save it as our gadget. This contains all of the model and texture data, and is what we will be manipulating with our position and rotation data.

If you look closely, you will notice that with the exception of the renderer, the camera, and the scene itself, all of our objects mentioned above are added to the scene with a call to add(). This is how we tell the renderer what things we actually want to draw.

The last piece to this rendering puzzle is understanding how we actually use these things to draw something to the screen. This is exactly where the animate() function comes in:

        function animate() {
            camera.lookAt(new THREE.Vector3(0, 0, 0));
            requestAnimationFrame(animate);

            // Animate our gadget
            if (loadedGadget) {
                gadget.rotation.x = rotX;
                gadget.rotation.y = rotY;
                gadget.rotation.z = rotZ;

                gadget.position.x = posX;
                gadget.position.y = posY;
                gadget.position.z = posZ;
            }

            renderer.render(scene, camera);
        }
In the JavaScript code above, we first tell the camera where to look. This prevents the controller from moving the camera away from our gadget, and essentially keeps it locked on target (though you can still zoom in and out, and rotate around it). We then call requestAnimationFrame(), which will queue up the animate() function to be called again. This will create a continuous loop of calls to animate(), which is exactly what we want.
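If you want to see that self-scheduling pattern in isolation, here is a toy version with a stubbed requestAnimationFrame. The stub, the pending queue, and the frame counter are all ours, purely for illustration of how each call queues the next one:

```javascript
// Stub requestAnimationFrame so the loop can run outside a browser.
var pending = [];
function requestAnimationFrame(cb) { pending.push(cb); }

var frames = 0;
function animate() {
    requestAnimationFrame(animate);  // queue the next frame first
    frames++;                         // stand-in for the actual render work
}

// Drive three "frames" the way the browser's frame scheduler would.
animate();
for (var i = 0; i < 3; i++) {
    var cb = pending.shift();
    cb();
}
// Each invocation has re-queued itself, so there is always exactly
// one callback waiting for the next frame.
```

In the browser, the real requestAnimationFrame also throttles this loop to the display's refresh rate, so you get smooth animation without a busy-wait.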

Next, we check to see if we have loaded our gadget. If we have, we go ahead and update its position and rotation from the data most recently received from our WebSocket connection. This is a good place to point out the fact that we are assigning values here, not adding a change (often called a delta) to existing values. This is because we want to ensure the visualization is as accurate in its representation of the sensor as possible. Imagine, for example, if the simulation had been running for hours before our first WebSocket connection was made. If the client were only receiving positional and rotational changes of our device, we would have no idea of the initial values that needed to be updated. It’s possible to modify the application to send the initial values on connection, and then send the corresponding deltas after. However, if there are any issues at all in receiving or processing data and we get out of sync with the server, then we will start to see a disparity between the device’s position and orientation and that of our model on screen. While it would still look cool, it would be arguably less useful.
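A toy numeric example (our own, not from the example code) makes the difference concrete. Drop a single message and a delta-based client drifts for good, while a client applying absolute values resyncs on the very next message:

```javascript
// Simulate four rotation updates of 0.1 rad each, with one message lost.
var trueAngle = 0;   // the device's actual orientation
var deltaView = 0;   // a client that accumulates per-message deltas
var absView = 0;     // a client that applies absolute values

var updates = [0.1, 0.1, 0.1, 0.1];
for (var i = 0; i < updates.length; i++) {
    trueAngle += updates[i];
    if (i !== 2) {                // message 2 never arrives at the client
        deltaView += updates[i];  // the missed delta is gone forever
        absView = trueAngle;      // the next absolute value resyncs fully
    }
}
// absView ends up equal to trueAngle; deltaView is off by the lost 0.1
```

This is exactly why the example favors absolute updates: a lossy or laggy connection degrades gracefully instead of accumulating error.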

Finally, we make our call to the renderer, which draws your beautiful scene to the screen.

That about wraps up this tutorial. We are always looking for new ways to push our technology, and adding data visualizations to your project can be a creative and entertaining way of turning your lists of numbers into something that is more visually intuitive, especially for those coworkers and customers who are more right-brain dominant. Here, we specifically dealt with 3D models and animations, but there are hundreds, if not thousands, of ways that your dataset can be visualized in any number of dimensions, and more techniques are being researched and developed all the time. If you take some time to create your own, let us know what you come up with — we’d love to see it! Feel free to share via the blog comments or in our forum.
