NetBurner 3.3
Protecting Shared Data

The following RTOS mechanisms can be used to protect shared data resources. They are listed in decreasing order of severity with regard to system latency (all of the pend and post functions are at the same level).

OSSemPend(), OSSemPost(): Protects an area of memory or a resource. A task calls OSSemPend(), which blocks until the resource is available; OSSemPost() releases the resource.
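For illustration, here is a minimal sketch of a producer task signaling a consumer with a semaphore. The function names follow the pattern above, but the exact prototypes, the header name nbrtos.h, and the assumption that a timeout of 0 means wait forever should all be checked against your NNDK headers.

    #include <nbrtos.h>                       // assumed RTOS header for NNDK 3.x

    static OS_SEM DataReadySem;               // shared by the producer and consumer tasks

    void ProducerTask(void *pd)
    {
        while (1)
        {
            // ... produce or receive one unit of work ...
            OSSemPost(&DataReadySem);         // signal that work is available
        }
    }

    void ConsumerTask(void *pd)
    {
        while (1)
        {
            OSSemPend(&DataReadySem, 0);      // block until posted (0 assumed = wait forever)
            // ... consume the work guarded by this semaphore ...
        }
    }

    void SetupSemaphore()
    {
        OSSemInit(&DataReadySem, 0);          // initial count of 0: nothing available yet
    }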
OSMboxPend(), OSMboxPost(): Same as a semaphore, except a pointer variable is passed as the “message”. A task or ISR can post a message, but only a task can pend on one. The posting task and pending task must agree on what the pointer points to.
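A rough sketch of mailbox use follows. The three-argument OSMboxPend() prototype (timeout plus an error-code pointer) and the header names are assumptions based on the classic API; verify them in nbrtos.h.

    #include <basictypes.h>                   // assumed header defining BYTE
    #include <nbrtos.h>

    struct Reading { int channel; int value; };

    static OS_MBOX ReadingMbox;
    static Reading LatestReading;             // both sides agree the pointer refers to a Reading

    void PostReading(int channel, int value)
    {
        LatestReading.channel = channel;
        LatestReading.value   = value;
        OSMboxPost(&ReadingMbox, &LatestReading);   // the pointer itself is the "message"
    }

    void ReaderTask(void *pd)
    {
        while (1)
        {
            BYTE err;
            // Assumed prototype: returns the posted pointer, 0 timeout waits forever
            Reading *r = static_cast<Reading *>(OSMboxPend(&ReadingMbox, 0, &err));
            if (r != nullptr)
            {
                // ... use r->channel and r->value; the data must stay valid until consumed ...
            }
        }
    }

    void SetupMailbox()
    {
        OSMboxInit(&ReadingMbox, nullptr);    // start with no message posted
    }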
OSQPend(), OSQPost(): A Queue is basically an array of mailboxes and is used to post one or more messages. When you initialize a Queue, you must specify the maximum number of messages it can hold. The first message posted to the queue will be the first message extracted from it (FIFO order).
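The sketch below shows one plausible setup: a queue backed by an application-supplied array of void pointers. The OSQInit() argument order and the error-pointer form of OSQPend() are assumptions; check the prototypes in nbrtos.h.

    #include <basictypes.h>
    #include <nbrtos.h>

    #define MSG_QUEUE_SIZE 10                 // maximum number of messages the queue can hold

    static OS_Q  MsgQueue;
    static void *MsgQueueStorage[MSG_QUEUE_SIZE];   // backing array managed by the queue

    void SetupQueue()
    {
        // Assumed argument order: queue object, storage array, number of slots
        OSQInit(&MsgQueue, MsgQueueStorage, MSG_QUEUE_SIZE);
    }

    void PostEvent(void *pEvent)
    {
        OSQPost(&MsgQueue, pEvent);           // messages come back out in FIFO order
    }

    void EventTask(void *pd)
    {
        while (1)
        {
            BYTE err;
            void *pEvent = OSQPend(&MsgQueue, 0, &err);   // 0 assumed = wait forever
            // ... dispatch pEvent ...
        }
    }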
OSFifoPend(), OSFifoPost(), OSFifoPostFirst(), OSFifoPendNoWait(): A FIFO is similar to a queue, but is specifically designed to pass pointers to OS_FIFO structures. The first member of the structure must be a (void *) element, which is used by the operating system to create a linked list of FIFO entries. When initializing a FIFO, you do not specify the maximum number of entries as you do with a queue. Instead, your application has the ability (and responsibility) to allocate the memory in which the structures are stored, either statically by declaring global variables or dynamically by allocating memory from the heap. As with a queue, the first message posted to the FIFO will be the first message extracted from it.
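Here is a minimal sketch of posting a statically allocated structure through a FIFO. The OS_FIFO_EL cast and the OSFifoInit()/OSFifoPend() prototypes are assumptions carried over from the classic API; confirm them in nbrtos.h.

    #include <nbrtos.h>

    // The first member must be a (void *) the RTOS uses to link entries together.
    struct FifoMsg
    {
        void *pNext;                          // reserved for the OS's internal linked list
        int   command;
        int   value;
    };

    static OS_FIFO CmdFifo;
    static FifoMsg MsgPool[4];                // statically allocated storage owned by the application

    void SetupFifo()
    {
        OSFifoInit(&CmdFifo);
    }

    void PostCommand(int slot, int command, int value)
    {
        MsgPool[slot].command = command;
        MsgPool[slot].value   = value;
        // Assumed cast: the posted structure is handled as an OS_FIFO_EL by the RTOS
        OSFifoPost(&CmdFifo, (OS_FIFO_EL *)&MsgPool[slot]);
    }

    void CommandTask(void *pd)
    {
        while (1)
        {
            FifoMsg *pMsg = (FifoMsg *)OSFifoPend(&CmdFifo, 0);   // 0 assumed = wait forever
            // ... act on pMsg->command and pMsg->value, then mark the slot free ...
        }
    }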
OSCritEnter(), OSCritExit(), OSCritObj: This is a counted critical section that restricts access to a resource to one task at a time, sometimes called a “mutex”. For example, suppose you have a linked list that is maintained by 3 separate tasks. Before one task manipulates the list, it calls OSCritEnter() for that object (the list). If any other task then tries to manipulate the list, it will block at OSCritEnter() until the task that previously called OSCritEnter() calls OSCritExit(). Note that the number of enter calls must match the number of exit calls. OSCritObj is a C++ implementation that uses scoping to automatically call the enter and exit functions. See the example below.
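Here is a sketch of both styles for the linked-list case described above: explicit enter/exit calls and the scoped OSCritObj form. The timeout argument (0 assumed to mean wait forever), the OSCritInit() startup call, and the OSCritObj constructor argument are assumptions; check nbrtos.h for the exact prototypes.

    #include <nbrtos.h>

    struct Node { Node *pNext; int value; };

    static OS_CRIT ListCrit;                  // guards the shared linked list
    // OSCritInit(&ListCrit) is assumed to be called once at startup

    void AddNodeExplicit(Node *pNode)
    {
        OSCritEnter(&ListCrit, 0);            // block until we own the list (0 assumed = wait forever)
        // ... manipulate the linked list ...
        OSCritExit(&ListCrit);                // every enter must be matched by an exit
    }

    void AddNodeScoped(Node *pNode)
    {
        OSCritObj lock(&ListCrit);            // constructor enters the critical section
        // ... manipulate the linked list ...
    }                                         // destructor exits automatically, even on an early return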
OSLock(), OSUnlock(), OSLockObj: Disables task switching to other tasks, but not interrupts. A lock count is incremented for each OSLock() call and decremented for each OSUnlock() call. The C++ object OSLockObj was created to help ensure that an unlock is called for each lock: when an OSLockObj is created, the constructor calls OSLock(), and when the object goes out of scope, OSUnlock() is automatically called by the destructor.
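A short sketch of the scoped form, assuming OSLockObj takes no constructor arguments as described above (the variable names are illustrative):

    #include <nbrtos.h>

    static int SharedCounter;                 // touched by multiple tasks, but never by an ISR

    void IncrementCounter()
    {
        OSLockObj lock;                       // constructor calls OSLock(): no other task can preempt us
        SharedCounter++;                      // interrupts still run; only the scheduler is held off
    }                                         // destructor calls OSUnlock() when lock goes out of scope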
USER_ENTER_CRITICAL(), USER_EXIT_CRITICAL(): Macros that disable both task switching and interrupts. A count is incremented on each enter and decremented on each exit.
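These macros are the heaviest option and are typically held for only a few instructions, for example when a task shares a counter with an ISR. A sketch (the variable and function names are just for illustration):

    #include <stdint.h>
    #include <nbrtos.h>

    static volatile uint32_t IsrEventCount;   // incremented from an interrupt service routine

    uint32_t ReadAndClearEventCount()
    {
        USER_ENTER_CRITICAL();                // masks interrupts as well as task switching
        uint32_t count = IsrEventCount;       // the read-modify-write below is now atomic w.r.t. the ISR
        IsrEventCount = 0;
        USER_EXIT_CRITICAL();                 // the count nests, so every enter needs a matching exit
        return count;
    }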

How do you decide which type of mechanism to use? Some guidelines are:

  • If you need some type of signal, but do not need to pass any data, use a Semaphore. A semaphore is a single 32-bit integer that is incremented by each post and decremented by each pend.
  • If you want to pass a single 32-bit number, you can use a Mailbox or Queue. Most applications use the 32-bit number as the data, but it could also be a pointer to a structure or object. A queue is like an array of mailboxes; you declare the number of queue entries at compile time.
  • If you want to pass a structure or object, use a FIFO. You may be wondering how a FIFO differs from a Queue: a Queue has a predefined number of entries, while the FIFO implementation uses a linked list, so the only limit on the number of entries is available memory. Using a FIFO is not as simple as the other mechanisms, because your application must implement some type of memory management to allocate and deallocate the FIFO objects. This is usually done by managing a predeclared array of objects, or through dynamic memory allocation (see the sketch after this list). We encourage all embedded designers to avoid dynamic memory allocation if at all possible, since in any embedded system memory fragmentation can eventually occur and a call to allocate a new object could fail. If you create an array of objects at compile time, you are always guaranteed that the maximum number of entries can exist.
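As a sketch of the predeclared-array approach, the pool below hands out entries with a simple in-use flag and never touches the heap. The structure layout, the OSLockObj guard, the pool size, and the function names are all illustrative assumptions:

    #include <nbrtos.h>

    // Entries come from a fixed pool declared at compile time, so the maximum number
    // of in-flight messages is guaranteed and no heap allocation is needed.
    struct PoolMsg
    {
        void          *pNext;                 // first member reserved for the OS FIFO linkage
        volatile bool  inUse;                 // simple ownership flag managed by the application
        int            payload;
    };

    static const int POOL_SIZE = 8;
    static PoolMsg   MsgPool[POOL_SIZE];
    static OS_FIFO   MsgFifo;                 // OSFifoInit(&MsgFifo) assumed to run at startup

    // Claim a free entry from the pool, or return nullptr if all are in flight.
    PoolMsg *AllocMsg()
    {
        OSLockObj lock;                       // keep other tasks out while we scan the pool
        for (int i = 0; i < POOL_SIZE; i++)
        {
            if (!MsgPool[i].inUse)
            {
                MsgPool[i].inUse = true;
                return &MsgPool[i];
            }
        }
        return nullptr;                       // pool exhausted: the caller must handle this
    }

    // Producer side: fill a claimed entry and post it to the FIFO.
    void PostMsg(PoolMsg *pMsg, int payload)
    {
        pMsg->payload = payload;
        OSFifoPost(&MsgFifo, (OS_FIFO_EL *)pMsg);   // same assumed cast as the earlier FIFO sketch
    }

    // The consumer calls this after it has finished with an entry it pended from MsgFifo.
    void FreeMsg(PoolMsg *pMsg)
    {
        pMsg->inUse = false;
    }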