The C++ Secret to Making Mutexes Work

In my previous blog, we talked about the pitfalls of coordinating threads in parallel programming by relying on mutexes. When not handled properly, mutexes can create performance bottlenecks that are no better (or even worse) than serial algorithms. Nevertheless, you may well need to use them, and if you do, Threading Building Blocks includes some very nice mutex template classes to help you out.

In the reference guide for Threading Building Blocks at the start of the Mutexes section, you’ll see a mention that the mutexes enforce the “scoped locking pattern.” The reference explains the benefits, but doesn’t actually define what it is. Essentially, it’s this: if you acquire a lock in a certain part of your code, and the variable holding that lock goes out of scope, the lock will be released.
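Here's a minimal sketch of that pattern. It uses `std::lock_guard` from the standard library rather than a TBB mutex (TBB's classes expose the same idea through their nested `scoped_lock` types), and the names `counter_mutex`, `shared_counter`, and `increment` are mine, chosen for illustration:

```cpp
#include <mutex>

std::mutex counter_mutex;   // protects shared_counter
int shared_counter = 0;

// The scoped-locking pattern: the lock is acquired when the
// guard object is constructed, and released by its destructor.
int increment() {
    std::lock_guard<std::mutex> lock(counter_mutex); // lock acquired here
    return ++shared_counter;
}   // 'lock' goes out of scope here; the mutex is released automatically
```

Note that there is no explicit unlock call anywhere; the release is tied to the lifetime of the `lock` variable, not to any statement you have to remember to write.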

For example, if a function acquires a lock, the lock is released automatically when the function ends. This means you don’t have to worry about explicitly releasing it. (From a purist computer science standpoint, that could encourage sloppy programming. But we live in a world of programming languages such as Java and C# that support garbage collection and automatic deletion of objects, so perhaps this isn’t a bad thing.)

In addition to being released when it goes out of scope, a lock is also released automatically if an exception is thrown in the middle of the critical code. This is good: the exception handler can run for that thread while other threads continue working, including entering the critical section. The exception code can gracefully handle the error, and the system keeps running instead of freezing up and then crashing. (And if you’re the engineer supporting the software, that means you get to continue sleeping that night and can deal with it in the morning!)
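The exception case can be seen in a small sketch. Again this uses the standard library's `std::lock_guard` rather than a TBB lock type, and the function names are illustrative; the key point is that stack unwinding destroys the guard, so the mutex is free again by the time the handler runs:

```cpp
#include <mutex>
#include <stdexcept>

std::mutex m;

void risky() {
    std::lock_guard<std::mutex> lock(m);
    throw std::runtime_error("boom");
}   // stack unwinding destroys 'lock' here, releasing the mutex

bool handled() {
    try {
        risky();
    } catch (const std::runtime_error&) {
        // The mutex was released during unwinding, so we can lock it
        // again without deadlocking. If risky() had left it locked,
        // this line would block forever.
        std::lock_guard<std::mutex> again(m);
        return true;
    }
    return false;
}
```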

Using C++ Scoping Features

The general idea with a scoped lock is to make use of the scoping features built right into C++: an object is created when control enters a scope, and its destructor is called when the scope ends, such as through a return statement or a loop exiting. The usual way in C++ to make this happen automatically is to create the variable on the stack rather than on the heap.
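This destructor-at-scope-exit guarantee is easy to observe directly. The following sketch (the `Tracer` class and `trace` string are mine, for illustration) records each constructor and destructor call:

```cpp
#include <string>

std::string trace;  // records lifecycle events in order

struct Tracer {
    Tracer()  { trace += "ctor;"; }
    ~Tracer() { trace += "dtor;"; }  // runs when the object leaves scope
};

void scoped() {
    Tracer t;          // stack object: destructor guaranteed at scope exit
    trace += "body;";
}   // ~Tracer() runs here, whether we fall off the end, return early,
    // or an exception propagates out
```

A scoped lock works exactly this way: the constructor acquires the mutex, and the destructor (the `dtor` step above) releases it.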

If you’re not accustomed to C++ and work in other modern languages, here’s a quick explanation of that concept. In C-based languages, when a function is called, its local variables are stored in the region of memory the processor’s stack points to. The stack receives the return address of the calling function along with the local variables. When the function completes, the assembly code generated by the C++ compiler pops the local variables’ data off the stack. Then one more value is popped: the address in memory of the code that called the function. That value is loaded into the instruction pointer, allowing the caller to resume where it left off. Variables held this way are called stack variables, or variables “stored on the stack.” In C++, if any of those variables are instances of classes, the compiled code automatically calls their destructors, and that is exactly how C++ can implement the automatic release of a lock.

The Heap Option

The alternative place to store variables is the heap, an area of memory allocated to the program in which any function can store data. In C++, you create objects on the heap with the new operator. The assumption is that because you created an object on the heap, you want it to outlive the function, so its destructor isn’t called automatically. But remember: you will probably be saving a pointer to the object in a local stack variable. That pointer goes away when the function ends, but the object it points to does not, unless you explicitly call delete.
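The stack/heap difference can be made concrete with a class that counts how many of its instances are alive (the `Counted` class and `live_objects` counter are illustrative names of mine):

```cpp
int live_objects = 0;

struct Counted {
    Counted()  { ++live_objects; }
    ~Counted() { --live_objects; }
};

void stack_case() {
    Counted c;   // on the stack: destructor runs automatically
}   // live_objects back to its previous value here

void leaky_case() {
    Counted* p = new Counted;  // only the pointer 'p' lives on the stack
}   // 'p' vanishes here, but the Counted object on the heap is never
    // destroyed -- its destructor does not run without an explicit delete
```

This is why a scoped lock must be a stack object: put it on the heap and forget the delete, and the mutex stays locked forever.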

That’s the general premise behind how locks clean up after themselves. The scoped-lock objects are stored on the stack, and when the scope ends they are automatically destroyed, releasing the underlying mutex.

With that, we can now explore how to use mutexes. We’ll take that up next time.

Meanwhile, feel free to ask any questions in the comments about this mechanism for automatically cleaning up objects. 

Posted by Jeff Cogswell, Geeknet Contributing Editor
3 comments
andy_suter_uk

However, automatically releasing the mutex (via a RAII implementation in the stack-unwind exception case) enables you to handle the error gracefully. Leaving it locked gives a far higher risk of putting you in a deadlock state.

Marty Deneroff

Having a lock automatically release when its owner goes out of scope solves very little, since this will generally just be painting over a real programming bug in a way that may make the problem harder to identify. If a thread leaves a critical section without releasing a lock, chances are it is because of one of two (bad) things:

- an exception occurred unexpectedly, causing the flow of the program to be interrupted without completing the work being done in the critical section. Generally, critical sections must be written in such a way that exceptions are impossible if the system is to work properly.

or

- the critical section has a path out of the code that omits releasing the lock, thus leaving it set for much longer than intended.

Yes, having the lock clear itself will allow the overall system to keep running, at least for a while, but it will not be running correctly!

atomicenxo

I never thought of it this way, but a mutex is a bit like a Roomba... Seriously, thanks for a great post, Jeff