Most Recent Build Posts

Faster Parallel Code with ‘Lockless’ Programming

When you read about parallel programming, you’re likely to hear about the importance of lockless programming. What exactly is it, and why is it important? Jeff Cogswell walks you through the answers to these questions and more. As you learn more about parallel programming, you’re going to occasionally see people talking …

Posted in Build

Programming Parallel Sections with OpenMP

OpenMP lets you declare blocks of code that will run in parallel with each other. These blocks of code are called sections. Jeff Cogswell shows you how to get them working.

Posted in Build

Using Atomic Operations in TBB

Threading Building Blocks (TBB) can use low-level processor instructions to perform atomic operations. Jeff Cogswell looks at the template functions available for atomic operations in TBB.

Posted in Design

OpenMP Atomic Operations Relieve Race Conditions

Simple operations, such as reading a variable and then modifying it based on its value, can be problematic in parallel code. Atomic operations in OpenMP help alleviate the resulting race conditions. Jeff Cogswell shows you how. Tasks that seem trivial in serial programming are sometimes problematic in parallel …

Posted in Build

Speculative Locks Are Powerful When Used with Caution

Speculative locks work closely with Transactional Synchronization Extensions (TSX), allowing multiple threads to hold a lock simultaneously. The technology is difficult to use and requires careful consideration of when to apply it. Jeff Cogswell gives you the details. Back in April, I talked about Intel’s Transactional Synchronization Extensions (TSX) …

Posted in Build

The Future of Reducers in OpenMP

The Intel compiler allows you to perform simple operations with reducers. Jeff Cogswell explores how to use min and max reducers, and looks at what the future holds for reducers. In a recent video, I explained how to use reducers in OpenMP. Using OpenMP pragmas, you can declare that a …

Posted in Build

New MPI 3.0 Features Embrace Parallel, Clustering Tech

Message Passing Interface (MPI), which originated 20 years ago in 1994, has now received a complete overhaul with version 3.0. Intel’s MPI Library 5.0 implements the changes to the 3.0 standard. Jeff Cogswell highlights the changes for you. The newest version of the Message Passing Interface is now available, and it …

Posted in Build

How to Create Reducers with OpenMP

When multiple threads need to work together to perform a combined mathematical operation such as a sum, one way to avoid race conditions is to use reducers. In this video, Jeff Cogswell shows you how to implement reducers with OpenMP.

Posted in Build

IDF14: Tech Sessions with Parallel Programming Experts

The upcoming Intel Developer Forum (IDF), Sept. 9-11 in San Francisco, is chock-full of sessions that should whet the appetite of the parallel-processing and HPC community. Below is a look at just a sample of what you can expect when you attend this year’s IDF. Technical sessions include: • …

Posted in Design

Accurately Time Your Parallel Loops in OpenMP

When you need to time how long an OpenMP program runs, you can use the wall-clock timer included in the OpenMP library. Jeff Cogswell walks you through the process.

Posted in Build