Most Recent Build Posts

OpenMP Atomic Operations Relieve Race Conditions

Simple operations such as reading a variable and then modifying it based on the value can be problematic in parallel code. Atomic operations in OpenMP help alleviate the race conditions that can result. Jeff Cogswell shows you how. Tasks that seem trivial in serial programming are sometimes problematic in parallel …

Posted in Build

Speculative Locks Are Powerful When Used with Caution

Speculative locks work closely with Transactional Synchronization Extensions (TSX), allowing multiple threads to acquire a lock simultaneously. The technology is difficult to use and requires careful consideration of when to use it. Jeff Cogswell gives you the details. Back in April, I talked about Intel’s Transactional Synchronization Extensions (TSX) …

Posted in Build

The Future of Reducers in OpenMP

The Intel compiler allows you to perform common combining operations, such as finding a minimum or maximum, with reducers. Jeff Cogswell explores how to use min and max reducers, and looks at what the future holds for reducers. In a recent video, I explained how to use reducers in OpenMP. Using OpenMP pragmas, you can declare that a …

Posted in Build

New MPI 3.0 Features Embrace Parallel, Clustering Tech

Message Passing Interface (MPI), which originated 20 years ago in 1994, has now received a complete overhaul with version 3.0. Intel’s MPI Library 5.0 implements the changes to the 3.0 standard. Jeff Cogswell highlights the changes for you. The newest version of the Message Passing Interface is now available, and it …

Posted in Build

How to Create Reducers with OpenMP

When multiple threads need to work together to perform a combined mathematical operation such as a sum, one way to avoid race conditions is using reducers. In this video, Jeff Cogswell shows you how to implement reducers with OpenMP.

Posted in Build

IDF14: Tech Sessions with Parallel Programming Experts

The upcoming Intel Developer Forum (IDF) Sept 9-11 in San Francisco is chock-full of sessions that should whet the appetite of the parallel-processing and HPC community. Below is a look at just a sample of what you can expect when you attend this year’s IDF. Technical sessions include: • …

Posted in Design

Accurately Time Your Parallel Loops in OpenMP

When you need to time how long an OpenMP program runs, you can use a wall clock timer that is included as part of the OpenMP library. Jeff Cogswell walks you through the process.

Posted in Build

Charm++ Parallel Programming Language Cools HPC Power Problems

The Charm++ parallel programming model supports load balancing across cores while adjusting core frequencies when temperatures run high. Researchers at the University of Illinois at Urbana-Champaign’s Parallel Programming Laboratory have improved their Charm++ parallel programming model and runtime’s load balancing system to adjust the balancing based on …

Posted in Build

Stop Threads from Clashing Over Variables in OpenMP

OpenMP lets you mark blocks of code, either loops or plain structured blocks, to be executed by multiple threads. To keep those threads from clashing over your data, variables can be given a private copy within each thread. Jeff Cogswell shows you how. Last time we explored a bit of …

Posted in Build

Taking OpenMP Out for a Spin

OpenMP provides a way to write parallel code using pragmas embedded in your C++ code. Jeff Cogswell tries out a simple pragma that results in spawning multiple, identical parallel threads. In my last blog, I briefly introduced OpenMP, which is a technology whereby you can write parallel code in ways …

Posted in Build