
Applying Scientific Apps to Parallel Programming

To come up with a highly parallel scientific app for modeling seismic activity, developers turned to an important mathematical technique for solving differential equations. Jeff Cogswell discusses how similar functions work well with parallel programming. We recently ran a news story on Go Parallel about a high-performance application called SeisSol. …

Posted in Verify

Vectorization Gets Explicit with Intel’s Updated Parallel Tool

Intel Parallel Studio XE 2015's explicit vectorization feature lets programmers mark the SIMD and other loops that should be split up and run in parallel. Intel has released a new version of its HPC application development suite with support for more hardware and development languages and a simple …

Posted in Verify

IDF14: Tech Sessions with Parallel Programming Experts

The upcoming Intel Developer Forum (IDF), Sept. 9–11 in San Francisco, is chock-full of sessions that should whet the appetite of the parallel processing and HPC community. Below is a look at just a sample of what you can expect when you attend this year's IDF. Technical sessions include: • …

Posted in Design

Charm++ Parallel Programming Language Cools HPC Power Problems

The Charm++ parallel programming model supports load balancing across cores while adjusting core frequencies when temperatures climb. Researchers at the University of Illinois at Urbana-Champaign's Parallel Programming Laboratory have improved their Charm++ parallel programming model and runtime's load balancing system to adjust the balancing based on …

Posted in Build

How to Create Vectorized, Multicore Loops in OpenMP with Ease

OpenMP includes pragma directives that let you create both vectorized and multicore loops, and even use both in a single loop. Jeff Cogswell walks you through how to do it. Regular readers of Go Parallel know that we focus on two main areas of parallel programming: Multicore programming and SIMD …

Posted in Verify

Use GNU C++ and Intel Compilers with OpenMP

Most compilers today support OpenMP. Jeff Cogswell shows you how to compile OpenMP programs using both the Intel and GNU C++ compilers.

Posted in Verify

Stop Threads from Clashing Over Variables in OpenMP

OpenMP lets you mark off blocks of code that will be duplicated across threads. These can be in the form of loops or just simple blocks. To keep the threads from clashing over your data, variables can be given a private copy within each thread. Jeff Cogswell shows you how. Last time we explored a bit of …

Posted in Build

Taking OpenMP Out for a Spin

OpenMP provides a way to write parallel code using pragmas embedded in your C++ code. Jeff Cogswell tries out a simple pragma that results in spawning multiple, identical parallel threads. In my last blog, I briefly introduced OpenMP, which is a technology whereby you can write parallel code in ways …

Posted in Build

Determine Processor SIMD Features at Runtime

The Intel compiler can generate code that behaves differently for different processors. Sometimes you might want to manually check the processor features. Or you might just want to know how the generated code does it. In this video, Jeff Cogswell shows you how to use the CPUID assembly instruction to …

Posted in Build

Timing Matters in Threading Building Blocks

When you want to time how long a set of parallel tasks takes to complete, you want to use the actual time, not the CPU time. And you want the time-measuring mechanism to be thread-safe. Jeff Cogswell shows you how to use the timing classes in Threading Building Blocks to …

Posted in Build