Tag Archives: OpenMP

Data Races: What They Are, How to Fix Them

I have talked a lot about the parallelization of loops using OpenMP. It is an easy way to improve performance in your applications, especially if you can apply the technique to loops that run often or loops with many iterations. In many cases, OpenMP delivers better performance with no downside risk. But there are other […]
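
The excerpt stops short of code, but a minimal sketch of the problem, a shared counter that every thread updates, with one possible fix using an atomic update, might look like the following (illustrative only, not code from the article; compile with -fopenmp):

    #include <omp.h>
    #include <cstdio>

    int main() {
        const int N = 1000000;
        long racy = 0, safe = 0;

        // Data race: every thread increments the same shared variable
        // with no synchronization, so updates are lost.
        #pragma omp parallel for
        for (int i = 0; i < N; ++i)
            racy++;

        // One fix: make each increment atomic so no update is lost.
        #pragma omp parallel for
        for (int i = 0; i < N; ++i) {
            #pragma omp atomic
            safe++;
        }

        std::printf("racy = %ld (likely wrong), safe = %ld (always %d)\n",
                    racy, safe, N);
        return 0;
    }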

What is the Effect of Simultaneous OpenMP Loops?

OpenMP simplifies code parallelization, but can you overdo your use of this valuable tool? In this blog, Slashdot Media Contributing Editor Rick Leinecker creates some gnarly code to see if it causes a performance hit. I have spent a lot of time here at Go Parallel talking about OpenMP loops. The OpenMP standard provides simple […]
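
The blog's "gnarly code" is not reproduced in this excerpt. As a rough sketch of what simultaneous (nested) OpenMP loops look like, and why they can oversubscribe a machine, something along these lines illustrates the idea (the thread counts and loop bounds are made up for illustration):

    #include <omp.h>
    #include <cstdio>

    int main() {
        omp_set_nested(1);  // allow parallelism inside a parallel region

        #pragma omp parallel for num_threads(4)
        for (int i = 0; i < 4; ++i) {
            // Each outer iteration spawns its own inner team; with 4 x 4
            // threads the machine can be oversubscribed, which is exactly
            // the kind of effect such a test probes.
            #pragma omp parallel for num_threads(4)
            for (int j = 0; j < 4; ++j) {
                std::printf("outer %d, inner %d, thread %d\n",
                            i, j, omp_get_thread_num());
            }
        }
        return 0;
    }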

Breaking Down OpenMP Loops

OpenMP can bring amazing performance boosts to your applications. This presentation breaks down OpenMP loops that have no dependencies. It also shows how easy it is to parallelize with OpenMP by using compiler directives.
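
As a rough illustration of the kind of dependency-free loop the presentation covers (this sketch is mine, not taken from the video), a single compiler directive is enough because each iteration writes only its own element:

    #include <omp.h>
    #include <vector>
    #include <cstdio>

    int main() {
        const int n = 1000000;
        std::vector<double> a(n), b(n), c(n);
        for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2.0 * i; }

        // No iteration reads or writes another iteration's data,
        // so the directive alone parallelizes the loop safely.
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];

        std::printf("c[n-1] = %f\n", c[n - 1]);
        return 0;
    }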

Improving Data Compression: a Parallel Algorithm for “Shannon Entropy”

A great deal of my personal research is in the area of data compression. I have been doing this type of research for about 20 years. A closely-related topic is data entropy. Data entropy is similar to the thermodynamic entropy that many people think of. The higher the data entropy, the more chaotic and unpredictable […]
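
The article's algorithm is not shown in this excerpt. As one hedged sketch of parallelizing a Shannon entropy calculation with OpenMP (the array-section reduction on the histogram needs OpenMP 4.5 or later, and the input data here is made up), it could look like this:

    #include <omp.h>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Shannon entropy in bits per symbol: H = -sum over symbols of p * log2(p).
    double shannon_entropy(const std::vector<unsigned char>& data) {
        long long hist[256] = {0};

        // Count symbol frequencies in parallel; the array-section reduction
        // gives each thread a private histogram and merges them at the end.
        #pragma omp parallel for reduction(+ : hist[:256])
        for (long long i = 0; i < (long long)data.size(); ++i)
            ++hist[data[i]];

        double h = 0.0;
        for (int s = 0; s < 256; ++s) {
            if (hist[s] == 0) continue;
            double p = (double)hist[s] / data.size();
            h -= p * std::log2(p);
        }
        return h;
    }

    int main() {
        std::vector<unsigned char> data(1 << 20);
        for (size_t i = 0; i < data.size(); ++i)
            data[i] = (unsigned char)(i % 7);   // toy low-entropy input
        std::printf("entropy = %f bits/symbol\n", shannon_entropy(data));
        return 0;
    }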

Best Approaches to Multithreading with OpenMP

Multithreading has a lot of facets to cover to be successful. In this video, Slashdot Media Contributing Editor Rick Leinecker examines several targets and methodologies to consider as you get underway.

Using OpenMP to Fine Tune Vectorization

Adopting OpenMP can have significant payoffs, for vectorization and more. Everyone wants their programs to execute fast and smoothly. For instance, Microsoft Word does complex image manipulation easily without noticeable delay. The march of software toward even greater levels of performance helps satisfy our need for speed. This blog talks about a technology known as […]
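
The blog's own code is not in this excerpt, but a small sketch of explicit vectorization with the omp simd directive gives the flavor (the saxpy routine and sizes are just illustrative; compile with -fopenmp or -fopenmp-simd):

    #include <cstdio>

    // omp simd asks the compiler to vectorize this loop explicitly
    // instead of relying only on auto-vectorization heuristics.
    void saxpy(float a, const float* x, float* y, int n) {
        #pragma omp simd
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1024;
        float x[n], y[n];
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy(3.0f, x, y, n);
        std::printf("y[0] = %f\n", y[0]);   // 5.0
        return 0;
    }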

OpenMP: Past the Basics

So, you’ve started tinkering with OpenMP to help parallelize your code. Now what? This video by Slashdot Media Contributing Editor Rick Leinecker points OpenMP newbies in the right direction to go beyond parallelizing for loops, and demonstrates how to avoid data race conditions while you’re doing it.
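
The video's examples are not reproduced here. As one hedged illustration of going beyond parallel for, the sketch below uses sections to run independent blocks of work and a critical region to keep the shared update race-free (the work itself is a made-up stand-in):

    #include <omp.h>
    #include <cstdio>

    int main() {
        int total = 0;

        // Sections run independent blocks on different threads,
        // one step beyond parallelizing a single for loop.
        #pragma omp parallel sections
        {
            #pragma omp section
            {
                int partial = 0;
                for (int i = 0; i < 1000; ++i) partial += i;
                #pragma omp critical       // serialize the shared update
                total += partial;
            }
            #pragma omp section
            {
                int partial = 0;
                for (int i = 1000; i < 2000; ++i) partial += i;
                #pragma omp critical
                total += partial;
            }
        }

        std::printf("total = %d\n", total);  // 1999000
        return 0;
    }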

Parallelizing Binary Searches

Binary searches are orders of magnitude faster than linear searches. In this tutorial, Slashdot Media Contributing Editor Rick Leinecker shows how to speed up all manner of searches in your code by parallelizing with OpenMP.
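
The tutorial's code is not in this excerpt. One common pattern, sketched here only as an illustration, is to keep each binary search sequential but run a whole batch of independent lookups in parallel (the table, keys, and sizes are invented for the example):

    #include <omp.h>
    #include <algorithm>
    #include <vector>
    #include <cstdio>

    int main() {
        // Sorted table to search and a batch of keys to look up.
        std::vector<int> table(1000000);
        for (size_t i = 0; i < table.size(); ++i) table[i] = (int)(2 * i);

        std::vector<int> keys(100000);
        for (size_t i = 0; i < keys.size(); ++i) keys[i] = (int)(i * 17);

        std::vector<char> found(keys.size());

        // Each lookup is independent, so the batch of binary searches
        // parallelizes cleanly across threads.
        #pragma omp parallel for
        for (long long k = 0; k < (long long)keys.size(); ++k)
            found[k] = std::binary_search(table.begin(), table.end(), keys[k]);

        long long hits = 0;
        for (char f : found) hits += f;
        std::printf("%lld of %zu keys found\n", hits, keys.size());
        return 0;
    }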

MIT Lab’s “Milk” C/C++ Extension to Speed Big Data Jobs

OpenMP extension dubbed “Milk” designed to speed big data processing by 3-4x without changing languages

Big Data developers need not turn to new languages such as “R” to get the job done, thanks to C++ extensions that MIT’s Computer Science and Artificial Intelligence Laboratory has made by augmenting OpenMP, according…

Avoiding Data Races with Reducers

Fixing what happens when simultaneous threads modify the SAME memory.

Parallelizing code drastically speeds execution, but if you’re not careful, data race conditions can produce unwanted results. See how Slashdot Media Contributing Editor Rick Leinecker gets at the problem and fixes it automatically with a reduction clause.
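
The video's example is not reproduced here, but a minimal sketch of the reduction clause doing that automatic fix might look like this (the harmonic-sum loop is just a stand-in workload):

    #include <omp.h>
    #include <cstdio>

    int main() {
        const int n = 1000000;
        double sum = 0.0;

        // reduction(+:sum) gives each thread a private copy of 'sum' and
        // combines the copies at the end, so threads never race on the
        // same memory location.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 0; i < n; ++i)
            sum += 1.0 / (i + 1);

        std::printf("harmonic sum = %f\n", sum);
        return 0;
    }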
