Tag Archives: Parallelization
I am an OpenMP evangelist. I use it, and I love it. This semester I spent one week in my advanced architecture class showing how it contributes to the continuance of Moore’s Law. I have also spent a lot of time here at Go Parallel talking about OpenMP, and showing how to get the most […]
Free 20-hour webinar series includes parallel programming, performance optimization, and remote access to advanced servers. Intel partner Colfax Research is offering a free 20-hour, hands-on, in-depth training on parallel programming and performance optimization in computational applications on Intel architecture. The first 2017 run begins January 16. Broadcasts start at 17:00 UTC (9:00 am in […]
I have talked a lot about the parallelization of loops using OpenMP. It is an easy way to improve performance in your applications, especially if you can apply the technique to loops that execute frequently or loops with many iterations. In many cases, OpenMP provides optimized performance with no downside risks. But there are other […]
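The full post isn't reproduced in this excerpt, but the basic loop-parallelization pattern it refers to can be sketched in a few lines of C. The function name here is my own illustration, not from the post; the key point is that every iteration writes a different element, so the iterations are independent and the `parallel for` directive can split them across threads safely.

```c
#include <stddef.h>

/* Scale every element of an array. Each iteration touches only a[i],
   so there are no cross-iteration dependencies and OpenMP can divide
   the iterations among threads. Without -fopenmp the pragma is
   ignored and the loop simply runs serially. */
void scale(double *a, int n, double factor) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        a[i] *= factor;
    }
}
```

As the excerpt notes, the payoff is largest when the loop runs often or has many iterations, since the per-iteration work is what the threads share.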
OpenMP simplifies code parallelization, but can you overdo your use of this valuable tool? In this blog, Slashdot Media Contributing Editor Rick Leinecker creates some gnarly code to see whether it takes a performance hit. I have spent a lot of time here at Go Parallel talking about OpenMP loops. The OpenMP standard provides simple […]
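The excerpt above is cut off before the code, so as a hedged sketch of the "overdoing it" problem: forking and joining a thread team has a fixed cost, and on a tiny loop that cost can exceed the work being shared. The standard `if()` clause on the directive (this example and its threshold are mine, not from the post) lets the runtime fall back to serial execution when the trip count is small.

```c
/* Summing a handful of elements: the per-iteration work is so small
   that spawning threads can cost more than it saves. The if() clause
   tells OpenMP to run the loop serially unless n is large enough to
   be worth parallelizing. The threshold is illustrative. */
long sum_array(const long *a, int n) {
    long total = 0;
    #pragma omp parallel for reduction(+:total) if(n > 10000)
    for (int i = 0; i < n; i++) {
        total += a[i];
    }
    return total;
}
```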
OpenMP can bring amazing performance boosts to your applications. This presentation breaks down OpenMP loops that have no dependencies. It also shows how easy it is to parallelize with OpenMP by using compiler directives.
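The presentation itself isn't embedded in this excerpt, but the distinction it draws — loops with no dependencies versus loops that carry one — can be illustrated briefly. Both functions below are my own examples, not taken from the presentation.

```c
/* No dependencies: each a[i] is computed from a[i] and b[i] only,
   so iterations can run in any order and the directive applies. */
void add_arrays(double *a, const double *b, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        a[i] = a[i] + b[i];
    }
}

/* By contrast, a running (prefix) sum carries a dependency: a[i]
   needs the already-updated a[i-1]. Slapping a plain parallel-for
   directive on this loop would compute wrong answers, so it stays
   serial here. */
void prefix_sum_serial(double *a, int n) {
    for (int i = 1; i < n; i++) {
        a[i] += a[i - 1];
    }
}
```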
A great deal of my personal research is in the area of data compression. I have been doing this type of research for about 20 years. A closely-related topic is data entropy. Data entropy is similar to the thermodynamic entropy that many people think of. The higher the data entropy, the more chaotic and unpredictable […]
What is the impact of Intel’s Threaded Building Blocks for Multiprocessing? Listen to this interview with James Reinders to recap 10 years of TBB. Author and parallel computing expert James Reinders recently retired after a brilliant 25-year career at Intel. Just prior to his retirement, James sat down with Intersect 360’s Addison Snell to discuss […]
So, you’ve started tinkering with OpenMP to help parallelize your code. Now what? This video by Slashdot Media Contributing Editor Rick Leinecker points OpenMP newbies in the right direction to go beyond parallelizing for loops, and demonstrates how to avoid data race conditions while you’re doing it.
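The video isn't reproduced here, but the data-race hazard it warns about is easy to show in miniature. In this sketch (my own example, not from the video), multiple threads would race to increment a shared counter; OpenMP's `atomic` directive makes each increment indivisible so no update is lost.

```c
/* Count the even values in an array across threads. The counter is
   shared by every thread, so a bare count++ inside the parallel loop
   would be a data race: two threads could read the same old value and
   one increment would vanish. The atomic directive serializes just
   that one update. */
long count_even(const int *a, int n) {
    long count = 0;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        if (a[i] % 2 == 0) {
            #pragma omp atomic
            count++;
        }
    }
    return count;
}
```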
Binary searches are orders of magnitude faster than linear searches. In this tutorial, Slashdot Media Contributing Editor Rick Leinecker shows how to speed all manner of searches in your code by parallelizing with OpenMP.
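The tutorial's own code isn't in this excerpt; as a minimal sketch of the general idea, a linear search can be split so each thread scans its own share of the array. The write to the shared result is guarded with a `critical` section so threads don't clobber each other (the function name and the choice to return the earliest match are my own, not necessarily how the tutorial does it).

```c
/* Parallel linear search: each thread scans part of the array.
   When a thread finds the target it updates the shared result
   inside a critical section, keeping the smallest matching index.
   Returns -1 if the target is absent. */
int parallel_find(const int *a, int n, int target) {
    int found = -1;
    #pragma omp parallel for
    for (int i = 0; i < n; i++) {
        if (a[i] == target) {
            #pragma omp critical
            {
                if (found < 0 || i < found) found = i;
            }
        }
    }
    return found;
}
```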
Fixing what happens when simultaneous threads modify the SAME memory.
Parallelizing code drastically speeds execution, but if you’re not careful, data race conditions can produce unwanted results. See how Slashdot Media Contributing Editor Rick Leinecker gets at the problem and fixes it automatically with a reduction clause.
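The video isn't embedded in this excerpt, but the fix it describes — the `reduction` clause — is a standard part of OpenMP and can be sketched in a few lines (the dot-product example is mine, not from the video). Without the clause, every thread would race to update the shared accumulator; with it, each thread accumulates into a private copy that the runtime combines at the end of the loop.

```c
/* Dot product with a reduction. reduction(+:total) gives each thread
   a private zero-initialized copy of 'total' and sums the copies into
   the shared variable when the loop finishes, so no updates are lost
   to a data race. */
double dot(const double *x, const double *y, int n) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < n; i++) {
        total += x[i] * y[i];
    }
    return total;
}
```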