Image processing can be significantly accelerated with OpenMP. Rick Leinecker shows you how, walking you through the OpenMP techniques that speed up the processing and analyzing an iterative OpenMP construct. Because OpenMP runs operations concurrently across processors, synchronization is essential to avoid race conditions. In the demonstration program, a histogram of the sample image is calculated in parallel, and OpenMP's locking mechanism is used to prevent race conditions.