New Deep Learning SDK from Intel: Open-Source Framework and Tools Boost AI

Deep Learning (DL) is the hottest thing in AI; now Intel offers tools to enable HPC and big-data users to reap the benefits DL offers.

Intel has released a series of tools and frameworks designed to make it easier for organizations to expand their use of deep-learning techniques for everything from advanced analytics to voice-response interfaces for consumer electronics.

The version of machine learning called deep learning seems to have helped turn decades of artificial-intelligence research into a genuine, practical, deployable technology.

Amazon’s Echo smart speaker and Apple’s Siri electronic virtual assistant use deep-learning techniques to recognize commands. Facebook uses a deep-learning system to answer simple customer-support questions.

When Google applied deep learning to its Android voice-recognition service, the error rate dropped by 25 percent almost immediately, according to a Jan. 30, 2017 Harvard Business Review article.

During 2016, an integration, training and research project between Google Brain and Google Translate gave Google Translate as large a gain in speed and accuracy as it had achieved, collectively, in its entire history, according to Google engineers cited in a Dec. 14, 2016 New York Times story.

Deep learning is a subset of machine learning that uses many-layered neural networks to extract successive sets of characteristics, or features, from data points, allowing the system to recognize the answers most likely to be correct and focus its analysis on them.
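To make that layered feature extraction concrete, here is a minimal sketch of a deep network's forward pass in plain Python with NumPy. It is purely illustrative: the layer sizes, ReLU and softmax choices are arbitrary assumptions, and it has nothing to do with any specific Intel tool described here.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Three weight matrices = three layers; each layer re-encodes its input
# into a new, smaller set of characteristics (features).
layers = [
    rng.standard_normal((16, 8)),   # raw input (16 values) -> 8 features
    rng.standard_normal((8, 4)),    # 8 features -> 4 higher-level features
    rng.standard_normal((4, 2)),    # 4 features -> 2 candidate answers
]

x = rng.standard_normal(16)         # one data point
for w in layers[:-1]:
    x = relu(x @ w)                 # successive feature extraction
scores = softmax(x @ layers[-1])    # probabilities over the possible answers

print("most likely answer:", scores.argmax(), "probability:", scores.max())
```

In a real deep-learning system the weights are learned by training on large labeled data sets rather than drawn at random, which is exactly the workload the frameworks and hardware discussed below are built to handle.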

Google is one of many online search and social-networking companies converting to deep-learning-based artificial intelligence to help keep up with the volume and complexity of the routine questions and problems they’re asked to solve every day.

That added speed, accuracy and insight has fired the imagination of advocates of big data and other advanced analytic methods. They want systems that can handle ever-larger data sets because those systems have been trained, by processing equally large data sets, to identify more quickly which bits of data really count.

“What we really want is not just a bunch of bits on disk that we can process,” according to a Jan. 30 Datanami story quoting legendary distributed-computing researcher Jeff Dean, who is currently a Google Fellow. “What we want [is] to be able to understand the data. We want to be able to take the data that our products or our systems can generate, and then build interesting levels of understanding.”

Success in deep learning requires high-performance parallel computing systems, however, and those systems are most readily available to organizations able to manage frameworks such as the open-source TensorFlow. Such applications often run on specialty hardware, such as the systems Google used to advance Translate after deciding that getting performance high enough required not only exascale computing networks but also a new kind of chip called a “tensor processing unit,” according to the NYT.

Intel pushing deep learning into shallower waters

Intel is pushing back at that gentrification of artificial intelligence with a combination of new processors, new frameworks and new tools designed to make deep-learning applications accessible to a wider variety of organizations.

More recently, it released a series of open-source frameworks and tools, including the Intel Deep Learning SDK, a free package aimed at data scientists and software developers.

The SDK is designed to make it simpler to prepare training data, design models and train models for deep-learning applications, and to create automated deep-learning experiments and visualizations.

It comes with a ready-to-run version of the Intel Distribution for Caffe deep-learning framework, but will soon gain support for TensorFlow and other deep-learning frameworks. It is available for download from Intel.
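For a rough sense of what working with a Caffe-based model looks like, the sketch below uses Caffe's standard Python interface (pycaffe) to load a trained network and run inference on one input. The file names and the "data" blob name are placeholders for a model you have already trained, and this is generic Caffe usage rather than anything specific to Intel's distribution or its SDK.

```python
import numpy as np
import caffe  # Caffe's Python interface; Intel's distribution exposes the same API

# Placeholder file names for an already-trained model.
MODEL_DEF = "deploy.prototxt"       # network architecture
MODEL_WEIGHTS = "model.caffemodel"  # trained weights

caffe.set_mode_cpu()                                   # run on the CPU
net = caffe.Net(MODEL_DEF, MODEL_WEIGHTS, caffe.TEST)  # load the network in inference mode

# Feed one dummy input shaped like the network's input ("data") blob.
input_blob = net.blobs["data"]
input_blob.data[...] = np.random.rand(*input_blob.data.shape)

output = net.forward()                        # run the forward pass
probs = output[next(iter(output))].flatten()  # scores from the first output blob
print("predicted class:", probs.argmax())
```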

Intel also announced a deep-learning library called BigDL that is designed to run on Apache Spark, for use in big-data implementations and by data scientists building deep-learning networks.

The library is designed to create an efficient, large-scale distributed-computing platform that can be used either as a dedicated deep-learning implementation, or as a unifying analytics platform bringing Hadoop, Spark and other systems together for data storage, processing, mining and deep-learning workloads.
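The sketch below shows what a small BigDL training job on Spark might look like using the library's Python API. The module paths, class names and the Optimizer arguments are recalled from the early (0.x) BigDL releases and should be treated as assumptions; consult the BigDL documentation for the exact API. The toy data and network sizes are arbitrary.

```python
import numpy as np
from pyspark import SparkContext
from bigdl.util.common import init_engine, create_spark_conf, Sample
from bigdl.nn.layer import Sequential, Linear, ReLU, LogSoftMax
from bigdl.nn.criterion import ClassNLLCriterion
from bigdl.optim.optimizer import Optimizer, SGD, MaxEpoch

sc = SparkContext(conf=create_spark_conf())  # Spark context configured for BigDL
init_engine()                                # initialize BigDL's compute engine

# Toy data: an RDD of (features, label) pairs wrapped as BigDL Samples.
def to_sample(_):
    features = np.random.rand(8).astype(np.float32)
    label = np.array([1.0 if features.sum() > 4 else 2.0])  # labels are 1-based
    return Sample.from_ndarray(features, label)

train_rdd = sc.parallelize(range(1000)).map(to_sample)

# A small feed-forward network: 8 inputs -> 16 hidden units -> 2 classes.
model = (Sequential()
         .add(Linear(8, 16))
         .add(ReLU())
         .add(Linear(16, 2))
         .add(LogSoftMax()))

optimizer = Optimizer(
    model=model,
    training_rdd=train_rdd,
    criterion=ClassNLLCriterion(),
    optim_method=SGD(learningrate=0.01),
    end_trigger=MaxEpoch(2),
    batch_size=32,
)
trained_model = optimizer.optimize()  # distributed training across the Spark cluster
```

The point of the design is visible in the sketch: the training data lives in an ordinary Spark RDD, so the same cluster that stores and processes big data can also train the network, without exporting the data to a separate deep-learning system.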

“AI is a huge focus right now and this is a great on-ramp to deep learning frameworks,” according to a Feb. 8 blog by Chuck Freedman, chief developer advocate for AI and Analytics solutions at Intel. “Through its ability to simplify the development, training, deployment, and optimization of deep learning solutions, the SDK can greatly accelerate the path to innovative AI solutions.”

Hardware to support the software

Last year Intel announced that the next major version of its Xeon processor, the Skylake version due during the first half of this year, will handle general workloads but will also be optimized for artificial intelligence.

A later version, called Knights Crest, will integrate Xeon processors with technology from Nervana, a recent Intel acquisition that specializes in artificial intelligence.

Intel also said a new generation of its Xeon Phi co-processor, which can handle the work of both CPUs and GPUs, will be available this year with a four-fold improvement in performance on deep-learning applications.

It also announced a partnership with Google to improve and expand enterprise use of high-performance cloud computing, especially for artificial intelligence applications.

Posted on February 16, 2017 by John O'Donnell, Slashdot Media Contributing Editor