Models, Frameworks Accelerate Deep Learning for Enterprise Big Data


Deep learning is the cutting edge of AI, but proven data and deployment models, along with readily available frameworks, make it a solid, practical choice for advanced analytics

Deep learning techniques are advancing artificial-intelligence-based analytics far enough that a team of researchers at MIT has succeeded in getting a computer to predict the future.

In December, a team from MIT’s Computer Science and Artificial Intelligence Laboratory published the results of a test demonstrating an algorithm’s ability to analyze a single image from a video and generate a new video simulating the most likely future of the action in that image.

The predictions are short-term and obvious to any human observer, but revolutionary for a computer working from nothing but a single image of, say, a person running around a track, with no personal experience of what might happen next.

Deep learning, which uses layers of neural networks in which each layer builds on the patterns extracted by the layers before it to decode ever-more-complex analyses, is the current state of the art in applied artificial intelligence.
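To make that layered idea concrete, here is a minimal sketch of such a stacked network written with the Keras library; the framework choice, layer sizes and ten-class output are illustrative assumptions, not anything specified in the article.

    # A minimal "deep" stack of neural-network layers (Keras; illustrative only).
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(128, activation='relu', input_shape=(64,)),  # first hidden layer learns simple features
        Dense(64, activation='relu'),                      # deeper layers combine them into abstractions
        Dense(10, activation='softmax'),                   # output layer maps features to 10 classes
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Each layer feeds its output to the next, which is what lets the network as a whole learn progressively more complex representations.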

And artificial intelligence, as everyone from George Lucas to Stephen Hawking has predicted, is the future of digital analysis, especially in business.

That doesn’t mean it’s simple to implement, according to Arjun Bansal, co-founder and VP of algorithms at Nervana Systems, the deep-learning framework developer acquired by Intel.

Deep learning does have enough of a track record, however, and sufficient resources exist to provide models and tools to companies interested in accelerating their big-data analytics efforts, Bansal wrote in January.

There is already quite a library of deep-learning models trained on existing data, Bansal wrote: models that can be applied across a variety of industries and data types, including object localization, video analysis and speech analysis.

That simplifies the ramp-up for companies worried that they might have to gather, process and label vast datasets before their high-performing analytics could learn deeply enough to be useful.

The more training data a deep-learning model can access, however, the better it works. So the best results go to companies that can take models pre-trained on datasets such as ImageNet and refine their 1,000 classes with the company’s own datasets of thousands or tens of thousands of images.
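As a rough illustration of that refinement approach, the sketch below loads a network pre-trained on ImageNet and retrains only a small new classifier on an organization’s own images. The Keras library, the VGG16 architecture and the 12-category output are assumptions made for the example; Bansal does not prescribe a particular toolchain.

    # Hypothetical fine-tuning sketch: reuse ImageNet-trained weights, retrain a new head.
    from keras.applications import VGG16
    from keras.models import Model
    from keras.layers import Dense, Flatten

    # Load convolutional layers pre-trained on ImageNet's 1,000 classes,
    # dropping the original classifier.
    base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers:
        layer.trainable = False        # freeze the pre-trained feature extractor

    # Attach a small new classifier for, say, 12 company-specific categories.
    x = Flatten()(base.output)
    x = Dense(256, activation='relu')(x)
    out = Dense(12, activation='softmax')(x)

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    # model.fit(...) would then run on the organization's own labeled images.

The frozen base contributes general visual features learned from ImageNet’s million-plus images, while the new head learns the company-specific categories from a far smaller dataset.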

Many of the other tips Bansal offers organizations trying to make deep learning work in the enterprise likewise involve building on the frameworks and tools available from Intel to tailor deep-learning deployments and reinforce their ability to scale.

Training in-house staff through the Intel Nervana AI Academy, for example, can help even experienced analytics staff expand their skills with knowledge from related fields such as computational neuroscience and higher-level mathematics. Hiring talent from those fields can also bring in important, transferable skills. So can engaging professional-services organizations from Intel or other providers, which can help make sure the new ground an organization breaks covers territory it will find useful later.

Deep learning is not a rip-and-replace proposition, but it does require careful evaluation of existing infrastructure; the addition of analytics libraries and tools; frameworks based on languages like Python; and processing power supplied either by a scalable HPC cloud service or by an on-premises solution built on the highest-performing chips and server hardware.
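One reason Python-based frameworks ease that evaluation is that the same model code can usually be pointed at different hardware without being rewritten. The TensorFlow 1.x device-placement syntax below is one example, chosen as an assumption since the article names no specific framework.

    # Illustrative only: the same computation can be pinned to a CPU or a GPU.
    import tensorflow as tf

    with tf.device('/gpu:0'):          # swap for '/cpu:0' on CPU-only servers
        a = tf.constant([[1.0, 2.0]])
        b = tf.constant([[3.0], [4.0]])
        product = tf.matmul(a, b)      # runs on whichever device is selected

    with tf.Session() as sess:         # TF 1.x-era API, current when this was written
        print(sess.run(product))

The same pattern is what lets a model developed on a single workstation move to a cloud HPC cluster or high-end on-premises servers as its workload grows.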

Most important in the long term is a specific plan covering the organization’s goals for both its traditional and its AI-based analytics. That means more than a statement of policy, however, Bansal writes.

An effective deep-learning roadmap requires a data strategy that identifies the source, type, processing requirements and potential value of the data, along with a deployment plan that lays out a scalable, coherent set of technology decisions allowing steady, predictable increases in scale, speed and capacity.

Bansal also advises companies to get used to planning for “ludicrous speed.” Deep learning, and AI in general, sits at the cutting edge of high-performance computing, but it has also been proven in applications that include detecting tumors, sifting seismic data to identify new oil fields, improving speech recognition and powering real-time translation, all of which are of more practical interest to business strategists than the ability to predict that a runner taking a flying step on a track will continue to run around the same track.

“Encompassing compute methods like advanced data analytics, computer vision, natural language processing and machine learning, artificial intelligence is transforming the way businesses operate and how people engage with the world,” Diane Bryant, EVP and General Manager of the Data Center Group at Intel, wrote in August 2016 in the announcement of Intel’s acquisition of Nervana. “Machine learning, and its subset deep learning, are key methods for expanding the field of AI.”

“Our mission is no less than to transform all industries with the power of AI and deep learning,” Bansal wrote in January.

Posted on February 13, 2017 by John O'Donnell, Slashdot Media Contributing Editor