
Intel is making a huge push into AI and deep learning, and intends to build custom variants of its Xeon Phi hardware to compete in these markets. Several months ago, the Santa Clara corporation bought Nervana, an AI startup, and this new announcement is seen as building on that momentum. AI and deep learning have become huge focuses of major companies in the past few years — Nvidia, Google, Microsoft, and a number of smaller firms are all jockeying for position, chasing breakthroughs, and building their own custom silicon solutions.

The upcoming Knights Mill is still pretty hazy, but Intel has stated that the chip will be up to 4x faster than existing Knights Landing hardware. Right now, the company is working on three separate forays into the AI / deep learning market. First up, there's Lake Crest. This product is based on Nervana technology that existed prior to the Intel purchase. Nervana was working on an HBM-equipped chip with up to 32GB of memory, and that's the product Intel is talking about rolling out to the wider market in the first half of 2017. Lake Crest will be followed by Knights Crest, a chip that takes Nervana's technology and implements it side-by-side with Intel Xeon processors.

"The technology innovations from Nervana will be optimized specifically for neural networks to deliver the highest performance for deep learning, as well as unprecedented compute density with high-bandwidth interconnect for seamless model parallelism," Intel CEO Brian Krzanich wrote in a recent blog post. "We expect Nervana's technologies to produce a breakthrough 100-fold increase in performance in the next three years to train complex neural networks, enabling data scientists to solve their biggest AI challenges faster."

Does Intel need a GPU?

To date, the company that has done well with AI has been Nvidia, whose GPU technology is powering a great deal of cutting-edge R&D. Claims that Intel needs a specific GPU architecture to compete, however, are mistaken. GPUs are good at these kinds of computing projects because the projects map well onto the hardware we use for gaming — not because there's something magic about graphics processors that makes them uniquely and specifically suited to the tasks. Put differently, you could build a GPU-style compute engine without any of the IP blocks or hardware that transform it into a graphics card.


Xeon Phi began life as a GPU (albeit a GPU with a very different focus than cards from AMD or Nvidia) and was reinvented into a vector processor. There's nothing to say Intel can't bend it back toward that role, possibly by building lower-precision registers or offering them as options on certain types of hardware. Deep learning and AI typically use much less precision than other types of workloads; Intel CPUs support the IEEE 754 floating point standard and can offer up to 80 bits of precision, while most deep learning and AI workloads are done with 8-bit or 16-bit calculations.
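To illustrate why low precision is usually enough, here is a minimal sketch (not Intel's or Nervana's actual method) of quantizing 32-bit floating-point weights down to 8-bit integers; the values and the scaling scheme are hypothetical, but they show that the round-trip error stays small while memory use drops 4x:

```python
import numpy as np

# Hypothetical neural-network weights in 32-bit floating point.
weights = np.array([0.12, -0.53, 0.98, -0.07], dtype=np.float32)

# Symmetric quantization: map the largest absolute value to 127,
# the maximum magnitude representable in a signed 8-bit integer.
scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# Recover approximate float values from the 8-bit representation.
dequantized = quantized.astype(np.float32) * scale

# The worst-case error is bounded by about half the quantization step.
max_error = np.max(np.abs(weights - dequantized))
print(quantized)
print(max_error)
```

For inference-style workloads, errors on this order rarely change a network's predictions, which is why hardware vendors chase 8-bit and 16-bit throughput rather than extended precision.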

AMD is also dipping a toe into this business area via GCN, but we don't know yet if deep learning and AI will have an impact on the company's upcoming Vega architecture. Most of AMD's focus remains on the gaming market, where its console wins have been critical to shoring up the company's business.