But read an interesting article on Wired that kind of answers some questions
http://www.wired.com/wiredenterprise/2013/05/hinton/
Really intrigued to see what they could do going from 1 million to 1 trillion nodes. That's a pretty big jump.
He ended up working with one of Google's top engineers to build the world's largest neural network: a kind of computer brain that can learn about reality in much the same way that the human brain learns new things. Ng's brain watched YouTube videos for a week and taught itself which ones were about cats. It did this by breaking down the videos into a billion different parameters and then teaching itself how all the pieces fit together.
But there was more. Ng built models for processing the human voice and Google Street View images. The company quickly recognized this work's potential and shuffled it out of X Labs and into the Google Knowledge Team. Now this type of machine intelligence, called deep learning, could shake up everything from Google Glass to Google Image Search to the company's flagship search engine.
It's the kind of research that a Stanford academic like Ng could only get done at a company like Google, which spends billions of dollars on supercomputer-sized data centers each year. "At the time I joined Google, the biggest neural network in academia was about 1 million parameters," remembers Ng. "At Google, we were able to build something one thousand times bigger."
Ng stuck around until Google was well on its way to using his neural network models to improve a real-world product: its voice recognition software. But last summer, he invited an artificial intelligence pioneer named Geoffrey Hinton to spend a few months in Mountain View tinkering with the company's algorithms. When Android's Jelly Bean release came out last year, these algorithms cut its voice recognition error rate by a remarkable 25 percent. In March, Google acquired Hinton's company.
...
It typically takes a large number of computers sifting through a large amount of data to train a neural network model. The YouTube cat model, for example, was trained on 16,000 chip cores. But once that was hammered out, it took just 100 cores to spot cats on YouTube.
Google's data centers are based on Intel Xeon processors, but the company has started to tinker with GPUs because they are so much more efficient at this neural network processing work, Hinton says.
Google is even testing out a D-Wave quantum computer, a system that Hinton hopes to try out in the future.
But before then, he aims to test out his trillion-node neural network. "People high up in Google I think are very committed to getting big neural networks to work very well," he says.