
AI pioneer Hinton working on trillion-node neural network at Google

Status
Not open for further replies.

Kinitari

Black Canada Mafia
But read an interesting article on Wired that kind of answers some questions

http://www.wired.com/wiredenterprise/2013/05/hinton/

He ended up working with one of Google’s top engineers to build the world’s largest neural network: a kind of computer brain that can learn about reality in much the same way that the human brain learns new things. Ng’s brain watched YouTube videos for a week and taught itself which ones were about cats. It did this by breaking down the videos into a billion different parameters and then teaching itself how all the pieces fit together.

But there was more. Ng built models for processing the human voice and Google StreetView images. The company quickly recognized this work’s potential and shuffled it out of X Labs and into the Google Knowledge Team. Now this type of machine intelligence — called deep learning — could shake up everything from Google Glass to Google Image Search to the company’s flagship search engine.

It’s the kind of research that a Stanford academic like Ng could only get done at a company like Google, which spends billions of dollars on supercomputer-sized data centers each year. “At the time I joined Google, the biggest neural network in academia was about 1 million parameters,” remembers Ng. “At Google, we were able to build something one thousand times bigger.”

Ng stuck around until Google was well on its way to using his neural network models to improve a real-world product: its voice recognition software. But last summer, he invited an artificial intelligence pioneer named Geoffrey Hinton to spend a few months in Mountain View tinkering with the company’s algorithms. When Android’s Jelly Bean release came out last year, these algorithms cut its voice recognition error rate by a remarkable 25 percent. In March, Google acquired Hinton’s company.

...

It typically takes a large number of computers sifting through a large amount of data to train the neural network model. The YouTube cat model, for example, was trained on 16,000 chip cores. But once that was hammered out, it took just 100 cores to be able to spot cats on YouTube.

Google’s data centers are based on Intel Xeon processors, but the company has started to tinker with GPUs because they are so much more efficient at this neural network processing work, Hinton says.

Google is even testing out a D-Wave quantum computer, a system that Hinton hopes to try out in the future.

But before then, he aims to test out his trillion-node neural network. “People high up in Google I think are very committed to getting big neural networks to work very well,” he says.

Really intrigued to see what they could do going from 1 million to 1 trillion nodes. That's a pretty big jump.
 

Kinitari

Black Canada Mafia
Split bump for a cool article.
Thanks~

So Google = Skynet?

But seriously, that's one hell of a project. Leave it to Google to attempt it.

Again, one of the reasons I'm really fond of the company - they like to push these sorts of boundaries.

Considering what we got out of their first real deep learning/neural network venture (huge reduction in voice recognition errors, for one - as well as an ever increasing quality of image recognition) - I can only imagine what something 1000x as powerful could accomplish.

I'm really interested in how something like the ability to understand the nuance of documents can affect searching.
 

Darkkn

Member
How is a neural network different from a data center? Is it just a way to say 'we got a shit ton of CPU cores on this mofo'? Sounds really cool, I like Googly for stuff like this.
 

Tacitus_

Member
How is a neural network different from a data center? Is it just a way to say 'we got a shit ton of CPU cores on this mofo'? Sounds really cool, I like Googly for stuff like this.

It's a learning computer. After teaching it, you can feed data into it and it'll give out connections in that data.
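A tiny sketch of that "teach it, then feed it data" idea. This is purely illustrative (my own toy, nothing like Google's actual systems): a single artificial neuron that adjusts its connection weights from labeled examples, and can then classify inputs it hasn't memorized.

```python
# Toy perceptron: "teach" it with examples, then it classifies new inputs.
# Real deep networks stack millions of units like this.

def train(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, adjusted during learning
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # nudge the weights toward the right answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Teach it a simple rule (logical AND), then query it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, a, c) for a, c in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]
```

The point is that nobody writes the AND rule into the code; the network finds weights that encode it from the examples alone.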
 

Cyan

Banned
So, the figure I remember for the human brain is something like 100 billion neurons. I don't know how similar neural nodes are to human neurons (which can have 1000s of synaptic connections), but that at least suggests that a network like this could approach or surpass human levels of processing power. Which doesn't necessarily mean much as far as AI is concerned, but it's still... interesting.
 

Tacitus_

Member
So, the figure I remember for the human brain is something like 100 billion neurons. I don't know how similar neural nodes are to human neurons (which can have 1000s of synaptic connections), but that at least suggests that a network like this could approach or surpass human levels of processing power. Which doesn't necessarily mean much as far as AI is concerned, but it's still... interesting.

From what I've read, neural network nodes don't map 1:1 to (human) brain neurons. Still, they should be able to do something amazing with this much power.
 

Randdalf

Member
I don't know how complex these are... basic neural nets are pretty dumb and only really apply to certain problems, in particular, pattern recognition.

From what I've read, neural network nodes don't map 1:1 to (human) brain neurons. Still, they should be able to do something amazing with this much power.

I've been told by one of my lecturers (a computational neuroscientist), that you'd need a supercomputer to accurately model a single neuron in the human brain. That would be a full biological simulation though, which isn't exactly necessary.
 

SJRB

Gold Member
A trillion-node network, I can't even comprehend this.

I'd love to see a documentary about this. Or pictures of what this machine looks like.
 

RJT

Member
I've taken the AI Coursera class by Ng, and the simplicity of neural networks blew my mind. I even did a test to detect the worst comments on news sites, with pretty funny results.

Computers have come such a long way since the nineties...
 

Kinitari

Black Canada Mafia
How is a neural network different from a data center? Is it just a way to say 'we got a shit ton of CPU cores on this mofo'? Sounds really cool, I like Googly for stuff like this.


A data center is essentially a huge store of information (well, they actually do a lot more, but I'll keep it simple). A simple way to think about it is as a building full of Excel spreadsheets (they aren't actually using Excel spreadsheets, but you get the idea). People can then 'query' these data centers, and they will do their best to get you relevant information.

For example, let's say I query "pictures of cats riding roombas".

A regular-ass Google query (something that is still extremely sophisticated) would look for images with labels/titles/keywords near them that match 'cat(s)', 'riding' and 'roomba(s)', doing its best to put the most likely results (probably something titled "my cat riding a roomba") near the top.
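That keyword-matching approach can be sketched in a few lines. This is a made-up toy of my own (real search engines use proper stemming, ranking signals, link analysis, and much more), just to show the flavor of matching query terms against text near an image:

```python
# Toy keyword matcher: score each image by how many query terms
# appear in its title/label text. No neural network involved.

def keyword_score(query, metadata):
    # crude "stemming": strip trailing 's' so 'cats' matches 'cat'
    # (real engines do far more sophisticated normalization)
    stem = lambda w: w.rstrip('s')
    terms = {stem(w) for w in query.lower().split()}
    words = {stem(w) for w in metadata.lower().split()}
    return len(terms & words)

images = {
    "cat.jpg": "my cat riding a roomba",
    "dog.jpg": "dog chasing mailman",
    "vac.jpg": "roomba review 2013",
}
query = "cats riding roombas"
ranked = sorted(images, key=lambda k: keyword_score(query, images[k]),
                reverse=True)
print(ranked)  # cat.jpg ranks first -- its title matches 3 query terms
```

Notice it only ever looks at the words around the image, never at the pixels, which is exactly the limitation the deep learning approach below gets past.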

Now, what neural networks and deep learning do is entirely different. Again, I am going to simplify it, this time because I really don't have the intelligence to grasp all its intricacies, let alone explain them. But for the same query of "pictures of cats riding roombas":

It would 'look' at the images themselves. It would have already taught itself what a roomba looks like from all sorts of angles, what a cat looks like from all sorts of angles, what riding could/should mean, and what that sentence would mean in a picture. It would then find an actual image of a cat riding a roomba, regardless of what it was titled or what words surround it.

How does it do this? It's extremely complicated, and again I'm not qualified to give you a good answer, but basically: a neural network is modeled after biological systems of neurons. These are massively parallel systems where (in the human brain, for example) billions of nodes are connected to billions of other nodes through trillions of analog connections. Simulating this is extremely difficult, but it is a venture a lot of different companies are pursuing.
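As a rough illustration of the node-and-connection idea (my own toy numbers, not anything from the article): each node takes a weighted sum of the nodes feeding into it and squashes the result, and a 'layer' is just many such nodes computing in parallel.

```python
# Minimal sketch of nodes and connections in a layered network.
# Weights and sizes here are invented for illustration only.
import math

def sigmoid(x):
    # squashing function: maps any input to a value between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each row of `weights` is one node's connections to the layer below;
    # all nodes in the layer could run in parallel
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# 3 input nodes -> 2 hidden nodes -> 1 output node
hidden = layer([0.5, -1.0, 2.0],
               weights=[[0.1, 0.4, -0.2], [0.7, -0.3, 0.5]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.8]], biases=[-0.1])
print(output)  # a single activation between 0 and 1
```

Scale that structure up to a trillion connections and you can see why the training runs described below need thousands of processor cores.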

Last year Google built a practical 1-million-node system, one of the largest around. This system used 16,000 processor cores, and they basically just told it to watch a shitload of YouTube videos. It did, and it came out the other end 'categorizing' things: it took stills of human faces, cats and other things and put them into their own categories, without being told that these things were similar or even what they were. It was just able to recognize patterns.

The 'learning' process took almost all the power of the system, but running the trained network apparently only takes 100 cores (I had no idea about this until this article, but it makes sense). That technology was folded into Google's products, and one of the great things that came out of it was a huge reduction in voice recognition errors, around 25% per the article, which is why Google's voice recognition tech is so impressive.

Hinton is working on a 1 trillion node system.
 

Sol..

I am Wayne Brady.
This article answers a thread made last month or so asking why google was hiring all these AI dudes and I almost called it.
 

Timedog

good credit (by proxy)
So, the figure I remember for the human brain is something like 100 billion neurons. I don't know how similar neural nodes are to human neurons (which can have 1000s of synaptic connections), but that at least suggests that a network like this could approach or surpass human levels of processing power. Which doesn't necessarily mean much as far as AI is concerned, but it's still... interesting.

SAME AMOUNT OF STARS IN OUR GALAXY! THE UNIVERSE IS ALIVE ON A FANTASTIC LEVEL! PHYSICAL MATTER IS THE GOD YAHWEH WHO SPEAKS WITH INNUMERABLE TONGUES!

 

Kinitari

Black Canada Mafia
This article answers a thread made last month or so asking why google was hiring all these AI dudes and I almost called it.
That was also my thread! I'm still not entirely sure what they're aiming to do with the AI players they have hired, aside from the usual "make our services better" stuff (which is awesome) - but I imagine something like a 1-trillion-node neural network would produce unique results.

I guess Google doesn't really know what they could do with it either, not until they do it.
 

Kinitari

Black Canada Mafia
So it's a learning computer?

That's a simple way to put it. Neural networks/deep learning systems do some things much better than 'regular' AI, like image and voice recognition, as well as just understanding things like context.
 

Legend

Member
Sounds interesting. I used an NN for a project in college as a classification technique, and it was amazing to see it get shit done easily :D
 

Rapstah

Member
There are plenty of videos on YouTube of people implementing simulated small-scale neural nets to, for example, teach a car how to drive along a small track.
 