
Google's brilliant approach that makes their self driving cars work

Status
Not open for further replies.
At first, it doesn't seem quite as impressive as inventing AI that can learn on the fly; it sounds like just a glorified virtual track overlaid on top of the current road infrastructure. It also seems daunting, mapping out the entire US road network the same way they did for Google Maps and Street View.

But here's the thing: they only have to do it once, and they have already mapped almost the entirety of the US road network, plus some other countries. The real brilliance of the way Google is doing it is that the cars learn whatever new thing they encounter while passing through this virtual overlay, the same way Google Maps currently updates real-time traffic information on your phone, basically giving us real-time feedback for the whole network. The approach has big implications for AI, and partially explains why they have acquired so many robotics companies in the last 12 months. Exciting times!

Anyway, have a read. It's a great write up from The Atlantic:


http://www.theatlantic.com/technolo...-makes-googles-self-driving-cars-work/370871/



Google's self-driving cars can tour you around the streets of Mountain View, California.

I know this. I rode in one this week. I saw the car's human operator take his hands from the wheel and the computer assume control. "Autodriving," said a woman's voice, and just like that, the car was operating autonomously, changing lanes, obeying traffic lights, monitoring cyclists and pedestrians, making lefts. Even the way the car accelerated out of turns felt right.

It works so well that it is, as The New York Times' John Markoff put it, "boring." The implications, however, are breathtaking.

Perfect, or near-perfect, robotic drivers could cut traffic accidents, expand the carrying capacity of the nation's road infrastructure, and free up commuters to stare at their phones, presumably using Google's many services.

But there's a catch.

Today, you could not take a Google car, set it down in Akron or Orlando or Oakland and expect it to perform as well as it does in Silicon Valley.

Here's why: Google has created a virtual track out of Mountain View.

The key to Google's success has been that these cars aren't forced to process an entire scene from scratch. Instead, their teams travel and map each road that the car will travel. And these are not any old maps. They are not even the rich, road-logic-filled maps of consumer-grade Google Maps.

They're probably best thought of as ultra-precise digitizations of the physical world, all the way down to tiny details like the position and height of every single curb. A normal digital map would show a road intersection; these maps would have a precision measured in inches.
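To picture what "ultra-precise" means in data terms, here's a hypothetical sketch of the kind of per-feature record such a map might store. The field names and values are invented for illustration; this is not Google's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MapFeature:
    kind: str               # "curb", "traffic_signal", "lane_marking", ...
    x_m: float              # position east of a local origin, in metres
    y_m: float              # position north of a local origin, in metres
    height_m: float         # e.g. curb height, or how high a signal hangs
    implied_speed_mph: Optional[float] = None  # limits not posted on any sign

# A curb segment and a traffic signal, with invented coordinates
curb = MapFeature(kind="curb", x_m=12.40, y_m=3.05, height_m=0.15)
signal = MapFeature(kind="traffic_signal", x_m=40.0, y_m=0.0, height_m=5.2)
```

The point is just that every physical detail the article mentions (curb positions, signal heights, implied speed limits) becomes an explicit field the car can look up instead of having to perceive.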

But the "map" goes beyond what any of us know as a map. "Really, [our maps] are any geographic information that we can tell the car in advance to make its job easier," explained Andrew Chatham, the Google self-driving car team's mapping lead.

"We tell it how high the traffic signals are off the ground, the exact position of the curbs, so the car knows where not to drive," he said. "We'd also include information that you can't even see like implied speed limits."

Google has created a virtual world out of the streets their engineers have driven. They pre-load the data for the route into the car's memory before it sets off, so that as it drives, the software knows what to expect.

"Rather than having to figure out what the world looks like and what it means from scratch every time we turn on the software, we tell it what the world is expected to look like when it is empty," Chatham continued. "And then the job of the software is to figure out how the world is different from that expectation. This makes the problem a lot simpler."

While it might make the in-car problem simpler, it vastly increases the amount of work required for the task overall. A whole virtual infrastructure needs to be built on top of the road network!


The more you think about it, the more the goddamn Googleyness of the thing stands out: the ambition, the scale, and the type of solution they've come up with to this very hard problem. What was a nearly intractable "machine vision" problem, one that would require close to human-level comprehension of streets, has become a much, much easier machine vision problem thanks to a massive, unprecedented, unthinkable amount of data collection.

Last fall, Anthony Levandowski, another Googler who works on self-driving cars, went to Nissan for a presentation that immediately devolved into a Q&A with the car company's Silicon Valley team. The Nissan people kept hectoring Levandowski about vehicle-to-vehicle communication, which the company's engineers (and many in the automotive industry) seemed to see as a significant part of the self-driving car solution.

He parried all of their queries with a speed and confidence just short of condescension. "Can we see more if we can use another vehicle's sensors to see ahead?" Levandowski rephrased one person's question. "We want to make sure that what we need to drive is present in everyone's vehicle and sharing information between them could happen, but it's not a priority."

What the car company's people couldn't or didn't want to understand was that Google does believe in vehicle-to-vehicle communication, but serially over time, not simultaneously in real-time.

After all, every vehicle's data is being incorporated into the maps. That information "helps them cheat, effectively," Levandowski said. With the map data—or as we might call it, experience—all the cars need is their precise position on a super accurate map, and they can save all that parsing and computation (and vehicle to vehicle communication).

There's a fascinating parallel between what Google's self-driving cars are doing and what the Andreessen Horowitz-backed startup Anki is doing with its toy car racing game. When you buy Anki Drive, they sell you a track on which the cars race, which has positioning data embedded. The track is the physical manifestation of a virtual racing map.


The Google cars are not dumb machines. They have their own set of sensors: radar, a laser spinning atop the Lexus SUV, and a suite of cameras. And they have some processing on board to figure out what routes to take and avoid collisions.

This is a hard problem, but Google is doing the computation with what Levandowski described at Nissan as a "desktop" level system. (The big computation and data processing are done by the teams back at Google's server farms.)

What that on-board computer does first is integrate the sensor data. It takes the data from the laser and the cameras and integrates them into a view of the world, which it then uses to orient itself (with the rough guidance of GPS) in virtual Mountain View. "We can align what we're seeing to what's stored on the map. That allows us to very accurately—within a few centimeters—position ourselves on the map," said Dmitri Dolgov, the self-driving car team's software lead. "Once we know where we are, all that wonderful information encoded in our maps about the geometry and semantics of the roads becomes available to the car."
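The alignment Dolgov describes can be pictured as scan matching: shift the live scan around until it best overlaps the stored map. Below is a deliberately tiny brute-force version with invented points and search ranges; real localizers use far more sophisticated matching, but the idea is the same:

```python
# Try small (dx, dy) shifts of the live scan and keep the one that makes the
# scan line up best with the prior map (lowest sum of squared nearest
# distances). The recovered shift is the car's position correction.
def best_offset(scan, prior_map, search=0.5, step=0.1):
    def cost(dx, dy):
        return sum(min((px + dx - mx) ** 2 + (py + dy - my) ** 2
                       for mx, my in prior_map)
                   for px, py in scan)
    candidates = [i * step - search for i in range(int(2 * search / step) + 1)]
    return min(((dx, dy) for dx in candidates for dy in candidates),
               key=lambda o: cost(*o))

prior = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # mapped wall points
scan = [(0.2, 0.1), (1.2, 0.1), (2.2, 0.1)]    # same wall, seen from an offset
dx, dy = best_offset(scan, prior)
# the recovered shift comes out at roughly (-0.2, -0.1)
```

Once that offset is known, everything stored in the map (curbs, signal positions, lane geometry) snaps into place around the car.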


The lasers and cameras of a Google self-driving car.
Once they know where they are in space, the cars can do the work of watching for and modeling the behavior of dynamic objects like other cars, bicycles, and pedestrians.

Here, we see another Google approach. Dolgov's team uses machine learning algorithms to create models of other people on the road. Every single mile of driving is logged, and that data fed into computers that classify how different types of objects act in all these different situations. While some driver behavior could be hardcoded in ("When the lights turn green, cars go"), they don't exclusively program that logic, but learn it from actual driver behavior.

In the same way that we know a car pulling up behind a stopped garbage truck is probably going to change lanes to get around it, 700,000 miles of driving data have helped the Google algorithm understand that a car in that position is likely to do such a thing.
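The "learn it from actual driver behavior" idea can be sketched as counting what logged drivers actually did in each situation and predicting the most common action. This is a toy frequency model with invented labels, nowhere near Google's actual approach, but it shows the shape of the idea:

```python
from collections import Counter, defaultdict

# Logged miles reduced to (situation, what_the_driver_did) pairs
log = [
    ("behind_stopped_garbage_truck", "change_lane"),
    ("behind_stopped_garbage_truck", "change_lane"),
    ("behind_stopped_garbage_truck", "wait"),
    ("light_turns_green", "go"),
]

# Tally each action per situation
counts = defaultdict(Counter)
for situation, action in log:
    counts[situation][action] += 1

def predict(situation):
    """Most common observed action for this situation."""
    return counts[situation].most_common(1)[0][0]

print(predict("behind_stopped_garbage_truck"))  # change_lane
```

Note how "when the lights turn green, cars go" falls out of the data here rather than being hardcoded, which is exactly the distinction the article draws.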

Most driving situations are not hard to comprehend, but what about the tough ones or the unexpected ones? In Google's current process, a human driver would take control, and (so far) safely guide the car. But fascinatingly, in the circumstances when a human driver has to take over, what the Google car would have done is also recorded, so that engineers can test what would have happened in extreme circumstances without endangering the public.

So, each Google car is carrying around both the literal products of previous drives—the imagery and data captured from crawling the physical world—as well as the computed outputs of those drives, which are the models for how other drivers might behave.
 
Very interesting read. I hope they don't crutch too significantly on their ultra-detailed reconstructions as development progresses, because that means more rural, less populated areas will be neglected, as is usually the case when a solution requires region-specific implementation.
 

HoodWinked

Member
the best part is when you get in an accident before, you'd sue the other driver. now you can sue the company that made the software.
 
the best part is when you get in an accident before, you'd sue the other driver. now you can sue the company that made the software.

This is actually an interesting question. Who takes on the risk associated with car insurance claims related to self-driving cars? Obviously the driver if it's being manually driven at the point of impact, but if the software fails to avert an accident, I would imagine it has to be Google's fault, wouldn't it? (Though that sets a terrible precedent; fraudulent claims would likely skyrocket.)
 

Abounder

Banned
Google is going to be a very scary drone and robotics company like those guys in the infamous Robocop reboot.

Anyway 700k miles of accident-free driving is great.
 

CDX

Member
So Google's solution to help the AI was a "massive, unprecedented, unthinkable amount of data collection."

makes sense for Google.
 
Very interesting read. I hope they don't crutch too significantly on their ultra-detailed reconstructions as development progresses, because that means more rural, less populated areas will be neglected, as is usually the case when a solution requires region-specific implementation.

I had read the article a few days ago, and upon thinking more about it, I think that approach is doomed to fail.

Maps and cities are not static, but in constant evolution. The AI needs to be able to adapt to changes in both layout and traffic signals for it to work. It's not realistic to expect google to keep up with EVERY change on the roads for everywhere in the world. Since the impact of an error is potentially very high, a system like this would be doomed to fail.

Now if the detailed maps were only required for the first time and the AI had the ability to adapt then it could work. However since they have to tell it such specific stuff like the height of traffic lights for it to work, I don't know if there'd be any benefits to a first detailed pass.

Basically if the AI is smart enough to make corrections on its own, it's probably smart enough to drive on a rough map as well.
 

Cyan

Banned
I had read the article a few days ago, and upon thinking more about it, I think that approach is doomed to fail.

Maps and cities are not static, but in constant evolution. The AI needs to be able to adapt to changes in both layout and traffic signals for it to work. It's not realistic to expect google to keep up with EVERY change on the roads for everywhere in the world. Since the impact of an error is potentially very high, a system like this would be doomed to fail.

The article suggests that Google is addressing this problem by having the cars continually gather data and send it back to HQ so things stay up-to-date.
 
I had read the article a few days ago, and upon thinking more about it, I think that approach is doomed to fail.

Maps and cities are not static, but in constant evolution. The AI needs to be able to adapt to changes in both layout and traffic signals for it to work. It's not realistic to expect google to keep up with EVERY change on the roads for everywhere in the world. Since the impact of an error is potentially very high, a system like this would be doomed to fail.

Now if the detailed maps were only required for the first time and the AI had the ability to adapt then it could work. However since they have to tell it such specific stuff like the height of traffic lights for it to work, I don't know if there'd be any benefits to a first detailed pass.

Basically if the AI is smart enough to make corrections on its own, it's probably smart enough to drive on a rough map as well.

That is exactly what the article says. Google's approach is twofold. They want to lighten the load on your car's brain by doing most of the heavy lifting in advance, giving it a predetermined path, while keeping it smart enough to make its own decisions right there and then. Then they feed whatever new thing it encounters on the road into a learning algorithm while simultaneously sharing that with the cars behind you, essentially making the whole system learn in almost real time.
 

Phoenix

Member
So Google's solution to help the AI was a "massive, unprecedented, unthinkable amount of data collection."

makes sense for Google.

And if you want to make this work around the world - guess what the best approach would be to gathering the requisite data? A shitload of drones. Google is thinking several moves ahead.
 

thefro

Member
Very interesting read. I hope they don't crutch too significantly on their ultra-detailed reconstructions as development progresses, because that means more rural, less populated areas will be neglected, as is usually the case when a solution requires region-specific implementation.

That shouldn't be a problem, since driving on nearly-empty roads in rural areas is a much easier task than city driving. Animals running out into the road would probably be the biggest challenge.
 
The article suggests that Google is addressing this problem by having the cars continually gather data and send it back to HQ so things stay up-to-date.

Yes I read that, but it's heavily implied this wouldn't work in real time. The processing is done at HQ.

This means that if let's say a yield sign is changed for a traffic light (something that happens both frequently and quickly enough that the car might never detect the work in progress) their previous data of driving behavior for other cars becomes useless and the AI itself won't be looking for a traffic sign (again because it needs to be told specifically where it is). Something as simple as that might break the driving model and you're suddenly running a red light, which is catastrophic.

I'm a big believer that auto driving cars are the future, I'm just not sure this is the right approach.
 
Yes I read that, but it's heavily implied this wouldn't work in real time. The processing is done at HQ.

This means that if let's say a yield sign is changed for a traffic light (something that happens both frequently and quickly enough that the car might never detect the work in progress) their previous data of driving behavior for other cars becomes useless and the AI itself won't be looking for a traffic sign (again because it needs to be told specifically where it is). Something as simple as that might break the driving model and you're suddenly running a red light, which is catastrophic.

I'm a big believer that auto driving cars are the future, I'm just not sure this is the right approach.

Again, the article (and others like it from a few weeks back: http://www.citylab.com/tech/2014/04...s-self-driving-car-handles-city-streets/8977/) states that the cars themselves are sophisticated enough to handle all of this. Those sophisticated cameras and sensors aren't there for nothing. The cars can even learn to read signals from impromptu stop signs like those you see being held by crossing guards outside of schools.
 

Cipherr

Member
Yes I read that, but it's heavily implied this wouldn't work in real time. The processing is done at HQ.

This means that if let's say a yield sign is changed for a traffic light (something that happens both frequently and quickly enough that the car might never detect the work in progress) their previous data of driving behavior for other cars becomes useless and the AI itself won't be looking for a traffic sign (again because it needs to be told specifically where it is). Something as simple as that might break the driving model and you're suddenly running a red light, which is catastrophic.

I'm a big believer that auto driving cars are the future, I'm just not sure this is the right approach.


The bolded is all a wrong assumption. The car uses real-time cameras and radar in addition to the already stored data, which just makes it more efficient. Go and read more articles on it: the car was able to stop when a construction worker held up a 'stop' sign in an area where construction was taking place.

It absolutely looks around and calculates in real time. The detailed pre-mapping is just an added bonus that makes things even better.

Edit: Apparently it was a crossing guard, but still. Same concept.
 

jonezer4

Member
So Google's solution to help the AI was a "massive, unprecedented, unthinkable amount of data collection."

makes sense for Google.

Yeah. While it's a fascinating read, I must confess I'm a bit let down that the self-driving cars require so much data collection beforehand. It was a lot more intriguing when I was under the assumption the car was doing all the processing on the fly: that you could just drop it off anywhere and it would figure everything out. I guess that's not entirely practical, but it sure would have expedited the proliferation of this technology.

I'd also have concerns, as others have voiced, that the reliance on so much external information is a major liability. With how fragmented Android has become, I can't help but envision accidents where cars are operating on different or outdated "versions" of the road maps and similar complications.

I'm sure its real-time ability to catch things like construction workers and crossing guards on the fly is solid... but I doubt it's error-free. If it were, there wouldn't be a need for a pre-made map. So that means you'd essentially have to have constant, error-free communication between this road database and every single entity that may be altering the "characteristics" of any part of a road, or risk the car falling back on its real-time capabilities, which are (presumably) not as safe or error-free. Assuming you have constant updates to this road map (which is a big assumption, as it would require HUMANS to report any deltas perfectly to this road database), you then have to ensure every single car gets every single update exactly on time. A hiccup in any one of those areas (in addition to, perhaps, bugs in the actual car and its algorithms), and people can die. This is to say nothing of the fact that a massive database like this, upon which people's lives directly depend, would be a terrorist's dream.

Big can of worms, again, most of it directly correlating to the fact that this external, pre-developed road map is apparently necessary for the cars to drive properly. I hope their plans are to eventually wean off of that, as the real-time capabilities become more mature.
 

tokkun

Member
Yes I read that, but it's heavily implied this wouldn't work in real time. The processing is done at HQ.

This means that if let's say a yield sign is changed for a traffic light (something that happens both frequently and quickly enough that the car might never detect the work in progress) their previous data of driving behavior for other cars becomes useless and the AI itself won't be looking for a traffic sign (again because it needs to be told specifically where it is). Something as simple as that might break the driving model and you're suddenly running a red light, which is catastrophic.

I'm a big believer that auto driving cars are the future, I'm just not sure this is the right approach.

Identifying signs and traffic lights is pretty basic. I would be more concerned about scenarios where you have weird things going on due to construction. Ultimately I think that it will come down to how gracefully these systems handle failover to manual driving. That is, can it give you enough warning to safely take over if you are reading a book or something. Because if you have to be constantly vigilant it takes away much of the advantage of an autonomous vehicle.
 

Kimawolf

Member
Yeah. While it's a fascinating read, I must confess I'm a bit let down that the self-driving cars require so much data collection beforehand. It was a lot more intriguing when I was under the assumption the car was doing all the processing on the fly: that you could just drop it off anywhere and it would figure everything out. I guess that's not entirely practical, but it sure would have expedited the proliferation of this technology.

I'd also have concerns, as others have voiced, that the reliance on so much external information is a major liability. With how fragmented Android has become, I can't help but envision accidents where cars are operating on different or outdated "versions" of the road maps and similar complications.

I'm sure its real-time ability to catch things like construction workers and crossing guards on the fly is solid... but I doubt it's error-free. If it were, there wouldn't be a need for a pre-made map. So that means you'd essentially have to have constant, error-free communication between this road database and every single entity that may be altering the "characteristics" of any part of a road, or risk the car falling back on its real-time capabilities, which are (presumably) not as safe or error-free. Assuming you have constant updates to this road map (which is a big assumption, as it would require HUMANS to report any deltas perfectly to this road database), you then have to ensure every single car gets every single update exactly on time. A hiccup in any one of those areas (in addition to, perhaps, bugs in the actual car and its algorithms), and people can die. This is to say nothing of the fact that a massive database like this, upon which people's lives directly depend, would be a terrorist's dream.

Big can of worms, again, most of it directly correlating to the fact that this external, pre-developed road map is apparently necessary for the cars to drive properly. I hope their plans are to eventually wean off of that, as the real-time capabilities become more mature.
Just have phones/cars auto-upload information to Google, as many phones do now, and they'd have real-time data.

I still wonder, though, what happens if your car's computer brain glitches, like computers do, and sees people as part of the road, or the car goes berserk and runs down a bunch of pedestrians in a park or crosswalk? Hell, Google autocorrect screws up enough on its own.
 

tfur

Member
The collection of shape/map data makes things more efficient. You just have to do difference correlation against expected shapes. It's not unlike how humans see and cognitively register changes.

I could see a set of roads approved to allow these cars at first.

I also wonder how they are going to approach the auto industry.
 
Identifying signs and traffic lights is pretty basic. I would be more concerned about scenarios where you have weird things going on due to construction. Ultimately I think that it will come down to how gracefully these systems handle failover to manual driving. That is, can it give you enough warning to safely take over if you are reading a book or something. Because if you have to be constantly vigilant it takes away much of the advantage of an autonomous vehicle.

Is it? How do you know that? If it were, why would they bother measuring the height of EVERY traffic light? Maybe on a clear day, but the article clearly says the cameras have some problems with rain. On a rainy night, without a perfect map, I very much doubt the current version of the car is better than a human driver.

I think the construction scenario you mention might not be that bad. The car would certainly detect the foreign objects on the road (cones, people, etc) and can be taught to react accordingly and the car would certainly go slow in that situation. Missing a traffic light though is much worse.
 

Cipherr

Member
I'm really shocked that people are shocked that pre-collected data on these areas helps with safety and navigation.

Seriously, driving in areas you are familiar with makes you better at responding to unforeseen obstacles, and you are a human being. We have all kinds of weird intersections here in the KC/Olathe area that I struggled with when I first moved here; not because I didn't know how a roundabout worked, but because others often would not, and it would cause near accidents. Being familiar with the areas after having lived here for some time has helped me navigate them much more safely than when I didn't know the streets well.

I would expect familiarity with an area to be just as useful for a machine as well.
 

tokkun

Member
Is it? How do you know that? If it was, why would they bother measuring the height of EVERY traffic light. Maybe on a clear day, but the article clearly says the cameras have some problems with rain. On a rainy night, without a perfect map I very much doubt the current version of the car is better than a human driver.

I think the construction scenario you mention might not be that bad. The car would certainly detect the foreign objects on the road (cones, people, etc) and can be taught to react accordingly and the car would certainly go slow in that situation. Missing a traffic light though is much worse.

When you are thinking about the relative difficulty, keep in mind that the system must be capable of identifying and tracking dynamic objects, ranging from cars, to bikes, to pedestrians. Compared to those, a traffic light is relatively simple.

-It is immobile
-It is always in one of the same 2 orientations (horizontal or vertical)
-It has clear features laid out in a specific order (circular lights of specific colors in a specific order)
-It is unobstructed from the road

So why bother providing information about its location to the car in advance? The car has a finite amount of processing power. Searching for the location of the light would require more processing. If the car runs into a situation with too many unknowns and runs out of processing power, it needs the driver to take over. Think about the situation you posed earlier, where the car expects a yield sign in a specific location. The car should easily be able to check that location and confirm the sign is still there. If it isn't, then it can search the scene for a new sign location (or the addition of stop lights) or switch to manual controls if it can't find anything to guide it.

Simplify the problem in whatever way you can so you have more spare processing power to handle the unexpected events.
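The check-then-widen-then-failover logic described above can be sketched as a small search loop. Everything below is hypothetical (invented positions, radii, and function names), just to make the control flow concrete:

```python
# Look where the map says the sign should be; widen the search if it is
# missing; hand control back to the human only if nothing is found.
def locate_sign(detections, expected, radius=1.0, max_radius=8.0):
    """detections: observed sign positions; expected: mapped sign position."""
    ex, ey = expected
    while radius <= max_radius:
        hits = [(x, y) for x, y in detections
                if (x - ex) ** 2 + (y - ey) ** 2 <= radius ** 2]
        if hits:
            return hits[0]          # confirmed (or relocated) sign
        radius *= 2                 # widen the search area and retry
    return None                      # caller should request manual control

print(locate_sign([(10.5, 2.2)], expected=(10.0, 2.0)))  # found in first pass
print(locate_sign([], expected=(10.0, 2.0)))             # None -> go manual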
 

GaimeGuy

Volunteer Deputy Campaign Director, Obama for America '16
I had read the article a few days ago, and upon thinking more about it, I think that approach is doomed to fail.

Maps and cities are not static, but in constant evolution. The AI needs to be able to adapt to changes in both layout and traffic signals for it to work. It's not realistic to expect google to keep up with EVERY change on the roads for everywhere in the world. Since the impact of an error is potentially very high, a system like this would be doomed to fail.

Now if the detailed maps were only required for the first time and the AI had the ability to adapt then it could work. However since they have to tell it such specific stuff like the height of traffic lights for it to work, I don't know if there'd be any benefits to a first detailed pass.

Basically if the AI is smart enough to make corrections on its own, it's probably smart enough to drive on a rough map as well.

They don't have to tell it such stuff; they tell it such stuff to make the system work better. If the car knows where the lights should be located at a particular intersection, it also knows where the lights will appear in the captured camera images. It doesn't have to search the whole image, maybe only 10-20% of it instead. If the light isn't there, it widens its search area.

Instead of searching the whole area for a light, it first looks where the light should be.

Now, instead of using X amount of computational power just to find the light, you're using that same X to find the light, check surroundings, analyze weather/road conditions from sensors, etc. You've saved a ton of unnecessary work on the traffic light, and you can use that extra computing power for better situational awareness.

That's just a tiny, specific example
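To put rough numbers on that saving (the frame and window sizes below are invented for illustration, not from any real camera):

```python
# If the map predicts where the light appears in the image, the classifier
# only needs to scan a crop around that spot, not the full frame.
frame_w, frame_h = 1280, 960          # full camera frame, pixels
roi_w, roi_h = 480, 360               # window around the predicted light

full_pixels = frame_w * frame_h
roi_pixels = roi_w * roi_h
fraction = roi_pixels / full_pixels
print(f"searching {fraction:.0%} of the frame")   # searching 14% of the frame
```

With these made-up sizes the crop is about a seventh of the pixels, right in the 10-20% ballpark described above, and the rest of the budget is free for everything else.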
 
When you are thinking about the relative difficulty, keep in mind that the system must be capable of identifying and tracking dynamic objects, ranging from cars, to bikes, to pedestrians. Compared to those, a traffic light is relatively simple.

-It is immobile
-It is always in one of the same 2 orientations (horizontal or vertical)
-It has clear features laid out in a specific order (circular lights of specific colors in a specific order)
-It is unobstructed from the road

So why bother providing information about its location to the car in advance? The car has a finite amount of processing power. Searching for the location of the light would require more processing. If the car runs into a situation with too many unknowns and runs out of processing power, it needs the driver to take over. Think about the situation you posed earlier, where the car expects a yield sign in a specific location. The car should easily be able to check that location and confirm the sign is still there. If it isn't, then it can search the scene for a new sign location (or the addition of stop lights) or switch to manual controls if it can't find anything to guide it.

Simplify the problem in whatever way you can so you have more spare processing power to handle the unexpected events.

I see your point, but I think large moving objects (like pedestrians and other cars) are much easier to detect than immobile objects, again in less than ideal weather conditions. Road debris for example could be a real problem. Potholes as well (though to be fair, they're also a problem for human drivers).
 
I wonder how these cars handle emergency vehicles coming up behind them.

Which then makes me wonder whether customers will be able to pay extra to be treated as such special vehicles on the streets and highways. "Hey buddy, I have a Platinum Google License"
 

Josh7289

Member
Yeah. While it's a fascinating read, I must confess I'm a bit let down that the self-driving cars require so much data collection beforehand. It was a lot more intriguing when I was under the assumption the car was doing all the processing on the fly: that you could just drop it off anywhere and it would figure everything out. I guess that's not entirely practical, but it sure would have expedited the proliferation of this technology.

Think of it this way. If you took a human with not only no driving experience, but also no experience with cars or roads or anything of that sort at all and just dropped them off in a car, do you think they'd actually be able to figure it out on their own? Humans need to be taught a lot, over the years, before they can learn to drive well. The same is true for computers.
 

tfur

Member
I see your point, but I think large moving objects (like pedestrians and other cars) are much easier to detect than immobile objects, again in less than ideal weather conditions. Road debris for example could be a real problem. Potholes as well (though to be fair, they're also a problem for human drivers).


There is always a chance that the systems won't work optimally, but that is true of people as well. I would imagine the system would run a startup check to see if conditions were acceptable for engaging the autopilot.

Check out this article. There is a nice video at the bottom that shows some LIDAR imaging in action.

http://www.popsci.com/cars/article/2013-09/google-self-driving-car
 

KevinRo

Member
I wonder how advanced Google's system is compared to other car manufacturers' autonomous driving vehicles, or how similar they are?
 
It'll take me a while before I trust it. Google Maps still screws up all the time. Heck, Google Maps screwed up just yesterday on me. It was off by a mile; in fact, I was supposed to turn right instead of left like Google Maps told me to. But unlike the car, a single Google Maps mistake won't kill me...

Imagine a new stop sign that it doesn't see because it's raining...

You're dead. It will be very hard for me to trust a driving computer that has to calculate all of these things, and one mistake = dead.

Edit - Some people here are relieved that it depends on a pre-calculated map that could be different than the road you are currently driving. It scares me that they still need that.
 
I find it brilliant in a way that it uses already-existing data collection (via Google Maps cars roving around and your phone's Google Maps) and applies it to a technology that seemed unheard of just five years ago.

Actually, according to the article, the data is specifically collected for this purpose and is completely different from the stuff used in Google Maps. Basically they had to do it all over again, and if they want to roll the program out to other cities, they have to collect the data for those cities from scratch.

The cool thing is that crazy amount of work doesn't seem to faze Google at all.
 

fallagin

Member
I think that cars will eventually have to move to closed AI systems because of hackers and stuff, interesting system though.
 