
> This seems absurdly paranoid. Why is it, on the face of things, more likely that the LIDAR-packing robot which is scanning 360 degrees hundreds of times per second got into a fender-bender than that the human engineer who sometimes has to drive that same car did?

For all I know there could be a million other reasons why it crashed, most of them having nothing to do with the LIDAR. Maybe it was a combination of a software problem and a failure of the human driver to intervene properly. Maybe the sensors picked up something that confused the software into thinking the car ahead was still moving. Maybe there was some kind of hardware failure. Maybe the AI simply never had to make the kind of unexpected emergency stop that caused this incident, and the minimum stopping distance wasn't programmed properly for the road conditions at the time. Or maybe it actually was fully operated by a human. Neither of us knows for sure.

> What reason do you think Google has to lie about this to the public-- considering that if it were the result of a software bug, and that bug made it to production, and somebody died, that little white lie could easily be worth millions of dollars of liability and you better believe that Google PR knows it?

You don't really have to ask that question, do you? Google has already spent millions upon millions on this technology, and probably had to pull out all the stops to lobby for the legislative changes they need for their tests. Incidents like this could instantly kill their ambitions and severely hurt the credibility of the technology. There is basically no risk in blaming the driver, who was, in fact, in the car and behind the wheel when it happened. Current legislation actually demands a human driver behind the wheel for exactly this purpose: so that responsibility for whatever the car does can be assigned to someone in case of an accident. Legally, the 'driver' was probably accountable for the incident even if he wasn't driving the car at all.

Nobody is served by saying it was the computer: not the owners of the other cars, not law enforcement, not the government that allowed these cars to drive around town, and definitely not Google.

> And this is even sillier. You use a track precisely because you are not confident that the car will not go off the road, which is very likely to happen while you are building a robotic car.

I brought the other incident up just to make the point that you don't hear anything when the Google cars fail, just success stories that don't include any details about failures or incidents. You don't assume the Google cars are perfect yet, right? So if Google is so open and honest about everything, like you seem to assume, why don't they tell us how often the cars require human intervention, or what kind of situations are still a problem for the AI?

> Allowing the car into traffic indicates that they are confident.

You realize that none of these cars actually goes on the road without a human driver ready to step in when it fails, right? And that all the routes the cars drive are carefully selected and likely full of pre-programmed and scripted details?

This is not to say the technology is worthless just because Google is still learning, but statements like yours show nicely what's so strange about this driverless-car discussion. Just because Google is confident enough to send AI cars through town with people behind the wheel, you are confident that you have some insight into how those cars would actually do without a human backup driver. Unless you are a Google employee working on these cars, you know nothing more than what Google wants you to know, and that most likely will not include all the possible points of failure of this technology.



I don't expect Google to be completely open, I just don't expect them to lie for no reason, and so don't see why I should doubt what they've said.

They've been quite reasonably open about the limitations of their technology: it requires mapped-out roads, visible road markings, fair weather, et cetera. It requires a human driver at the wheel because there are some traffic situations it is not able to navigate; one example given was meeting an opposing car on a narrow road where the car was not sure there was enough room to pass. In these situations, a voice politely announces that the human should resume control. If the human does not, one can only assume the car will come to a complete stop.

However, they have demonstrated the ability to autonomously navigate most types of traffic, including reacting to unexpected obstacles or pedestrians, dealing with panic stops ahead, negotiating with other drivers at a four-way stop, et cetera.

They claim hundreds of thousands of miles of fully-autonomous driving, with occasional human intervention being necessary in atypical circumstances. That seems like a completely plausible claim considering what they've shown us, and lacking any way in which they could profit from lying about it, I don't see any reason to take that claim at anything but face value.


The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.

We work hard on the systems we build, and a simple lie like that could absolutely seem to be in the interest of the project at a time when this technology still freaks out lawmakers.

Should we believe Google? Sure, probably.

Is it absurd to question their veracity? Are you kidding? Are you familiar with American capitalism?


> The idea that it could've been a lie -- that the AI was engaged during the crash -- doesn't have to be a conspiracy involving PR. I think if it was in fact a lie it would be much more likely to originate from the engineer in the car.

Yeah, I was thinking the same thing. If there's any chance that it's a lie, that's the only way it happens-- the guy at the wheel decided to take the fall without telling anyone else. Maybe he pushed the button by accident, the thing freaked out, and he decided no one needed to know. Possible.

But that still doesn't fly. Everything that happens to that car is measured and recorded for later analysis. There is a verifiable record of when it is under human control and when it isn't. Faking that record convincingly enough to cover up the only public accident in the history of the project is almost certainly beyond the capabilities of a single engineer who was just in a fender bender. (And for what it's worth, Google has claimed to have logs which prove that the car was in manual mode, which I assume are available to legislators.)
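
Purely as an illustration of why faking such a record is harder than it sounds (a toy sketch of one possible approach, not anything we know about Google's actual logging): if each log entry commits to the previous one, say with a hash chain, then quietly rewriting one old "autonomous" entry to "manual" breaks every entry that came after it.

    import hashlib, json, time

    class DriveModeLog:
        """Hypothetical append-only log of control-mode transitions.
        Each entry stores a SHA-256 hash over itself plus the previous
        entry's hash, so editing any past entry invalidates the chain."""

        def __init__(self):
            self.entries = []

        def record(self, mode):
            prev = self.entries[-1]["hash"] if self.entries else "0" * 64
            body = {"ts": time.time(), "mode": mode, "prev": prev}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            self.entries.append({**body, "hash": digest})

        def verify(self):
            prev = "0" * 64
            for e in self.entries:
                body = {"ts": e["ts"], "mode": e["mode"], "prev": prev}
                expected = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != expected:
                    return False
                prev = e["hash"]
            return True

    log = DriveModeLog()
    log.record("AUTONOMOUS")
    log.record("MANUAL")        # human takes the wheel
    print(log.verify())         # True; tamper with entries[0] and it's False

And that's assuming the logs live only on the car; any copy stored off the vehicle would be completely out of reach of a lone engineer at the scene of a fender bender.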

Human beings crash cars all the time. Autonomous vehicles crash-- well, there's actually no evidence that Google's self-driving car has ever crashed[0]. So if one of their self-driving cars crashes with a human behind the wheel, outside of the context of an autonomous test-drive, and he says at the scene that he was driving, and Google confirms that they have proof that he was driving, and considering that lying to the public about something provable is a really, really bad idea when you're trying to get a law passed...

If there were any evidence, any shred of inconsistency to their story, I'd be skeptical. But there isn't. There's just no reason to doubt them besides "Companies always lie." Or "Of course they'd say that." Yeah, I'd say that's absurd.

[0] In traffic, obviously. One can only presume it has hit many obstacles during development.



