Back when this originally came out, Uber claimed "this incident was due to human error" [1]. Well technically, if you really twist the meaning of words, Uber could interpret this as "human error", considering:
- It was "human error" that the programmers who designed the self-driving AI failed to properly implement red-light detection and braking.
- It was also "human error" that the human driver in the front seat failed to notice the red lights and stop the car.
Uber's statement is effectively true, if you hideously twist the meaning of words. I fully believe that's what they did in their statement. There's similar wordplay around the word "natural", e.g. claiming "all pollution is natural" because humans are part of nature, everything we do is natural, and so all consequences of our actions are also "natural". Deceptive, yet ultimately effective.
The full article from the NYT[0] which is the source for the information presented in the Verge post includes a quote indicating that Uber blames the operator for not manually stopping the vehicle. Their story is essentially that the car isn't ready to handle this scenario on its own and the operator should have known to intervene.
Which could (and will) be said for any autonomous accident in the past and future.
The NTSB takes a similar view: autopilot failures that result in aircraft crashes are still human error, since the human should have corrected. Still not very encouraging, since the driver in this case would have had seconds at most to identify and fix the error with a panic stop.
> who designed the self-driving AI failed to properly implement red-light detection and braking.
From the article it sort of seems like the error is actually in the mapping software, which didn't mark that intersection as having traffic lights, so the car didn't look for them.
Shouldn't a self-driving car always be passively looking for red lights?
I mean, take a school bus for instance. They don't have permanent red-light fixtures, but if you're behind them they could enable the red light sign at any moment. If the self-driving car isn't always looking wouldn't that be an issue?
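The map-gating failure mode discussed above can be sketched in a few lines. This is a toy illustration, not Uber's actual architecture; all the function names are made up. The point is that if the detector only runs where the map says a signal exists, an unmapped signal (or a dynamic one, like a school bus's stop sign) is never seen:

```python
def run_detector(frame):
    # Stand-in for a real vision model; a "frame" here is just a dict.
    return frame.get("red_light_visible", False)

def detect_gated(frame, map_says_signal_here):
    """Only look for a red light where the map says one exists."""
    if not map_says_signal_here:
        return False  # unmapped or dynamic signal is never seen
    return run_detector(frame)

def detect_always_on(frame, map_says_signal_here):
    """Look for red lights in every frame; the map is only a hint."""
    return run_detector(frame)

frame = {"red_light_visible": True}  # a lit red signal is in view
print(detect_gated(frame, map_says_signal_here=False))      # False: missed
print(detect_always_on(frame, map_says_signal_here=False))  # True: caught
```

The always-on variant costs more compute per frame, which is presumably why one would be tempted to gate on the map in the first place.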
What about the case where the stoplight stops functioning? This happens all the time. Recently, late at night in Flushing, NY, a crew replacing switches on a stoplight caused an outage, so the Department of Transportation actually asked the police department to send out officers to help direct traffic. Yeah, good luck with that if we are running self-driving cars.
Why? Two things could happen here: The self-driving car could recognize hand gestures. Or the police could carry a simple radio transmitter device which would transmit directions to the self-driving cars (while also making gestures for human-driven cars). This is hardly some insurmountable problem.
A self-driving car needs to adapt to the world; if the police need a special device to communicate with the car, the car is unsafe and should be banned from public roads.
So the enormous benefits of self-driving cars (many saved lives, mobility for people who cannot drive, reduced costs) should be forgone because we cannot equip police with a few radio transmitters? Assuming that is even needed: Waymo's cars can already recognize gestures made by cyclists, so it's hardly a stretch to imagine them recognizing gestures made by police officers.
It's not just a matter of equipping police. Police are not the only people who occasionally need to make gestures that need to be recognized by cars: think construction workers in a crowded city, parking lot attendants, or anyone at the scene of an accident.
The car needs to be able to communicate with people outside of it, or it needs to have a human inside who can drive it.
This is actually a much more complicated problem than simply recognizing gestures, because I don't want my car stopping every time a panhandler/hitchhiker/etc. tries to wave me down.
I think it's more pragmatic than that. If Kansas doesn't want to equip its police with transmitters, then transmitters simply won't happen. It's a political problem, which is much thornier than a technical problem. I'm sure you could already apply that rubric today: if only government agency X would implement simple technology Y, everyone's life would be easier. In the real world, that fails to happen all the time.
Self driving cars are probably just going to have to recognize hand signals. That's simply the reality.
I don't think the transmitter option is viable until there's a critical mass of self-driving cars that all agree on the same communications protocol. And, as we all know, the great thing about standards is that there are so many to choose from.
That is really interesting if true. Do you have any idea where you read it? I am genuinely interested; this is not some backhanded way of saying "citation needed."
Do you work in the field? I'm a bit skeptical about that. Waymo's self-driving car seems to do pretty well with things that aren't encoded in maps like road works and unexpected obstacles. I don't see why red lights would be exceptionally challenging to detect.
Forget the case where you're behind the school bus, as I'd hope the car would stop for any vehicle stopping in front of it. If you're on a normal rural highway (i.e. one with two lanes travelling in opposite directions) and a school bus coming towards you makes a stop, you must stop so that any exiting kids can safely cross the street to their homes. What happens when a school bus makes a stop and the car decides to keep rolling past the bus's displayed and flashing red stop sign?
Fairly sure (though it likely depends on local laws) that no matter your lane, you need to stop if a bus displays its stop sign and lights, because, like you said, kids might try crossing the street at any time or place. So a self-driving car might be a lane or two away from the bus (not directly behind it) and still need to know to stop.
The exact rules do depend on the area. Where I am you always have to stop behind a bus on your side in any lane, but traffic on the other side does not have to stop if there are 4+ lanes.
Much of the material I studied for my learners permit written test has since become just part of how I drive (I hope), but the rules regarding stopping for school-busses have always stuck with me for some reason.
In Florida, the rule is that all traffic going in either direction must stop, except when the road is "a divided highway with an unpaved space of at least 5 feet, a raised median, or a physical barrier"[0], in which case the opposing traffic does not have to stop. It's interesting to me to see how little things like this change from place to place, and how these localized rules will be handled by self-driving vehicles.
Yeah don't beta test your BETA on real drivers, even after the DMV has asked you to stop.
Screw Uber. I hope the CA government comes down hard on them. But we all know Uber will find yet another slimy way to keep going.
For the record, I'm the biggest proponent of autonomous cars and can't wait for that day. But companies that just put people in danger like this are not going to get us there faster. They're going to slow everything down by their carelessness and greed.
I don't like Uber either, and I don't know what they were thinking testing in California without a permit, but they were otherwise doing the same thing all of the 23 companies with testing permits in California are doing. (Any fly-by-night operation can get an AV testing permit in California so long as it meets some basic requirements. It costs $140 and the approval process takes 3 business days.)
The AI is expected to fail occasionally, so a human driver is onboard ready to intervene the instant that happens. The test driver wasn't paying attention, and the car ran a red light. It was a human error. Uber did not lie, they didn't mince words or twist the definition of things, the human driver just didn't do his job.
Irrespective of all the other bullshit going on with Uber, this is a mistake that any test driver with any other company testing autonomous prototypes on public roads could have made.
Just like with Tesla's Autopilot, any autonomous test vehicle is technically a level 2 vehicle, meaning the human driver is legally responsible for any traffic violation the vehicle makes.
> Irrespective of all the other bullshit going on with Uber, this is a mistake that any test driver with any other company testing autonomous prototypes on public roads could have made.
They could have done, but they haven't. At best that means Uber are just unlucky; at worst it means that Uber are far behind every other self-driving car company and Uber's software just doesn't work for really simple, basic traffic laws. That should be very worrying for any investors.
Except that this car is out there driving alongside people. Running red lights could get somebody killed. That's why Google took forever to actually take their prototype to the streets.
No, Google had a permit from the DMV when they started testing. Can't find the reference now, but I remember how they'd had to take DMV people in the car to convince them that they were serious.
'The Google researchers said they had carefully examined California’s motor vehicle regulations and determined that because a human driver can override any error, the experimental cars are legal' [1]
That's from a 2010 NYT article by John Markoff. They ultimately hired Ron Medford, former NHTSA director to lead a big Johnny Appleseed awareness raising campaign with regulators, but initially they just went and started testing.
It's fine that the car is a work in progress and it's also fine that they'll be tested on the streets before being fully baked.
What's not fine is insisting you don't need a permit (the primary purpose of which appears to be transparency regarding safety), having a safety incident, and then implying in a public statement that it wasn't related to a software failure when it was.
It's clear how Uber botched this. That traffic signal is not at an intersection. It's a heavily used mid-block crosswalk.[1] It's a very well marked crosswalk, with six redundant full size traffic signals all visible in the direction the Uber vehicle was traveling. This indicates Uber's system only looks for mapped traffic signals.
SF has a database of their traffic signals, and this signal is listed. It's object #902.[2] Apparently Uber gets their data from somewhere else.
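If the city publishes its signal inventory, a vendor could in principle diff its own map against it to catch missing annotations. A toy sketch of that cross-check, modeling each inventory as a set of signal object IDs (902 is the signal the comment above mentions; the other IDs are made up):

```python
# Toy cross-check of two traffic-signal inventories. Object 902 is the
# mid-block crosswalk signal discussed above; 898 and 911 are invented
# for illustration.

city_signals = {898, 902, 911}   # city's published signal objects
vendor_map   = {898, 911}        # signals the driving map knows about

missing = sorted(city_signals - vendor_map)
print(missing)  # → [902]: signals the car would never look for
```

A real pipeline would match signals by coordinates rather than shared IDs, but the basic idea of auditing one dataset against another holds.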
I'm not defending Uber, but there seems to be a pattern: startups that at first everybody is in love with. The startup explodes, grows to become widely successful, and morphs from startup to corporation. An incident happens, and then everybody jumps on the bandwagon and bashes them mercilessly. Boycotts ensue. Clone competitors pop up proclaiming to be "not-evil", and the cycle starts again.
Let me back up my claim with examples:
GitHub
AirBnB
Uber
CloudFlare
Who is the next unicorn to join the PR nightmare show?
It's only natural. When a company is small and just starting out we're forgiving of mistakes. Once you start getting more established you really should have better practices in place.
Also, when a company is small, it's hard for it to make big mistakes, because its smallness precludes them. And it's easy to give away awesome stuff to almost nobody.
And, I think it goes deeper than that and reaches in to social signalling and self-identification. We like to be in on the new thing before everyone knows about it. Once that thing is big, and everyone is using it, and especially if it makes a mistake, then we lay on the criticism and distance ourselves from it.
I know I'm guilty of that. I wonder if this is an intrinsic part of being human or a learned behaviour. I'm leaning toward it being, as you say, only natural.
I've been predicting failure for Instacart for a while, mostly because of their business model. But when the funding dries up and the business is unsustainable, businesses start doing shadier things (like stealing tips), so maybe they'll go out on a PR comet.
I still remember when everyone liked Facebook and Mark Zuckerberg (yeah, I'm old!). Facebook was the clean MySpace without all the flash and spamminess. The launch of their API/app platform was almost seen as an act of altruism towards third party developers. Mark was a nobody who had "made it", thanks to Silicon Valley's meritocracy. Ah, the good old days :)
I think Facebook is a considerable exception, as it started as an exclusive place for university kids to plan and meet up, then changed into the platform it is now, where everyone and their mom's mom has a Facebook account and spends all their time on it. Facebook changed its audience, model of operation, and offerings completely, and also started to throw its weight around when it wanted things. This is a pretty far cry from what it had been, and it also signaled the new age of the web.
Yeah people used to love Facebook. But it definitely changed its entire focus with multiple decisions and incidents, not just one or two hiccups.
I'm not a heavy Facebook user, I login once every few months, but I don't feel like it has changed dramatically in the ~12 years I've been using it. There are more ads today but it's otherwise basically the same product. I haven't closely followed any of the incidents and although there are likely good reasons not to like Facebook, I don't feel like its focus has changed much. Anyways, my intention wasn't to defend FB but just to illustrate how quickly narratives change.
No, understood, I wasn't saying you were defending Facebook, but I really do think its core purpose has changed. Even the way it functions is pretty drastically different than its initial offering.
I only used Facebook when it very first started and my university was added to the list of included universities, and I didn't use it much since I didn't really trust it. Many years later (just last year), after leaving a job where my employees wanted to keep in touch with me, I started to occasionally log in just to check for messages, and it was a completely different place from what I remembered. Nothing seemed familiar anymore; the "clean MySpace clone" aspect was pretty much gone, since the social part had been dropped in favor of the feed. The rest is still there, but it seems like an afterthought now compared to what gets shown on the feed.
The biggest difference from 12 years ago is invisible, though: the news feed curation.
It used to be that you'd see the latest things from your friends and pages. Now you see what Facebook thinks you'll want to see from your friends and what some pages paid to advertise at you.
If you haven't liked something by that person in a while, you're unlikely to see anything by them if you only log in once every few months. And if you thought you'd see updates from a page after subscribing... well, you will if the page keeps paying for adverts.
Disclaimer: I don't use Facebook anymore, if I'm wrong please correct me!
They removed a repo because it was named "X for retards" instead of "X for dummies". All of the FORKS of that repo got nuked without notice. This is because the notice went to the original owner of the repo, not the owners of the forks.
I was about to ask the same and the replies aren't clarifying the extent of the sentiment. The black bar is the only signal I've seen that they're 'going dark'.
I also didn't realize that AirBnb is less liked. I still believe in the AirBnb/Uber model, if not the current implementations. (Btw, Uber is the AirBnb of transportation, right? AirBnb started the model?)
> startups that at first everybody is in love with. The startup explodes and grows to become widely successful and morphs from startup to corporation. An incident happens, and then everybody jumps on the bandwagon and bashes them mercilessly. Boycotts ensue. Clone competitors pop up proclaiming to be "not-evil" and the cycle starts again.
Apple may be the only company which seems to be perpetually stuck in that cycle and still going strong. Even Steve Jobs apparently said something along the lines of "we are the biggest startup on the planet."
The trick is that when a company turns out to have major issues, go looking for the people who predicted it. Go looking for the people who never gave them a pass. Go looking for those who have been critical of these companies from the start.
Then see what other companies they are critical of, and try to figure out why the industry fails to be critical of them early.
I never liked Uber. Buy a nice four-door vehicle, pay for maintenance, wear and tear, and make us money. Excuse me, so many of us have a late-model four-door vehicle collecting dust, just waiting to "get our side hustle on." This company, from the earliest days, just reminded me of the terrible jobs that were created in the last eight years. (Yes, the tech jobs are great, but they will go away eventually.)
AirBnB was cute to begin with, until neighbors decided to open hotels.
I like new technology and new companies, but treat all employees fairly. And just because you have the money for the best lawyers and political muscle doesn't mean you shouldn't follow laws that could put the average Joe in jail. I couldn't just decide to start a taxi service, or rent rooms out daily. Just crazy?
I'm all for change, but these companies are exploiting workers for the benefit of essentially a small group of people. Wait until all the bugs are worked out of their systems; bye, bye, Mr. Imbeingpaidwelltoprogram. "Oh, we will always have a spot for you, but at minimum wage."
A little while ago I looked at microservices, and the Uber story stuck in my mind (April 2016 numbers):
- 2000 engineers
- 1000 services
- 8000 git repositories
I can understand fast growth, but on the services and git repository side, considering that most of the engineers are new, it struck me not as fast growth but as out-of-control growth.
At the beginning you may not have much control, but you hire people who are disciplined. Later, one needs a certain amount of structure.
The stories from the legal front, financial front and handling of public relations are very consistent with what I observed on the technical side.
Since some are saying "this is just one incident", recall that there was a second blown red light around the same time in SF, and all this just on the first day of the program. There are also potentially other incidents that did not, by chance, have a bystander recording. Their autonomous driving has many years of development left. I just hope they aren't pushing it on us now because they are pacing toward $3 billion in losses per year.
It's almost as if the software is buggy and they need to test it and fix the bugs. Which is why they have the human driver there to make sure things don't get drastically out of control.
If your software is that bad (can't recognize a red light), you should probably be testing in a simulator with pre-recorded video, not throwing your garbage code on the road where it can cause an accident.
I think I have personally run three or four red lights or stop signs accidentally in my 18 years or so of driving. The fact that a self-driving car running a red light makes the news is exciting to me. We will be much safer with computers driving.
They've worked so hard to present themselves as a honest, law-abiding company. Where could all these negative stories be coming from? It is really surprising that this is happening!
But seriously, what surprises me is that they've basked in negative attention for so long, with so little effort to spin things their way and haven't been suffering any visible consequences in terms of hiring and funding. I'm not sure if that's just because everyone loves a winner, or if it's a measure of how annoyed everyone was at the pre-Uber state of taxis.
Most of the criticism of Uber boils down to calling them a bunch of "bros" or saying that they push back against legislation that you just admitted has had a positive effect on the world.
I don't really expect the whole gig driving thing to go away but I do expect a lot of people who have grown accustomed to depending on VC-subsidized ride hail services to support their car-less urban lifestyles are in for a rude awakening.
The Supreme Court doesn't feel that the Tenth Amendment is real, so clearly there's some truth to it. There are powers NOT enumerated in the Constitution that the federal government legislated into existence.
I'm unsure what the problem is with the federal government legislating laws into existence. That's sort of their job. The Constitution isn't the end-all, be-all law of the land.
> The Constitution isn't the end-all, be-all law of the land.
But the Constitution says it is. If one law says "you can't do that" and Congress does THAT THING, what is the point? The Supreme Court just says "living document" and lets Congress do whatever.
> I'm unsure what the problem is with the federal government legislating laws into existence.
They're legislating things that the Constitution says they don't have the power to do.
* trying to dredge up dirt on journalists for being negative towards your company
* having the ceo say an incident involving an uber driver choking a passenger "never happened"
* offering customers rides with "hot chicks" in france
* kalanick boasting that he should have called the company "boober" for all the women he gets
* having a "god view" that employees used to spy on exes, politicians and celebrities. they paid a measly $20k fine and supposedly continue to still allow employees to use god view.
keep in mind the executive who tried to find dirt on journalists to blackmail them wasn't even fired by kalanick.
BUT YES. YES. the press! the media! they're all out to get Uber.
[1] Uber says self-driving car ran red light due to “human error” https://techcrunch.com/2016/12/14/uber-looking-into-incident...