Tesla, as usual, has been generous in giving us plenty to talk about, especially when it comes to its Level 2 driver-assist system, confusingly known as Autopilot and/or Full Self-Driving (FSD). Yesterday a Tesla on Autopilot crashed into a police car, and now a video of a mostly Autopilot-assisted drive through Oakland is making the rounds, getting a lot of attention for the often confusing and/or just plain bad decisions the car makes. Strangely, though, it may be the system's shitty performance that helps people use it safely.
It all ties into a letter from the National Transportation Safety Board (NTSB) to the U.S. Department of Transportation (USDOT) regarding the National Highway Traffic Safety Administration's (NHTSA) "Advance Notice of Proposed Rulemaking" (ANPRM), in which the NTSB is effectively asking what the hell we should do about autonomous vehicle (AV) testing on public roads.
From this letter:
Because NHTSA has put in place no requirements, manufacturers can operate and test vehicles virtually anywhere, even if the location exceeds the limitations of the AV control system. For example, Tesla recently released a beta version of its Level 2 Autopilot system, described as having full self-driving capability. By releasing the system, Tesla is testing on public roads a highly automated AV technology but with limited oversight or reporting requirements.
While Tesla includes a disclaimer that "currently enabled features require active driver supervision and do not make the vehicle autonomous," NHTSA's hands-off approach to overseeing AV testing poses a potential risk to motorists and other road users.
The NTSB/NHTSA/USDOT letter doesn't propose any solutions just yet, but it does state something we've been saying for years: Tesla and other companies are beta-testing self-driving car software on public roads, surrounded by other drivers and pedestrians who never consented to be part of any test, and when this beta software crashes, the crashes can be literal.
All of this provides good context for the Autopilot-in-Oakland video, the highlights of which can be seen in this tweet
… and the full 13-and-a-half-minute video can be viewed here:
There’s a lot in this video worth watching if you’re interested in Tesla’s Autopilot/FSD system. The video uses what I believe is the latest beta version of FSD, version 8.2, of which there are plenty of other driving videos available online.
There is no doubt that the system is technologically impressive; pulling off any of this is a colossal achievement, and Tesla's engineers should be proud.
At the same time, though, it doesn't come close to being as good as a human driver, at least in many contexts, and yes, as a Level 2 semi-automated system, it requires the driver to be alert and ready to take over at any moment, a task humans are notoriously bad at, and the reason I think all L2 systems are inherently flawed.
While many FSD videos show the system being used on highways, where the overall driving environment is far more predictable and easier to navigate, this video is interesting precisely because urban driving is so much harder.
It’s also interesting because the guy in the passenger seat is such a constant and unwavering apologist, to the point that if the Tesla lunged and mowed down a litter of kittens, he'd praise it for its excellent small-target tracking ability.
Over the course of the Oakland drive there are plenty of places where the Tesla performs very well. There are also places where it makes truly terrible decisions, like driving in the oncoming-traffic lane, or turning the wrong way down a one-way street, or weaving around like a drunken robot, or cutting corners over curbs, or just stopping, for no discernible reason, right in the middle of the road.
In fact, the video is usefully divided into chapters based on these interesting events:
0:00 Introduction
0:42 Double-parked cars (1)
1:15 Pedestrian in crosswalk
1:47 Crossing solid lines
2:05 Disengagement
2:15 Chinatown
3:13 Driver avoidance
3:48 Unprotected left (1)
4:23 Right turn into the wrong lane
5:02 Near head-on incident
5:37 Drunken performance
6:08 Unprotected left (2)
6:46 Disengagement
7:09 No turn on red
7:26 "Take over immediately"
8:09 Wrong lane; behind parked cars
8:41 Double-parked truck
9:08 Bus-only lane
9:39 Close call (braking)
10:04 Left turn; blocked lane
10:39 Wrong way!!!
10:49 Double-parked cars (2)
11:13 Stop-sign delay
11:36 Hesitant left
11:59 Near collision (1)
12:20 Near collision (2)
12:42 Close call (wall/fence)
12:59 Verbal review of the drive/beta
This reads like the track list of a very strange concept album.
Nothing in this video, however objectively impressive, suggests that this machine drives better than a human. If a human did the things seen here, you'd be asking out loud, over and over, what the hell was wrong with them.
Some situations are clearly things the software hasn't been programmed to understand, like the fact that cars stopped with their hazard lights on are obstacles to be carefully driven around. Other situations are the result of the system misinterpreting camera data, or overcompensating, or simply struggling to process its environment.
Some of the defenses offered in the video actually help surface the bigger issues:
The argument that there are many, many more human-caused wrecks on any given day is deeply misleading. Sure, there are far more, but there are also far more humans driving cars, and even if the numbers were equal, no carmaker is trying to sell you those shitty human drivers as a feature.
Likewise, the reminders that FSD is a beta only bring us back to that acronym-stuffed NTSB letter: should we be letting companies beta-test self-driving car software in public with no real oversight at all?
Tesla’s FSD is still no safer than a normal human driver, which is why videos like this one, showing plenty of worrisome FSD driving events, are so important, and may even save lives. These videos erode some confidence in FSD, which is exactly what needs to happen if this beta software is going to be tested safely.
Blind faith in any L2 system is how you end up wrecked and possibly dead. That's because L2 systems give little to no warning when they need a human to take over, and a driver who doesn't entirely trust the system is far more likely to be ready to grab the wheel.
And I'm not the only one suggesting this:
The paradox here is that the better a Level 2 system gets, the more the people behind the wheel will trust it, which means they'll pay less attention, which makes them less able to take control when the system actually needs them to.
That’s why most Autopilot crashes happen on highways, where the combination of generally good Autopilot performance and high speeds results in poor driver attention and reduced reaction time, which can end in disaster.
All Level 2 systems, not just Autopilot, suffer from this, which is why they're all garbage.
While this video clearly shows that FSD's basic driving skills still need work, that shouldn't be Tesla's focus so much as developing safe, workable failure-mitigation procedures, so that immediate driver attention isn't required.
Until then, the best way to use Autopilot, FSD, SuperCruise, or any other Level 2 system safely is to watch all these videos of the systems screwing up, lose a bit of confidence in them, and stay tense and alert while the machine is driving.
I know that's not what any fan of autonomous vehicles wants to hear, but the truth is they're just not done yet. It's time we accept that and treat them accordingly if we actually want to make progress.
Getting defensive and pretending a machine that keeps screwing up isn't screwing up helps no one.
So, if you love your Tesla and you like Autopilot and FSD, watch this video carefully. Appreciate the good parts, but accept the bad parts. Don't try to make excuses. Watch, learn, and keep that shit in your head when you're sitting behind the wheel, not really driving.
It’s not fun, but this stage of any technology like this always requires work, and work isn’t always fun.