Let’s assume, for the sake of argument, that a human driver would fall for it.
Would that make it a good idea to potentially run over a kid just because a human would have done the same, when we have a decent option to do better than human senses?
What makes you assume that a vision-based system performs worse than the average human? Or that it can’t be 20 times safer?
I think the main reason to go vision-only is the software complexity of merging mixed sensor data. Radar or lidar alone also has its limitations.
I wish it was a different company or that Musk would sell Tesla. But I think they are the closest to reaching full autonomy. Let’s see how it goes when FSD launches this year.
FSD is launching this year??! Where have I heard that before?
Somehow other car companies are managing to merge data from multiple sources just fine. Tesla even used to do it, but stopped to shave a few dollars off their costs.
As for the assumption that there would be safety concerns: this video clearly demonstrates that adding lidar avoids three crash scenarios, at least two of them realistic. As I said, my standard is not “human driver” but the safest option that has been demonstrated.
Which other system can drive autonomously in potentially any environment without relying on map data?
If merging data from different sensors increases complexity by a factor of 5, it’s just not worth it.
For one, I don’t know whether ‘autonomous no matter what’ is an important enough goal compared to ADAS; for another, the gold standard in the industry outside of Tesla is vehicle-mounted lidar, with investment going into bringing the price of the tech down.
No one ever claimed that merging data from different sources is too hard a problem; again, even Tesla used to do it and then decided to downgrade their capabilities to cut costs. “It’s just not worth it” is a strange take on a video that quite clearly demonstrates that lidar gives you better data than you can possibly get from cameras, and that it avoids collisions, collisions of the kind that kill thousands of people a year. Even the relatively conservative, “won’t turn on unless conditions are perfect” Autopilot has killed quite a few people and been involved in hundreds of accidents beyond that.
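To make that concrete, here is a minimal, purely illustrative sketch of what a conservative “late fusion” of two sensor pipelines can look like for the safety-critical case. The function names and numbers are made up for the example, and real stacks do something far more involved (probabilistic tracking and filtering), but the core idea of acting on the most pessimistic sensor is not exotic:

```python
# Illustrative toy only, not how any production stack works.
# Each sensor pipeline independently reports the distance to the nearest
# obstacle ahead (or None if it sees nothing); the planner acts on the
# most pessimistic reading.

from typing import Optional


def fuse_nearest_obstacle(camera_est_m: Optional[float],
                          lidar_est_m: Optional[float]) -> Optional[float]:
    """Return the closest obstacle distance reported by any sensor."""
    readings = [d for d in (camera_est_m, lidar_est_m) if d is not None]
    return min(readings) if readings else None


def should_brake(camera_est_m: Optional[float],
                 lidar_est_m: Optional[float],
                 braking_distance_m: float) -> bool:
    """Brake if either sensor puts an obstacle inside our stopping distance."""
    nearest = fuse_nearest_obstacle(camera_est_m, lidar_est_m)
    return nearest is not None and nearest <= braking_distance_m


# Example: the camera is fooled by a painted wall and reports nothing,
# while the lidar still returns a hard range measurement.
print(should_brake(camera_est_m=None, lidar_est_m=18.0, braking_distance_m=40.0))  # True
```

The genuinely hard part is presumably what the “complexity” argument is really about: deciding what to do when the sensors disagree without constantly braking for ghosts. That is a real engineering cost, but it is a very different claim from “fusion is not worth doing.”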
Autopilot is not FSD and I bet many of the deaths were caused by inattentive drivers.
Which other system has a similar architecture and similar potential?
Autopilot is not FSD, but these scenarios are supposed to be within Autopilot’s capability to react to. There’s no indication that FSD is better equipped to handle these sorts of scenarios than Autopilot. Many of the Autopilot incidents involve the car plowing head-on into a static obstacle. Yes, the drivers should have been paying attention, but again, the point is that Autopilot, even with all the updates, simply fails to accurately model the environment, even in cases that should be considered easy.
In terms of comparative systems, I frankly don’t know. No one has a launched offering, and we only know Tesla’s as well as we do because they opt to use random drivers on public roads as guinea pigs, which isn’t great. But again, this video demonstrated “easy mode” scenarios where the Tesla failed and another car succeeded. All of that is beside the point anyway: it’s not as if radar and lidar would preclude FSD either way. The video makes clear, in both theory and practice, that better sensing technology can only improve the safety of a system; FSD with radar and lidar added would have a greater capacity for safety than FSD with cameras alone. Leaving lidar off cheap cars might be forgivable historically, but removing the radar is bonkers, since radar ships on some pretty low-end cars. No one else wants to risk FSD-like capability without lidar because they see it as too risky. It’s not that Tesla knows some magic that makes cameras safe; they are simply willing to take on more risk, and willing to argue “humans are deadly too,” whereas the competition doesn’t even want to have that debate.
The main problem in my mind with purely vision-based FSD is that it just isn’t as smart as a real human. A real human can reason about what they see, detect inconsistencies that are too abstract for current ML algorithms to pick up, and act appropriately in never-before-seen circumstances. A real human wouldn’t drive at full speed through an area with very low visibility; they can use context to reason about the situation. Current ML algorithms can’t do any of that; they can’t reason. As such, they are inherently incapable of using the same sensors (cameras/eyes) to the same effect. Lidar is extremely useful because it provides a somewhat better picture of the world than cameras can reliably deliver. I’m still not sure that even with lidar you can make a fully safe FSD car, but it will definitely help.
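To illustrate why that “better picture” matters for exactly the scenarios in the video, here is a small sketch (the coordinate frame, corridor width, and height thresholds are assumptions made up for the example) of the kind of check a lidar return makes trivial: the distance to whatever is physically in front of the car is measured directly, rather than inferred from how the scene looks.

```python
# Illustrative sketch, assuming lidar points are already in the vehicle frame
# (x forward, y left, z up, in meters). The range is measured directly; a
# camera-only stack has to infer the same number from pixel appearance.

import numpy as np


def min_forward_clearance(points: np.ndarray,
                          corridor_half_width_m: float = 1.2,
                          min_height_m: float = 0.3,
                          max_height_m: float = 2.5) -> float:
    """Distance to the nearest return inside the corridor the car will drive through."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_corridor = (
        (x > 0.0)                                  # ahead of the car
        & (np.abs(y) < corridor_half_width_m)      # roughly within our lane
        & (z > min_height_m) & (z < max_height_m)  # ignore road surface and overhead structure
    )
    return float(x[in_corridor].min()) if in_corridor.any() else float("inf")


# A wall of returns 15 m ahead reads as 15 m no matter what is painted on it,
# which is exactly the failure mode the video is poking at.
wall = np.column_stack([
    np.full(200, 15.0),                 # x: 15 m ahead
    np.random.uniform(-1.0, 1.0, 200),  # y: spread across the lane
    np.random.uniform(0.5, 2.0, 200),   # z: above the road surface
])
print(min_forward_clearance(wall))  # ~15.0
```

None of this replaces the perception stack, of course; it is just the part of the picture that a camera has to reconstruct through inference and that lidar hands you directly, modulo its own weaknesses in rain, snow, and similar conditions.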
The assumption that ML lacks reasoning is outdated. While it doesn’t “think” like a human, it learns from more scenarios than any human ever could. A vision-based system can, in principle, surpass human performance, as it has in other domains (e.g., AlphaGo, GPT, computer vision in medical imaging).
The real question isn’t whether vision-based ML can replace humans—it’s when it will reach the level where it’s unequivocally safer.