Without LIDAR, this is a fool’s endeavor.
I wish this was talked about every single time the subject came up.
Responsible, technologically progressive companies have been developing excellent, safe, self-driving car technology for decades now.
Elon Musk is eviscerating the reputation of automated vehicles with his idiocy and arrogance. They don’t all suck, but Tesla sure sucks.
Even with LIDAR there are just too many edge cases for me to ever trust a self-driving car that uses current-day computing technology. Just a few situations I've been in that I think an FSD system would have trouble with:
I pulled up at a red light where a construction crew was working on the side of the road. They had a police detail with them. As I was watching the red light, the cop walked up to my passenger side and yelled "Go!" at me. Since I was looking at the light, I didn't see him trying to wave me through the intersection. How would a car know to drive through a red light when a cop is there telling it to?
I've seen cars drive the wrong way down a one-way street because the far end was blocked due to construction and backtracking was the only way out. (Residents were told to drive out the wrong way.) Would a self-driving car just drive down to the construction site and wait for hours for them to finish?
I've seen more than one GPS try to route cars improperly. In some cases it thinks a practically impassable dirt track is a paved road. In other cases I've seen chains and concrete barriers blocking intersections that cities and towns have decided traffic shouldn't go through.
Temporary detour or road closure signs?
We are having record amounts of rain where I live, and we've seen roads covered by significant flooding that makes them unsafe to drive on. Often there aren't any warning signs or barricades for a day or so after the rain stops. Would an FSD car recognize a flooded-out road and turn around, or drive into the water at full speed?
In my opinion, FSD isn’t attempting to solve any of those problems. Those will require human intervention for the foreseeable future.
Musk's vision is (was?) to eventually turn Teslas into driverless robo-taxis. At one point he even said he could see regular Tesla owners letting their cars drive around like automated Ubers, making money for them, instead of sitting idle in garages.
Musk is an idiot
Or there are other, better ways to tell an FSD car that the road is closed. We could use a QR code or something like it that includes info about the blockade, where you can drive around it, and how long it will stay blocked. An FSD car should be connected enough to call home and report to the servers, which then update the other FSD cars, et voilà.
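A minimal sketch of what such a machine-readable closure notice might carry. Everything here (the field names, the road identifier, the coordinates) is a made-up illustration of the idea, not any real standard:

```python
import json

# Hypothetical payload a road-closure QR code could carry:
# what is blocked, for how long, and a suggested detour point.
closure = {
    "type": "road_closure",
    "road_id": "K-4711",                    # made-up identifier
    "blocked_from": "2024-05-01T06:00:00Z",
    "blocked_until": "2024-05-03T18:00:00Z",
    "detour_via": [52.5200, 13.4050],       # lat/lon of a detour waypoint
}

encoded = json.dumps(closure)   # this string would be embedded in the QR code
decoded = json.loads(encoded)   # what the car (or the fleet server) reads back
print(decoded["detour_via"])
```

The same payload could just as well be reported to a fleet server, which is the "call home" step the comment describes.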
Sure. A QR code. That couldn’t possibly get obscured.
Why would it get obscured? Just put a giant QR code there. You can create QR codes with enough error correction that you only need to see part of the square to recover all the info in it. I don't see obscuring being much of a problem.
Wow, you solved one of the really easy self-driving problems (sign recognition) with a more complicated solution.
Sign recognition and traffic-light recognition are available in a lot of cars. Detecting that the dark lump on the road is just a shadow, and not slamming on the brakes automatically, is the hard part.
Or that the white sky is in fact not a white sky but the sideways view of a semi trailer.
(The latter issue is why relying on multiple sensors - camera + radar + ultrasound in the case of my car's emergency brake system and drive assistant - is always a lot better: each sensor on its own has its failure modes.)
Umm, we were talking about blocked streets, not normal sign detection.

But if those signs were standardized to designs easily readable by cameras and computers (like QR codes), the car would never mistake an "S" for an "5". And thanks to the high contrast, it would still work at night.

Honestly, though, I would prefer manually driven cars to be banned from roads entirely, with only FSD cars allowed. Streets would then be built for FSD cars (instead of for human drivers), and just about all of these problems would be solved.
In huge parts of the world these are standardized and readable as well - wherever Vienna Convention signs are used. (Those rely more on pictograms.)
Detours are also marked by standard signs in a predetermined font with a specific reflectivity. Easy for a car to recognize.
And even then, text recognition is one of the really easy parts of self-driving. I've done development work on document recognition for random bank statements, building plans, and legal documents - all the paperwork for financing a house. Those documents come in various fonts, layouts, and formats.
Mud for one. Trees and bushes for another. Strong wind for a third. All of those things already obscure signs or make them very hard for humans to read, let alone a computer.
Well, FSD is supposed to be Level 5, according to the marketing and the description when it went on sale. Of course, we know Tesla's lawyers told California that they have nothing more than Level 2, have no timeline to begin building anything beyond Level 2, and that the entire house of cards hinges on courts and regulators continuing to turn a blind eye.
Just like that cheaper non-lidar Roomba with room mapping technology, it will get lost.
I don't know why people are so quick to defend the need for LIDAR when it's clear the challenges in self-driving are not with data acquisition.
Sure, there are a few corner cases where it would perform better than visual cameras, but a new array of sensors won't solve self-driving. Similarly, the lack of LIDAR doesn't preclude self-driving; otherwise we wouldn't be able to drive either.
challenges in self driving are not with data acquisition.
What?!?! Of course they are.
We can already run all this shit through a simulator and it works great, but that’s because the computer knows the exact position, orientation, velocity of every object in a scene.
In the real world, the underlying problem is that the computer doesn't know what's around it, or what the things around it are doing or about to do.
It’s 100% a data acquisition problem.
Source? I do autonomous vehicle control for a living, in environments much more complicated than a paved road with an accepted set of rules.
You’re confusing data acquisition with interpretation. A LIDAR won’t label the data for your AD system and won’t add much to an existing array of visible spectrum cameras.
You say the underlying problem is that the computer doesn’t know what’s around it. But its surroundings are reliably captured by functional sensors. Therefore it’s not a matter of acquisition, but processing of the data.
won’t add much to an existing array of visible spectrum cameras.
You do realize LIDAR is just a camera with an accurate distance per pixel, right?
It absolutely adds everything.
But its surroundings are reliably captured by functional sensors
No, it's not. That's the point. LIDAR is the functional sensor required.
You cannot rely on stereoscopic cameras.
The distance resolution is not there.
It's not there for humans.
It's not there for the simple reason of physics.
Unless you spread those cameras out to an impractical width - and even then it STILL wouldn't be as accurate as LIDAR.
You are more than welcome to try it yourself.
You can even be as stupid as Elon and dump money and reputation into thinking it's easier or cheaper without LIDAR.
It doesn't work, and it'll never work as well as a LIDAR system.
Stereoscopic cameras will always be more expensive than LIDAR from a computational standpoint.
AI will do a hell of a lot better recognizing things via a LIDAR camera than a stereoscopic camera.
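The physics point about baseline width can be sketched with the standard stereo disparity model: depth error grows with the square of range, divided by focal length times baseline. The numbers below (focal length in pixels, baseline, matching error) are illustrative assumptions, not figures from any production system:

```python
# Stereo depth from disparity:  z = f * B / d
# Propagating a disparity matching error dd gives:
#   dz ≈ z**2 * dd / (f * B)
# i.e. depth error grows quadratically with range, and shrinks
# only linearly as the baseline B is widened.

def stereo_depth_error(z_m, focal_px=1000.0, baseline_m=0.3, disp_err_px=0.5):
    """Approximate 1-sigma depth error (metres) at range z_m for an
    assumed focal length, camera baseline, and disparity matching error."""
    return (z_m ** 2) * disp_err_px / (focal_px * baseline_m)

for z in (10, 50, 100):
    print(f"{z:>4} m -> +/- {stereo_depth_error(z):.2f} m")
```

With these assumed numbers the error at 100 m is 100x the error at 10 m, which is the "resolution of distance is not there" argument in one formula; a time-of-flight sensor's range error, by contrast, is roughly constant with distance.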
This assumes depth information is required for self-driving; I think this is where we disagree. Tesla is able to reconstruct its surroundings from visual data alone. In biology, most animals don't have explicit depth information and are still able to navigate their environments. Requiring LIDAR is a crutch.
I disagree with you; I don't think visual cameras alone are up to the task. There was an instance of a Tesla in Autopilot mode driving at night with a drunk driver. It took place on a highway in Texas; the car's camera footage was released, and it showed that Autopilot did not identify the police car in its lane, with its red/blue lights flashing, as a stationary obstacle. Instead, it didn't realize there was a car in the way until about 1 second before the 55 mph impact, and it turned off Autopilot in that final second.
Having multiple layers of sensors - some good at actually sensing a stationary obstacle, plus accurate range finding, plus visual analysis to pick out people and animals - that's the way to go.
Visual-range-only cameras were also just reported to have a harder time recognizing people of color and children.
If the obstacle was visible in the footage, the incident could have been avoided with visible spectrum cameras alone. Once again, a problem with the data processing, not acquisition.
If we’re talking about the safety of the driver and people around them, why not both types of sensors? LIDAR has things it excels at, and visual spectrum cameras have things they do well too. That way the data processing side has more things to rely on, instead of all the eggs in one basket.
Yes, self-driving is not computationally solved at all. But the reason people defend LIDAR is that visible-light cameras are very bad at depth estimation. Even with parallax, a lot of software has a very hard time accurately calculating distance and motion.
Don't let them know about that, I don't want my radar detector flipping out over laser lol
what?
K and Ka band are used for blind-spot monitoring and would make radar detectors go nuts until the filtering got worked out. Cars that use LIDAR will set them off as well, though those are still rarer.
What?
What does radar have to do with LIDAR?
They are in completely different parts of the EM spectrum.
Also, modern LIDAR units are keyed so that they can't interfere with other LIDAR systems.
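One common way such keying works is pulse coding: each unit fires a pseudo-random code and correlates returns against its own code, so another unit's pulses don't register as echoes. This is a toy sketch of the correlation idea, not any particular vendor's scheme:

```python
import random

# Two LIDAR units, each with its own pseudo-random +/-1 code.
random.seed(0)
code_a = [random.choice((-1, 1)) for _ in range(64)]
code_b = [random.choice((-1, 1)) for _ in range(64)]

def correlate(x, y):
    """Dot product of two codes: large for matching codes, small otherwise."""
    return sum(a * b for a, b in zip(x, y))

print(correlate(code_a, code_a))  # self-correlation: exactly 64 (code length)
print(correlate(code_a, code_b))  # cross-correlation: much smaller in magnitude
```

A detector that only accepts returns with a strong correlation against its own code effectively ignores the other unit's pulses.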
Maybe because, I dunno, it's a detector? I'd love to try to explain it further, but it seems like you're being intentionally oblivious, so why bother lol
Do you have lidar on your head? No, yet you're able to drive with just two cameras on your face. So no, lidar isn't required. That's not to say driving in a very dynamic world isn't very difficult for computers to do - it's not a matter of if, just a matter of time.
Would lidar allow “super human” driving abilities? Like seeing through fog and in every direction in the dark, sure. But it’s not required for the job at hand.
You have eyes that are way more amazing than any cameras that are used in self driving, with stereoscopic vision, on a movable platform, and most importantly, controlled via a biological brain with millions of years of evolution behind it.
I’m sorry, you can’t attach a couple cameras to a processor, add some neural nets, and think it’s anything close to your brain and eyes.
And also, cameras don’t work that great at night. Lidar would provide better data.
Humans don’t drive on sight alone.
Uhhhh… What the fuck else are the rest of you using?!
Senses to support your sight when driving? Hearing and Balance come to mind, in that order of importance as supporting senses.
I have no idea what the sense of balance has to do with driving a car, and even deaf people can get a driver's license, but okay. How is this an argument for LIDAR again? It has nothing to do with either of those things.
One obvious sense is hearing, as in hearing things like sirens to move out of the way.
My probing cane.
FSD
What’s the human equivalent for lidar then?
Sound? Though I guess all the fancy expensive cars remove this feedback
That’s like asking what the human equivalent of a GPU is. There isn’t one nor would there be because humans and computers are fundamentally different things.
Do you have lidar on your head?
Nope,
And that’s exactly why humans crash. Constantly.
Even when paying attention.
They don’t have resolution in depth perception, nor the FOV.
No, it isn't. Everywhere in the world, the vast majority of crashes are caused by negligence, speeding, and distraction - all factors that can be avoided without increasing our depth-perception accuracy.
I remember watching a video asking whether there is a camera that can see as well as a human eye. The conclusion was that there are cameras that come close but not quite, and they are very big and expensive - plus the human brain filters much of what the eye takes in without you realizing it. I think it could be done with a camera or two, but I don't think we're close to that technology in the near future.
Do you have CCDs in your head? No? This argument is always so broken it’s insane to see it still typed out as anything but sarcasm.
A lot of LIDAR fans here for some reason, but you’re absolutely right.
There's just not much evidence that the accurate depth perception only LIDAR provides is required for self-driving, and it also won't solve the complex navigation of real-world scenarios. A set of visible-spectrum cameras can, over time, reconstruct a 3D environment well enough for navigation - that's quite literally what Tesla's FSD does.
I don’t know why someone would still say it’s not possible when we already have an example running in production.
"But Tesla FSD has a high disengagement rate" - for now, yes. But those scenarios can more often be solved by high-definition maps than by LIDAR. For anyone who disagrees: go to YouTube, choose a recent video of Tesla's FSD, and try to find a scenario where a disengagement would have been avoided by LIDAR alone.
There are many parts missing for a complete autonomous driving experience. LIDAR is not one of them.