• @[email protected]
    link
    fedilink
    English
    -10
    edit-2
    2 months ago

    No, and neither are your eyes, but you can still see the world in 3D.

    You can use normal cameras to create 3D images by placing two cameras next to each other and creating a stereogram. Alternatively, you can do it with a single camera by taking a photo, moving the camera slightly, and taking another photo - which is exactly what the cameras on a moving vehicle are doing all the time. Objects close to the camera shift between frames more than the background does (motion parallax). If you have a billboard with a person on it, the background in that picture moves relative to the person differently than the background behind an actual person would.
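    The same triangulation works whether the two views come from two cameras at once or one camera at two moments. Here is a minimal sketch of the pinhole stereo relation Z = f·B/d, with made-up focal length and baseline values (a real rig gets these from calibration):

    ```python
    import numpy as np

    # Illustrative values only; a real stereo rig measures these
    # during camera calibration.
    FOCAL_LENGTH_PX = 700.0  # focal length, in pixels
    BASELINE_M = 0.12        # distance between the two viewpoints, meters

    def depth_from_disparity(disparity_px: np.ndarray) -> np.ndarray:
        """Pinhole stereo model: Z = f * B / d.

        Pixels that shift a lot between the two views (large disparity)
        are close; pixels that barely shift are far away.
        """
        d = np.where(disparity_px > 0, disparity_px, np.nan)  # guard div-by-zero
        return FOCAL_LENGTH_PX * BASELINE_M / d

    # A nearby object shifting 40 px vs. distant background shifting 2 px:
    print(depth_from_disparity(np.array([40.0, 2.0])))  # ~[2.1, 42.0] meters
    ```

    This is also why the billboard fails the test: the printed person and the printed background sit on the same flat surface, so they share one disparity instead of two.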

    • @Buffalox

      “neither are your eyes”

      That’s a grossly misleading statement. We definitely use two eyes to achieve a 3D image with depth perception.

      So the question is obviously whether Tesla does the same with their Camera AI for FSD.

      IDK if they do, but if they do, they apparently do it poorly, because FSD has a history of driving into things that are obviously (to a human) in front of it.

    • @[email protected]
      link
      fedilink
      English
      22 months ago

      Talk about making a difficult problem (self-driving) even harder by first having to solve another hard problem (depth perception from cameras alone).

      • @[email protected]
        link
        fedilink
        English
        0
        edit-2
        2 months ago

        Just slapping on a lidar doesn’t simply solve that issue for you either. Making out individual objects in the point-cloud data is just as difficult, and on top of that you still have to deal with cameras, because Waymo uses both. I don’t see how you imagine handling lidar and cameras would be easier than handling cameras alone.
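        To give a sense of what “making out individual objects” even means: a first pass usually clusters nearby returns into candidate objects, and even that needs per-sensor tuning. A toy sketch using scikit-learn’s DBSCAN on a synthetic scene (the clump positions, eps, and min_samples are all invented for illustration):

        ```python
        import numpy as np
        from sklearn.cluster import DBSCAN

        # Synthetic stand-in for one lidar sweep: two clumps of returns
        # (say, a car and a pedestrian) plus scattered ground noise.
        rng = np.random.default_rng(0)
        car = rng.normal([10.0, 2.0, 0.8], 0.3, size=(200, 3))
        pedestrian = rng.normal([6.0, -1.0, 0.9], 0.15, size=(60, 3))
        noise = rng.uniform([0.0, -5.0, 0.0], [20.0, 5.0, 0.2], size=(100, 3))
        points = np.vstack([car, pedestrian, noise])

        # Group returns that sit close together into candidate objects;
        # anything too sparse to cluster is labeled -1 (noise).
        labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)
        for label in sorted(set(labels)):
            name = "noise" if label == -1 else f"object {label}"
            print(name, (labels == label).sum(), "points")
        ```

        A real perception stack layers ground removal, tracking, and classification on top of this, which is exactly where the difficulty lives.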

        Also, Tesla has already more or less solved this issue. FSD works just fine with cameras only, and the newer HW4 models have a radar too.