Alphabet-owned Waymo unveiled its sixth-generation Driver system on Monday with a more efficient sensor setup. Despite using fewer cameras and lidar sensors than the current platform, the new setup reportedly maintains the same safety levels. Once it’s ready for public rides, it will coexist with the current-gen lineup.

CNBC reports that the new system is built into Geely Zeekr electric vehicles. Waymo first said it would work with the Chinese EV maker in late 2021. The new platform’s vehicles are boxier than the current-gen fleet, which is built on Jaguar I-PACE SUVs. The Zeekr-built sixth-gen fleet is reportedly better for accessibility, including a lower step, higher ceiling and more legroom — with roughly the same overall footprint as the Jaguar-based lineup.

The sixth-gen Waymo Driver reduces the camera count from 29 to 13 and the lidar sensors from five to four. Waymo says the sensors work together with overlapping fields of view and safety-focused redundancies that let the vehicle perform better in various weather conditions. The company claims the new platform’s field of view extends up to 500 meters (1,640 feet) in daytime, nighttime and “a range of” weather conditions.

  • @IphtashuFitz
    14 months ago

    Well that’s the thing about edge cases - by definition they haven’t been explicitly taken into account by the programming in these cars. It is literally impossible to define them all, program responses for them, and test those responses in real-world conditions. For a self driving car to handle real-world edge cases it needs to be able to identify when one is happening and very quickly determine a safe response to it.

    These cars may already be safer than drunk/drowsy drivers in optimal situations, but even a drowsy driver will likely respond safely if they encounter an unusual situation that they’ve never seen before. At the very least they’d likely slow down or stop until they can assess the situation and figure out how to proceed. Self driving cars also need to be able to recognize completely new/unexpected situations and figure out how to proceed safely. I don’t think they will be able to do that without some level of human intervention until true AI exists, and we’re still many decades away from that.