Another highly dangerous comparison between two fundamentally incomparable systems.
The Waymo vehicle is a Level 4-capable vehicle with no human driver fallback requirement.
The Tesla vehicle is a Level 2-capable vehicle with a human driver fallback requirement at all times, which effectively leaves the Tesla's human driver with the same control responsibilities as the driver of a vehicle with no automation at all.
The risk profiles and validation requirements of these two vehicles are, therefore, not comparable.
OK, but setting that aside.
First off, there is no guarantee that this human driver did not run the route several times in their FSD Beta-active vehicle and then select the most "visually performant" run.
In fact, since these are safety-critical systems, we must assume that was done.
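To see why that matters, consider a back-of-the-envelope sketch (the 50% per-run flaw rate below is purely an assumed, illustrative number, not a measured FSD Beta statistic):

```python
# Probability that cherry-picking yields at least one "clean" video.
# The per-run flaw probability is an illustrative assumption only.
p_flawed_run = 0.5  # assume each run has a 50% chance of showing a visible flaw

for n_runs in (1, 3, 5, 10):
    # P(at least one clean run) = 1 - P(every run is flawed)
    p_clean_video = 1 - p_flawed_run ** n_runs
    print(f"{n_runs:>2} runs -> P(a postable 'clean' video) = {p_clean_video:.3f}")
```

Even a system that visibly misbehaves on half of its runs produces a postable "clean" video about 97% of the time after just five attempts. A single flawless-looking clip, selected after the fact, tells us almost nothing.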
Secondly, given the reliability demands of a Level 4-capable vehicle (which is, implicitly and ignorantly, what the human driver featured in this clip claims this to be)… a single video, or indeed any number of videos, is simply too small a sample to establish anything quantifiable.
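For a rough sense of the scale of evidence actually required, here is a sketch using the statistical "rule of three" (the target failure rate and per-video mileage are assumptions chosen purely for illustration):

```python
# "Rule of three": observing zero failures over n trials bounds the true
# failure rate below roughly 3/n at 95% confidence.
# Both numbers below are illustrative assumptions, not regulatory targets.
target_rate = 1e-5   # assumed acceptable rate: 1 failure per 100,000 miles
video_miles = 15     # rough mileage covered by one demo drive

miles_needed = 3 / target_rate
print(f"Failure-free miles for 95% confidence: {miles_needed:,.0f}")
print(f"Equivalent ~{video_miles}-mile demo videos: {miles_needed / video_miles:,.0f}")
```

Under even those generous assumptions, demonstrating the target rate would take roughly 300,000 consecutive failure-free miles, i.e. on the order of 20,000 clips like this one, and a genuine Level 4 reliability target would be stricter by orders of magnitude.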
I did not watch the whole clip, but in at least one spot the human driver apparently interacts with the surrounding vehicles to give the FSD Beta-active vehicle the space it needs to exit a blocked lane:
And, here, the FSD Beta-active vehicle's planner appeared to intend to make an illegal left turn from a straight/right-turn-only lane. The vehicle seemingly did not complete the illegal turn, but aside from the human driver's negligence in allowing it to even "try", this is an "unseen" issue that is simply hand-waved away.
What if, say, the FSD Beta-active vehicle had been in the exact same lane, but the traffic configuration or volume had been slightly different? Would it have attempted to execute the turn illegally?
No one can say, the human driver and Tesla included, and that is unacceptable given the obligations of a safety-critical system.
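Actually answering that question would take systematic scenario perturbation in simulation, not one-off street videos. A minimal sketch of the idea follows; the `Scene` type, the `plan_maneuver` stand-in, and every parameter value are hypothetical, invented here for illustration and unrelated to any real Tesla or Waymo interface:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Scene:
    lane: str               # lane the ego vehicle occupies
    traffic_density: float  # surrounding vehicles per 100 m
    lead_gap_s: float       # time gap to the nearest lead vehicle, seconds

def plan_maneuver(scene: Scene) -> str:
    # Toy stand-in so the harness runs; a real test would query the actual planner.
    if scene.traffic_density < 5.0 and scene.lead_gap_s > 1.0:
        return "left_turn"
    return "straight"

def sweep_illegal_turn_scenarios() -> None:
    # Hold the lane fixed while varying traffic volume and spacing,
    # then count how often the planner chooses the illegal left turn.
    densities = [2.0, 5.0, 10.0, 20.0]
    gaps = [0.5, 1.0, 2.0, 4.0]
    violations = sum(
        plan_maneuver(Scene("straight/right-turn-only", d, g)) == "left_turn"
        for d, g in product(densities, gaps)
    )
    print(f"Illegal-turn choices: {violations}/{len(densities) * len(gaps)} scenarios")

sweep_illegal_turn_scenarios()
```

The point is not the toy numbers but the methodology: until something like this sweep is run and disclosed, "it didn't do it in the video" is the only evidence on offer, and it is not evidence of much.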