With the caveat that I'm largely ignorant of what goes on behind the curtain when training AI, my understanding is that the leading models are trained on what could be described as "the collected sum of human knowledge since the dawn of the digital age," plus all the prior years for which we have analog-to-digital conversions.

I admit I stop just short of characterizing that sum as a "waste bin," given that a huge percentage of what's available for consumption is essentially garbage, though even a subset of that is probably useful to large language models.

That said, we've seen headlines about GPT-written sermons (which I think is just awesome). What I'm interested in is this: as AI becomes steadily more capable, the models more abstracted, and the results ever more convincing, is there an expectation that their reasoning will be fuzzy enough to let things like superstitious thought be factored in as relevant to real decision making?

Someone has probably written about this. A quick search doesn't turn up much of interest, but I'm curious what this community has thought along these lines, or whether I'm simply not reading the right sources to have come across such content myself.