The ubiquity of audio communication technologies, particularly telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes; they led to the blending of dialects and the death of some smaller regional dialects; and they enabled the rapid adoption of new words and concepts.

How will LLMs affect language? Will they further cement English as the world’s dominant language, or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects, or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine which language or languages will be used in the future? Thoughts and beliefs generally follow language, at least at the social scale. How will LLMs’ effects on language affect how we think and act? What we believe?

  • @elshandra · 8 months ago

    I’m going to take the time to illustrate here how I can see LLMs affecting human speech through existing applications and technologies that are (or could be) made both available and popular enough to achieve this. We’re far enough down the comment chain that I can reply to myself now, right?

    So, we can all agree that people are increasingly using LLMs, in the form of ChatGPT and the like, to acquire knowledge/information, the same way they would use a search engine to follow a link to that knowledge.

    Speech-to-text has been a thing for at least three decades (yeah, it was pretty hopeless once, but not so much now). So let’s not argue about speech vs. text. People already talk to Google and Siri and whoever else to this end, including LLMs, and have the responses read out via TTS.

    I remember being blown away in 1998 watching a blind sysadmin interact with a Linux shell via TTS at rates where I couldn’t even make out the words. How far we’ve come. I digress, so.

    We’ve all experienced trouble getting the information we’re looking for, even with all these tools, because there’s so much information that it can be very difficult to find the needle in the haystack. So we constantly have to refine our queries, either to be more specific or to exclude relationships to other information.

    This, in turn, causes us to think more often about the words we’re using to get the results we want, because otherwise we spend too much time on recursion.

    In turn, the more we do this, and are trained to do this, the more it will bleed into human communication.

    Now look, there is absolutely a lot of hopium smoking going on here, but damn, this could have a lasting impact on verbal communication. If technology can train people, through inaccurate/incorrect results, to think about the communication going out when they speak, we could drastically reduce the amount of miscommunication between people by that alone.

    Imagine:

    “get me a chair”

    *wheels out an office chair from the study*

    “no, I meant a chair for the kitchen table”

    vs.

    “get me a chair for the kitchen table”

    You can apply the same thing to human-prompted image generation and video generation.

    Now… We don’t need LLMs to do this, or to know this. But we are never going to achieve this without a third party (the “LLM” and whatever it’s plugged into), because a human recipient will usually be more capable of translating these variances, or of employing other contexts not as accessible via a single output such as speech or text.

    But if machines train us to communicate better (more accurately, precisely, and/or concisely), that is an effect I can’t welcome enough.

    Realistically, the machines will learn to deal with us being dumb, before we adapt.

    e: formatting.

    • HelloThere · 8 months ago

      My question is simple.

      Given that humans have not already achieved this clarity of communication, when we are social animals, have been utterly dependent on each other for the entire existence of our species, communication was literally a matter of life and death, and for the vast majority of that time we only communicated through speech (the written word dates to approximately the 4th millennium BCE)… then why would an LLM, or any human-machine interface for that matter, achieve this as a side effect of usage?

      I fully accept that people, everyone, can be trained in precise speech, but we aren’t talking about purposeful training here.

      • @elshandra · 8 months ago

        Let’s not argue about the potential of “any human-machine interface”, because nobody knows how far that can go. We have an idea, but there’s still way too much we don’t understand.

        You’re right, humans never have achieved this alone and never will. It’s a long shot, and as I said it’s pretty unlikely, because the models will just get better at compensating. But I imagine that if people were interacting with LLMs regularly, vocally, they would soon get tired of extended conversations to get what they want, and the repeated training in forming those questions for an LLM would maybe in turn reflect in their human interactions.