…care to contribute a link to their favorite site for an AI activity? I’d be really interested in seeing what’s out there, but the field is moving and growing so fast and search engines suck so hard that I know I’m missing out.

Cure my FOMO please!

  • @Zeth0s
    8 months ago

    No, what you describe is a basic decision tree. It’s arguably the simplest possible ML algorithm, but it is not used as-is in practice anywhere. Usually you find “forests” of more complex trees; they cannot be used for generation, but they are very powerful for labeling or regression (eli5: predicting some number).
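
    A rough sketch of that “forest of trees” idea, in case code helps (scikit-learn, toy data, purely illustrative, not anything from a real application):

    ```python
    # Minimal sketch: a random forest used for labeling (classification).
    # Everything here is a toy assumption; real uses involve real data and tuning.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X, y)

    # The forest can label new points (or predict numbers, with a regressor),
    # but there is no way to "generate" new samples from it.
    print(forest.predict(X[:3]))
    ```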

    Generative models are based on multiple transformations of images or sentences through extremely complex, nested chains of vector functions that can extract relevant information (concepts, conceptual similarities, and so on).
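
    Very roughly, “conceptual similarity” ends up encoded as angles between vectors. The numbers below are made up (3-component vectors standing in for real learned embeddings, which have hundreds or thousands of components):

    ```python
    # Hand-made toy "embeddings" -- not from any real model -- only to show how
    # vectors can encode conceptual similarity.
    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    cat   = np.array([0.9, 0.8, 0.1])
    dog   = np.array([0.8, 0.9, 0.2])
    table = np.array([0.1, 0.2, 0.9])

    print(cosine(cat, dog))    # high -> conceptually similar
    print(cosine(cat, table))  # low  -> conceptually different
    ```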

    In practice (eli5), the input is transformed into a vector and passed through a complex chain of vector multiplications and simple mathematical transformations until you get an output that, in the vast majority of cases, is original, i.e. not present in the training data. Non-original outputs are possible when there are a few “issues” in the training dataset or the training process (unless you explicitly ask for them).
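
    The eli5-in-code version of that chain, with random weights (so the output means nothing; it only shows the mechanics of “vector in, multiply, simple transformation, repeat”):

    ```python
    # Toy forward pass. Sizes and weights are arbitrary assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)            # the input, already turned into a vector
    W1 = rng.normal(size=(32, 16))     # first stack of "vector multiplications"
    W2 = rng.normal(size=(8, 32))      # second stack

    h = np.maximum(0, W1 @ x)          # multiply, then a trivial transformation (ReLU)
    out = W2 @ h                       # real generative models chain many such steps
    print(out)
    ```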

    In our brain there are no if/else branches, only electrical signals that are modulated and transformed, which is conceptually closer to generative models than to a decision tree.

    In practice, however, our brain works very differently from generative models.

    • @[email protected]
      8 months ago

      I’m gonna be honest: I’m still rather confused. While I do now understand that perhaps our brains work differently than typical neural networks (or at least generative neural networks?), I do not yet comprehend how. But your explanation is a starting point. Thanks for that.

      • @Zeth0s
        8 months ago

        In the easiest example of a neuron in an artificial neural network, you take an image, multiply every pixel by some weight, and apply a very simple non-linear transformation at the end. Any transformation is fine, but usually they are pretty trivial. Then you mix and match these neurons to create a neural network. The more complex the task, the more additional operations are added.
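
        If it helps to see it as code, a single artificial “neuron” on a tiny fake image looks roughly like this (toy numbers, not a real network):

        ```python
        # One weight per pixel, a weighted sum, then a simple non-linear transformation.
        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.random((8, 8))          # stand-in for a real image
        weights = rng.normal(size=(8, 8))   # one weight per pixel
        bias = 0.1

        z = np.sum(image * weights) + bias  # multiply every pixel by its weight, add up
        activation = 1 / (1 + np.exp(-z))   # simple non-linearity (sigmoid)
        print(activation)
        ```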

        In our brain, a neuron binds some neurotransmitters that trigger an electrical signal; this signal is modulated and finally triggers the release of a certain quantity of certain neurotransmitters at the other end of the neuron. Detailed, quantitative mechanisms are still not known. These neurons are connected in an extremely complex neural network, the details of which are still unknown.

        Artificial neural networks started as extremely coarse simulations of real neural networks, just toy models to explain the concept. Since then they have diverged, evolving in a direction completely unrelated to real neural networks and becoming their own thing.