Breakthrough Technique: Meta-learning for Compositionality

Original:
https://www.nature.com/articles/s41586-023-06668-3

Popularization:
https://scitechdaily.com/the-future-of-machine-learning-a-new-breakthrough-technique/

How MLC Works
In exploring the possibility of bolstering compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In an episode, MLC receives a new word and is asked to use it compositionally—for instance, to take the word “jump” and then create new word combinations, such as “jump twice” or “jump around right twice.” MLC then receives a new episode that features a different word, and so on, each time improving the network’s compositional skills.
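To make the episode loop described above concrete, here is a minimal, self-contained sketch of the idea. This is not the authors' code: the pseudo-words, the one-hot encoding, the tiny MLP, and names like `sample_episode` are all illustrative stand-ins. What it tries to capture is that a word's meaning is fixed only by a "study" example inside each episode, so the network can succeed only by learning how to compose, not by memorizing word meanings.

```python
import random
import torch
import torch.nn as nn

# Pseudo-words whose meaning is redefined every episode, so the network
# cannot memorize word -> action and must learn *how* to compose instead.
WORDS = ["dax", "wif", "lug", "zup"]
ACTIONS = 4                                   # arbitrary action symbols
MODS = {"twice": 2, "thrice": 3}
MAX_OUT = 3                                   # output slots, padded with -1

def one_hot(i, n):
    v = torch.zeros(n)
    v[i] = 1.0
    return v

def sample_episode():
    """Study: word -> random action. Query: 'word <mod>' -> action repeated."""
    word = random.randrange(len(WORDS))
    action = random.randrange(ACTIONS)        # episode-specific meaning
    mod_ix = random.randrange(len(MODS))
    k = list(MODS.values())[mod_ix]
    study = torch.cat([one_hot(word, len(WORDS)), one_hot(action, ACTIONS)])
    query = torch.cat([one_hot(word, len(WORDS)), one_hot(mod_ix, len(MODS))])
    target = torch.full((MAX_OUT,), -1, dtype=torch.long)
    target[:k] = action                       # e.g. "dax twice" -> DAX DAX
    return torch.cat([study, query]), target

IN_DIM = 2 * len(WORDS) + ACTIONS + len(MODS)
model = nn.Sequential(nn.Linear(IN_DIM, 64), nn.ReLU(),
                      nn.Linear(64, MAX_OUT * ACTIONS))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=-1)  # ignore the padding slots

for step in range(5000):                      # a long stream of episodes
    x, target = sample_episode()
    logits = model(x).view(MAX_OUT, ACTIONS)  # one row of logits per slot
    loss = loss_fn(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()                                # skills improve episode by episode
```

Because the word-to-action mapping is resampled every episode, gradient descent is pushed toward a general in-context composition skill rather than a lookup table, which is the gist of the episodic curriculum.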

  • @TropicalDingdong
    1 year ago

    Traditional deep neural network training requires millions of examples and so, despite its great success, is immensely inefficient.

    Is this a limited advancement in training techniques? Right now I’m working on several types of image classification models. How would this be able to help me?

    • @A_AOP
      1 year ago

      Sorry, I just read a lot… but I don’t work in this field.

      • ripcord
        1 year ago

        If you don’t really understand it, why do you believe this is so big?

        • @[email protected]
          1 year ago (edited)

          Admittedly, they were quoting someone else in the message you responded to. That may have been edited after the fact, but the person they’re quoting did in fact say those words (“this is big”).

          Edit: it was I who couldn’t read; that is not what happened.

        • @A_AOP
          1 year ago (edited)

          I am not sure what “image classification models” encompasses. I would have to read more to understand, and I don’t have enough time and energy.
          Yet in the past I have read and understood a few books about neural networks, and this new article in Nature is something else: that much is clear when reading it.
          (also to @[email protected])

          • @TropicalDingdong
            1 year ago

            I mean, is this any different from standard gradient descent with something like Adam as the optimiser?

            That’s my assumption based on the headline. But on a quick skim, the article seemed to discuss it only in the context of NLP. Not exactly my field of study.
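            For what it's worth, here is a toy contrast of the two regimes. None of this is the paper's code, and every name and shape is made up; the point is that the optimizer can indeed be plain Adam either way, and what the episodic setup changes is that a training item is an episode whose input carries its own study examples.

```python
import torch
import torch.nn as nn

# Hypothetical contrast, not the paper's code: both steps below use the
# same Adam optimizer; what changes is the shape of one training item.

net = nn.Linear(8, 2)                 # stand-in network, made-up sizes
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def standard_step(x, y):
    """Ordinary supervised step on one i.i.d. (input, label) pair."""
    loss = loss_fn(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

def episodic_step(study, query, target):
    """Episode-style step: the study examples ride along inside the input,
    so the network is graded on generalizing to the query in context."""
    loss = loss_fn(net(torch.cat([study, query])), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

standard_step(torch.randn(8), torch.randn(2))
episodic_step(torch.randn(4), torch.randn(4), torch.randn(2))
```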