I’m sure there are some AI peeps here. Neural networks scale with size because the number of parameter settings that work for a given task grows combinatorially with network size: exponentially, and in some respects factorially (yes, that’s a word; permuting the hidden units within a single layer alone yields n! functionally identical settings). How can such a network be properly aligned when even humans, the most advanced natural neural nets, are not aligned? What can we realistically hope for?
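
The factorial count isn’t just a figure of speech. Here’s a minimal sketch in NumPy (all names illustrative) showing that permuting the hidden units of a one-hidden-layer MLP, along with the matching rows and columns of its weight matrices, gives a different parameter vector that computes exactly the same function, so every solution exists in at least n! copies:

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.normal(size=(n_hidden, n_in))
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = rng.normal(size=n_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Any of the n_hidden! permutations of the hidden units yields a
# distinct parameter vector computing the exact same function.
perm = rng.permutation(n_hidden)
x = rng.normal(size=n_in)
y_original = mlp(x, W1, b1, W2, b2)
y_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
assert np.allclose(y_original, y_permuted)  # identical outputs
```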

Here’s what I mean by alignment:

  • The ability to specify a loss function that captures what humanity actually wants
  • Some strict or statistical guarantees on deviation from that loss function, as well as on potentially unaccounted-for side effects (a toy sketch follows this list)
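
To make those two bullets concrete, here is a toy sketch, assuming a plain linear model in NumPy. `task_loss`, `side_effect_penalty`, `lam`, and `tol` are all illustrative names of my own, not an established alignment method: it pairs a specified loss with a crude drift penalty and an empirical (not strict) deviation check.

```python
import numpy as np

def task_loss(theta, X, y):
    """The part we can actually write down: a plain mean-squared-error loss."""
    return np.mean((X @ theta - y) ** 2)

def side_effect_penalty(theta, theta_baseline):
    """Illustrative stand-in for unaccounted side effects: penalize drift
    from a trusted baseline policy (a hypothetical choice, not a standard)."""
    return np.sum((theta - theta_baseline) ** 2)

def objective(theta, X, y, theta_baseline, lam=0.1):
    """Bullet 1 (a specified loss) plus a crude handle on bullet 2."""
    return task_loss(theta, X, y) + lam * side_effect_penalty(theta, theta_baseline)

def within_tolerance(theta, X_holdout, y_holdout, tol):
    """An empirical check, not a guarantee: a real statistical guarantee
    would need distributional assumptions this sketch doesn't make."""
    return task_loss(theta, X_holdout, y_holdout) <= tol
```
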
  • @[email protected]
    link
    fedilink
    31 year ago

    Some of the human-alignment projects

    And some look like “I flip shit bigger, align with me or I will flip your shit”

    • @Eylrid · 1 year ago

      The fear of a general super-AI is that it will have the power to be the biggest shit flipper ever.