• @[email protected]
    47 months ago

    I mean that’s a problem, but it’s distinct from the word “intelligence”.

    An intelligent dog can’t classify a logic problem either, but we’re still happy to call them intelligent.

    • @raspberriesareyummy
      17 months ago

      With regard to the dog & my description of intelligence, you are wrong: based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That is true intelligence.

      When it comes to artificial intelligence, even the most basic software can have some kind of limited understanding that actually fits this attempted definition - it’s just that the functionality will be so limited that it appears pretty much useless.

      Think of it this way:

      - deterministic algorithm -> has concepts and causal relations (but no consciousness, obviously); results are predictable (deterministic) and can be explained
      - deep learning / neural networks -> does not implicitly have concepts or causal relations; results are statistical (based on previously observed results) and cannot be explained - there’s actually a whole sector of science looking into how to model such a system’s way to a solution

      Addition: the input / output filters of pattern recognition systems are typically fed through quasi-deterministic algorithms to “smoothen” the results (make the output more grammatically correct, filter words, translate languages).
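      To make the “can be explained” part concrete, here’s a tiny sketch (all rules and names are made up for illustration) of a deterministic classifier whose concepts and causal relations are explicit rules - every result can be traced back to the exact rule that produced it, which is precisely what a trained neural network cannot offer:

      ```python
      # Hypothetical example: a deterministic classifier whose "concepts" and
      # causal relations are explicit, hand-written rules.

      RULES = [
          # (concept, condition, causal explanation)
          ("wet_ground", lambda obs: obs["rain_mm"] > 0.5,
           "rain causes wet ground"),
          ("icy_road", lambda obs: obs["rain_mm"] > 0.5 and obs["temp_c"] < 0,
           "rain plus freezing temperature causes ice"),
      ]

      def classify(obs: dict) -> list[tuple[str, str]]:
          """Return every concept that applies, with the rule that explains it."""
          return [(concept, why) for concept, cond, why in RULES if cond(obs)]

      print(classify({"rain_mm": 2.0, "temp_c": -3.0}))
      # [('wet_ground', 'rain causes wet ground'),
      #  ('icy_road', 'rain plus freezing temperature causes ice')]
      ```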

      If you took enough deterministic algorithms, each typically tailored to a very specific problem and its solution, and were able to use them as building blocks for a larger system that understands a larger part of the environment, then you would get something resembling AI. Such a system could be tested (verified) on sample data, but it should not require training on data.
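      A minimal sketch of that building-block idea (the domain and every name here are invented for illustration): three narrow deterministic blocks composed into a controller that “understands” a small slice of its environment, verified against sample data rather than trained on it:

      ```python
      # Each block solves one narrow problem; the composition covers a bigger
      # slice of the environment. No training anywhere - only explicit rules.

      def parse_sensor(line: str) -> dict:
          """Block 1: turn a raw reading like 'temp=21.5;humid=40' into data."""
          return {k: float(v) for k, v in
                  (pair.split("=") for pair in line.split(";"))}

      def infer_state(reading: dict) -> str:
          """Block 2: a hand-written causal model of the room."""
          if reading["temp"] < 18.0:
              return "too_cold"
          if reading["temp"] > 24.0:
              return "too_warm"
          return "comfortable"

      def act(state: str) -> str:
          """Block 3: decision process based on the inferred state."""
          return {"too_cold": "heat_on", "too_warm": "heat_off"}.get(state, "idle")

      def controller(line: str) -> str:
          return act(infer_state(parse_sensor(line)))

      # Verification against sample data - a test suite, not a training set:
      assert controller("temp=15.0;humid=40") == "heat_on"
      assert controller("temp=21.5;humid=40") == "idle"
      assert controller("temp=26.0;humid=40") == "heat_off"
      ```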

      Example: you could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to associate object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way to how humans can mischaracterize the things they see - we also sometimes fail to recognize an object correctly.
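      Roughly like this (a sketch only - the shapes, colour threshold, and causal rule are all made up): geometry gives the shape, a colour range gives the object type, and a hand-defined causal relation drives the decision. Note that it can mischaracterize exactly as described - any reddish octagon would be taken for a stop sign:

      ```python
      import math

      def shape_from_vertices(vertices: list[tuple[float, float]]) -> str:
          """Classify a shape purely from its corner count - math, not training."""
          return {3: "triangle", 4: "quadrilateral", 8: "octagon"}.get(
              len(vertices), "unknown")

      def is_mostly_red(rgb: tuple[int, int, int]) -> bool:
          r, g, b = rgb
          return r > 120 and r > 1.5 * max(g, b)

      def object_type(vertices, mean_rgb) -> str:
          # shape + colour range -> object type
          if shape_from_vertices(vertices) == "octagon" and is_mostly_red(mean_rgb):
              return "stop_sign"
          return "unknown"

      def decide(obj: str) -> str:
          # Causal relation: a stop sign causes the decision to brake.
          return "brake" if obj == "stop_sign" else "continue"

      octagon = [(math.cos(k * math.pi / 4), math.sin(k * math.pi / 4))
                 for k in range(8)]
      print(decide(object_type(octagon, (180, 40, 35))))  # brake
      ```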