AI is More Instinct than Intelligence

Lately I've been spending a lot of time with machine learning, neural nets, and the question of extracting and communicating their thinking so that humans can review their conclusions and/or learn from them. It is hardly a surprising observation that what these models do is primarily pattern matching: a system that assesses whether a bank transaction is fraudulent will flag a transaction because, in some very complicated ways, it is similar to other transactions that turned out to be fraud. Even unsupervised learning, which autonomously finds patterns in data, is doing just that: finding patterns.
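As a toy illustration (with made-up features and a simple nearest-neighbour classifier standing in for far more complex models), "flagging fraud" really does boil down to "this looks like previous fraud":

```python
# Minimal sketch, hypothetical data: a nearest-neighbour classifier flags a
# transaction as fraud purely because it resembles past fraudulent ones --
# pattern matching by similarity, not reasoning.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row: [amount, hour_of_day, distance_from_home_km] -- invented features.
past_transactions = np.array([
    [12.50,  9,   1.2],   # legitimate
    [43.00, 14,   0.8],   # legitimate
    [980.0,  3, 540.0],   # fraud
    [870.0,  2, 610.0],   # fraud
])
labels = np.array([0, 0, 1, 1])  # 0 = legitimate, 1 = fraud

model = KNeighborsClassifier(n_neighbors=1)
model.fit(past_transactions, labels)

# A new transaction is flagged only because it is close to earlier fraud cases.
new_transaction = np.array([[910.0, 4, 580.0]])
print(model.predict(new_transaction))  # -> [1]
```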

If so, then everything these systems do is more like what we do by instinct, not what we do via high-level reasoning, which is what we traditionally call intelligence. In fact, I think this is partly why these systems appear so magical: from reading handwriting to driving cars, they do things we don't know how we do ourselves. They are getting pretty good at the things we can only learn to do by instinct.

(Classic AI/ML did in fact concern itself with symbolic computation and reasoning, but the statistical models that are becoming so powerful today represent a shift from reasoning to instinct-like decisions.)

This, in turn, is why it's so difficult to understand how an AI model arrives at a conclusion: it does so based on patterns and similarity, like our amygdala, and does not complement this with any kind of abstract reasoning. Even if such a rational layer merely rationalised a decision already made on instinct (which is probably how most humans arrive at "rational" decisions anyway), adding it would be truly amazing, as it would let us communicate with an AI system and peek into its thought processes.