Monday 16 January 2017

Definitions of AI, and why I lean towards acting rationally.


What is intelligence? It's a pretty basic question, and which answer you give determines a lot about how you go about building an AI and what that AI will eventually look like if you succeed. Definitions vary along two axes: whether you care about how a system acts or how it thinks, and whether you want it to be human-like or to be rational.

             Human-like                        Rational
Thought      Systems that think like humans    Systems that think rationally
Action       Systems that act like humans      Systems that act rationally

Defining AI by how human it is in any respect seems stupid. Just because something does not resemble us does not mean it is not intelligent. Humans are a subset of intelligence, not the other way around. The question then becomes: is action or thought what matters? I'm not sure, but I tend towards thought. After all, a giant lookup table which gave the same outputs as a human brain would not be alive or intelligent. Intelligence is not only about the results a black box generates, but also about what happens inside the box.
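To make the lookup-table point concrete, here is a toy sketch in Python (the canned responses are hypothetical stand-ins, not anything a real brain would produce). It can match a human's answers on exactly the inputs it covers, yet nothing resembling thought ever happens inside it:

```python
# A minimal sketch of the lookup-table thought experiment: an "agent"
# whose every output is retrieved from a table rather than computed.
RESPONSES = {
    "What is 2 + 2?": "4",
    "Are you intelligent?": "Of course I am.",
}

def lookup_agent(prompt: str) -> str:
    # No reasoning happens here: the answer is looked up, not derived.
    return RESPONSES.get(prompt, "I don't know.")

print(lookup_agent("Are you intelligent?"))  # behaviourally plausible, internally empty
```

Judged purely on its outputs, the table is indistinguishable from the thing it mimics; it only fails the moment you open the box.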

Then again, there's a difference between intelligence and sentience. Maybe something that thinks in ways so strange we cannot recognise them, yet can seemingly make decisions, is rational. Maybe the true theoretical definition of AI is thinking rationally, but the practical definition we should use is acting rationally, because our ability to determine what is and is not thought, let alone rational thought, is hopelessly limited and flawed.

