Truly Intelligent AI Could Play by the Rules, No Matter How Strange

To build safe but powerful AI models, start by testing their ability to play games on the fly

A proposed game-playing challenge would evaluate AIs on how well they can adapt to and follow new rules.

Tic-tac-toe is about as simple as games get—but as Scientific American’s legendary contributor Martin Gardner pointed out almost 70 years ago, it has complex variations and strategic aspects. These range from “reverse” games, in which the first player to make three in a row loses, to three-dimensional versions played on cubes and beyond. Gardner’s games, even when they boggle a typical human mind, might point us toward a way to make artificial intelligence more humanlike.

That’s because games in their endless variety—with rules that must be imagined, understood and followed—are part of what makes us human. Navigating rules is also a key challenge for AI models as they start to approximate human thought. And as things stand, it’s a challenge where most of these models fall short.

That’s a big deal because if there’s a path to artificial general intelligence, the ultimate goal of machine-learning and AI research, it can only come through building AIs that are capable of interpreting, adapting to and rigidly following the rules we set for them.


To drive the development of such AI, we must develop a new test—let’s call it the Gardner test—in which an AI is surprised with the rules of a game and is then expected to play by those rules without human intervention. One simple way to achieve the surprise is to disclose the rules only when the game begins.

The Gardner test, with apologies to the Turing test, is inspired by and builds on the pioneering work in AI on general game playing (GGP), a field largely shaped by Stanford University professor Michael Genesereth. In GGP competitions, AIs running on standard laptops face off against other AIs in games whose rules—written in a formal mathematical language—are revealed only at the start. The test proposed here advances a new frontier: accepting game rules expressed in a natural language such as English. Once a distant…
