MIT Better World

Prof. Joshua Tenenbaum is helping to launch the MIT Intelligence Initiative, an unparalleled, multidisciplinary quest to reveal just how intelligence works.


By Lauren Clark

Joshua Tenenbaum’s mother was an educational psychologist who studied cognition; his father was an expert in artificial intelligence. Now, as an MIT professor of brain and cognitive sciences, Tenenbaum is exploring the underpinnings of human intelligence with an eye toward developing smarter machines.

“Those of us who are studying intelligence — how the mind works — and those studying how to build intelligent machines actually have a lot in common,” says Tenenbaum, who is helping to launch the Intelligence Initiative at MIT, an unparalleled, multidisciplinary quest to answer the cosmic question of just how intelligence works.

Researchers in multiple fields that MIT pioneered or advanced — artificial intelligence, computer science, cognitive science, linguistics, and neuroscience, among others — are working toward one of the initiative’s primary goals: embedding in computers the agile, adaptable nature of human knowledge about the world — “the knowledge that any five-year-old has,” says Tenenbaum. “If we could get computers to be as smart as a five-year-old child, that would be remarkable.”

Tenenbaum builds computer models to emulate a key function of intelligence: inductive inference. This feat of logic, which we often perform unconsciously, takes us beyond our immediate knowledge about the world to conclusions about the unknown. Consider that by the time they are two or three years old, children are able to find patterns in information with little evidence to guide them. After seeing just a couple of examples of dogs, for instance, a toddler can judge whether or not any new creature is a dog and usually will be correct. By contrast, even the most intelligent of machines cannot easily learn a new concept from a few examples and apply it to new cases — to differentiate a dog from, say, a horse, a cat, or a bear.

“Throughout cognition, there are instances where we seem to know more than we have any reasonable right to know about the world, where we arrive at generalizations that go far beyond our sparse, noisy, limited experience. How are inductive leaps possible? There must be some other knowledge in our heads, some more abstract knowledge that guides and constrains our inferences,” says Tenenbaum.
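One way to make that idea concrete is a small Bayesian sketch: treat the abstract background knowledge as a prior over candidate concepts, and let a handful of examples update that prior, with smaller, tighter concepts explaining a few consistent examples better than sprawling ones. The hypothesis space, prior, and examples below are invented for illustration; this is not the model described in the article, only a minimal sketch of the style of inference it discusses.

```python
# Toy Bayesian concept learning: infer a concept from a few positive examples.
# The hypothesis space, prior, and data are illustrative assumptions only.

def likelihood(h, examples):
    """Size principle: each example is assumed drawn uniformly from concept h."""
    if not all(x in h for x in examples):
        return 0.0
    return (1.0 / len(h)) ** len(examples)

# A tiny hypothesis space over the integers 1..20.
hypotheses = {
    "even numbers":   set(range(2, 21, 2)),
    "powers of two":  {1, 2, 4, 8, 16},
    "multiples of 4": {4, 8, 12, 16, 20},
    "all numbers":    set(range(1, 21)),
}
prior = {name: 1.0 / len(hypotheses) for name in hypotheses}  # uniform prior

examples = [4, 8, 16]  # the handful of observations the learner sees

# Posterior: P(h | examples) is proportional to P(examples | h) * P(h).
scores = {name: likelihood(h, examples) * prior[name] for name, h in hypotheses.items()}
total = sum(scores.values())
posterior = {name: s / total for name, s in scores.items()}

# Generalization: probability that a new item belongs to the concept,
# averaging over hypotheses weighted by their posterior probability.
def prob_in_concept(x):
    return sum(p for name, p in posterior.items() if x in hypotheses[name])

for name, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {p:.3f}")
print("P(12 in concept) =", round(prob_in_concept(12), 3))
print("P(7 in concept)  =", round(prob_in_concept(7), 3))
```

After only three examples, the toy learner already favors the two small concepts consistent with the data and assigns a new case like 12 a much higher membership probability than 7, the kind of confident leap from sparse evidence the quote describes.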

The inductive ability that humans take for granted has eluded artificial intelligence. When given a set of data, such as descriptions of animal species, computers don’t know where to start unless they’re programmed to look for a certain structure, such as the classic “family tree” that illustrates biological relationships between species.

Together with a recent MIT Ph.D. graduate, Charles Kemp, now a professor at Carnegie Mellon University, Tenenbaum developed a computer model that overcomes this limitation. Their model combines powerful statistical techniques for making inferences based on one or two examples with classic artificial intelligence methods for representing knowledge about the world. It can determine which organizational structure — hierarchy, linear order, tree, cluster, and others — best represents a particular set of data. For example, given data on animal species, the model infers that a tree structure best explains how they are related, but given data on U.S. Supreme Court decisions, it automatically arranges the court justices in a linear order where one end is conservative and the other is liberal.
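As a rough illustration of what choosing the best structure can mean, the sketch below compares just two candidate forms, a linear order and a two-way clustering, on made-up data, scoring each by how well it reproduces the observed pairwise dissimilarities. The actual model considers hierarchies, trees, linear orders, clusters, and other forms with statistical scores; this is only a simplified stand-in, and every name and number here is invented.

```python
# Toy "which structural form fits these data best?" comparison.
# Only two forms (linear order vs. two clusters) and a correlation-based
# score are used here; the real model is far more general. Data are made up.

from itertools import combinations, permutations

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_chain_score(names, dist):
    """Best correlation achievable by arranging the items along a line."""
    pairs = list(combinations(names, 2))
    observed = [dist[p] for p in pairs]
    best = -1.0
    for order in permutations(names):
        pos = {name: i for i, name in enumerate(order)}
        predicted = [abs(pos[a] - pos[b]) for a, b in pairs]
        best = max(best, pearson(predicted, observed))
    return best

def best_cluster_score(names, dist):
    """Best correlation achievable by splitting the items into two groups."""
    pairs = list(combinations(names, 2))
    observed = [dist[p] for p in pairs]
    best = -1.0
    for r in range(1, len(names) // 2 + 1):
        for group in combinations(names, r):
            g = set(group)
            predicted = [0.0 if (a in g) == (b in g) else 1.0 for a, b in pairs]
            best = max(best, pearson(predicted, observed))
    return best

def choose_form(names, dist, label):
    chain, cluster = best_chain_score(names, dist), best_cluster_score(names, dist)
    form = "linear order" if chain >= cluster else "clusters"
    print(f"{label}: chain fit {chain:.2f}, cluster fit {cluster:.2f} -> {form}")

# Dataset 1: items that really lie along a single dimension (invented scores).
scores = {"J1": -1.0, "J2": -0.5, "J3": 0.0, "J4": 0.6, "J5": 1.0}
line_dist = {(a, b): abs(scores[a] - scores[b])
             for a, b in combinations(scores, 2)}

# Dataset 2: items that really fall into two groups (invented dissimilarities).
groups = {"dog": 0, "horse": 0, "cat": 0, "robin": 1, "eagle": 1}
group_dist = {(a, b): 0.2 if groups[a] == groups[b] else 1.0
              for a, b in combinations(groups, 2)}

choose_form(list(scores), line_dist, "ideology-like data")
choose_form(list(groups), group_dist, "species-like data")
```

Run on these toy inputs, the linear order wins for the one-dimensional data and the clustering wins for the grouped data, mirroring, in miniature, the article’s contrast between Supreme Court justices and animal species.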

This technology might not only shed light on how the human brain discovers patterns but also prove a boon to machine intelligence.

“In trying to come up with better computational models of how human learning works, we invent technology that can then be applied in machine settings,” says Tenenbaum. “We want our computers to not just memorize facts but give us useful new knowledge we didn’t have before.”
