Next-gen AI: AGI research areas

June 1, 2018 by Axo Sal

The holy grail of AI is Artificial General Intelligence, or AGI for short: an AI capable of learning and adapting to any given task without the large amounts of labeled data today's systems need, able to generalize learned skills (which current AI cannot), and equipped with the common sense that people possess. In this series, called “Next-gen AI”, we will try to answer the following questions:

What would the building blocks of AGI be? How would it work? How does our brain work? What are the research areas?

We’ll also try to share our perspectives and what we at SoftRobot think needs to be done to reach AGI.

In this one, we will cover the research areas.

But let’s start with a recent example of an advancement: an AI that can have internal representations, imagination and dreaming capabilities similar to ours, from which it can train for future events.

And now, as promised, the research areas for AGI.

Research areas

Common sense, Intuition, Reasoning

While artificial intelligence and machine & deep learning have brought really good results in various niched tasks, the current AI space lacks common sense. Simple things, like showing a picture of a regular doorway, a ball and an elephant, and asking which would fit through the doorway, can leave today's AI stunned.

The Allen Institute for Artificial Intelligence (AI2) is working on this common sense problem with Project Alexandria. It tries to bring common sense to AI by having many people answer common sense questions and recording the answers, but also by analyzing texts and images.

 

http://allenai.org/alexandria/

Aristo is another of their projects. It is about machine reading and reasoning, currently all about science: an AI that reads, learns, and reasons about science. It can answer questions and explain its reasoning and why it chose a specific answer. It can even understand diagrams.

Some interesting quotes from the website explaining Aristo:

“Answering science questions requires a vast amount of scientific and commonsense knowledge about the world. Besides working directly with science texts (textbooks, Wikipedia pages, science articles) we are also constructing semi-structured knowledge bases of using state-of-the-art extraction techniques.”

“Aristo is to not just retrieve answers from texts, but also reason about new situations and construct answers to novel questions, requiring advances in both representation and inference technologies”

http://allenai.org/aristo/

Other NLP demos from AI2 to check out:

http://demo.allennlp.org/machine-comprehension

Knowledge graphs

Knowledge graphs are one of the pieces that may be needed for an AI capable of general common sense reasoning, capturing how things in the world relate to each other. They range from better-known projects like Google's Knowledge Graph to lesser-known ones such as http://www.cyc.com/. Graph databases can be used to develop your own specific knowledge graphs; Ontotext is one of many companies developing such graph DBs: https://ontotext.com/products/ontotext-platform/. Such platforms can be used by AIs to store their knowledge representations in a connected relationship format and keep them continuously updated.
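To make the idea concrete, here is a minimal sketch of a knowledge graph as a set of subject-predicate-object triples, in the spirit of RDF. The entities and relations are invented for illustration and are not taken from any real knowledge base:

```python
class KnowledgeGraph:
    """A toy triple store: facts as (subject, predicate, object) tuples."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return all triples matching the given partial pattern
        (None acts as a wildcard)."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

kg = KnowledgeGraph()
kg.add("elephant", "is_a", "animal")
kg.add("elephant", "typical_height_m", "3.0")
kg.add("doorway", "typical_height_m", "2.0")
kg.add("ball", "is_a", "object")

# Everything we know about elephants:
print(sorted(kg.query(subject="elephant")))
```

Production graph databases build on this same basic triple pattern, adding persistence, a query language such as SPARQL, and inference rules on top — which is what lets an AI answer the doorway-vs-elephant question from stored relations.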

Turing Test

The Turing test is a way to see whether an AI is as good as a human in conversation, so that the human on the other side doesn't recognize that they have been conversing with an AI and not a human being.

Google Assistant and its Duplex technology is one conversational AI that can make calls on behalf of a person, asking it to, for instance, book a table at a restaurant or an appointment at a hair salon. The demos were flawless, and the AI could continue the conversation even in tricky situations and eventually get done what needed to be done. Such systems are some of the recent developments where common sense reasoning appears, at least to the people on the other side of the phone call.

Turing test passed?

Another test out there is the Winograd Schema Challenge, an improved Turing test and one of the ways to test the common sense reasoning capabilities of AIs.
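To illustrate what such a test looks like: a Winograd schema is a pair of sentences differing by a single word, where resolving a pronoun requires common sense rather than grammar. A minimal sketch using the classic trophy/suitcase schema (the scoring harness here is hypothetical, not part of the official challenge):

```python
# Flipping one word ("big" -> "small") flips the correct referent of "it".
schemas = [
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too big.",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the trophy",
    },
    {
        "sentence": "The trophy doesn't fit in the suitcase because it is too small.",
        "candidates": ["the trophy", "the suitcase"],
        "answer": "the suitcase",
    },
]

def score(resolver):
    """Fraction of schemas a pronoun resolver answers correctly."""
    correct = sum(resolver(s) == s["answer"] for s in schemas)
    return correct / len(schemas)

# A resolver with no common sense (always pick the first candidate)
# can do no better than chance on a balanced schema pair.
print(score(lambda s: s["candidates"][0]))  # prints 0.5
```

Because both sentences are grammatically identical, no surface-level statistical trick distinguishes them; only knowledge about sizes and containers does, which is exactly what makes the challenge a sharper probe than free-form conversation.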

Neuroscience

If our brain is the highest level of intelligence we know, then it is common sense to study it closely and see what we can replicate in AI. Neuroscience is the study of the nervous system, of which the brain is a prominent part.

“There are several areas of interest: neurobiology looks at the chemistry of cells and their interactions; cognitive neuroscience looks at how the brain supports psychological processes; and computational neuroscience aims to create computer models of the brain to test theories. Questions could include anything from why certain proteins appear in neurons to how the brain supports consciousness.”

 

Implementing more human-brain-like models, with all of the features that brain neurons and the brain as a whole have, would guide us towards AGI: models where, for instance, different parts of the artificial brain are responsible for their respective tasks, as in our brains.

There is also the possibility of using AI itself to help develop AGI: using AI to study the brain and drive AGI development forward.

Fact: there are close to 90 billion neurons in our brains and roughly 1,000 times more synapses (connections between the neurons). Given that, we could potentially store an enormous amount of information, because a memory is a particular pattern of connections between neurons across the brain. Source: https://www.youtube.com/watch?v=EklA4lAkkoA
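Some back-of-the-envelope arithmetic on those figures (the counts are rough orders of magnitude, not exact measurements):

```python
import math

neurons = 90e9                # ~90 billion neurons
synapses = neurons * 1_000    # ~1,000 times more synapses than neurons

print(f"{synapses:.1e} synapses")  # prints "9.0e+13 synapses"

# Even if each synapse could only be "present" or "absent", the number of
# distinct connection patterns would be 2**synapses: astronomically large,
# though not literally infinite.
digits = math.log10(2) * synapses  # decimal digits in 2**synapses
print(f"a number with about {digits:.1e} digits")
```

That is where the "a memory is a pattern of connections" intuition gets its force: the capacity comes not from the raw synapse count but from the combinatorics of which connections exist.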

Cognitive Science

Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology, sociology, and biology.

Hardware

The majority of the research is conducted on the software side, but what about hardware? What if there are ways to replicate our brains in the form of hardware too?

Application-specific integrated circuits (ASICs)

TPUs (Tensor Processing Units) are hardware optimized for tensor (multidimensional array of numbers) computations, wiring up transistors in parallel, similar to the parallel connections of neurons in the human brain. TPUs are available today to rent from Google Cloud.
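To make “tensor computation” concrete: the core workload a TPU accelerates is the multiply-accumulate pattern of matrix multiplication, sketched here in plain Python. What takes three nested loops on a CPU, a TPU's matrix unit performs as many multiply-adds at once in parallel hardware:

```python
def matmul(a, b):
    """Naive matrix multiply: the multiply-accumulate workload that
    TPUs execute massively in parallel."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

# A 2x3 matrix times a 3x2 matrix yields a 2x2 matrix.
a = [[1, 2, 3],
     [4, 5, 6]]
b = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(a, b))  # prints [[58, 64], [139, 154]]
```

A neural network layer is essentially this operation repeated at enormous scale, which is why hardware specialized for it pays off so dramatically for deep learning.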

 

One good architecture for graph computations would be to put processing units and RAM closely coupled together, for faster reads and writes. (https://www.youtube.com/watch?v=nbUg9IuIs_8&t=30m10s)

New paradigms / architectures of computing

Neuromorphic computing is about emulating the biological brain's neural networks directly in hardware.

Photonic/optical chips are powered by photons, i.e. light, for the transmission of data instead of electrons as today. Photons are faster, which means faster processing of information, which is good for AI. Optical RAM is also one of the advancements here.

Quantum computing could be faster still. Quantum entanglement is popularly described as acting faster than light (http://bigthink.com/dr-kakus-universe/what-travels-faster-than-the-speed-of-light), although entanglement cannot actually be used to transmit information faster than light; the bigger promise for AI lies in the parallelism that quantum computation brings.

Human Biology, DNA — our priors

What is already in us when we are born that allows us to learn, adapt and connect concepts? We are somehow preconfigured for such tasks. Studying this might give us a better idea of the underlying principles that allow intelligence to emerge.


Further resources

MIT AGI

A playlist on the future of AI and AGI, created by us

 

About SoftRobot:

SoftRobot is a Swedish AI company headquartered in Uppsala. We are strong believers in artificial intelligence and its power to create better habits in the world.

That’s what we’re trying to help businesses with, creating better workflow habits that enhance productivity with the help of machine & deep learning.

Our first product is Aiida — a teachable AI for the document workflow. Enterprises always have many files, documents, PDFs and invoices to handle. Manual auditing, approval, data extraction and data entry can be a time-consuming process. That’s where Aiida can help and automate processes to create better, more productive habits for your business.

For more info on how we can help your business, visit: https://www.softrobot.io/