What is artificial general intelligence?
Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.
Current artificial intelligence (AI) technologies all function within a set of predetermined parameters. For example, AI models trained in image recognition and generation cannot build websites. AGI is a theoretical pursuit to develop AI systems that possess autonomous self-control, a reasonable degree of self-understanding, and the ability to learn new skills. Such a system could solve complex problems in settings and contexts it was never taught at the time of its creation. AGI with human abilities remains a theoretical concept and research goal.
What is the difference between artificial intelligence and artificial general intelligence?
Over the decades, AI researchers have charted several milestones that significantly advanced machine intelligence — even to degrees that mimic human intelligence in specific tasks. For example, AI summarizers use machine learning (ML) models to extract important points from documents and generate an understandable summary. AI is thus a computer science discipline that enables software to solve novel and difficult tasks with human-level performance.
In contrast, an AGI system can solve problems in various domains, like a human being, without manual intervention. Instead of being limited to a specific scope, AGI can self-teach and solve problems it was never trained for. AGI is thus a theoretical representation of a complete artificial intelligence that solves complex tasks with generalized human cognitive abilities.
Some computer scientists believe that AGI is a hypothetical computer program with human comprehension and cognitive capabilities. In such theories, an AGI system could learn to handle unfamiliar tasks without additional training. By contrast, the AI systems we use today require substantial training before they can handle related tasks within the same domain. For example, you must fine-tune a pre-trained large language model (LLM) with medical datasets before it can operate consistently as a medical chatbot.
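As a rough illustration of that domain-specific training requirement, the following sketch fine-tunes a small pre-trained language model on a few hypothetical medical Q&A strings using the Hugging Face transformers library. The model name, example texts, and training settings are illustrative assumptions, not details from this article.

```python
# Minimal sketch of domain adaptation: fine-tuning a small pre-trained
# language model on a handful of hypothetical medical Q&A pairs.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments

model_name = "distilgpt2"  # small stand-in for a production LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

medical_texts = [
    "Q: What is hypertension? A: Persistently elevated blood pressure.",
    "Q: What does BMI measure? A: Body mass relative to height.",
]

class TextDataset(torch.utils.data.Dataset):
    def __init__(self, texts):
        enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.input_ids = enc["input_ids"]
        self.attention_mask = enc["attention_mask"]
    def __len__(self):
        return len(self.input_ids)
    def __getitem__(self, i):
        return {
            "input_ids": self.input_ids[i],
            "attention_mask": self.attention_mask[i],
            "labels": self.input_ids[i],  # causal LM objective: predict the next token
        }

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medical-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=TextDataset(medical_texts),
)
trainer.train()  # after this, the model is better adapted to the medical domain only
```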
Strong AI compared with weak AI
Strong AI refers to full artificial intelligence, or AGI, capable of performing tasks at a human cognitive level despite having little background knowledge. Science fiction often depicts strong AI as a thinking machine with human-level comprehension that is not confined to domain limitations.
In contrast, weak AI, or narrow AI, refers to AI systems limited to the computing specifications, algorithms, and specific tasks they are designed for. For example, earlier AI models had limited memory and relied only on real-time data to make decisions. Even emerging generative AI applications with better memory retention are considered weak AI because they cannot be repurposed for other domains.
What are the theoretical approaches to artificial general intelligence research?
Achieving AGI requires a broader spectrum of technologies, data, and interconnectivity than what powers AI models today. Creativity, perception, learning, and memory are essential to create AI that mimics complex human behavior. AI experts have proposed several methods to drive AGI research.
Symbolic
The symbolic approach assumes that computer systems can develop AGI by representing human thoughts with expanding logic networks. The logic networks symbolize physical objects with if-else logic, allowing the AI system to interpret ideas at a higher level of thinking. However, symbolic representation cannot replicate subtle lower-level cognitive abilities, such as perception.
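A minimal sketch of the symbolic idea, using invented facts and if-then rules: knowledge is stored as explicit symbols, and a simple forward-chaining loop derives new conclusions from them.

```python
# Toy illustration of the symbolic approach: knowledge encoded as
# explicit if-then rules over symbols, applied with forward chaining.
# The facts and rules are invented purely for illustration.
rules = [
    ({"is_bird"}, "can_fly"),
    ({"can_fly", "is_hungry"}, "searches_from_air"),
]
facts = {"is_bird", "is_hungry"}

changed = True
while changed:  # keep applying rules until no new facts are derived
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'is_bird', 'is_hungry', 'can_fly', 'searches_from_air'}
```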
Connectionist
The connectionist (or emergentist) approach focuses on replicating the human brain structure with neural-network architecture. Brain neurons can alter their transmission paths as humans interact with external stimuli. Scientists hope AI models adopting this sub-symbolic approach can replicate human-like intelligence and demonstrate low-level cognitive capabilities. Large language models are an example of AI that uses the connectionist method to understand natural languages.
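The sketch below illustrates the connectionist idea with a tiny two-layer network written in plain NumPy: its connection weights change as it is exposed to examples (here, the XOR function), loosely mirroring how neural pathways adapt with experience. The architecture, learning rate, and iteration count are arbitrary choices made for this illustration.

```python
# Minimal connectionist sketch: a two-layer neural network whose
# connection weights are adjusted by backpropagation as it learns XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden connections
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction
    # Backpropagation: prediction errors adjust the connection strengths
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```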
Universalist
Researchers taking the universalist approach focus on addressing the AGI complexities at the calculation level. They attempt to formulate theoretical solutions that they can repurpose into practical AGI systems.
Whole organism architecture
The whole organism architecture approach involves integrating AI models with a physical representation of the human body. Scientists supporting this theory believe AGI is only achievable when the system learns from physical interactions.
Hybrid
The hybrid approach combines symbolic and sub-symbolic methods of representing human thought to achieve results beyond what a single approach can deliver. AI researchers may attempt to assimilate different known principles and methods to develop AGI.
What are the technologies driving artificial general intelligence research?
AGI remains a distant goal for researchers, but efforts to build AGI systems are ongoing and encouraged by emerging developments. The following sections describe the technologies driving that research.
Deep learning
Deep learning is an AI discipline that focuses on training neural networks with multiple hidden layers to extract and understand complex relationships from raw data. AI experts use deep learning to build systems capable of understanding text, audio, images, video, and other information types. For example, developers use Amazon SageMaker to build lightweight deep learning models for the Internet of Things (IoT) and mobile devices.
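The defining trait described above, a network with several hidden layers, can be sketched as follows with Keras on synthetic data. This is a generic illustration under assumed data and layer sizes, not the SageMaker or IoT workflow mentioned in the example.

```python
# Sketch of a deep network with multiple hidden layers trained on
# synthetic data standing in for "raw data" with a nonlinear relationship.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype(int)

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),    # hidden layer 1
    keras.layers.Dense(32, activation="relu"),    # hidden layer 2
    keras.layers.Dense(16, activation="relu"),    # hidden layer 3
    keras.layers.Dense(1, activation="sigmoid"),  # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```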
Generative AI
Generative artificial intelligence (generative AI) is a subset of deep learning in which an AI system produces novel, realistic content from learned knowledge. Generative AI models are trained on massive datasets, which enables them to respond to human queries with text, audio, or visuals that closely resemble human creations. For example, LLMs from AI21 Labs, Anthropic, Cohere, and Meta are generative AI models that organizations can use to solve complex tasks. Software teams use Amazon Bedrock to deploy these models quickly on the cloud without provisioning servers.
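As a hedged sketch of how an application might call a generative AI model hosted on Amazon Bedrock, the snippet below uses the AWS SDK for Python (boto3). The region, model ID, and response parsing are assumptions to adapt to your own account and chosen model; consult the Bedrock documentation for the models available to you.

```python
# Hedged sketch of invoking a generative AI model on Amazon Bedrock.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize what artificial general intelligence is in two sentences."}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```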
NLP
Natural language processing (NLP) is a branch of AI that allows computer systems to understand and generate human language. NLP systems use computational linguistics and machine learning technologies to turn language data into simple representations called tokens and understand their contextual relationships. For example, Amazon Lex is an NLP engine that allows organizations to build conversational chatbots.
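The tokenization step described above can be illustrated with the Hugging Face transformers library, which exposes tokenizers directly (a managed service such as Amazon Lex handles this behind its API). The model name is just a common example.

```python
# Turning text into tokens, the simple representations NLP models operate on.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("Artificial general intelligence remains theoretical.")
ids = tokenizer.convert_tokens_to_ids(tokens)
print(tokens)  # e.g. ['artificial', 'general', 'intelligence', 'remains', 'theoretical', '.']
print(ids)     # the integer IDs the model's layers actually consume
```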
Computer vision
Computer vision is a technology that allows systems to extract, analyze, and comprehend spatial information from visual data. Self-driving cars use computer vision models to analyze real-time feeds from cameras and navigate the vehicle safely away from obstacles. Deep learning technologies allow computer vision systems to automate large-scale object recognition, classification, monitoring, and other image-processing tasks. For example, engineers use Amazon Rekognition to automate image analysis for various computer vision applications.
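The sketch below shows automated object recognition with Amazon Rekognition through boto3. The S3 bucket, image key, and confidence threshold are placeholder assumptions you would replace with your own values.

```python
# Hedged sketch of large-scale object recognition with Amazon Rekognition.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "street-scene.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)
for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```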
Robotics
Robotics is an engineering discipline focused on building mechanical systems that perform physical maneuvers automatically. In AGI research, robotic systems allow machine intelligence to manifest physically. Robotics is pivotal for introducing the sensory perception and physical manipulation capabilities that AGI systems require. For example, embedding a robotic arm with AGI might allow the arm to sense, grasp, and peel oranges as humans do. When researching AGI, engineering teams use AWS RoboMaker to simulate robotic systems virtually before assembling them.
What are the challenges in artificial general intelligence research?
Computer scientists face the following challenges in developing AGI.
Making connections
Current AI models are limited to their specific domains and cannot make connections between domains. Humans, however, can apply knowledge and experience from one domain to another. For example, educational theories are applied in game design to create engaging learning experiences, and people adapt what they learn from theoretical education to real-life situations. In contrast, deep learning models require substantial training with specific datasets before they work reliably with unfamiliar data.
Emotional intelligence
Deep learning models hint at the possibility of AGI but have yet to demonstrate the authentic creativity that humans possess. Creativity requires emotional thinking, which neural network architectures cannot yet replicate. For example, humans respond to a conversation based on what they sense emotionally, whereas NLP models generate text output based on the linguistic datasets and patterns they were trained on.
Sensory perception
AGI requires AI systems to interact physically with the external environment. Beyond robotics capabilities, the system must perceive the world as humans do. Existing computer technologies need further advancement before they can differentiate shapes, colors, tastes, smells, and sounds as accurately as humans.
How can AWS help with your AI and AGI efforts?
AWS provides managed artificial intelligence services that help you train, deploy, and scale generative AI applications. Organizations use our AI tools and foundation models to build innovative AI systems with their own data for personalized use cases.
- Amazon Bedrock is a fully managed service that lets developers access generative AI models through API calls. You can select, customize, train, and deploy industry-leading foundation models on Bedrock to work with proprietary data.
- Amazon SageMaker JumpStart helps software teams accelerate AI development by building, training, and deploying foundation models from a machine learning hub (see the deployment sketch after this list).
- Amazon Elastic Compute Cloud (Amazon EC2) UltraClusters power your generative AI workloads with supercomputing-class GPUs that process massive datasets with low latency.
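As referenced in the list above, here is a hedged sketch of deploying a foundation model from SageMaker JumpStart with the SageMaker Python SDK. The model ID is an illustrative example, and the deploy call assumes your AWS account has the required SageMaker permissions and instance quota.

```python
# Hedged sketch: deploy a JumpStart foundation model to a real-time endpoint.
from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")  # example ID
predictor = model.deploy()  # provisions a SageMaker inference endpoint

response = predictor.predict({"inputs": "Explain artificial general intelligence briefly."})
print(response)
```

Remember to delete the endpoint when you finish experimenting so you are not billed for idle capacity.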