A short introduction to Artificial Intelligence


The study of Artificial Intelligence is about understanding and building intelligent agents. In other words, it is about making machines that compute how to act effectively and safely. We call these machines agents.

When it comes to building AI agents, there are four potential goals/approaches:

  1. Thinking Humanly (Cognitive Modeling): Determining how humans think
  2. Thinking Rationally (Laws of Thought): Using ideal ways of reasoning. This can be rather difficult due to problems with formalizing informal knowledge, and can be computationally intensive.
  3. Acting Humanly (Turing Test): Machines that exhibit behavior equivalent to a human's, so that an interrogator cannot distinguish the human from the computer. However, mimicking human behavior is not always intelligence.
  4. Acting Rationally (Rational Agents): The agent acts to achieve the best expected outcome; rational action maximizes expected performance.

Intelligent Agents

What is an intelligent agent?

In simple terms, an intelligent agent perceives its environment via sensors and acts upon that environment via actuators.

We write agent programs that implement agent functions. So what exactly is an agent function?

An agent perceives its environment via sensors, and we call the complete history of such perceptions a percept sequence. An agent function, then, is a function that maps a percept sequence to an action.
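One way to make this concrete is a table-driven agent program, which looks up the whole percept sequence in a table. This is a minimal sketch; the two-location vacuum-world table below is purely illustrative.

```python
def table_driven_agent_program():
    """Return an agent program that picks actions by looking up
    the full percept sequence observed so far in a table."""
    percepts = []  # the percept sequence (history of percepts)
    # Hypothetical lookup table for a two-location vacuum world;
    # keys are percept sequences, values are actions.
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
    }

    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts), "NoOp")

    return program

agent = table_driven_agent_program()
print(agent(("A", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is exactly why we write more compact agent programs instead.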

As we saw earlier, acting rationally means achieving the best performance. So how can we actually measure an agent's performance?

That’s where we use Performance Measures

We use performance measures to evaluate the success of an agent's behavior. This idea of evaluating an agent by the consequences of its behavior is known as Consequentialism.

In summary, specifying an agent's task environment requires four components, which we call PEAS (Performance Measure, Environment, Actuators, Sensors).
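A PEAS description is just structured data. As a sketch, here is one for the classic automated-taxi example; the exact entries are illustrative, not exhaustive.

```python
# Hypothetical PEAS description for an automated taxi (illustrative entries)
peas = {
    "Performance Measure": ["safety", "speed", "legality", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "horn", "display"],
    "Sensors": ["cameras", "GPS", "speedometer", "odometer"],
}

for component, examples in peas.items():
    print(f"{component}: {', '.join(examples)}")
```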

Also, when considering such a task environment, we need to be aware of its properties:

  • Fully vs Partially Observable
  • Single vs Multi Agent
  • Deterministic vs Nondeterministic vs Stochastic
  • Episodic vs Sequential
  • Dynamic vs Static
  • Discrete vs Continuous

Now that we have a proper understanding of the task environment, let's look at the different types of agent programs we can build:

  • Simple Reflex Agents : Action depends only on the current percept
  • Model-based Reflex Agents : Maintain internal state based on the percept history. Requires a transition model (how the world works) and a sensor model (how the state is reflected in percepts)
  • Goal-based Agents : Use goals to choose actions
  • Utility-based Agents : Use a utility function (degree of happiness); used when goals alone are inadequate
  • Learning Agents :
    - Performance Element (Decides actions)
    - Critic (Evaluates performance, feedback)
    - Learning element (Modifies performance element)
    - Problem Generator (Suggests exploratory actions)
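The simplest of these is easy to sketch. Here is a simple reflex agent for the two-location vacuum world (the classic textbook example): its action depends only on the current percept, with no stored history.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: condition-action rules on the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because it ignores the percept history, such an agent only works well when the correct action is fully determined by the current percept.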

Also, when it comes to representing states, we face different levels of complexity:

  • Atomic (state is a black box, no internal structure)
  • Factored (state is a set of variables)
  • Structured (state includes objects, attributes, relationships)

Problem-solving agents that use planning consider sequences of actions (paths) to a goal state.

The process of problem solving involves a few steps:

  1. Goal Formulation
  2. Problem Formulation (States and Actions)
  3. Search (Simulates actions to find solution path)
  4. Execution (Executes actions)

A typical search problem consists of:

  • State space (A set of possible states)
  • Initial state and Goal state
  • Actions (for a state s)
  • Transition Model (Result of action a in state s)
  • Action cost function
  • Solution (path from initial to goal)
  • Optimal Solution (Lowest path cost)
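These components map naturally onto a small class. The sketch below encodes them for a hypothetical road map given as an adjacency dictionary; the graph and costs are made up for illustration.

```python
class Problem:
    """A search problem: states, initial/goal states, actions,
    transition model, and action costs."""

    def __init__(self, initial, goal, graph):
        self.initial = initial  # initial state
        self.goal = goal        # goal state
        self.graph = graph      # state space: {state: {neighbor: cost}}

    def actions(self, s):
        # Actions available in state s (here: which neighbor to move to)
        return list(self.graph[s])

    def result(self, s, a):
        # Transition model: result of action a in state s
        return a

    def action_cost(self, s, a):
        return self.graph[s][a]

    def is_goal(self, s):
        return s == self.goal

# Hypothetical road map
roads = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
problem = Problem("A", "C", roads)
print(problem.actions("A"))  # ['B', 'C']
```

A solution is then a path from `initial` to a state where `is_goal` holds, and an optimal solution is the path with the lowest total action cost.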

We can measure the performance of a problem solving algorithm using:

  • Completeness (Guaranteed to find solution?)
  • Cost Optimality (Lowest path cost?)
  • Time Complexity
  • Space Complexity

When it comes to search algorithms, there are two main types: Uninformed and Informed search algorithms.

Uninformed Search Algorithms :

  • Breadth First Search (BFS)
    - Complete, optimal for unit costs
    - Exponential space complexity
  • Uniform cost Search (Dijkstra’s)
    - Expand node with lowest path cost
    - Optimal for general action costs
  • Depth first Search (DFS)
    - Not complete, not optimal
    - Linear space complexity
  • Iterative Deepening search (Combines DFS and BFS)
  • Bidirectional Search (Expands two frontiers)
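As a minimal sketch of the first of these, here is breadth-first search on an explicit graph, returning a path from start to goal (or `None`). It is complete, and optimal when every action has the same unit cost; the example graph is illustrative.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand the shallowest unexpanded node first."""
    frontier = deque([[start]])  # FIFO queue of paths
    reached = {start}            # states already reached, to avoid revisits
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no solution

g = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(bfs(g, "S", "G"))  # ['S', 'A', 'G']
```

Storing whole paths in the frontier makes the exponential space complexity mentioned above very visible.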

Informed Search Algorithms : Use a heuristic function (an estimate of the cost to the goal)

  • Greedy Best first search
    - Expand nodes with min h(n)
    - Not optimal, efficient
    - Can get stuck in dead ends/loops
  • A* search
    - Expand nodes with min f(n) = g(n) + h(n)
    - Complete and optimal if h(n) is consistent
    - Bad space complexity
  • Iterative deepening A* search
    - Address space complexity
  • Beam Search
    - Limit on frontier size
    - Incomplete, Suboptimal
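A* is worth sketching, since it captures the f(n) = g(n) + h(n) idea directly. The graph and heuristic values below are illustrative; h must be consistent for the optimality guarantee to hold.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with minimum f(n) = g(n) + h(n)."""
    # Priority queue of (f, g, state, path)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}  # cheapest known path cost to each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbor, cost in graph.get(state, {}).items():
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbor], new_g, neighbor, path + [neighbor]),
                )
    return None, float("inf")

graph = {"S": {"A": 1, "B": 4}, "A": {"G": 5}, "B": {"G": 1}, "G": {}}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))  # (['S', 'B', 'G'], 5)
```

Note how the heuristic steers the search through B (total cost 5) even though A is cheaper to reach first; with h = 0 everywhere, this degrades into uniform-cost search.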

Local Search and Optimization

In local search, we keep a single current state and move only to its neighbors. There's no path tracking or systematic exploration. Local search uses less space and is suitable for large or continuous state spaces.

We search for the best state by optimizing an objective function F(x). A successor is a state with a single element changed.

When it comes to local search there are different methods we can use:

  1. Hill Climbing (Greedy Local Search)
    - Generate successor states and pick the best successor
    - Can get stuck on a local maximum
    - Ridges: a sequence of local maxima that is difficult to navigate
    - Plateaus: flat areas of the evaluation function; a random sideways walk may help escape
    - Variants : Stochastic HC, First-choice HC, Random-restart HC
  2. Simulated Annealing
    - Combines hill climbing with randomness
    - Allows downhill moves with probability e^(ΔE/T)
    - T (temperature) decreases over time
    - Wanders early, focuses later
  3. Local Beam Search
    - Keeps track of k randomly generated states
    - Generate all successors of k states
    - Select best k successors for next step
    - Stochastic beam search — Choose k with probability proportional to value
  4. Genetic (Evolutionary) Algorithms
    - Variant of Stochastic beam search
    - Successor generated by combining two parent states (Crossover)
    - State represented as a string
    - Uses selection, crossover, mutation
    - Fitness function (Evaluation function)
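Of these methods, hill climbing is the simplest to sketch. Below it climbs a made-up 1-D objective; the objective and the neighbor function are illustrative, and on a harder landscape this would stop at a local maximum.

```python
def hill_climbing(objective, start, neighbors):
    """Greedy local search: repeatedly move to the best successor;
    stop when no successor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current  # no better successor: a (local) maximum
        current = best

f = lambda x: -(x - 10) ** 2       # objective with global maximum at x = 10
step = lambda x: [x - 1, x + 1]    # successors: a single element changed
print(hill_climbing(f, 0, step))   # 10
```

Random-restart hill climbing would simply call this from several random starting points and keep the best result.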

Logical Agents and Knowledge Representation

Knowledge-based agents maintain a representation of the world, known as a Knowledge Base. They infer new representations and use these representations to decide how to act.

Knowledge Base: A set of sentences in a representation language. Basic facts are represented by Axioms.
- Actions: TELL (add info), ASK (retrieve info)
- Both TELL and ASK involve inference

There are some terms we need to learn when it comes to logic:

  • Syntax (correct structure of sentences)
  • Semantics (Define truth of sentences wrt models/possible worlds)
  • Logical Entailment (α ⊨ β: if α is true, β is true)
  • Logical Inference (Derive conclusions, KB ⊢ α)
  • Sound and Complete inference algorithms
  • Validity (True in all models/Tautology)
  • Satisfiability (True in some model)
  • Monotonicity (the set of entailed sentences can only grow as the KB grows)

Propositional Logic

  • Symbols stand for propositions (True or False)
  • Atomic Sentences (Single symbol)
  • Complex Sentences use Connectives (¬, ∧, ∨, ⇒, ⇔)
  • Inference via model checking (Enumerate all models)
  • Inference via Theorem proving (Apply inference rules)
  • Resolution rule uses CNF (Conjunction of Clauses)
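Inference via model checking can be sketched in a few lines: enumerate every assignment of truth values to the proposition symbols and check that α is true in every model where the KB is true. Encoding sentences as Python functions over a model is an assumption made for this sketch, not a standard representation.

```python
from itertools import product

def entails(kb, alpha, symbols):
    """Model checking: KB ⊨ α iff α is true in every model where KB is true."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))  # one possible world
        if kb(model) and not alpha(model):
            return False  # counterexample: KB true, α false
    return True

# KB: (P ⇒ Q) ∧ P ; query α: Q  (note P ⇒ Q is equivalent to ¬P ∨ Q)
kb = lambda m: (not m["P"] or m["Q"]) and m["P"]
alpha = lambda m: m["Q"]
print(entails(kb, alpha, ["P", "Q"]))  # True
```

Enumerating all 2^n models is what makes model checking sound and complete but exponential, which motivates theorem proving and resolution as alternatives.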

First Order Logic

  • Deals with Objects and Relations (Resembles natural language)
  • Ontological Commitment (Objects and relations among them)
  • Consists of basic elements like:
    - Constants (Objects)
    - Predicates (Relations, Properties)
    - Functions
    - Variables (x,y)
    - Connectives
    - Equality
    - Quantifiers (Universal ∀, Existential ∃)
  • Terms: Logical expressions referring to objects (e.g. LeftLeg(John))
  • Quantifier Duality (∀x P ⇔ ¬∃x ¬P)
  • Inference via ASK (Queries, substitution/binding list)

Ontological Engineering

  • Properly represent concepts in the agent's environment
  • Describe general concepts (Upper Ontology)