CSSE 5600/6600: Artificial Intelligence
• Optimization Problem Setting
• Hill Climbing
• Neighborhood
• Local Optima
• Improvement
• Simulated Annealing
slide 2
Optimization problems
• Previously we wanted a path from start to goal
Uninformed search: g(s): Iterative Deepening
Informed search: g(s)+h(s): A*
• Now a different setting:
Each state s has a score f(s) that we can compute
The goal is to find the state with the highest score, or
a reasonably high score
Do not care about the path
This is an optimization problem
Enumerating the states is intractable
Even previous search algorithms are too expensive
slide 3
Examples
• N-queen: f(s) = number of conflicting queens
in state s
Note we want s with the lowest score f(s)=0. The techniques
are the same. Low or high should be obvious from context.
slide 4
Examples
• N-queen: f(s) = number of conflicting queens
in state s
• Traveling salesperson problem (TSP)
Visit each city once, return to first city
State = order of cities, f(s) = total mileage
Note we want s with the lowest score f(s)=0. The techniques
are the same. Low or high should be obvious from context.
slide 5
Examples
• N-queen: f(s) = number of conflicting queens
in state s
• Traveling salesperson problem (TSP)
Visit each city once, return to first city
State = order of cities, f(s) = total mileage
• Boolean satisfiability (e.g., 3-SAT)
State = assignment to variables
f(s) = # satisfied clauses
Note we want s with the lowest score f(s)=0. The techniques
are the same. Low or high should be obvious from context.
A ∨ ¬B ∨ C
¬A ∨ C ∨ D
B ∨ D ∨ ¬E
¬C ∨ ¬D ∨ ¬E
¬A ∨ ¬C ∨ E
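Aside (not from the slides): to make the N-queen score concrete, here is a minimal Python sketch. The slide counts conflicting queens; the sketch counts attacking pairs, a common concrete choice that is likewise zero exactly when the board is a solution. The representation (one queen per column, rows[c] = row of the queen in column c) and the function name are assumptions for illustration.

def num_conflicts(rows):
    """Count pairs of queens that attack each other.
    rows[c] is the row of the queen in column c (one queen per column),
    so only same-row and same-diagonal attacks are possible."""
    n = len(rows)
    conflicts = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            same_row = rows[c1] == rows[c2]
            same_diag = abs(rows[c1] - rows[c2]) == abs(c1 - c2)
            if same_row or same_diag:
                conflicts += 1
    return conflicts

print(num_conflicts([1, 3, 0, 3]))   # -> 1: the queens in columns 1 and 3 share row 3
print(num_conflicts([1, 3, 0, 2]))   # -> 0: a solution to the 4-queen problem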
Content
• Optimization Problem Setting
• Hill Climbing
• Neighborhood
• Local Optima
• Improvement
• Simulated Annealing
slide 6
1. HILL CLIMBING
slide 7
Hill climbing
• Very simple idea: Start from some state s,
Move to a neighbor t with better score. Repeat.
• Question: what’s a neighbor?
You have to define that!
The neighborhood of a state is the set of neighbors
Also called ‘move set’
Similar to successor function
slide 8
Neighbors: N-queen
• Example: N-queen (one queen per column). One
possibility:
[Figure: a board state s with f(s)=1, and the boards in the neighborhood of s]
slide 9
Neighbors: N-queen
• Example: N-queen (one queen per column). One
possibility:
Pick the right-most most-conflicting column;
Move the queen in that column vertically to a
different location.
[Figure: state s with f(s)=1 and its neighborhood; the neighbors score f=1 and f=2. Should tie-breaking favor the more promising ones?]
slide 10
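Aside: the neighborhood on this slide (take the right-most most-conflicting column and move its queen to every other row) could be coded roughly as below. A sketch under the same rows[c] representation as before; the helper names are assumptions, not the course's code.

def column_conflicts(rows, col):
    """Number of other queens attacking the queen in column `col` (rows[c] = its row)."""
    return sum(
        rows[c] == rows[col] or abs(rows[c] - rows[col]) == abs(c - col)
        for c in range(len(rows)) if c != col
    )

def neighbors(rows):
    """Move set from the slide: pick the right-most most-conflicting column,
    then move that column's queen to every other row."""
    n = len(rows)
    per_col = [column_conflicts(rows, c) for c in range(n)]
    if max(per_col) == 0:
        return []                                   # already a solution: nothing to move
    col = max(c for c in range(n) if per_col[c] == max(per_col))   # right-most tie-break
    return [rows[:col] + [r] + rows[col + 1:] for r in range(n) if r != rows[col]]

print(neighbors([1, 3, 0, 3]))
# -> [[1, 3, 0, 0], [1, 3, 0, 1], [1, 3, 0, 2]]: column 3 is the right-most
#    conflicting column, so its queen is moved to each other row.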
Neighbors: TSP
• state: A-B-C-D-E-F-G-H-A
• f = length of tour
slide 11
Neighbors: TSP
• state: A-B-C-D-E-F-G-H-A
• f = length of tour
• One possibility: 2-change
A-B-C-D-E-F-G-H-A
A-E-D-C-B-F-G-H-A   (the segment B-C-D-E is flipped)
slide 12
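Aside: a sketch of the 2-change neighborhood and the tour-length score, with assumed names and a coordinate dictionary that is made up for illustration. A 2-change neighbor reverses one contiguous segment of the tour, as in the A-B-C-D-E-F-G-H example above.

import math

def tour_length(tour, coords):
    """f(s): total length of the closed tour (the last city connects back to the first)."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def two_change_neighbors(tour):
    """All tours obtained by reversing one contiguous segment (the first city stays fixed)."""
    n = len(tour)
    return [
        tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        for i in range(1, n - 1)
        for j in range(i + 1, n)
    ]

tour = list("ABCDEFGH")
print("".join(two_change_neighbors(tour)[2]))      # -> AEDCBFGH, the slide's flip of B..E
coords = {c: (i % 3, i // 3) for i, c in enumerate("ABCDEFGH")}   # made-up coordinates
print(round(tour_length(tour, coords), 2))         # mileage of the original tour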
Neighbors: SAT
• State: (A=T, B=F, C=T, D=T, E=T)
• f = number of satisfied clauses
• Neighbor:
A ∨ ¬B ∨ C
¬A ∨ C ∨ D
B ∨ D ∨ ¬E
¬C ∨ ¬D ∨ ¬E
¬A ∨ ¬C ∨ E
slide 13
Neighbors: SAT
• State: (A=T, B=F, C=T, D=T, E=T)
• f = number of satisfied clauses
• Neighbor: flip the assignment of one variable
A ∨ ¬B ∨ C
¬A ∨ C ∨ D
B ∨ D ∨ ¬E
¬C ∨ ¬D ∨ ¬E
¬A ∨ ¬C ∨ E
(A=F, B=F, C=T, D=T, E=T)
(A=T, B=T, C=T, D=T, E=T)
(A=T, B=F, C=F, D=T, E=T)
(A=T, B=F, C=T, D=F, E=T)
(A=T, B=F, C=T, D=T, E=F)
slide 14
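Aside: the SAT neighborhood is easy to code. The sketch below hard-codes the five clauses from the slide as (variable, sign) pairs; the encoding and the names are assumptions for illustration.

CLAUSES = [                                      # the five clauses from the slide
    [("A", True), ("B", False), ("C", True)],    # A or not-B or C
    [("A", False), ("C", True), ("D", True)],    # not-A or C or D
    [("B", True), ("D", True), ("E", False)],    # B or D or not-E
    [("C", False), ("D", False), ("E", False)],  # not-C or not-D or not-E
    [("A", False), ("C", False), ("E", True)],   # not-A or not-C or E
]

def f(assign):
    """Number of satisfied clauses under the assignment (a dict variable -> bool)."""
    return sum(
        any(assign[var] == positive for var, positive in clause)
        for clause in CLAUSES
    )

def neighbors(assign):
    """Flip the assignment of exactly one variable."""
    return [{**assign, var: not assign[var]} for var in assign]

state = {"A": True, "B": False, "C": True, "D": True, "E": True}
print(f(state))                            # -> 4: every clause except the fourth is satisfied
print([f(t) for t in neighbors(state)])    # -> [4, 4, 5, 4, 4]: flipping C satisfies all five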
Hill climbing
• Question: What’s a neighbor?
(vaguely) Problems tend to have structure: a small
change produces a neighboring state.
The neighborhood must be small enough for
efficiency
Designing the neighborhood is critical. This is the
real ingenuity – not the decision to use hill climbing.
• Question: Pick which neighbor?
• Question: What if no neighbor is better than the
current state?
slide 15
Hill climbing
• Question: What’s a neighbor?
(vaguely) Problems tend to have structure: a small
change produces a neighboring state.
The neighborhood must be small enough for
efficiency
Designing the neighborhood is critical. This is the
real ingenuity – not the decision to use hill climbing.
• Question: Pick which neighbor? The best one (greedy)
• Question: What if no neighbor is better than the
current state? Stop. (Doh!)
slide 16
Hill climbing algorithm
1. Pick initial state s
2. Pick t in neighbors(s) with the largest f(t)
3. IF f(t) ≤ f(s) THEN stop, return s
4. s = t. GOTO 2.
• Not the most sophisticated algorithm in the world.
• Very greedy.
• Easily stuck.
slide 17
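Aside: the four steps above, written out as a minimal generic sketch. Here f and neighbors are whatever the problem supplies (for instance the SAT versions sketched earlier); the function name is an assumption.

def hill_climb(s, f, neighbors):
    """Greedy hill climbing: repeatedly move to the best neighbor;
    stop when no neighbor improves on the current state."""
    while True:
        nbrs = neighbors(s)                # step 2: look at the neighborhood...
        if not nbrs:
            return s
        t = max(nbrs, key=f)               # ...and take the best-scoring neighbor
        if f(t) <= f(s):                   # step 3: no improvement, so stop (maybe a local optimum)
            return s
        s = t                              # step 4: move and repeat

Plugged into the SAT sketch above and started from the slide's assignment, this reaches an assignment satisfying all five clauses in a single move (flip C).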
Hill climbing algorithm
1. Pick initial state s
2. Pick t in neighbors(s) with the largest f(t)
3. IF f(t) ≤ f(s) THEN stop, return s
4. s = t. GOTO 2.
• Not the most sophisticated algorithm in the world.
• Very greedy.
• Easily stuck.
Your enemy: local optima
slide 18
Local optima in hill climbing
• Useful conceptual picture: f surface = ‘hills’ in state
space
• But we can’t see the landscape all at once. Only see
the neighborhood. Climb in fog.
[Figure: the f surface over the state space; the global optimum is where we want to be, but in fog we only see the neighborhood]
slide 19
Local optima in hill climbing
• Local optima (there can be many!)
• Plateaus
[Figure: the f surface over the state space, showing a local optimum ("Declare top of the world?") and a plateau ("Where shall I go?")]
slide 20
Local optima in hill climbing
• Local optima (there can be many!)
• Plateaus
[Figure: the same local optimum and plateau, now shrouded in fog ("Declare top of the world?", "Where shall I go?")]
The rest of the lecture is about escaping local optima.
slide 21
Not every local minimum should be escaped
slide 22
Variation 1: hill climbing with random
restarts
• Very simple modification
1. When stuck, pick a random new start, run basic
hill climbing from there.
2. Repeat this k times.
3. Return the best of the k local optima.
• Can be very effective
• Should be tried whenever hill climbing is used
slide 23
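Aside: Variation 1 as a sketch. Here random_state is an assumed problem-specific generator (e.g. a random truth assignment, or random queen rows); the basic climber is repeated inline so the snippet stands alone.

def hill_climb(s, f, neighbors):
    """Basic greedy hill climbing, as on the algorithm slide."""
    while True:
        t = max(neighbors(s), key=f)
        if f(t) <= f(s):
            return s
        s = t

def hill_climb_restarts(random_state, f, neighbors, k=10):
    """Variation 1: run hill climbing from k random starts, keep the best local optimum."""
    starts = [random_state() for _ in range(k)]
    return max((hill_climb(s, f, neighbors) for s in starts), key=f)

# Hypothetical SAT usage, with f and neighbors from the SAT sketch:
# import random
# random_start = lambda: {v: random.choice([True, False]) for v in "ABCDE"}
# best = hill_climb_restarts(random_start, f, neighbors, k=20)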
Variations 2 and 3 of hill climbing
• Question: How do we make hill climbing less greedy?
Stochastic hill climbing
• Randomly select among better neighbors
• The better, the more likely
• Pros / cons compared with basic hill climbing?
slide 24
Variations 2 and 3 of hill climbing
• Question: How do we make hill climbing less greedy?
Stochastic hill climbing
• Randomly select among better neighbors
• The better, the more likely
• Pros / cons compared with basic hill climbing?
• Question: What if the neighborhood is too large to
enumerate? (e.g. N-queen if we need to pick both the
column and the move within it)
slide 25
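Aside: stochastic hill climbing as a sketch. Among the improving neighbors, pick one at random with probability proportional to the improvement; that particular weighting is one common choice and an assumption here, not something the slide fixes.

import random

def stochastic_hill_climb(s, f, neighbors):
    """Randomly select among better neighbors; the better the neighbor, the more likely.
    Stop when no neighbor improves the current state."""
    while True:
        better = [t for t in neighbors(s) if f(t) > f(s)]
        if not better:
            return s
        weights = [f(t) - f(s) for t in better]        # 'the better, the more likely'
        s = random.choices(better, weights=weights, k=1)[0]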
Variations 2 and 3 of hill climbing
• Question: How do we make hill climbing less greedy?
Stochastic hill climbing
• Randomly select among better neighbors
• The better, the more likely
• Pros / cons compared with basic hill climbing?
• Question: What if the neighborhood is too large to
enumerate? (e.g. N-queen if we need to pick both the
column and the move within it)
First-choice hill climbing
• Randomly generate neighbors, one at a time
• If better, take the move
• Pros / cons compared with basic hill climbing?
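Aside: first-choice hill climbing as a sketch, for neighborhoods too large to enumerate. Neighbors are sampled one at a time and the first improving one is taken; the cutoff after a fixed number of failed samples, the N-queen sampler, and all names are assumptions for illustration.

import random

def first_choice_hill_climb(s, f, random_neighbor, max_tries=100):
    """Sample random neighbors one at a time and take the first improving move.
    Give up after max_tries consecutive non-improving samples (an assumed stopping rule)."""
    while True:
        for _ in range(max_tries):
            t = random_neighbor(s)           # one neighbor, without enumerating the neighborhood
            if f(t) > f(s):
                s = t
                break
        else:
            return s

def random_queen_move(rows):
    """Hypothetical N-queen sampler: move a randomly chosen column's queen to a random row."""
    c = random.randrange(len(rows))
    return rows[:c] + [random.randrange(len(rows))] + rows[c + 1:]

For the N-queen score, which we want to minimize, either pass the negated conflict count as f or flip the comparison.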
Algorithms