Outline
- Informed Search
- Hill Climbing Search
- Greedy Best-First Search
- A* Search
Informed Search
Heuristic (informed) search: explore the node that is most "likely" to be the nearest to a goal state. There is no guarantee that the node the heuristic rates most "likely" will actually get you closer to a goal state than any other.
- Add domain-specific information to select the best path along which to continue searching.
- Define a heuristic function h(n) that estimates the "goodness" of a node n. Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state.
- The heuristic function is an estimate, based on domain-specific information computable from the current state description, of how close we are to a goal.
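As a concrete illustration, for the 8-puzzle a simple h(n) counts the tiles out of place (the same heuristic the hill-climbing example below uses, up to sign). A minimal sketch, assuming a state is encoded as a tuple of 9 entries read row by row, with 0 for the blank; the encoding is an assumption, not from the slides:

```python
# Illustrative misplaced-tiles heuristic for the 8-puzzle.
# State encoding (assumed): tuple of 9 ints, row by row, 0 = blank.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def h_misplaced(state, goal=GOAL):
    """h(n): number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)
```

Note that h_misplaced(GOAL) is 0, as required: a goal state is estimated to be at distance zero.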
Hill Climbing Search
If there exists a successor s of the current state n such that:
◦ h(s) < h(n), and
◦ h(s) <= h(t) for all successors t of n,
then move from n to s; otherwise, halt at n.
Looks one step ahead to determine if any successor is better than the current state; if there is one, move to the best successor.
Similar to greedy best-first search in that it uses h, but it does not allow backtracking or jumping to an alternative path, since it doesn't "remember" where it has been.
Hill Climbing Search (cont.)
Problem: may get stuck in local minima or local maxima.
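The move rule above can be sketched as follows. This is a minimal sketch, assuming lower h is better; the `successors` and `h` callables are illustrative parameters, not names from the slides:

```python
# Minimal hill-climbing sketch: move to the best successor while it
# improves on the current state, otherwise halt (possibly at a local
# optimum rather than a goal).

def hill_climb(start, successors, h):
    n = start
    while True:
        best = min(successors(n), key=h, default=None)
        if best is None or h(best) >= h(n):
            return n          # halt: no successor is strictly better
        n = best              # move from n to the best successor s
```

Because the loop only ever moves downhill in h and keeps no memory of visited states, it halts at the first state all of whose successors are no better — which is exactly the local-minimum/local-maximum problem noted above.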
Hill climbing example
[Figure: sequence of puzzle states from start (h = -5) to goal (h = 0), with f(n) = h(n) = -(number of tiles out of place)]
Local Maximum Problem
[Figure: example start state illustrating a local maximum]
Local Minimum Problem
[Figure: example states (h = -4, h = -2) illustrating a local minimum]
Robot Navigation
f(N) = h(N) = straight-line distance to the goal
Local-minimum problem
Greedy Best-First Search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal
e.g., h_SLD(n) = straight-line distance from n to Bucharest
Greedy best-first search expands the node that appears to be closest to the goal.
Romania with step costs in km
Greedy best-first search example
Properties of greedy best-first search
Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
Time? O(b^m), but a good heuristic can give dramatic improvement
Space? O(b^m) – keeps all nodes in memory
Optimal? No
Robot Navigation
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to the goal
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to the goal
What happened???
Admissible heuristic
Let h*(N) be the true cost of the optimal path from N to a goal node.
Heuristic h(N) is admissible if: 0 ≤ h(N) ≤ h*(N)
An admissible heuristic is always optimistic.
A* Search
Evaluation function: f(N) = g(N) + h(N), where:
◦ g(N) is the cost of the best path found so far to N
◦ h(N) is an admissible heuristic
Best-first search with this evaluation function is called A* search.
An important AI algorithm, developed by Hart, Nilsson, and Raphael in 1968. Originally used in the Shakey robot.
Robot Navigation
f(N) = g(N)+h(N), with h(N) = Manhattan distance to goal and g(N) = the cost of the best path found so far between the initial node and N
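Putting f(N) = g(N) + h(N) together gives a minimal A* sketch. Here `graph` maps a node to a {neighbor: step_cost} dict and `h` is assumed admissible; the test graph and heuristic values are illustrative, not the robot-navigation grid:

```python
# A* search sketch: frontier ordered by f(N) = g(N) + h(N), where g(N)
# is the cost of the best path found so far to N and h is admissible.
import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # best g found so far
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph[node].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):   # better path to nbr
                best_g[nbr] = g2
                heapq.heappush(frontier,
                               (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None               # goal unreachable
```

With an admissible h, the first time the goal is popped its g is the optimal path cost, which is what distinguishes A* from the greedy search above.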
Search exercise (parts a and b)
[Figure: maze with start node S and goal node G]
Solve the following maze pathfinding problem using:
a. BFS
b. DFS
c. Greedy Search
d. The A* algorithm
Notes: use Manhattan distance to compute h(N) and g(N); S = Start Node, G = Goal Node
Preparation for Next Week
1. Form groups of 4–5 people => 8 groups.
2. Each group is assigned one search algorithm to study:
a. Greedy BFS = 2 groups
b. A* = 2 groups
c. Minimax = 2 groups
d. Alpha-Beta Pruning = 2 groups
Each group must cover:
- the definition
- the algorithm
- a worked search exercise (just one, matching the group's assigned algorithm)
- a coding example (recommended, not required)
Prepare a presentation file; burn everything onto one CD (for the whole class), including the references used.
3. Next week = presentations, with the following rules:
1. Each group sends 2 representatives to present to the other groups (taking turns).
2. Each group sends 1 additional representative to present in front of the class (one per search algorithm).