Artificial Neural Network (Back-Propagation Neural Network) Yusuf Hendrawan, STP., M.App.Life Sc., Ph.D.


1 Artificial Neural Network (Back-Propagation Neural Network) Yusuf Hendrawan, STP., M.App.Life Sc., Ph.D

2 Neurons: Biological vs. Artificial

3 A typical AI agent

4 Neural Network Layers Each layer receives its inputs from the previous layer and forwards its outputs to the next layer

5 Multilayer feed-forward network It contains one or more hidden layers (hidden neurons). "Hidden" refers to the part of the neural network that is not seen directly from either the input or the output of the network. The function of a hidden neuron is to intervene between the input and the output. By adding one or more hidden layers, the network is able to extract higher-order statistics from the input.

6 Neural Network Learning Back-Propagation Algorithm:

function BACK-PROP-LEARNING(examples, network) returns a neural network
  inputs: examples, a set of examples, each with input vector x and output vector y
          network, a multilayer network with L layers, weights W_j,i, activation function g
  repeat
    for each e in examples do
      for each node j in the input layer do a_j <- x_j[e]
      for l = 2 to L do
        in_i <- SUM_j W_j,i a_j
        a_i <- g(in_i)
      for each node i in the output layer do
        D_i <- g'(in_i) * (y_i[e] - a_i)
      for l = L - 1 to 1 do
        for each node j in layer l do
          D_j <- g'(in_j) SUM_i W_j,i D_i
        for each node i in layer l + 1 do
          W_j,i <- W_j,i + alpha * a_j * D_i
  until some stopping criterion is satisfied
  return NEURAL-NET-HYPOTHESIS(network)

[Russell, Norvig] Fig., p. 746
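The pseudocode above can be sketched in plain Python for a network with one hidden layer. This is a minimal illustration, not the book's code: the sigmoid activation, the weight-initialization range, the function name, and the absence of bias terms are all assumptions made here for brevity.

```python
import math
import random

def g(x):
    # Sigmoid activation function (an assumed choice for g)
    return 1 / (1 + math.exp(-x))

def g_prime(in_x):
    # Derivative of the sigmoid, expressed through g
    return g(in_x) * (1 - g(in_x))

def back_prop_learning(examples, n_in, n_hid, n_out, alpha=0.5, epochs=1000):
    """Train a 1-hidden-layer network; examples is a list of (x, y) pairs."""
    # W1[j][i]: weight from input j to hidden i; W2[j][i]: hidden j to output i
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in)]
    W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_hid)]
    for _ in range(epochs):                       # the "repeat ... until" loop
        for x, y in examples:
            # Forward pass: in_i <- sum_j W_j,i a_j ; a_i <- g(in_i)
            in_h = [sum(x[j] * W1[j][i] for j in range(n_in)) for i in range(n_hid)]
            a_h = [g(v) for v in in_h]
            in_o = [sum(a_h[j] * W2[j][i] for j in range(n_hid)) for i in range(n_out)]
            a_o = [g(v) for v in in_o]
            # Output-layer deltas: D_i = g'(in_i) (y_i - a_i)
            d_o = [g_prime(in_o[i]) * (y[i] - a_o[i]) for i in range(n_out)]
            # Hidden-layer deltas: D_j = g'(in_j) sum_i W_j,i D_i
            d_h = [g_prime(in_h[j]) * sum(W2[j][i] * d_o[i] for i in range(n_out))
                   for j in range(n_hid)]
            # Weight updates: W_j,i <- W_j,i + alpha * a_j * D_i
            for j in range(n_hid):
                for i in range(n_out):
                    W2[j][i] += alpha * a_h[j] * d_o[i]
            for j in range(n_in):
                for i in range(n_hid):
                    W1[j][i] += alpha * x[j] * d_h[i]
    return W1, W2

# Usage: train on a single example like the one worked through below
random.seed(1)
W1, W2 = back_prop_learning([([0.3, 0.4], [0.1])], n_in=2, n_hid=4, n_out=1, epochs=3000)
```

The stopping criterion here is a fixed epoch count; the worked example in the following slides instead stops at a target error or a maximum iteration count.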

7 Back-Propagation Illustration (from "Artificial Neural Networks: Colin Fahey's Guide", Book CD)

8 Network architecture [Diagram: inputs X1 and X2 connect to hidden neurons Z1-Z4 through weights V (with biases Vo); the hidden neurons connect to output Y through weights W (with biases Wo). Columns: Input (X), Hidden, Output (Y).]

9 Training data and network parameters
Input (X): X1 = 0.3, X2 = 0.4; Output / Target (T): T = 0.1
Number of neurons in the input layer: 2
Number of neurons in the hidden layer: 4
Number of neurons in the output layer: 1
Learning rate (α): 0.1
Momentum (m): 0.9
Target error: 0.01
Maximum iterations: 1000

10 Initial input-to-hidden weights (V):
V11 = 0.75  V21 = 0.35
V12 = 0.54  V22 = 0.64
V13 = 0.44  V23 = 0.05
V14 = 0.32  V24 = 0.81
Biases to hidden (Vo):
Vo11 = 0.07  Vo21 = 0.12
Vo12 = 0.91  Vo22 = 0.23
Vo13 = 0.45  Vo23 = 0.85
Vo14 = 0.25  Vo24 = 0.09
Initial hidden-to-output weights (W):
W1 = 0.04
W2 = 0.95
W3 = 0.33
W4 = 0.17
Biases to output (Wo):
Wo1 = 0.66
Wo2 = 0.56
Wo3 = 0.73
Wo4 = 0.01

11 Computing Z_in and Z from input to hidden
Z_in(1) = (X1 * V11) + (X2 * V21) = (0.3 * 0.75) + (0.4 * 0.35) = 0.365
Z_in(2) = (X1 * V12) + (X2 * V22) = (0.3 * 0.54) + (0.4 * 0.64) = 0.418
Z_in(3) = (X1 * V13) + (X2 * V23) = (0.3 * 0.44) + (0.4 * 0.05) = 0.152
Z_in(4) = (X1 * V14) + (X2 * V24) = (0.3 * 0.32) + (0.4 * 0.81) = 0.42
Z(j) = 1 / (1 + e^(-Z_in(j))): Z(1) = 0.575, Z(2) = 0.603, Z(3) = 0.538, Z(4) = 0.603
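The input-to-hidden forward pass can be checked with a short script. A sigmoid activation is assumed here (it reproduces the Z(2)-Z(4) values used on the later slides); the variable layout is illustrative.

```python
# Forward pass from input to hidden layer, using the slide's values.
# V[i][j] is the weight from input i+1 to hidden neuron j+1; biases are
# omitted, matching the slide's Z_in formula.
import math

X = [0.3, 0.4]
V = [[0.75, 0.54, 0.44, 0.32],   # weights from X1
     [0.35, 0.64, 0.05, 0.81]]   # weights from X2

Z_in = [X[0] * V[0][j] + X[1] * V[1][j] for j in range(4)]
Z = [1 / (1 + math.exp(-z)) for z in Z_in]   # assumed sigmoid activation

print([round(z, 3) for z in Z_in])  # [0.365, 0.418, 0.152, 0.42]
print([round(z, 3) for z in Z])     # [0.59, 0.603, 0.538, 0.603]
```

Note that the sigmoid of 0.365 is 0.590, slightly different from the 0.575 the deck carries forward for Z(1); the remaining three activations match the deck exactly.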

12 Computing Y_in and Y from hidden to output
Y_in = (Z(1) * W1) + (Z(2) * W2) + (Z(3) * W3) + (Z(4) * W4) = (0.57 * 0.04) + (0.603 * 0.95) + (0.538 * 0.33) + (0.603 * 0.17) = 0.876
Y = 1 / (1 + e^(-Y_in)) = 0.706
Computing dev between Y and the actual output:
dev = (T - Y) * Y * (1 - Y) = (0.1 - 0.706) * 0.706 * (1 - 0.706) = -0.126
Computing the difference:
difference = T - Y = 0.1 - 0.706 = -0.606
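The hidden-to-output step and the error term can be verified the same way. A sigmoid output unit is assumed (it reproduces Y = 0.706); variable names are illustrative.

```python
# Forward pass from hidden to output, then the output error term dev,
# using the hidden activations Z and weights W from the slides.
import math

Z = [0.575, 0.603, 0.538, 0.603]   # hidden activations
W = [0.04, 0.95, 0.33, 0.17]       # hidden-to-output weights
T = 0.1                            # target output

Y_in = sum(z * w for z, w in zip(Z, W))
Y = 1 / (1 + math.exp(-Y_in))      # assumed sigmoid output
dev = (T - Y) * Y * (1 - Y)        # error term driving the weight updates
diff = T - Y                       # raw difference between target and output

print(round(Y, 3), round(dev, 3), round(diff, 3))  # 0.706 -0.126 -0.606
```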

13 Back-Propagation Computing d_in from output to hidden
d_in(1) = (dev * W1) = (-0.126 * 0.04) = -0.0050
d_in(2) = (dev * W2) = (-0.126 * 0.95) = -0.1197
d_in(3) = (dev * W3) = (-0.126 * 0.33) = -0.0416
d_in(4) = (dev * W4) = (-0.126 * 0.17) = -0.0214
Computing d:
d(1) = d_in(1) * Z(1) * (1 - Z(1)) = (-0.0050 * 0.575 * (1 - 0.575)) = -0.0012
d(2) = d_in(2) * Z(2) * (1 - Z(2)) = (-0.1197 * 0.603 * (1 - 0.603)) = -0.0287
d(3) = d_in(3) * Z(3) * (1 - Z(3)) = (-0.0416 * 0.538 * (1 - 0.538)) = -0.0103
d(4) = d_in(4) * Z(4) * (1 - Z(4)) = (-0.0214 * 0.603 * (1 - 0.603)) = -0.0051
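Propagating the output error back to the hidden layer is two list comprehensions. The dev value (T - Y scaled by the sigmoid derivative) and the Z and W values are the slide's; names are illustrative.

```python
# Back-propagate the output error dev to the hidden layer:
# d_in(j) = dev * W(j), then d(j) = d_in(j) * Z(j) * (1 - Z(j)).
dev = -0.126                        # output error term
W = [0.04, 0.95, 0.33, 0.17]        # hidden-to-output weights
Z = [0.575, 0.603, 0.538, 0.603]    # hidden activations

d_in = [dev * w for w in W]
d = [di * z * (1 - z) for di, z in zip(d_in, Z)]
```

The factor Z(j)(1 - Z(j)) is the sigmoid derivative evaluated at the hidden activation, mirroring the Y(1 - Y) factor in dev.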

14 Correcting the weights (W) and biases (Wo)
W1 = W1 + (α * dev * Z(1)) + (m * Wo1) = 0.04 + (0.1 * -0.126 * 0.575) + (0.9 * 0.66) = 0.63
W2 = W2 + (α * dev * Z(2)) + (m * Wo2) = 0.95 + (0.1 * -0.126 * 0.603) + (0.9 * 0.56) = 1.45
W3 = W3 + (α * dev * Z(3)) + (m * Wo3) = 0.33 + (0.1 * -0.126 * 0.538) + (0.9 * 0.73) = 0.98
W4 = W4 + (α * dev * Z(4)) + (m * Wo4) = 0.17 + (0.1 * -0.126 * 0.603) + (0.9 * 0.01) = 0.17
Wo1 = (α * Z(1)) + (m * Wo1) = (0.1 * 0.575) + (0.9 * 0.66) = 0.65
Wo2 = (α * Z(2)) + (m * Wo2) = (0.1 * 0.603) + (0.9 * 0.56) = 0.56
Wo3 = (α * Z(3)) + (m * Wo3) = (0.1 * 0.538) + (0.9 * 0.73) = 0.71
Wo4 = (α * Z(4)) + (m * Wo4) = (0.1 * 0.603) + (0.9 * 0.01) = 0.07
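The output-side update can be checked as follows. Note the update rule is reproduced exactly as the slide states it, with the momentum term m multiplying the previous bias Wo (an unusual scheme, but it is what the worked values follow); all names are illustrative.

```python
# Update the hidden-to-output weights W and output biases Wo with the
# slide's rule: W <- W + alpha*dev*Z + m*Wo, and Wo <- alpha*Z + m*Wo.
alpha, m = 0.1, 0.9
dev = -0.126                         # output error term
Z  = [0.575, 0.603, 0.538, 0.603]    # hidden activations
W  = [0.04, 0.95, 0.33, 0.17]        # hidden-to-output weights
Wo = [0.66, 0.56, 0.73, 0.01]        # output biases

W_new  = [w + alpha * dev * z + m * wo for w, z, wo in zip(W, Z, Wo)]
Wo_new = [alpha * z + m * wo for z, wo in zip(Z, Wo)]
```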

15 Correcting the input-to-hidden weights (V)
V11 = V11 + (α * d(1) * X1) + (m * Vo11) = 0.75 + (0.1 * -0.0012 * 0.3) + (0.9 * 0.07) = 0.813
V12 = V12 + (α * d(2) * X1) + (m * Vo12) = 0.54 + (0.1 * -0.0287 * 0.3) + (0.9 * 0.91) = 1.358
V13 = V13 + (α * d(3) * X1) + (m * Vo13) = 0.44 + (0.1 * -0.0103 * 0.3) + (0.9 * 0.45) = 0.845
V14 = V14 + (α * d(4) * X1) + (m * Vo14) = 0.32 + (0.1 * -0.0051 * 0.3) + (0.9 * 0.25) = 0.545
V21 = V21 + (α * d(1) * X2) + (m * Vo21) = 0.35 + (0.1 * -0.0012 * 0.4) + (0.9 * 0.12) = 0.458
V22 = V22 + (α * d(2) * X2) + (m * Vo22) = 0.64 + (0.1 * -0.0287 * 0.4) + (0.9 * 0.23) = 0.846
V23 = V23 + (α * d(3) * X2) + (m * Vo23) = 0.05 + (0.1 * -0.0103 * 0.4) + (0.9 * 0.85) = 0.815
V24 = V24 + (α * d(4) * X2) + (m * Vo24) = 0.81 + (0.1 * -0.0051 * 0.4) + (0.9 * 0.09) = 0.891

16 Correcting the input-to-hidden biases (Vo)
Vo11 = (α * d(1) * X1) + (m * Vo11) = (0.1 * -0.0012 * 0.3) + (0.9 * 0.07) = 0.063
Vo12 = (α * d(2) * X1) + (m * Vo12) = (0.1 * -0.0287 * 0.3) + (0.9 * 0.91) = 0.818
Vo13 = (α * d(3) * X1) + (m * Vo13) = (0.1 * -0.0103 * 0.3) + (0.9 * 0.45) = 0.405
Vo14 = (α * d(4) * X1) + (m * Vo14) = (0.1 * -0.0051 * 0.3) + (0.9 * 0.25) = 0.225
Vo21 = (α * d(1) * X2) + (m * Vo21) = (0.1 * -0.0012 * 0.4) + (0.9 * 0.12) = 0.108
Vo22 = (α * d(2) * X2) + (m * Vo22) = (0.1 * -0.0287 * 0.4) + (0.9 * 0.23) = 0.206
Vo23 = (α * d(3) * X2) + (m * Vo23) = (0.1 * -0.0103 * 0.4) + (0.9 * 0.85) = 0.765
Vo24 = (α * d(4) * X2) + (m * Vo24) = (0.1 * -0.0051 * 0.4) + (0.9 * 0.09) = 0.081
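The input-side corrections for both V and Vo can be reproduced in one pass. Everything is recomputed from the slide's starting values (weights, biases, dev, Z); the nested-list layout and the momentum-times-previous-bias term follow the slide's scheme, and names are illustrative.

```python
# Input-to-hidden corrections: V <- V + alpha*d*X + m*Vo, Vo <- alpha*d*X + m*Vo,
# where d is the hidden-layer error term derived from dev.
alpha, m = 0.1, 0.9
X = [0.3, 0.4]
Z = [0.575, 0.603, 0.538, 0.603]     # hidden activations
dev = -0.126                          # output error term
W = [0.04, 0.95, 0.33, 0.17]          # hidden-to-output weights
V  = [[0.75, 0.54, 0.44, 0.32],       # V[i][j]: input i+1 to hidden j+1
      [0.35, 0.64, 0.05, 0.81]]
Vo = [[0.07, 0.91, 0.45, 0.25],       # biases to hidden, same layout
      [0.12, 0.23, 0.85, 0.09]]

# Hidden-layer error terms: d(j) = dev * W(j) * Z(j) * (1 - Z(j))
d = [dev * W[j] * Z[j] * (1 - Z[j]) for j in range(4)]

V_new  = [[V[i][j] + alpha * d[j] * X[i] + m * Vo[i][j] for j in range(4)]
          for i in range(2)]
Vo_new = [[alpha * d[j] * X[i] + m * Vo[i][j] for j in range(4)]
          for i in range(2)]
```

Because the d(j) values are tiny, the corrections are dominated by the momentum term; one such iteration completes a full training step, and the loop repeats until the target error (0.01) or the maximum iteration count (1000) is reached.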

