Artificial Neural Network (Back-Propagation Neural Network)

Artificial Neural Network (Back-Propagation Neural Network)
Yusuf Hendrawan, STP., M.App.Life Sc., Ph.D

Neurons: Biological (http://faculty.washington.edu/chudler/color/pic1an.gif) vs. Artificial (http://research.yale.edu/ysm/images/78.2/articles-neural-neuron.jpg)

A typical AI agent

Neural Network Layers: Each layer receives its inputs from the previous layer and forwards its outputs to the next layer. (http://smig.usgs.gov/SMIG/features_0902/tualatin_ann.fig3.gif)

Multilayer feed-forward network: It contains one or more hidden layers (hidden neurons). "Hidden" refers to the part of the neural network that is not seen directly from either the input or the output of the network. The function of a hidden neuron is to intervene between input and output. By adding one or more hidden layers, the network is able to extract higher-order statistics from its input.
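The layered structure described above can be sketched in a few lines of Python. This is only an illustration: the 2-input, 3-hidden, 1-output shape and all weight values here are made up, and a sigmoid activation is assumed.

```python
import math

def sigmoid(x):
    # Logistic activation squashing any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(inputs, hidden_weights, output_weights):
    # Each hidden neuron sees every input; the output neuron
    # sees every hidden activation -- a fully connected layering.
    hidden = [sigmoid(sum(x * w for x, w in zip(inputs, col)))
              for col in hidden_weights]
    return sigmoid(sum(h * w for h, w in zip(hidden, output_weights)))

# Hypothetical 2-input, 3-hidden, 1-output network with made-up weights
y = feed_forward([0.5, 0.9],
                 [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]],
                 [0.7, 0.8, 0.9])
```

Because every activation is a sigmoid, the output is always strictly between 0 and 1.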

Neural Network Learning — Back-Propagation Algorithm:

function BACK-PROP-LEARNING(examples, network) returns a neural network
  inputs: examples, a set of examples, each with input vector x and output vector y
          network, a multilayer network with L layers, weights Wj,i, activation function g
  repeat
    for each e in examples do
      for each node j in the input layer do aj ← xj[e]
      for l = 2 to L do
        ini ← Σj Wj,i aj
        ai ← g(ini)
      for each node i in the output layer do
        Δi ← g′(ini) × (yi[e] − ai)
      for l = L − 1 to 1 do
        for each node j in layer l do
          Δj ← g′(inj) Σi Wj,i Δi
        for each node i in layer l + 1 do
          Wj,i ← Wj,i + α × aj × Δi
  until some stopping criterion is satisfied
  return NEURAL-NET-HYPOTHESIS(network)

[Russell, Norvig] Fig. 20.25 Pg. 746
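The same loop can be written as a minimal runnable Python sketch: a single hidden layer with sigmoid activations, plain gradient descent, and no momentum. The weights below simply reuse the slide's initial values for illustration; the learning rate 0.5 is an arbitrary choice.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, t, V, W, alpha):
    """One presentation of pattern x with target t.
    V[i][j]: weight from input i to hidden neuron j.
    W[j]: weight from hidden neuron j to the single output.
    Updates V and W in place; returns the squared error before the update."""
    # Forward pass: input -> hidden -> output
    z = [sigmoid(sum(xi * V[i][j] for i, xi in enumerate(x)))
         for j in range(len(W))]
    y = sigmoid(sum(zj * wj for zj, wj in zip(z, W)))
    # Output delta and hidden deltas (sigmoid derivative is a * (1 - a))
    d_out = (t - y) * y * (1.0 - y)
    d_hid = [d_out * W[j] * z[j] * (1.0 - z[j]) for j in range(len(W))]
    # Gradient-descent weight updates
    for j in range(len(W)):
        W[j] += alpha * d_out * z[j]
    for i in range(len(x)):
        for j in range(len(W)):
            V[i][j] += alpha * d_hid[j] * x[i]
    return (t - y) ** 2

V = [[0.75, 0.54, 0.44, 0.32], [0.35, 0.64, 0.05, 0.81]]
W = [0.04, 0.95, 0.33, 0.17]
first = train_step([0.3, 0.4], 0.1, V, W, alpha=0.5)
for _ in range(500):
    last = train_step([0.3, 0.4], 0.1, V, W, alpha=0.5)
```

Repeating the step on the same pattern drives the squared error down, which is the "stopping criterion" role played by the target error on the slide.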

Back-Propagation Illustration (from ARTIFICIAL NEURAL NETWORKS: Colin Fahey's Guide, Book CD)

[Network diagram] Input layer (X1, X2) → hidden layer (Z1, Z2, Z3, Z4) → output layer (Y), with biases Vo into the hidden layer and Wo into the output layer.

Training data — Input (X) and Output/Target (T):

X1     X2     T
0.3    0.4    0.1
0.5    0.6    0.8
0.2    0.7    0.4
 –      –     0.5

Network parameters:
Number of neurons in the input layer: 2
Number of neurons in the hidden layer: 4
Number of neurons in the output layer: 1
Learning rate (α): 0.1
Momentum (m): 0.9
Target error: 0.01
Maximum iterations: 1000

Initial weights, input → hidden (V):
V11 = 0.75    V21 = 0.35
V12 = 0.54    V22 = 0.64
V13 = 0.44    V23 = 0.05
V14 = 0.32    V24 = 0.81

Initial biases to the hidden layer (Vo):
Vo11 = 0.07   Vo21 = 0.12
Vo12 = 0.91   Vo22 = 0.23
Vo13 = 0.45   Vo23 = 0.85
Vo14 = 0.25   Vo24 = 0.09

Initial weights, hidden → output (W):
W1 = 0.04   W2 = 0.95   W3 = 0.33   W4 = 0.17

Initial biases to the output layer (Wo):
Wo1 = 0.66   Wo2 = 0.56   Wo3 = 0.73   Wo4 = 0.01

Computing Zin and Z (input → hidden):

Zin(1) = (X1 * V11) + (X2 * V21) = (0.3 * 0.75) + (0.4 * 0.35) = 0.302
Zin(2) = (X1 * V12) + (X2 * V22) = (0.3 * 0.54) + (0.4 * 0.64) = 0.418
Zin(3) = (X1 * V13) + (X2 * V23) = (0.3 * 0.44) + (0.4 * 0.05) = 0.152
Zin(4) = (X1 * V14) + (X2 * V24) = (0.3 * 0.32) + (0.4 * 0.81) = 0.42

Each hidden activation is the sigmoid of its net input, Z(i) = 1 / (1 + e^(−Zin(i))), giving Z(1) = 0.575, Z(2) = 0.603, Z(3) = 0.538, Z(4) = 0.603.
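The hidden-layer sums above can be reproduced in a few lines of Python, using the first input pattern and the initial V weights from the slide; the sigmoid step that produces Z is included as well.

```python
import math

X = [0.3, 0.4]                        # first training pattern (X1, X2)
V = [[0.75, 0.54, 0.44, 0.32],        # V[0][j]: weight from X1 to Z(j+1)
     [0.35, 0.64, 0.05, 0.81]]        # V[1][j]: weight from X2 to Z(j+1)

# Net input to each hidden neuron: Zin(j) = X1*V1j + X2*V2j
z_in = [X[0] * V[0][j] + X[1] * V[1][j] for j in range(4)]

# Hidden activations via the sigmoid
z = [1.0 / (1.0 + math.exp(-v)) for v in z_in]
```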

Computing Yin and Y (hidden → output):

Yin = (Z(1) * W1) + (Z(2) * W2) + (Z(3) * W3) + (Z(4) * W4)
    = (0.57 * 0.04) + (0.603 * 0.95) + (0.538 * 0.33) + (0.603 * 0.17) = 0.876
Y = 1 / (1 + e^(−Yin)) = 1 / (1 + e^(−0.876)) = 0.706

Computing dev between Y and the actual output:
dev = (T - Y) * Y * (1 - Y) = (0.1 - 0.706) * 0.706 * (1 - 0.706) = -0.126

Computing the difference:
difference = T - Y = -0.606
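The output step can be checked the same way, plugging in the hidden activations and weights listed above.

```python
import math

Z = [0.57, 0.603, 0.538, 0.603]   # hidden activations (values from the slide)
W = [0.04, 0.95, 0.33, 0.17]      # hidden -> output weights
T = 0.1                           # target for the first pattern

y_in = sum(z * w for z, w in zip(Z, W))    # net input to the output neuron
y = 1.0 / (1.0 + math.exp(-y_in))          # sigmoid activation
dev = (T - y) * y * (1.0 - y)              # output delta: error times sigmoid'
diff = T - y                               # raw difference
```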

Back-Propagation

Computing din (output → hidden):
din(1) = dev * W1 = -0.126 * 0.04 = -0.00504
din(2) = dev * W2 = -0.126 * 0.95 = -0.1197
din(3) = dev * W3 = -0.126 * 0.33 = -0.04158
din(4) = dev * W4 = -0.126 * 0.17 = -0.02142

Computing d:
d(1) = din(1) * Z(1) * (1 - Z(1)) = -0.00504 * 0.575 * (1 - 0.575) = -0.00123
d(2) = din(2) * Z(2) * (1 - Z(2)) = -0.1197 * 0.603 * (1 - 0.603) = -0.02865
d(3) = din(3) * Z(3) * (1 - Z(3)) = -0.04158 * 0.538 * (1 - 0.538) = -0.01033
d(4) = din(4) * Z(4) * (1 - Z(4)) = -0.02142 * 0.603 * (1 - 0.603) = -0.00512
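The backward step is just the output delta pushed through each W and scaled by the sigmoid derivative Z * (1 − Z):

```python
dev = -0.126                          # output delta from the previous step
W = [0.04, 0.95, 0.33, 0.17]          # hidden -> output weights
Z = [0.575, 0.603, 0.538, 0.603]      # hidden activations

# Delta arriving at each hidden neuron
d_in = [dev * w for w in W]
# Multiplied by sigmoid'(Zin) = Z * (1 - Z) to get the hidden deltas
d = [di * z * (1.0 - z) for di, z in zip(d_in, Z)]
```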

Correcting the weights (W) and biases (Wo):

W1 = W1 + (α * dev * Z(1)) + (m * Wo1) = 0.04 + (0.1 * -0.126 * 0.575) + (0.9 * 0.66) = 0.627
W2 = W2 + (α * dev * Z(2)) + (m * Wo2) = 0.95 + (0.1 * -0.126 * 0.603) + (0.9 * 0.56) = 1.45
W3 = W3 + (α * dev * Z(3)) + (m * Wo3) = 0.33 + (0.1 * -0.126 * 0.538) + (0.9 * 0.73) = 0.98
W4 = W4 + (α * dev * Z(4)) + (m * Wo4) = 0.17 + (0.1 * -0.126 * 0.603) + (0.9 * 0.01) = 0.171

Wo1 = (α * Z(1)) + (m * Wo1) = (0.1 * 0.575) + (0.9 * 0.66) = 0.65
Wo2 = (α * Z(2)) + (m * Wo2) = (0.1 * 0.603) + (0.9 * 0.56) = 0.564
Wo3 = (α * Z(3)) + (m * Wo3) = (0.1 * 0.538) + (0.9 * 0.73) = 0.71
Wo4 = (α * Z(4)) + (m * Wo4) = (0.1 * 0.603) + (0.9 * 0.01) = 0.0693
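The update formulas on this slide translate directly to Python. Note that this follows the slide's own rule exactly: the momentum term m multiplies the previous bias values, and the bias update has no dev factor in its first term.

```python
alpha, m = 0.1, 0.9                   # learning rate and momentum from the slide
dev = -0.126                          # output delta
Z = [0.575, 0.603, 0.538, 0.603]      # hidden activations
W = [0.04, 0.95, 0.33, 0.17]          # current hidden -> output weights
Wo = [0.66, 0.56, 0.73, 0.01]         # current output biases

# Weight update: old weight + learning term + momentum applied to old bias
W_new = [w + alpha * dev * z + m * wo for w, z, wo in zip(W, Z, Wo)]
# Bias update exactly as written on the slide
Wo_new = [alpha * z + m * wo for z, wo in zip(Z, Wo)]
```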

Correcting the weights (V):

V11 = V11 + (α * d(1) * X1) + (m * Vo11) = 0.75 + (0.1 * -0.00123 * 0.3) + (0.9 * 0.07) = 0.8129
V12 = V12 + (α * d(2) * X1) + (m * Vo12) = 0.54 + (0.1 * -0.02865 * 0.3) + (0.9 * 0.91) = 1.3581
V13 = V13 + (α * d(3) * X1) + (m * Vo13) = 0.44 + (0.1 * -0.01033 * 0.3) + (0.9 * 0.45) = 0.8446
V14 = V14 + (α * d(4) * X1) + (m * Vo14) = 0.32 + (0.1 * -0.00512 * 0.3) + (0.9 * 0.25) = 0.5448
V21 = V21 + (α * d(1) * X2) + (m * Vo21) = 0.35 + (0.1 * -0.00123 * 0.4) + (0.9 * 0.12) = 0.4579
V22 = V22 + (α * d(2) * X2) + (m * Vo22) = 0.64 + (0.1 * -0.02865 * 0.4) + (0.9 * 0.23) = 0.8458
V23 = V23 + (α * d(3) * X2) + (m * Vo23) = 0.05 + (0.1 * -0.01033 * 0.4) + (0.9 * 0.85) = 0.8145
V24 = V24 + (α * d(4) * X2) + (m * Vo24) = 0.81 + (0.1 * -0.00512 * 0.4) + (0.9 * 0.09) = 0.8907

Correcting the biases (Vo):

Vo11 = (α * d(1) * X1) + (m * Vo11) = (0.1 * -0.00123 * 0.3) + (0.9 * 0.07) = 0.0629
Vo12 = (α * d(2) * X1) + (m * Vo12) = (0.1 * -0.02865 * 0.3) + (0.9 * 0.91) = 0.8181
Vo13 = (α * d(3) * X1) + (m * Vo13) = (0.1 * -0.01033 * 0.3) + (0.9 * 0.45) = 0.4046
Vo14 = (α * d(4) * X1) + (m * Vo14) = (0.1 * -0.00512 * 0.3) + (0.9 * 0.25) = 0.2248
Vo21 = (α * d(1) * X2) + (m * Vo21) = (0.1 * -0.00123 * 0.4) + (0.9 * 0.12) = 0.1079
Vo22 = (α * d(2) * X2) + (m * Vo22) = (0.1 * -0.02865 * 0.4) + (0.9 * 0.23) = 0.2058
Vo23 = (α * d(3) * X2) + (m * Vo23) = (0.1 * -0.01033 * 0.4) + (0.9 * 0.85) = 0.7645
Vo24 = (α * d(4) * X2) + (m * Vo24) = (0.1 * -0.00512 * 0.4) + (0.9 * 0.09) = 0.0807
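The input-side corrections follow the same pattern and can be verified in Python with two nested comprehensions over the slide's values (V and Vo updates together, still using the slide's own rule with momentum on the old bias).

```python
alpha, m = 0.1, 0.9
X = [0.3, 0.4]                                  # first input pattern
d = [-0.00123, -0.02865, -0.01033, -0.00512]    # hidden deltas from the slide
V = [[0.75, 0.54, 0.44, 0.32],                  # V[i][j]: input i -> hidden j
     [0.35, 0.64, 0.05, 0.81]]
Vo = [[0.07, 0.91, 0.45, 0.25],                 # Vo[i][j]: bias terms as listed
      [0.12, 0.23, 0.85, 0.09]]

# Weight correction: Vij + alpha * d(j) * Xi + m * Vo_ij
V_new = [[V[i][j] + alpha * d[j] * X[i] + m * Vo[i][j] for j in range(4)]
         for i in range(2)]
# Bias correction: alpha * d(j) * Xi + m * Vo_ij
Vo_new = [[alpha * d[j] * X[i] + m * Vo[i][j] for j in range(4)]
          for i in range(2)]
```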