1 Data Mining: 4. Classification Algorithms Romi Satria Wahono WA/SMS: +6281586220090

2 Romi Satria Wahono SD Sompok Semarang (1987) SMPN 8 Semarang (1990) SMA Taruna Nusantara Magelang (1993) B.Eng, M.Eng and Ph.D in Software Engineering from Saitama University, Japan (1994-2004) Universiti Teknikal Malaysia Melaka (2014) Research Interests: Software Engineering, Machine Learning Founder and Coordinator of IlmuKomputer.Com Researcher at LIPI (2004-2007) Founder and CEO of PT Brainmatics Cipta Informatika

3 Course Outline 1. Introduction to Data Mining 2. The Data Mining Process 3. Data Preparation 4. Classification Algorithms 5. Clustering Algorithms 6. Association Algorithms 7. Estimation Algorithms

4 4. Algoritma Klasifikasi 4.1 Decision Tree Induction 4.2 Bayesian Classification 4.3 Neural Network 4.4 Model Evaluation and Selection 4.5 Techniques to Improve Classification Accuracy: Ensemble Methods 4

5 4.1 Decision Tree 5

6 Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm):
1. The tree is constructed in a top-down, recursive, divide-and-conquer manner
2. At start, all the training examples are at the root
3. Attributes are categorical (if continuous-valued, they are discretized in advance)
4. Examples are partitioned recursively based on selected attributes
5. Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain, gain ratio, gini index)
Conditions for stopping partitioning:
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf)
There are no samples left

7 Brief Review of Entropy m = 2 7

8 Attribute Selection Measure: Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
Let pi be the probability that an arbitrary tuple in D belongs to class Ci, estimated by |Ci,D|/|D|
Expected information (entropy) needed to classify a tuple in D: Info(D) = -Σi=1..m pi log2(pi)
Information needed (after using A to split D into v partitions) to classify D: InfoA(D) = Σj=1..v (|Dj|/|D|) × Info(Dj)
Information gained by branching on attribute A: Gain(A) = Info(D) - InfoA(D)
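
As a small illustration (not part of the original slides), the two formulas above can be computed with a few lines of Python; the function and variable names here are only placeholders:

    import math
    from collections import Counter

    def entropy(labels):
        """Info(D) = -sum(p_i * log2(p_i)) over the classes present in D."""
        total = len(labels)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(labels).values())

    def information_gain(rows, labels, attribute_index):
        """Gain(A) = Info(D) - Info_A(D), where Info_A(D) is the
        size-weighted entropy of the partitions induced by attribute A."""
        total = len(labels)
        partitions = {}
        for row, label in zip(rows, labels):
            partitions.setdefault(row[attribute_index], []).append(label)
        info_a = sum(len(part) / total * entropy(part)
                     for part in partitions.values())
        return entropy(labels) - info_a

    # Info(D) for the 14-tuple buys_computer data (9 yes, 5 no):
    print(round(entropy(["yes"] * 9 + ["no"] * 5), 3))   # 0.94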

9 Attribute Selection: Information Gain
Class P: buys_computer = “yes” (9 tuples); Class N: buys_computer = “no” (5 tuples)
Info(D) = I(9,5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
The term (5/14) I(2,3) means “age <= 30” has 5 out of 14 samples, with 2 yes’es and 3 no’s. Hence
Infoage(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694, so Gain(age) = 0.940 - 0.694 = 0.246
Similarly, Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048

10 Computing Information-Gain for Continuous-Valued Attributes Let attribute A be a continuous-valued attribute Must determine the best split point for A: Sort the values of A in increasing order Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (ai + ai+1)/2 is the midpoint between the values of ai and ai+1 The point with the minimum expected information requirement for A is selected as the split-point for A Split: D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples in D satisfying A > split-point
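
A minimal sketch of this split-point search, assuming a simple yes/no class label; the age values in the usage line are hypothetical:

    import math
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(labels).values())

    def best_split_point(values, labels):
        """Try the midpoint between each pair of adjacent sorted values and
        return the one with the minimum expected information requirement."""
        pairs = sorted(zip(values, labels))
        best_info, best_split = float("inf"), None
        for i in range(len(pairs) - 1):
            split = (pairs[i][0] + pairs[i + 1][0]) / 2.0
            left = [lab for v, lab in pairs if v <= split]
            right = [lab for v, lab in pairs if v > split]
            info = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
            if info < best_info:
                best_info, best_split = info, split
        return best_split

    # Hypothetical ages with a yes/no class label:
    print(best_split_point([23, 25, 30, 35, 40, 45],
                           ["no", "no", "no", "yes", "yes", "yes"]))  # 32.5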

11 Steps of the Decision Tree Algorithm 1. Prepare the training data 2. Choose an attribute as the root 3. Create a branch for each attribute value 4. Repeat the process for each branch until all cases in the branch have the same class
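
A compact, illustrative ID3-style sketch of these four steps in Python, with attribute selection by the information gain defined earlier (a simplification, not the exact implementation used in the course):

    import math
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(labels).values())

    def gain(rows, labels, a):
        total = len(labels)
        parts = {}
        for row, lab in zip(rows, labels):
            parts.setdefault(row[a], []).append(lab)
        return entropy(labels) - sum(len(p) / total * entropy(p) for p in parts.values())

    def build_tree(rows, labels, attributes):
        """Returns a class label (leaf) or an (attribute_index, {value: subtree}) node."""
        # Stop: all samples belong to one class, or no attributes remain (majority vote).
        if len(set(labels)) == 1:
            return labels[0]
        if not attributes:
            return Counter(labels).most_common(1)[0][0]
        # Choose the attribute with the highest gain as the node.
        best = max(attributes, key=lambda a: gain(rows, labels, a))
        node = {}
        for value in set(row[best] for row in rows):
            idx = [i for i, row in enumerate(rows) if row[best] == value]
            node[value] = build_tree([rows[i] for i in idx],
                                     [labels[i] for i in idx],
                                     [a for a in attributes if a != best])
        return (best, node)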

12 1. Prepare the training data

13 2. Choose an attribute as the root
The root attribute is chosen based on the highest Gain value among the available attributes.
To obtain the Gain value, the Entropy value must be determined first.
Entropy formula: Entropy(S) = Σi=1..n -pi log2(pi), where S = the set of cases, n = the number of partitions of S, pi = the proportion of Si to S
Gain formula: Gain(S, A) = Entropy(S) - Σi=1..n (|Si|/|S|) × Entropy(Si), where S = the set of cases, A = an attribute, n = the number of partitions of attribute A, |Si| = the number of cases in partition i, |S| = the number of cases in S

14 Computing the Entropy and Gain of the Root

15 Computing the Root Entropy: total entropy, Entropy(Outlook), Entropy(Temperature), Entropy(Humidity), Entropy(Windy)

16 Computing the Root Entropy
NODE  ATTRIBUTE    VALUE   CASES (S)  YES (Si)  NO (Si)  ENTROPY  GAIN
1     TOTAL                14         10        4        0.86312
      OUTLOOK      CLOUDY  4          4         0        0
                   RAINY   5          4         1        0.72193
                   SUNNY   5          2         3        0.97095
      TEMPERATURE  COOL    4          4         0        0
                   HOT     4          2         2        1
                   MILD    6          4         2        0.91830
      HUMIDITY     HIGH    7          3         4        0.98523
                   NORMAL  7          7         0        0
      WINDY        FALSE   8          6         2        0.81128
                   TRUE    6          4         2        0.91830

17 Computing the Root Gain

18 Computing the Root Gain
NODE  ATTRIBUTE    VALUE   CASES (S)  YES (Si)  NO (Si)  ENTROPY  GAIN
1     TOTAL                14         10        4        0.86312
      OUTLOOK                                                     0.25852
                   CLOUDY  4          4         0        0
                   RAINY   5          4         1        0.72193
                   SUNNY   5          2         3        0.97095
      TEMPERATURE                                                 0.18385
                   COOL    4          4         0        0
                   HOT     4          2         2        1
                   MILD    6          4         2        0.91830
      HUMIDITY                                                    0.37051
                   HIGH    7          3         4        0.98523
                   NORMAL  7          7         0        0
      WINDY                                                       0.00598
                   FALSE   8          6         2        0.81128
                   TRUE    6          4         2        0.91830

19 The Highest Gain Becomes the Root
From the results for Node 1, the attribute with the highest Gain is HUMIDITY, at 0.37051
HUMIDITY therefore becomes the root node
HUMIDITY has 2 attribute values, HIGH and NORMAL. Of these, the value NORMAL already classifies all of its cases into a single decision, Yes, so no further computation is needed
For the value HIGH, further computation is still required
Tree so far: 1. HUMIDITY: Normal -> Yes; High -> 1.1 (?)

20 2. Create a branch for each attribute value
For convenience, the dataset is filtered to the records with HUMIDITY = HIGH to build the Node 1.1 table:
OUTLOOK  TEMPERATURE  HUMIDITY  WINDY  PLAY
Sunny    Hot          High      FALSE  No
Sunny    Hot          High      TRUE   No
Cloudy   Hot          High      FALSE  Yes
Rainy    Mild         High      FALSE  Yes
Sunny    Mild         High      FALSE  No
Cloudy   Mild         High      TRUE   Yes
Rainy    Mild         High      TRUE   No

21 Computing the Entropy and Gain of the Branch
NODE  ATTRIBUTE        VALUE   CASES (S)  YES (Si)  NO (Si)  ENTROPY  GAIN
1.1   HUMIDITY (HIGH)          7          3         4        0.98523
      OUTLOOK                                                         0.69951
                       CLOUDY  2          2         0        0
                       RAINY   2          1         1        1
                       SUNNY   3          0         3        0
      TEMPERATURE                                                     0.02024
                       COOL    0          0         0        0
                       HOT     3          1         2        0.91830
                       MILD    4          2         2        1
      WINDY                                                           0.02024
                       FALSE   4          2         2        1
                       TRUE    3          1         2        0.91830

22 The Highest Gain Becomes Node 1.1
From the results in the Node 1.1 table, the attribute with the highest Gain is OUTLOOK, at 0.69951
OUTLOOK therefore becomes the second node
The attribute values CLOUDY = Yes and SUNNY = No already classify their cases into a single decision, so no further computation is needed
For the attribute value RAINY, further computation is still required
Tree so far: 1. HUMIDITY: Normal -> Yes; High -> 1.1 OUTLOOK: Cloudy -> Yes; Sunny -> No; Rainy -> 1.1.2 (?)

23 3. Repeat the process for each branch until all cases in the branch have the same class
OUTLOOK  TEMPERATURE  HUMIDITY  WINDY  PLAY
Rainy    Mild         High      FALSE  Yes
Rainy    Mild         High      TRUE   No
NODE  ATTRIBUTE                      VALUE  CASES (S)  YES (Si)  NO (Si)  ENTROPY  GAIN
1.2   HUMIDITY HIGH & OUTLOOK RAINY         2          1         1        1
      TEMPERATURE                                                                  0
                                     COOL   0          0         0        0
                                     HOT    0          0         0        0
                                     MILD   2          1         1        1
      WINDY                                                                        1
                                     FALSE  1          1         0        0
                                     TRUE   1          0         1        0

24 The Highest Gain Becomes Node 1.1.2
From the table, the highest Gain is WINDY, which becomes the child node of the RAINY branch
Since every case now falls into a single class, the decision tree in the figure is the final tree
Final tree: 1. HUMIDITY: Normal -> Yes; High -> 1.1 OUTLOOK: Cloudy -> Yes; Sunny -> No; Rainy -> 1.1.2 WINDY: False -> Yes; True -> No

25 Decision Tree Induction: An Example
Training data set: Buys_computer
age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no

26 Gain Ratio for Attribute Selection (C4.5) Information gain measure is biased towards attributes with a large number of values C4.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization to information gain): SplitInfoA(D) = -Σj=1..v (|Dj|/|D|) × log2(|Dj|/|D|) GainRatio(A) = Gain(A)/SplitInfoA(D) Ex. SplitInfo_income(D) = 1.557, so gain_ratio(income) = 0.029/1.557 = 0.019 The attribute with the maximum gain ratio is selected as the splitting attribute
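
A short Python check of the gain-ratio normalisation; the partition sizes 4/6/4 are the income counts in the buys_computer data shown earlier, and the 0.029 gain is the value quoted above:

    import math

    def split_info(partition_sizes):
        """SplitInfo_A(D) = -sum(|Dj|/|D| * log2(|Dj|/|D|))."""
        total = sum(partition_sizes)
        return -sum((s / total) * math.log2(s / total) for s in partition_sizes if s)

    def gain_ratio(gain, partition_sizes):
        return gain / split_info(partition_sizes)

    # income splits the 14 buys_computer tuples into groups of 4 (high), 6 (medium) and 4 (low):
    print(round(split_info([4, 6, 4]), 3))            # ~1.557
    print(round(gain_ratio(0.029, [4, 6, 4]), 3))     # ~0.019, as on the slide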

27 Gini Index (CART) If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - Σj=1..n pj², where pj is the relative frequency of class j in D If a data set D is split on A into two subsets D1 and D2, the gini index giniA(D) is defined as giniA(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2) Reduction in impurity: ∆gini(A) = gini(D) - giniA(D) The attribute that provides the smallest giniA(D) (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)

28 Computation of Gini Index Ex. D has 9 tuples in buys_computer = “yes” and 5 in “no”: gini(D) = 1 - (9/14)² - (5/14)² = 0.459 Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 tuples in D2: {high}: gini_income∈{low,medium}(D) = (10/14) gini(D1) + (4/14) gini(D2) = 0.443 Gini{low,high} is 0.458; Gini{medium,high} is 0.450. Thus, split on {low, medium} (and {high}) since it has the lowest Gini index All attributes are assumed continuous-valued May need other tools, e.g., clustering, to get the possible split values Can be modified for categorical attributes
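
The Gini figures above can be reproduced with a few lines of Python; the class counts per partition are read off the buys_computer table shown earlier:

    def gini(class_counts):
        """gini(D) = 1 - sum(p_j^2)."""
        total = sum(class_counts)
        return 1 - sum((c / total) ** 2 for c in class_counts)

    def gini_split(partitions):
        """gini_A(D): size-weighted Gini of the partitions D1, D2, ..."""
        total = sum(sum(p) for p in partitions)
        return sum(sum(p) / total * gini(p) for p in partitions)

    print(round(gini([9, 5]), 3))                    # 0.459 for D itself (9 yes, 5 no)
    # income in {low, medium}: 10 tuples (7 yes, 3 no); {high}: 4 tuples (2 yes, 2 no)
    print(round(gini_split([[7, 3], [2, 2]]), 3))    # ~0.443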

29 Comparing Attribute Selection Measures The three measures, in general, return good results but Information gain: biased towards multivalued attributes Gain ratio: tends to prefer unbalanced splits in which one partition is much smaller than the others Gini index: biased to multivalued attributes has difficulty when # of classes is large tends to favor tests that result in equal-sized partitions and purity in both partitions 29

30 Other Attribute Selection Measures CHAID: a popular decision tree algorithm, measure based on the χ2 test for independence C-SEP: performs better than info. gain and gini index in certain cases G-statistic: has a close approximation to the χ2 distribution MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred): the best tree is the one that requires the fewest number of bits to both (1) encode the tree, and (2) encode the exceptions to the tree Multivariate splits (partition based on multiple variable combinations) CART: finds multivariate splits based on a linear combination of attributes Which attribute selection measure is the best? Most give good results; none is significantly superior to the others

31 Overfitting and Tree Pruning Overfitting: An induced tree may overfit the training data Too many branches, some may reflect anomalies due to noise or outliers Poor accuracy for unseen samples Two approaches to avoid overfitting 1. Prepruning: Halt tree construction early: do not split a node if this would result in the goodness measure falling below a threshold Difficult to choose an appropriate threshold 2. Postpruning: Remove branches from a “fully grown” tree to get a sequence of progressively pruned trees Use a set of data different from the training data to decide which is the “best pruned tree”

32 Pruning

33 Why is decision tree induction popular? Relatively faster learning speed (than other classification methods) Convertible to simple and easy to understand classification rules Can use SQL queries for accessing databases Comparable classification accuracy with other methods 33

34 Exercise Carry out an experiment following Matthew North’s book (Data Mining for the Masses), Chapter Ten (Decision Trees) Analyse which kinds of decision tree are used and why they need to be applied to that dataset

35 4.2 Bayesian Classification 35

36 Bayesian Classification: Why? A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities Foundation: Based on Bayes’ Theorem. Performance: A simple Bayesian classifier, naïve Bayesian classifier, has comparable performance with decision tree and selected neural network classifiers Incremental: Each training example can incrementally increase/decrease the probability that a hypothesis is correct — prior knowledge can be combined with observed data Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured 36

37 Bayes’ Theorem: Basics Total probability theorem: P(B) = Σi P(B|Ai) P(Ai) Bayes’ theorem: P(H|X) = P(X|H) P(H) / P(X) Let X be a data sample (“evidence”): class label is unknown Let H be a hypothesis that X belongs to class C Classification is to determine P(H|X) (i.e., the posteriori probability): the probability that the hypothesis holds given the observed data sample X P(H) (prior probability): the initial probability, e.g., X will buy computer, regardless of age, income, … P(X): probability that sample data is observed P(X|H) (likelihood): the probability of observing the sample X, given that the hypothesis holds, e.g., given that X will buy computer, the prob. that X is 31..40 with medium income

38 Prediction Based on Bayes’ Theorem Given training data X, posteriori probability of a hypothesis H, P(H|X), follows the Bayes’ theorem Informally, this can be viewed as posteriori = likelihood x prior/evidence Predicts X belongs to C i iff the probability P(C i |X) is the highest among all the P(C k |X) for all the k classes Practical difficulty: It requires initial knowledge of many probabilities, involving significant computational cost 38

39 Classification is to Derive the Maximum Posteriori Let D be a training set of tuples and their associated class labels, and each tuple is represented by an n-D attribute vector X = (x1, x2, …, xn) Suppose there are m classes C1, C2, …, Cm. Classification is to derive the maximum posteriori, i.e., the maximal P(Ci|X) This can be derived from Bayes’ theorem: P(Ci|X) = P(X|Ci) P(Ci) / P(X) Since P(X) is constant for all classes, only P(X|Ci) P(Ci) needs to be maximized

40 Naïve Bayes Classifier A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes): P(X|Ci) = Πk=1..n P(xk|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci) This greatly reduces the computation cost: only counts the class distribution If Ak is categorical, P(xk|Ci) is the # of tuples in Ci having value xk for Ak divided by |Ci,D| (# of tuples of Ci in D) If Ak is continuous-valued, P(xk|Ci) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ: P(xk|Ci) = g(xk, μCi, σCi), where g(x, μ, σ) = (1 / (√(2π) σ)) exp(-(x - μ)² / (2σ²))

41 Naïve Bayes Classifier: Training Dataset
Class: C1: buys_computer = ‘yes’; C2: buys_computer = ‘no’
Data to be classified: X = (age <=30, income = medium, student = yes, credit_rating = fair). Does X buy a computer?
age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no

42 Naïve Bayes Classifier: An Example
P(Ci): P(buys_computer = “yes”) = 9/14 = 0.643; P(buys_computer = “no”) = 5/14 = 0.357
Compute P(X|Ci) for each class:
P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222
P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
P(X|Ci): P(X|buys_computer = “yes”) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044; P(X|buys_computer = “no”) = 0.6 x 0.4 x 0.2 x 0.4 = 0.019
P(X|Ci)*P(Ci): P(X|buys_computer = “yes”) * P(buys_computer = “yes”) = 0.028; P(X|buys_computer = “no”) * P(buys_computer = “no”) = 0.007
Therefore, X belongs to class (“buys_computer = yes”)
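
A minimal hand-coded check of the arithmetic on this slide (not a general classifier):

    # Conditional probabilities read off the training data for
    # X = (age<=30, income=medium, student=yes, credit_rating=fair)
    p_yes = 9 / 14
    p_no = 5 / 14
    likelihood_yes = (2 / 9) * (4 / 9) * (6 / 9) * (6 / 9)
    likelihood_no = (3 / 5) * (2 / 5) * (1 / 5) * (2 / 5)

    print(round(likelihood_yes, 3), round(likelihood_no, 3))                  # 0.044  0.019
    print(round(likelihood_yes * p_yes, 3), round(likelihood_no * p_no, 3))   # 0.028  0.007
    # The larger posterior numerator belongs to "yes", so X is classified as buys_computer = yes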

43 Steps of the Naïve Bayes Algorithm 1. Read the training data 2. Count the number of classes 3. Count the number of matching cases within each class 4. Multiply all the resulting values for the data X whose class is being sought

44 1. Read the training data

45 Bayes’ Theorem X: data whose class is not yet known H: the hypothesis that data X belongs to a specific class P(H|X): the probability of hypothesis H given condition X (posteriori probability) P(H): the probability of hypothesis H (prior probability) P(X|H): the probability of X given hypothesis H P(X): the probability of X

46 2. Count the number of classes/labels There are 2 classes in this training data: C1 (Class 1): Play = yes, 9 records C2 (Class 2): Play = no, 5 records Total = 14 records So: P(C1) = 9/14 = 0.642857143 P(C2) = 5/14 = 0.357142857 Question: given data X = (outlook=rainy, temperature=cool, humidity=high, windy=true), play golf or not?

47 3. Count the matching cases within each class P(Ci), i.e. P(C1) and P(C2), is already known from the previous step. Next, compute P(X|Ci) for i = 1 and 2: P(outlook=“sunny”|play=“yes”) = 2/9 = 0.222222222 P(outlook=“sunny”|play=“no”) = 3/5 = 0.6 P(outlook=“overcast”|play=“yes”) = 4/9 = 0.444444444 P(outlook=“overcast”|play=“no”) = 0/5 = 0 P(outlook=“rainy”|play=“yes”) = 3/9 = 0.333333333 P(outlook=“rainy”|play=“no”) = 2/5 = 0.4

48 3. Count the matching cases within each class If all attributes are computed, the final result is as follows:
Attribute    Parameter      No    Yes
Outlook      value=sunny    0.6   0.2222222222222222
Outlook      value=cloudy   0.0   0.4444444444444444
Outlook      value=rainy    0.4   0.3333333333333333
Temperature  value=hot      0.4   0.2222222222222222
Temperature  value=mild     0.4   0.4444444444444444
Temperature  value=cool     0.2   0.3333333333333333
Humidity     value=high     0.8   0.3333333333333333
Humidity     value=normal   0.2   0.6666666666666666
Windy        value=false    0.4   0.6666666666666666
Windy        value=true     0.6   0.3333333333333333

49 4. Multiply all the values for the data X whose class is being sought
Question: data X = (outlook=rainy, temperature=cool, humidity=high, windy=true). Play golf or not?
Multiply all the values for data X:
P(X|play=“yes”) = 0.333333333 * 0.333333333 * 0.333333333 * 0.333333333 = 0.012345679
P(X|play=“no”) = 0.4 * 0.2 * 0.8 * 0.6 = 0.0384
P(X|play=“yes”) * P(C1) = 0.012345679 * 0.642857143 = 0.007936508
P(X|play=“no”) * P(C2) = 0.0384 * 0.357142857 = 0.013714286
The value for “no” is larger than the value for “yes”, so the class of data X is “No”

50 Avoiding the Zero-Probability Problem Naïve Bayesian prediction requires each conditional prob. be non-zero. Otherwise, the predicted prob. will be zero Ex. Suppose a dataset with 1000 tuples, income=low (0), income= medium (990), and income = high (10) Use Laplacian correction (or Laplacian estimator) Adding 1 to each case Prob(income = low) = 1/1003 Prob(income = medium) = 991/1003 Prob(income = high) = 11/1003 The “corrected” prob. estimates are close to their “uncorrected” counterparts 50
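
A small sketch of the Laplacian correction described above, using the 0/990/10 income counts from the example:

    def laplace_corrected(counts, k=1):
        """Add k to every count so that no estimated probability is zero."""
        total = sum(counts) + k * len(counts)
        return [(c + k) / total for c in counts]

    counts = [0, 990, 10]                    # income = low / medium / high
    print(laplace_corrected(counts))         # [1/1003, 991/1003, 11/1003]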

51 Naïve Bayes Classifier: Comments Advantages: Easy to implement Good results obtained in most of the cases Disadvantages: Assumption of class conditional independence, therefore loss of accuracy Practically, dependencies exist among variables, e.g., among hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.) Dependencies among these cannot be modeled by the Naïve Bayes classifier How to deal with these dependencies? Bayesian Belief Networks

52 4.3 Neural Network 52

53 Neural Network A neural network is a model built to imitate the learning function of the human brain, i.e. a network of small processing units modelled on the human nervous system

54 Neural Network Model A perceptron is a network model consisting of several input units (plus a bias) and a single output unit The activation function is not only binary (0, 1) but bipolar (1, 0, -1) For a chosen threshold value θ: f(net) = 1 if net > θ; 0 if -θ ≤ net ≤ θ; -1 if net < -θ

55 Activation Functions Activation functions used to activate net in various kinds of neural network: 1. Linear activation: y = sign(v) = v 2. Step activation 3. Binary sigmoid activation 4. Bipolar sigmoid activation

56 Steps of the Perceptron Algorithm
1. Initialise all weights and the bias (usually wi = b = 0)
2. While there is an input vector whose output-unit response does not equal the target, do:
2.1 Set the input unit activations xi = si (i = 1,...,n)
2.2 Compute the output unit response: net = Σ xi wi + b; f(net) = 1 if net > θ, 0 if -θ ≤ net ≤ θ, -1 if net < -θ
2.3 Correct the weights of any pattern that contains an error: wi(new) = wi(old) + ∆w with ∆w = α t xi; b(new) = b(old) + ∆b with ∆b = α t, where α = the chosen learning rate, θ = the chosen threshold, t = the target
2.4 Repeat the iterations until there is no more weight change (∆wn = 0)
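
A sketch of this training loop in Python with learning rate α = 1 and threshold θ = 0, run on the GPA data from the case study that follows; note that the exact weight trajectory depends on details such as the update order, so the learned weights may differ from those in the worked example, although the final classification of the new student is the same:

    def f(net, theta=0.0):
        """Bipolar step activation: 1 if net > theta, 0 in the dead zone, -1 if net < -theta."""
        if net > theta:
            return 1
        if net < -theta:
            return -1
        return 0

    def train_perceptron(samples, alpha=1.0, theta=0.0, max_epochs=100):
        """samples: list of ((x1, x2), target) with bipolar targets."""
        w = [0.0, 0.0]
        b = 0.0
        for _ in range(max_epochs):
            changed = False
            for (x1, x2), t in samples:
                net = w[0] * x1 + w[1] * x2 + b
                if f(net, theta) != t:            # update only on mistakes
                    w[0] += alpha * t * x1
                    w[1] += alpha * t * x2
                    b += alpha * t
                    changed = True
            if not changed:                       # no weight change in a full pass: stop
                break
        return w, b

    # GPA/semester data from the case study (Lulus = 1, Tidak Lulus = -1).
    data = [((2.9, 1), 1), ((2.8, 3), -1), ((2.3, 5), -1), ((2.7, 6), -1)]
    w, b = train_perceptron(data)
    print(w, b, f(w[0] * 2.85 + w[1] * 1 + b))    # the 2.85-GPA freshman is classified as 1 (Lulus)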

57 Case Study
A graduation dataset based on GPA (IPK) for an undergraduate (S1) programme is given below. If a student has a GPA of 2.85 and is still in semester 1, which status does that student fall into?
Status       IPK   Semester
Lulus        2.9   1
Tidak Lulus  2.8   3
Tidak Lulus  2.3   5
Tidak Lulus  2.7   6

58 1: Initialise the Weights
Initial weights and bias: w1 = w2 = b = 0, with a bias input of 1 (targets coded as Lulus = 1, Tidak Lulus = -1)
t    X1   X2
1    2.9  1
-1   2.8  3
-1   2.3  5
-1   2.7  6

59 2.1: Set the input unit activations Threshold θ = 0, which means: f(net) = 1 if net > 0; 0 if net = 0; -1 if net < 0

60 2.2 - 2.3 Compute the Response and Correct the Weights
Compute the output response for iteration 1, then correct the weights of the patterns that contain errors (cells that are garbled in the original slide are left blank):
X1   X2  1   t    NET    f(NET)  ∆W1   ∆W2  ∆b   W1    W2   b
INITIALISATION                                    0     0    0
2.9  1   1   1    0      0       2.9   1    1    2.9   1    1
2.8  3   1   -1   8.12   1       -2.8  -3   -1   0.1
2.3  5   1   -1   0.23   1       -2.3  -5   -1   -2.2
2.7  6   1   -1   -5.94  -1      0     0    0    -2.2

61 2.4 Repeat the iterations until there is no weight change (∆wn = 0): Iteration 2
Compute the output response for iteration 2, then correct the weights of the patterns that contain errors:
X1   X2  1   t    NET     f(NET)  ∆W1   ∆W2  ∆b   W1    W2   b
INITIALISATION                                     -2.2
2.9  1   1   1    -8.38   -1      2.9   1    1    0.7   0    0
2.8  3   1   -1   1.96    1       -2.8  -3   -1   -2.1  -3   -1
2.3  5   1   -1   -20.83  -1      0     0    0    -2.1  -3   -1
2.7  6   1   -1   -24.67  -1      0     0    0    -2.1  -3   -1

62 2.4 Repeat the iterations until there is no weight change (∆wn = 0): Iteration 3
Compute the output response for iteration 3, then correct the weights of the patterns that contain errors:
X1   X2  1   t    NET     f(NET)  ∆W1  ∆W2  ∆b   W1   W2   b
INITIALISATION                                    -2.1  -3
2.9  1   1   1    -10.09  -1      2.9  1    1    0.8  -2   0
2.8  3   1   -1   -3.76   -1      0    0    0    0.8  -2   0
2.3  5   1   -1   -8.16   -1      0    0    0    0.8  -2   0
2.7  6   1   -1   -9.84   -1      0    0    0    0.8  -2   0
For the GPA data the final separating pattern is 0.8x - 2y = 0, and the prediction for the new student can be computed with the last weights obtained:
V = X1*W1 + X2*W2 = 0.8 * 2.85 - 2 * 1 = 2.28 - 2 = 0.28
Y = sign(V) = sign(0.28) = 1 (Lulus)

63 4.4 Model Evaluation and Selection 63

64 Model Evaluation and Selection Evaluation metrics: How can we measure accuracy? Other metrics to consider? Use validation test set of class-labeled tuples instead of training set when assessing accuracy Methods for estimating a classifier’s accuracy: Holdout method, random subsampling Cross-validation Bootstrap Comparing classifiers: Confidence intervals Cost-benefit analysis and ROC Curves 64

65 Classifier Evaluation Metrics: Confusion Matrix
Given m classes, an entry CMi,j in a confusion matrix indicates the # of tuples in class i that were labeled by the classifier as class j
May have extra rows/columns to provide totals
Actual class\Predicted class   buy_computer = yes   buy_computer = no   Total
buy_computer = yes             6954                 46                  7000
buy_computer = no              412                  2588                3000
Total                          7366                 2634                10000
Actual class\Predicted class   C1                     ¬C1
C1                             True Positives (TP)    False Negatives (FN)
¬C1                            False Positives (FP)   True Negatives (TN)

66 Classifier Evaluation Metrics: Accuracy, Error Rate, Sensitivity and Specificity
Classifier accuracy, or recognition rate: percentage of test set tuples that are correctly classified: Accuracy = (TP + TN)/All
Error rate: 1 - accuracy, or Error rate = (FP + FN)/All
Class imbalance problem: One class may be rare, e.g., fraud or HIV-positive; significant majority of the negative class and minority of the positive class
Sensitivity: true positive recognition rate: Sensitivity = TP/P
Specificity: true negative recognition rate: Specificity = TN/N
A\P   C    ¬C
C     TP   FN   P
¬C    FP   TN   N
      P’   N’   All

67 Classifier Evaluation Metrics: Precision and Recall, and F-measures Precision: exactness, i.e. what % of tuples that the classifier labeled as positive are actually positive: precision = TP / (TP + FP) Recall: completeness, i.e. what % of positive tuples did the classifier label as positive? recall = TP / (TP + FN) Perfect score is 1.0 Inverse relationship between precision & recall F measure (F1 or F-score): harmonic mean of precision and recall: F = 2 × precision × recall / (precision + recall) Fβ: weighted measure of precision and recall: Fβ = (1 + β²) × precision × recall / (β² × precision + recall); assigns β times as much weight to recall as to precision

68 Classifier Evaluation Metrics: Example
Precision = 90/230 = 39.13%; Recall = 90/300 = 30.00%
Actual class\Predicted class   cancer = yes   cancer = no   Total   Recognition (%)
cancer = yes                   90             210           300     30.00 (sensitivity)
cancer = no                    140            9560          9700    98.56 (specificity)
Total                          230            9770          10000   96.40 (accuracy)
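
The same metrics can be computed from the confusion matrix with a few lines of Python (counts copied from the table above):

    TP, FN = 90, 210        # actual cancer = yes
    FP, TN = 140, 9560      # actual cancer = no

    accuracy = (TP + TN) / (TP + TN + FP + FN)
    sensitivity = TP / (TP + FN)          # recall on the positive (cancer) class
    specificity = TN / (TN + FP)
    precision = TP / (TP + FP)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)

    print(f"accuracy={accuracy:.2%} sensitivity={sensitivity:.2%} "
          f"specificity={specificity:.2%} precision={precision:.2%} F1={f1:.2%}")
    # sensitivity = 30.00%, specificity ~ 98.56%, precision ~ 39.13%, as quoted above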

69 Evaluating Classifier Accuracy: Holdout & Cross-Validation Methods Holdout method Given data is randomly partitioned into two independent sets Training set (e.g., 2/3) for model construction Test set (e.g., 1/3) for accuracy estimation Random sampling: a variation of holdout Repeat holdout k times, accuracy = avg. of the accuracies obtained Cross-validation (k-fold, where k = 10 is most popular) Randomly partition the data into k mutually exclusive subsets, each approximately equal size At i-th iteration, use D i as test set and others as training set Leave-one-out: k folds where k = # of tuples, for small sized data *Stratified cross-validation*: folds are stratified so that class dist. in each fold is approx. the same as that in the initial data 69
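
A minimal cross-validation sketch, assuming scikit-learn is available; the iris data and the decision tree are placeholders for any dataset and classifier:

    # 10-fold stratified cross-validation: each fold keeps roughly the
    # original class distribution, and the reported accuracy is the average
    # over the 10 test folds.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
    print(scores.mean())    # average accuracy over the 10 folds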

70 Evaluating Classifier Accuracy: Bootstrap Bootstrap: Works well with small data sets Samples the given training tuples uniformly with replacement, i.e., each time a tuple is selected, it is equally likely to be selected again and re-added to the training set Several bootstrap methods; a common one is the .632 bootstrap: 1. A data set with d tuples is sampled d times, with replacement, resulting in a training set of d samples 2. The data tuples that did not make it into the training set end up forming the test set. About 63.2% of the original data end up in the bootstrap, and the remaining 36.8% form the test set (since (1 - 1/d)^d ≈ e^-1 = 0.368) 3. Repeat the sampling procedure k times; the overall accuracy of the model averages, over the k samplings, 0.632 × Acc(Mi) on the test set plus 0.368 × Acc(Mi) on the training set
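
A small sketch of one bootstrap sampling round as described above; the dataset size of 1000 is arbitrary:

    import random

    def bootstrap_split(n_tuples, seed=None):
        """Sample indices with replacement for training; unused indices form the test set."""
        rng = random.Random(seed)
        train = [rng.randrange(n_tuples) for _ in range(n_tuples)]
        test = [i for i in range(n_tuples) if i not in set(train)]
        return train, test

    train, test = bootstrap_split(1000, seed=42)
    print(len(set(train)) / 1000, len(test) / 1000)   # roughly 0.632 vs 0.368 for large d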

71 Estimating Confidence Intervals: Classifier Models M1 vs. M2 Suppose we have two classifiers, M1 and M2; which one is better? Use 10-fold cross-validation to obtain the mean error rates err(M1) and err(M2) These mean error rates are just estimates of error on the true population of future data cases What if the difference between the two error rates is just attributed to chance? Use a test of statistical significance Obtain confidence limits for our error estimates

72 Estimating Confidence Intervals: Null Hypothesis 1. Perform 10-fold cross-validation 2. Assume samples follow a t distribution with k-1 degrees of freedom (here, k = 10) 3. Use the t-test (or Student’s t-test) 4. Null hypothesis: M1 & M2 are the same 5. If we can reject the null hypothesis, then we conclude that the difference between M1 & M2 is statistically significant, and choose the model with the lower error rate

73 Estimating Confidence Intervals: t-test If only 1 test set is available: pairwise comparison For the i-th round of 10-fold cross-validation, the same cross partitioning is used to obtain err(M1)i and err(M2)i Average over 10 rounds to get mean(d), where di = err(M1)i - err(M2)i The t-test computes the t-statistic with k-1 degrees of freedom: t = mean(d) / sqrt(var(d)/k) If two test sets are available: use the non-paired t-test, t = (mean err(M1) - mean err(M2)) / sqrt(var(M1)/k1 + var(M2)/k2), where k1 & k2 are the # of cross-validation samples used for M1 & M2, respectively

74 Estimating Confidence Intervals: Table for t-Distribution The t-distribution is symmetric Significance level, e.g., sig = 0.05 or 5%, means M1 & M2 are significantly different for 95% of the population Confidence limit, z = sig/2

75 Estimating Confidence Intervals: Statistical Significance Are M 1 & M 2 significantly different? 1.Compute t. Select significance level (e.g. sig = 5%) 2.Consult table for t-distribution: Find t value corresponding to k-1 degrees of freedom (here, 9) 3.t-distribution is symmetric: typically upper % points of distribution shown → look up value for confidence limit z=sig/2 (here, 0.025) 4.If t > z or t < -z, then t value lies in rejection region: 1.Reject null hypothesis that mean error rates of M 1 & M 2 are same 2.Conclude: statistically significant difference between M 1 & M 2 5.Otherwise, conclude that any difference is chance 75

76 Model Selection: ROC Curves ROC (Receiver Operating Characteristics) curves: for visual comparison of classification models Originated from signal detection theory Shows the trade-off between the true positive rate and the false positive rate The area under the ROC curve is a measure of the accuracy of the model Rank the test tuples in decreasing order: the one that is most likely to belong to the positive class appears at the top of the list The closer to the diagonal line (i.e., the closer the area is to 0.5), the less accurate is the model Vertical axis represents the true positive rate Horizontal axis rep. the false positive rate The plot also shows a diagonal line A model with perfect accuracy will have an area of 1.0 76

77 Issues Affecting Model Selection Accuracy classifier accuracy: predicting class label Speed time to construct the model (training time) time to use the model (classification/prediction time) Robustness: handling noise and missing values Scalability: efficiency in disk-resident databases Interpretability understanding and insight provided by the model Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules 77

78 4.5 Techniques to Improve Classification Accuracy: Ensemble Methods 78

79 Ensemble Methods: Increasing the Accuracy Ensemble methods Use a combination of models to increase accuracy Combine a series of k learned models, M1, M2, …, Mk, with the aim of creating an improved model M* Popular ensemble methods Bagging: averaging the prediction over a collection of classifiers Boosting: weighted vote with a collection of classifiers Ensemble: combining a set of heterogeneous classifiers 79

80 Bagging: Bootstrap Aggregation Analogy: Diagnosis based on multiple doctors’ majority vote Training: Given a set D of d tuples, at each iteration i, a training set Di of d tuples is sampled with replacement from D (i.e., bootstrap) A classifier model Mi is learned for each training set Di Classification: classify an unknown sample X Each classifier Mi returns its class prediction The bagged classifier M* counts the votes and assigns the class with the most votes to X Prediction: can be applied to the prediction of continuous values by taking the average value of each prediction for a given test tuple Accuracy: Often significantly better than a single classifier derived from D For noisy data: not considerably worse, more robust Proved improved accuracy in prediction
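
An illustrative sketch of bagging with scikit-learn decision trees as the base classifiers (scikit-learn also ships a ready-made BaggingClassifier; this version just makes the bootstrap-and-vote mechanism explicit):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    def bagging_predict(X_train, y_train, X_test, k=25, seed=0):
        """Train k trees on bootstrap samples of D and combine them by majority vote."""
        rng = np.random.default_rng(seed)
        votes = []
        for _ in range(k):
            idx = rng.integers(0, len(X_train), size=len(X_train))   # sample D_i with replacement
            tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
            votes.append(tree.predict(X_test))
        votes = np.array(votes)
        # Majority vote per test tuple.
        return np.array([np.bincount(col).argmax() for col in votes.T])

    X, y = load_iris(return_X_y=True)
    pred = bagging_predict(X[::2], y[::2], X[1::2])
    print((pred == y[1::2]).mean())    # accuracy of the bagged ensemble on the held-out half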

81 Boosting Analogy: Consult several doctors, based on a combination of weighted diagnoses—weight assigned based on the previous diagnosis accuracy How boosting works? 1.Weights are assigned to each training tuple 2.A series of k classifiers is iteratively learned 3.After a classifier M i is learned, the weights are updated to allow the subsequent classifier, M i+1, to pay more attention to the training tuples that were misclassified by M i 4.The final M* combines the votes of each individual classifier, where the weight of each classifier's vote is a function of its accuracy Boosting algorithm can be extended for numeric prediction Comparing with bagging: Boosting tends to have greater accuracy, but it also risks overfitting the model to misclassified data 81

82 Adaboost (Freund and Schapire, 1997) 1. Given a set of d class-labeled tuples, (X1, y1), …, (Xd, yd) 2. Initially, all the weights of tuples are set the same (1/d) 3. Generate k classifiers in k rounds. At round i: Tuples from D are sampled (with replacement) to form a training set Di of the same size Each tuple’s chance of being selected is based on its weight A classification model Mi is derived from Di Its error rate is calculated using Di as a test set If a tuple is misclassified, its weight is increased, otherwise it is decreased 4. Error rate: err(Xj) is the misclassification error of tuple Xj. Classifier Mi's error rate is the sum of the weights of the misclassified tuples: error(Mi) = Σj wj × err(Xj) 5. The weight of classifier Mi’s vote is log((1 - error(Mi)) / error(Mi))
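
A compact sketch of the AdaBoost weighting scheme on decision stumps; it follows the re-weighting and vote-weight formulas above, except that for simplicity the weighted error is measured on the full training set rather than on Di (the dataset is a placeholder):

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier

    def adaboost(X, y, rounds=10, seed=0):
        """Returns (models, vote_weights) for a boosted ensemble of depth-1 trees."""
        rng = np.random.default_rng(seed)
        d = len(X)
        w = np.full(d, 1.0 / d)                          # start with equal tuple weights
        models, alphas = [], []
        for _ in range(rounds):
            idx = rng.choice(d, size=d, replace=True, p=w)   # sample D_i according to the weights
            stump = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx])
            miss = stump.predict(X) != y
            err = w[miss].sum()                          # weighted error rate
            if err == 0 or err >= 0.5:                   # skip useless or perfect rounds
                continue
            models.append(stump)
            alphas.append(np.log((1 - err) / err))       # weight of this classifier's vote
            w[~miss] *= err / (1 - err)                  # shrink weights of correctly classified tuples
            w /= w.sum()                                 # renormalize
        return models, alphas

    X, y = load_breast_cancer(return_X_y=True)
    models, alphas = adaboost(X, y)
    votes = np.zeros(len(y))
    for m, a in zip(models, alphas):
        votes += a * np.where(m.predict(X) == 1, 1.0, -1.0)
    print(((votes > 0).astype(int) == y).mean())         # training accuracy of the boosted ensemble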

83 Random Forest ( Breiman 2001) Random Forest : Each classifier in the ensemble is a decision tree classifier and is generated using a random selection of attributes at each node to determine the split During classification, each tree votes and the most popular class is returned Two Methods to construct Random Forest: 1.Forest-RI (random input selection): Randomly select, at each node, F attributes as candidates for the split at the node. The CART methodology is used to grow the trees to maximum size 2.Forest-RC (random linear combinations): Creates new attributes (or features) that are a linear combination of the existing attributes (reduces the correlation between individual classifiers) Comparable in accuracy to Adaboost, but more robust to errors and outliers Insensitive to the number of attributes selected for consideration at each split, and faster than bagging or boosting 83

84 Classification of Class- Imbalanced Data Sets Class-imbalance problem: Rare positive example but numerous negative ones, e.g., medical diagnosis, fraud, oil-spill, fault, etc. Traditional methods assume a balanced distribution of classes and equal error costs: not suitable for class-imbalanced data Typical methods for imbalance data in 2-class classification: 1.Oversampling: re-sampling of data from positive class 2.Under-sampling: randomly eliminate tuples from negative class 3.Threshold-moving: moves the decision threshold, t, so that the rare class tuples are easier to classify, and hence, less chance of costly false negative errors 4.Ensemble techniques: Ensemble multiple classifiers introduced above Still difficult for class imbalance problem on multiclass tasks 84

85 Summary Classification is a form of data analysis that extracts models describing important data classes Effective and scalable methods have been developed for decision tree induction, Naive Bayesian classification, rule-based classification, and many other classification methods Evaluation metrics include: accuracy, sensitivity, specificity, precision, recall, F measure, and Fβ measure Stratified k-fold cross-validation is recommended for accuracy estimation. Bagging and boosting can be used to increase overall accuracy by learning and combining a series of individual models

86 Summary Significance tests and ROC curves are useful for model selection. There have been numerous comparisons of the different classification methods; the matter remains a research topic No single method has been found to be superior over all others for all data sets Issues such as accuracy, training time, robustness, scalability, and interpretability must be considered and can involve trade-offs, further complicating the quest for an overall superior method

87 References 1. Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, Third Edition, Elsevier, 2012 2. Ian H. Witten, Eibe Frank, Mark A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, Elsevier, 2011 3. Markus Hofmann and Ralf Klinkenberg, RapidMiner: Data Mining Use Cases and Business Analytics Applications, CRC Press Taylor & Francis Group, 2014 4. Daniel T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, 2005 5. Ethem Alpaydin, Introduction to Machine Learning, 3rd ed., MIT Press, 2014 6. Florin Gorunescu, Data Mining: Concepts, Models and Techniques, Springer, 2011 7. Oded Maimon and Lior Rokach, Data Mining and Knowledge Discovery Handbook, Second Edition, Springer, 2010 8. Warren Liao and Evangelos Triantaphyllou (eds.), Recent Advances in Data Mining of Enterprise Data: Algorithms and Applications, World Scientific, 2007
