Establishing the evaluation objectives
1. The question: why does the program need to be evaluated?
2. Selecting the evaluation design
3. Collecting the data
4. Analysing the data
5. Using the evaluation results
1. Three basic questions:
- Can the results of the evaluation influence decisions about the program?
  - Are decisions pending about continuation, modification, or termination?
  - Is there considerable support for the program by influential interest groups that would make termination highly unlikely?
- Can the evaluation be done in time to be useful?
  - Are the data available now? How long will it take to collect the data needed to answer key evaluation questions?
- Is the program significant enough to merit evaluation?
  - Does the program consume a large amount of resources?
  - Is program performance marginal? Are there problems with program delivery? Is program delivery highly inefficient?
  - Is this a pilot program with presumed potential for expansion?
2. Evaluation design
- Balancing the cost of the evaluation against its usefulness
- Conceptualizing how the questions are framed and devising strategies to address them
- Each design illuminates an important aspect of program reality
2. Selecting an evaluation and/or monitoring design
- Logic modeling
- Evaluability assessment
- Implementation evaluation
- Ongoing performance monitoring
- Quasi-experimental designs: before-after comparisons
- Randomized experiments, which provide the strongest evidence of program impact
- Meta-analyses, systematic reviews, and research syntheses, which estimate program impacts using as data the results of past evaluations
2.1. Logic modeling
The logic model serves as a useful advance organizer for designing evaluation and performance measurement, focusing on the important elements of the program and identifying what evaluation questions should be asked and why, and what measures of performance are key.
[Logic model diagram: resources → activities → outputs (the program delivered to customers) → short-term outcomes → intermediate outcomes → long-term outcome (the problem addressed), i.e. the results from the program; the arrows in the diagram carry the labels "how" and "why".]
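As an illustration (not part of the original slides), a logic model can be written down as a simple data structure so that each element suggests candidate performance measures. The program and all of its elements below are hypothetical:

    # A minimal sketch: a hypothetical family-support program's logic model,
    # stored as a dictionary so each element can be paired with a crude measure.
    logic_model = {
        "resources": ["budget", "staff", "training materials"],
        "activities": ["recruit families", "run parenting workshops"],
        "outputs": ["workshops delivered", "families reached"],
        "short_term_outcomes": ["improved parenting knowledge"],
        "intermediate_outcomes": ["changed parenting practices"],
        "long_term_outcomes": ["reduced child neglect (the problem addressed)"],
    }

    def candidate_measures(model):
        """Suggest one candidate performance measure per logic-model element."""
        for stage, elements in model.items():
            for element in elements:
                yield f"{stage}: count/level of '{element}' per reporting period"

    for measure in candidate_measures(logic_model):
        print(measure)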
2.2. Evaluability assessment
A form of market research that assesses the demand for information that might come from various possible evaluations, assesses the feasibility of various evaluations, and helps match evaluation supply with demand by helping select designs for evaluations that are feasible, relevant, and useful.
[Diagram: evaluators link program goals to performance indicators and data sources.]
2.2. Evaluability assessment
Four propositions:
1. Program goals and priority information needs are well defined.
2. Program goals are plausible.
3. Relevant performance data can be obtained at reasonable cost.
4. Intended users of the evaluation results have agreed on how they will use the information.
2.2. Evaluability assessment
Evaluability assessment begins the evaluation planning process by carrying out a preliminary evaluation of the program design. Evaluability assessment:
- Compares and contrasts the expectations and assumptions of those who have the most important influence over the program
- Compares those expectations with the reality of program activities underway and program outcomes that are occurring
- Determines whether relevant program performance information is likely to be obtainable at reasonable cost
- Explores which of the evaluations that could be conducted would be most useful
2.2. Evaluability assessment
Key steps:
- Step 1: Involve intended users
- Step 2: Clarify program intent
- Step 3: Explore program reality
- Step 4: Reach agreement on any needed changes in the program design
- Step 5: Explore alternative evaluation designs
- Step 6: Agree on evaluation priorities and intended uses of evaluation information
2.3. Implementation evaluation
The "black box" paradigm (input-output, before-after) versus the "transparent box" paradigm.
THE TRANSPARENT BOX PARADIGM
2.3. Implementation evaluation
Some of the most common methods: formative evaluation, process evaluation, descriptive evaluation, performance monitoring, and implementation analysis
2.3. Implementation evaluation
Four stages:
- Stage 1: Assess need and feasibility.
- Stage 2: Plan and design the program.
- Stage 3: Deliver the program.
- Stage 4: Improve the program.
2.3. Implementation evaluation
Examples of methods by stage:
- Stage 1 (assess need and feasibility of the program): implementation research review; key informant interviews about implementation factors
- Stage 2 (plan and design the program): program logic models; program templates; outcomes hierarchies
- Stage 3 (deliver the program): coverage analysis; component analysis; program records; case studies
- Stage 4 (improve the program): service delivery pathways; client feedback
2.4. Ongoing performance monitoring.
Performance monitoring systems are designed to track selected measures of program, agency, or system performance at regular time intervals and report them to managers and other specified audiences on an ongoing basis.
Types of performance measures: resources, outputs, outcomes, efficiency, productivity, service quality, customer satisfaction, and cost-effectiveness.
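A minimal sketch, with made-up figures (not from the slides), of how several of these measure types relate arithmetically: efficiency as cost per unit of output, productivity as output per unit of labour, and cost-effectiveness as cost per unit of outcome:

    # Hypothetical figures for one reporting period.
    resources_spent = 250_000.0   # program cost, in currency units
    staff_hours = 8_000.0         # labour input
    outputs = 1_200               # e.g., clients served
    outcomes = 300                # e.g., clients achieving the intended result

    efficiency = resources_spent / outputs           # cost per unit of output
    productivity = outputs / staff_hours             # outputs per staff hour
    cost_effectiveness = resources_spent / outcomes  # cost per unit of outcome

    print(f"Efficiency: {efficiency:.2f} per output")
    print(f"Productivity: {productivity:.3f} outputs per staff hour")
    print(f"Cost-effectiveness: {cost_effectiveness:.2f} per outcome")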
2.4. Ongoing performance monitoring.
1. Focusing on results
2. Data and measures
3. Examining performance data
4. Design and implementation
2.4. Ongoing performance monitoring.
Among the systematic approaches to identifying the kinds of results to be tracked by a performance monitoring system are:
- formal goals, objectives, standards, and targets;
- program logic models; and
- the balanced scorecard.
2.4. Ongoing performance monitoring.
EXAMPLES:
2.5 Quasi-experimentation
Before-after comparisons, in the notation of Campbell and Stanley (1966), where X represents the implementation of a treatment and each O a measurement:

    O   X   O
  (before)  (after)

Interrupted time series designs (the Os represent measurements at repeated times, X is the treatment intervention, and –X its removal):

    O O O O O  X  O O O O O
    O O O O O  X  O O O O O  –X  O O O O O
2.5 Quasi-experimentation
Interrupted time series designs with added design features (a dashed line separates nonequivalent groups):

a. Comparison group:
    O O O O O  X  O O O O O
    – – – – – – – – – – – –
    O O O O O     O O O O O

b. Outcome variables (a second, nonequivalent outcome B measured alongside A):
    OA OA OA OA OA  X  OA OA OA OA OA
    OB OB OB OB OB     OB OB OB OB OB

c. Combinations of design features (e.g., switching replications):
    O O O O O  X  O O O O O     O O O O O
    – – – – – – – – – – – – – – – – – – –
    O O O O O     O O O O O  X  O O O O O
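A minimal sketch, using simulated data (not from the slides), of how the basic interrupted time series design (O O O O O X O O O O O) can be analysed with a simple segmented regression that estimates the level shift at the intervention:

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(20.0)                    # repeated measurement occasions (the Os)
    after_x = (t >= 10).astype(float)      # X introduced after the 10th occasion
    y = 5.0 + 0.2 * t + 3.0 * after_x + rng.normal(0.0, 1.0, t.size)  # simulated series

    # Design matrix: intercept, underlying trend, post-intervention level shift.
    X = np.column_stack([np.ones_like(t), t, after_x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"Estimated level shift at the intervention: {beta[2]:.2f}")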
2.5 Quasi-experimentation
Nonequivalent group designs (a dashed line separates nonequivalent groups):

1. Measurement occasions:

    Posttest only:           X  O
                             – – –
                                O

    Pretest and posttest:    O  X  O
                             – – – –
                             O     O

    Double pretest:          O  O  X  O
                             – – – – – –
                             O  O     O

2. Comparison groups (e.g., successive grade cohorts):

    O(grade 1)  – – – – –  X  O(grade 2)  O(grade 3)
2.5 Quasi-experimentation
Nonequivalent group designs (continued):

3. Treatment interventions (e.g., enhanced X+ versus reduced X– treatment):

    O  X+  O
    – – – – –
    O  X   O
    O  X–  O

4. Outcome variables (each group measured on outcomes A and B):

    OA  X+  OA
    OB  X+  OB
    – – – – – –
    OA  X–  OA
    OB  X–  OB
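A minimal sketch, using simulated data (not from the slides), of how the pretest-posttest nonequivalent group design (O X O over O O) is often analysed as a difference-in-differences: the change in the treated group minus the change in the comparison group:

    import numpy as np

    rng = np.random.default_rng(1)
    # Simulated test scores; the treated group is assumed to gain about 5 extra points.
    pre_treated = rng.normal(50.0, 5.0, 200)
    post_treated = rng.normal(57.0, 5.0, 200)
    pre_control = rng.normal(48.0, 5.0, 200)
    post_control = rng.normal(50.0, 5.0, 200)

    did = (post_treated.mean() - pre_treated.mean()) - (
        post_control.mean() - pre_control.mean()
    )
    print(f"Difference-in-differences estimate: {did:.2f}")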
2.6 Randomized experiments
Why are experiments needed?
When is an experiment needed? An experiment makes it possible to draw causal inferences about the effectiveness of an intervention, program, or treatment. Experiments came into wide use in the social sciences in the late 1990s, having first been used in the medical sciences.

Why are experiments needed? They are needed when we want to measure the impact of a program. For example, when a new social program is implemented, policymakers need to know its impact: how many families were served, which services actually reached the community, how many families participated, and so on. This requires surveys of participants and of program providers.
How is an experimental evaluation carried out?
- Define the experimental contrast: for example, do children who attended preschool fare better than children who never attended preschool at all?
- Specify the unit of analysis (the unit of random assignment).
- Choose the appropriate statistical tools: the size of the treatment effect, the standard for judging success, the sample size, and subgroup analyses (a minimal sketch of the basic impact estimate follows this list).
- Plan how to handle non-participants, crossovers, and parties who oppose the study.
- Maintain the randomization.
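A minimal sketch, using simulated data (not from the slides), of the core mechanics referred to above: randomly assign the units (here, hypothetical children) to treatment and control, then compare mean outcomes to estimate the program's impact:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n_children = 400                                         # hypothetical units of random assignment
    treated = rng.permutation(n_children) < n_children // 2  # half assigned at random to treatment

    # Hypothetical school-readiness score with a true treatment effect of +2 points.
    score = rng.normal(60.0, 10.0, n_children) + 2.0 * treated

    impact = score[treated].mean() - score[~treated].mean()
    t_stat, p_value = stats.ttest_ind(score[treated], score[~treated])
    print(f"Estimated impact: {impact:.2f} points (p = {p_value:.3f})")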
How can researchers work most effectively with program staff?
- Explain the purpose, advantages, and disadvantages of the experiment to the staff who implement the program.
- Look for situations in which randomization is easy to carry out, and give program staff incentives to take part in the experimental study.
- Prepare and sign a written evaluation agreement.
- Safeguard the integrity of the randomization process.
- Assess the sustainability of the program or intervention being tested.
2.7 Meta-Analysis, Systematic Reviews, and Research Syntheses
Systematic reviews: the application of strategies that limit bias in the assembly, critical appraisal, and synthesis of the relevant studies on a specific topic. Meta-analysis may or may not be used as part of this process.

Meta-analysis: the statistical synthesis of data from separate but similar (comparable) studies, leading to a quantitative summary of the pooled results.

Research synthesis: an attempt to "integrate empirical research for the purpose of creating generalizations [in a way that] is initially nonjudgmental vis-à-vis the outcomes of the synthesis and intends to be exhaustive in the coverage of the research base" (1994, p. 5).
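A minimal sketch, with hypothetical effect sizes (not from the slides), of a fixed-effect meta-analysis in which each study's result is weighted by the inverse of its variance to produce the quantitative summary described above:

    import numpy as np

    # Hypothetical effect sizes and standard errors from comparable studies.
    effects = np.array([0.30, 0.45, 0.10, 0.25])
    std_errors = np.array([0.12, 0.20, 0.08, 0.15])

    weights = 1.0 / std_errors**2                          # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)   # pooled effect size
    pooled_se = np.sqrt(1.0 / np.sum(weights))             # standard error of the pooled effect

    print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
    print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")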
Thank you