EVIDENCE BASED PRACTICE

1 EVIDENCE BASED PRACTICE

2 Introduction EBP is the integration of the best research evidence, clinical expertise, experience, and the judgments obtained from the patient or client. Best research: research that is accurate and relevant, focused on the clinical problem. Patient judgment: evidence-based decisions also require the patient's or client's own assessment.

3 CRITICAL APPRAISAL
A critical, selective, and detailed assessment of something in order to analyze and evaluate it. A tool for examining each process, to consider whether that process is truly appropriate and whether better alternative data exist.

4 STEPS OF CRITICAL APPRAISAL
Prepare the critical-analysis session. Identify the processes that need to be improved.

5 BENEFITS OF CRITICAL APPRAISAL
Improves the capacity for critical analysis. Helps identify better alternatives. Raises many new questions. The information obtained is more detailed and better understood. Establishes the truth of a piece of information.

6 PURPOSE OF CRITICAL APPRAISAL
Relates to the purpose of the activity being examined, as well as its setting, its stages, and so on; there are several questions that must be thought through and answered.

7 POSITIVE EFFECTS Not easily influenced
Able to distinguish what is right from what is wrong. Able to make accurate decisions.

8 NEGATIVE EFFECTS It can take too long to review information and reach a decision

9 GOALS OF EBP To develop the ability to think critically
To produce accurate reasoning. To examine carefully so that the diagnosis is correct. To achieve recovery from the disease.

10 FACTORS THAT INFLUENCE EBP
Good research evidence. The clinical expertise of the physiotherapist. The patient's circumstances and expectations.

11 RCT A process for determining and evaluating the effectiveness of a drug, carried out by randomly assigning participants within the trial

12 STEPS OF AN RCT Obtain informed consent (agreement given after an explanation has been provided). Obtain ethical clearance for the research. Define the extent of the study population. Define the nature or quality of the population. Identify the sources of information about the population. Define the sample boundaries and the characteristics to be studied within the sample. Determine the sample size using appropriate formulas. Choose an appropriate sampling technique.
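To make the random-assignment idea behind these steps concrete, here is a minimal Python sketch of 1:1 allocation. It is purely illustrative and not part of the original presentation; the function and participant names are hypothetical, and real trials use formal allocation-concealment procedures beyond this.

```python
# Minimal illustration (hypothetical names): 1:1 random allocation of
# consenting participants to a treatment arm and a control arm.
import random

def randomize(participant_ids, seed=42):
    """Shuffle the participant IDs and split them 1:1 into two arms."""
    rng = random.Random(seed)   # fixed seed so the example allocation is reproducible
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

if __name__ == "__main__":
    consented = [f"P{i:03d}" for i in range(1, 21)]  # 20 consenting participants
    allocation = randomize(consented)
    print("treatment:", allocation["treatment"])
    print("control:  ", allocation["control"])
```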

13 RCTs are divided into three types Single blind: the subjects being tested do not know their allocation
Double blind: neither the subjects nor those delivering the intervention know. Triple blind: the subjects, those delivering the intervention, and the researchers all do not know.

14 Experimental Research
Example Randomized Clinical or Controlled Trial (RCT): In general, a clinical treatment, or experimental condition, is compared to a control condition, often a placebo but in some cases an alternative treatment, where subjects are randomly assigned to a group. (Portney and Watkins, 2000)

15 Experimental Research, continued…
Examples: Single-Subject Design: Variation of RCT, study of an individual over time with repeated measurement and determined design phases (Portney and Watkins, 2000) In an N=1 RCT, a single individual receives alternating treatment and placebo or alternative treatment, with the patient and the assessor blinded to intervention allocation. Objective or subjective measures are then recorded during the allocation periods. (Guyatt and Rennie, 2002)

16 Experimental Research, continued…
Examples: Sequential Clinical Trial: Variation of RCT, technique that allows for the continuous analysis of data as it becomes available, does not require a fixed sample Quasi-Experimental Research: Comparative research in which subjects cannot be randomly assigned to a group, or control groups cannot be used. Lower level of evidence than RCTs. (Portney and Watkins 2000)

17 Experimental Research, continued
Examples: Systematic Review: Combination of several studies with the same or similar variables, in which the studies are summarized and analyzed (Guyatt and Rennie, 2002) Meta-analysis: Statistical combination of the data from several studies with the same or similar variables, to determine an overall outcome (Portney and Watkins, 2000; Guyatt and Rennie, 2002)
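As a rough illustration of the "statistical combination" a meta-analysis performs, the sketch below pools three hypothetical study effect sizes with a fixed-effect, inverse-variance weighting. The numbers are invented, and real meta-analyses involve further steps (heterogeneity assessment, random-effects models) not shown here.

```python
# Minimal illustration: fixed-effect (inverse-variance weighted) pooling
# of study effect sizes. All numbers are hypothetical.

def fixed_effect_pool(effects, standard_errors):
    """Pool effect sizes, weighting each study by 1 / SE^2."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

effects = [0.30, 0.45, 0.20]   # hypothetical effect sizes from three studies
ses     = [0.10, 0.15, 0.08]   # their standard errors
estimate, se = fixed_effect_pool(effects, ses)
print(f"pooled effect = {estimate:.3f} (SE = {se:.3f})")
```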

18 Hierarchy of Evidence for Treatment Decisions:
Greatest (Top) to Least (Bottom):
1. N of 1 randomized controlled trial
2. Systematic review of randomized trials*
3. Single randomized trial
4. Systematic review of observational studies addressing patient-important outcomes
5. Single observational study addressing patient-important outcomes
6. Physiological studies (studies of blood pressure, cardiac output, exercise capacity, bone density, and so forth)
7. Unsystematic clinical observations
*A meta-analysis is often considered higher than a systematic review (Guyatt and Rennie, 2002)

19 Hierarchy of Evidence Ideally, evidence from individual studies would be compiled or synthesized into systematic reviews, with that information succinctly consolidated into easily and quickly read synopses. All relevant information would be integrated and linked to a specific patient's circumstances. The medical literature and its search tools are still far from this, but are moving toward that goal. Efforts include clinical prediction guidelines and APTA's emphasis on EBP. (Straus et al, 2005)

20 Variables Variables: a characteristic that can be manipulated or observed. Types of variables: independent or dependent; measurement scales/levels. Classification is useful for communication, so that readers are aware of the author's hypothesis of what situation or intervention (independent variable) will predict or cause a given outcome (dependent variable) (Portney and Watkins, 2000)

21 Variables: Independent or Dependent
Independent Variable: A variable that is manipulated or controlled by the researcher, presumed to cause or determine another (dependent) variable Dependent Variable: A response variable that is assumed to depend on or be caused by another (independent) variable (Portney and Watkins, 2000)

22 Variables: Measurement Scales
Useful to convey information to the reader about the type of variables observed Necessary to determine what statistical analysis approach should be used to examine relationships between variables From lowest to highest level of measurement, the scales are nominal, ordinal, interval, and ratio (Portney and Watkins, 2000)

23 Variables: Measurement Scales
Nominal Scales (Classification Scale) Data, with no quantitative value, are organized into categories Categories are based on some criterion Categories are mutually exclusive and exhaustive (each piece of data will be assigned to only one category) Only permissible mathematical operation is counting (such as the number of items within each category) Examples: Gender, Blood Type, Side of Hemiplegic Involvement (Portney and Watkins, 2000)

24 Variables: Measurement Scales
Ordinal Scales Data are organized into categories, which are rank-ordered on the basis of a defined characteristic or property Categories exhibit a “greater than-less than” relationship with each other and intervals between categories may not be consistent and may not be known (Portney and Watkins, 2000)

25 Variables: Measurement Scales
Ordinal Scales, continued If categories are labeled with a numerical value, the number does not represent a quantity, but only a relative position within a distribution (for example, manual muscle test grades of 0-5) Not appropriate to use arithmetic operations Examples: Pain Scales, Reported Sensation, Military Rank, Amount of Assistance Required (Independent, Minimal…) (Portney and Watkins, 2000)

26 Variables: Measurement Scales
Interval Scales Data are organized into categories, which are rank-ordered with known and equal intervals between units of measurement Not related to a true zero Data can be added or subtracted, but actual quantities and ratios cannot be interpreted, due to lack of a true zero Examples: Intelligence testing scores, temperature in degrees centigrade or Fahrenheit, calendar years in AD or BC (Portney and Watkins, 2000)

27 Variables: Measurement Scales
Ratio Scales An interval scale with an absolute zero point (so negative numbers are not possible) All mathematical and statistical operations are permissible Examples: time, distance, age, weight (Portney and Watkins, 2000)
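The four measurement scales just described differ mainly in which operations are meaningful on the data. The sketch below is an informal summary, not taken from the slides, and encodes one common reading of those rules; statisticians differ on edge cases such as averaging ordinal scores.

```python
# Minimal illustration: which descriptive operations are generally treated
# as meaningful at each level of measurement (nominal/ordinal/interval/ratio).

PERMISSIBLE = {
    "nominal":  {"count"},
    "ordinal":  {"count", "rank", "median"},
    "interval": {"count", "rank", "median", "add/subtract", "mean"},
    "ratio":    {"count", "rank", "median", "add/subtract", "mean", "ratio"},
}

def allowed(scale, operation):
    """Return True if the operation is usually considered meaningful for this scale."""
    return operation in PERMISSIBLE[scale]

print(allowed("ordinal", "mean"))   # False: a mean of manual muscle test grades is not meaningful
print(allowed("ratio", "ratio"))    # True: "twice as far" makes sense when there is a true zero
```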

28 Variables: Clinical Example
A study investigates how a strengthening program impacts a child's ability to independently walk. In this case, the strengthening program is the independent variable and the ability to independently walk is the dependent variable. Amount of assistance required (if ranked maximal, moderate, minimal, independent, rather than based on weight put on a crutch or other quantitative testing) would be an example of ordinal data. Studies often have more than one independent or dependent variable.
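One way to picture the independent and dependent variables in this example is as structured records, as in the hypothetical sketch below; the field names and data are invented for illustration only.

```python
# Minimal illustration (hypothetical fields and data): recording the group
# assignment (independent variable) and the ordinal assistance level
# (dependent variable) for each child in the strengthening-program example.
from dataclasses import dataclass

@dataclass
class Observation:
    child_id: str
    group: str        # independent variable: "strengthening" or "usual care"
    assistance: str   # dependent variable (ordinal): maximal/moderate/minimal/independent

ASSISTANCE_RANK = {"maximal": 0, "moderate": 1, "minimal": 2, "independent": 3}

data = [
    Observation("C01", "strengthening", "minimal"),
    Observation("C02", "usual care", "moderate"),
]

# Ordinal labels support rank comparisons, but not arithmetic on the labels themselves.
print(ASSISTANCE_RANK[data[0].assistance] > ASSISTANCE_RANK[data[1].assistance])  # True
```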

29 Measurement Validity Measurement Validity examines the “extent to which an instrument measures what it is intended to measure” (Portney and Watkins, 2000) For example, how accurate is a test or instrument at discriminating, evaluating, or predicting certain items?

30 Measurement Validity Validity of Diagnostic Tests
Based on the ability of a test to accurately determine the presence or absence of a condition Compare the test's results to known results, such as a gold standard. For example, a test determining balance difficulties likely to result in falls could be compared against the number of falls an individual actually experiences within a certain time frame. A clinical test for a torn ACL could be compared against an MRI. (Portney and Watkins, 2000)

31 Measurement Validity: Types
Face Validity: Examines if an instrument appears to measure what it is supposed to measure (weakest form of measurement validity) Content Validity: Examines if the items within an instrument adequately comprise the entire content of a given domain reported to be measured by the instrument Construct Validity: Examines if an instrument can measure an abstract concept (Portney and Watkins, 2000)

32 Measurement Validity: Types
Criterion-related Validity: Examines if the outcomes of the instrument can be used as a substitute measure for an established gold standard test.  Concurrent Validity: Examination of Criterion-related Validity, when the instrument being examined and the gold standard are compared at the same time  Predictive Validity: Examination of Criterion-related Validity, when the outcome of the instrument being examined can be used to predict a future outcome determined by a gold standard (Portney and Watkins, 2000)
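Concurrent criterion-related validity is often summarized with a correlation between the new instrument's scores and the gold standard obtained at the same time. The sketch below computes a Pearson correlation on invented scores; it is illustrative only, and the choice of coefficient (Pearson, Spearman, and so on) depends on the measurement scale of the data.

```python
# Minimal illustration: Pearson correlation between a new instrument and a
# concurrently administered gold standard. All scores are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

new_instrument = [12, 15, 9, 20, 17, 11]   # hypothetical scores from the new tool
gold_standard  = [14, 16, 10, 22, 18, 12]  # hypothetical concurrent gold-standard scores
print(f"concurrent validity r = {pearson(new_instrument, gold_standard):.2f}")
```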

33 Measurement Validity: Statistics
Ways to Evaluate the Usefulness of Clinical Screening or Diagnostic Tools: Sensitivity and Specificity; Positive and Negative Predictive Value; Positive and Negative Likelihood Ratios; Receiver Operating Characteristic (ROC) Curve. The above-mentioned statistical procedures are often used when researchers are introducing (and validating) the test. Hopefully the values from these operations can be found in the tool's testing manual or in articles evaluating the tool's validity within certain populations or settings.
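Most of the statistics listed above follow from a single 2x2 table comparing the test result against a gold standard (the ROC curve additionally varies the test's positive/negative cutoff). The sketch below applies the standard formulas to invented counts; it is an illustration, not a substitute for the values reported in a tool's manual or validation studies.

```python
# Minimal illustration: common diagnostic-accuracy statistics from a 2x2 table.
# tp/fp/fn/tn are counts against a gold standard; the numbers used are hypothetical.

def diagnostic_stats(tp, fp, fn, tn):
    """Return sensitivity, specificity, predictive values, and likelihood ratios."""
    sensitivity = tp / (tp + fn)              # proportion of true cases the test detects
    specificity = tn / (tn + fp)              # proportion of non-cases the test rules out
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

# Hypothetical 2x2 table: 40 true positives, 10 false positives,
# 5 false negatives, 45 true negatives.
for name, value in diagnostic_stats(40, 10, 5, 45).items():
    print(f"{name}: {value:.2f}")
```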

