Evidence-based medicine.

Model
Digital Document
Publisher
Florida Atlantic University
Description
Recent federal legislation has incentivized hospitals to focus on quality of patient
care. A primary metric of care quality is patient readmissions. Many methods exist to
statistically identify the patients most likely to require hospital readmission. Correctly
identifying high-risk patients allows hospitals to direct limited resources toward
mitigating readmissions. However, these methods have seen little practical adoption in
the clinical setting. This research identifies the open research questions that have
impeded widespread adoption of predictive hospital readmission systems.
Current systems often rely on structured data extracted from health records systems.
These data can be expensive and time-consuming to extract. Unstructured clinical notes,
by contrast, are agnostic to the underlying records system and would decouple the
predictive analytics from it. However, additional concerns in clinical natural language
processing must be addressed before such a system can be implemented. Current systems also often perform poorly on standard statistical measures.
The misclassification cost of patient readmissions has yet to be addressed, and a gap
remains between the metrics used to evaluate readmission systems and those most
appropriate in the clinical setting. Additionally, the research community has yet to
address data availability for localized model creation. Large research hospitals may
have sufficient data to build models, but many others do not, and simply combining data
from many hospitals often yields a model that performs worse than one built from a single
hospital's data.
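The evaluation gap described above can be illustrated with a small sketch. The cost values and classifiers below are hypothetical assumptions for illustration only: a missed readmission (false negative) is weighted more heavily than a false alarm, so two classifiers with identical accuracy can differ sharply under a clinically motivated cost metric.

```python
# Hypothetical illustration of the gap between standard accuracy and a
# cost-weighted evaluation of a readmission classifier.
# The cost weights below are assumptions, not values from the study.

def expected_cost(y_true, y_pred, cost_fn=5.0, cost_fp=1.0):
    """Average misclassification cost: a false negative (missed
    readmission) is weighted more heavily than a false positive."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 0:
            total += cost_fn
        elif t == 0 and p == 1:
            total += cost_fp
    return total / len(y_true)

def accuracy(y_true, y_pred):
    """Fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Two hypothetical classifiers with identical accuracy but different
# error profiles on an imbalanced sample (2 readmissions out of 10).
y_true  = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
model_a = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # misses both readmissions
model_b = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # catches both, two false alarms

print(accuracy(y_true, model_a), expected_cost(y_true, model_a))  # 0.8 1.0
print(accuracy(y_true, model_b), expected_cost(y_true, model_b))  # 0.8 0.2
```

Under accuracy alone the two models are indistinguishable; the cost-weighted view, which better reflects clinical priorities, clearly prefers the model that catches the readmissions.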
Current systems often produce a binary readmission classification. However, patients are
often readmitted for reasons that differ from those of the index admission, and there is
little research into predicting the primary cause of readmission. Furthermore, discovering
clinical terms that co-occur with the primary diagnosis has so far been attempted only
with simplistic methods.
This research addresses these concerns to increase adoption of predictive hospital
readmission systems.
Model
Digital Document
Publisher
Florida Atlantic University
Description
The purpose of this quantitative, quasi-experimental study was to examine the
effects of a standardized case conceptualization training workshop on 104 psychotherapy
practitioners recruited from the community. A secondary purpose was to examine the
relationship between participants’ attitudes about evidence-based practice and the effects
of the training. Participants attended two 3-hour training workshops, which taught the
integrative case conceptualization model developed by Sperry (2010b). Pre- and post-intervention
case conceptualization skills were assessed using the Case Conceptualization
Evaluation Form (CCEF) 2.0, an updated version of the instrument used in previous
studies. Additionally, participants’ views about case conceptualization were assessed
before and after training using the Views about Case Conceptualization (VACC)
instrument. Participants’ attitudes about evidence-based practice were also examined as a
possible mediating variable between training and effect. These attitudes were assessed
using the Evidence-Based Practice Attitudes Scale (EBPAS). The two workshops were separated by four weeks to assess whether initial training effects persisted
over time.
Change in case conceptualization skill was analyzed using repeated measures
ANOVA. Participants’ mean CCEF 2.0 scores significantly increased (p < .001) from
pre-test (M = 11.9; SD = 7.74) to post-test (M = 36.7; SD = 7.80) following the first
workshop. The second workshop took place four weeks later with 74 of the original 104
participants. It built on the content of the first workshop and introduced advanced
concepts such as client culture, strengths and protective factors, and predictive ability.
Participants’ mean CCEF 2.0 scores also significantly increased (p < .001) from pre-test
(M = 35.1; SD = 8.11) to post-test (M = 66.3; SD = 10.95) following the second
workshop. There was a small but statistically significant (p < .005) decrease of 1.5 points
in mean scores from the end of Workshop I to the beginning of Workshop II, indicating that
training effects deteriorate slowly over time. Participants' attitudes about evidence-based
practice and some demographic variables were significantly related to training effects.
Stepwise hierarchical regression analysis determined that these individual variables
account for various portions of the variance in CCEF 2.0 scores. This study’s theoretical,
practice, and research implications are discussed in detail.
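The pre/post comparison reported above can be sketched as a paired analysis. The scores below are synthetic (the study's actual CCEF 2.0 data are not available here), and a paired t statistic stands in for the repeated-measures ANOVA, which is equivalent with only two time points (F = t²). The assumed gain distribution is an illustration, not the study's result.

```python
import random
import statistics

# Hypothetical illustration of a paired pre/post analysis of a training
# effect, analogous to the repeated-measures design described above.
# All scores are synthetic; distribution parameters are assumptions
# loosely based on the reported Workshop I means.

random.seed(0)
n = 104
pre = [random.gauss(11.9, 7.74) for _ in range(n)]   # assumed pre-test scores
post = [p + random.gauss(24.8, 5.0) for p in pre]    # assumed training gain

# Paired analysis: test whether the mean within-person change differs from 0.
diffs = [b - a for a, b in zip(pre, post)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
t_stat = mean_diff / (sd_diff / n ** 0.5)  # paired t statistic, df = n - 1

print(f"mean gain = {mean_diff:.1f}, t({n - 1}) = {t_stat:.1f}")
```

With two measurement occasions, comparing this t statistic against the t distribution with n − 1 degrees of freedom gives the same p-value as the one-way repeated-measures ANOVA on the same data.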