Abstract: The increasing prevalence of diabetes and its related complications raises the need for effective methods to predict patient evolution and to stratify cohorts by their risk of developing diabetes-related complications. In this paper, we present a novel approach to the simulation of a type 1 diabetes population, based on Dynamic Bayesian Networks, which combines literature knowledge with data mining of a rich longitudinal cohort of type 1 diabetes patients, the DCCT/EDIC study. In particular, our approach simulates the patient health state and complications through discretized variables. Two types of models are presented: one entirely learned from the data and the other partially driven by literature-derived knowledge. The whole cohort is simulated for fifteen years, and the simulation error (i.e., for each variable, the percentage of patients predicted in the wrong state) is calculated every year on independent test data. For each variable, the proportion of the population predicted in the wrong state remains below 10% for both models over time. Furthermore, the distributions of real vs. simulated patients largely overlap. Thus, the proposed models are viable tools to support decision making in type 1 diabetes. (C) 2015 Elsevier Inc. All rights reserved.
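As an illustration of the evaluation metric described above (for each variable, the percentage of patients whose simulated discretized state differs from the observed state in a given follow-up year), the following Python sketch computes a per-variable, per-year simulation error. The data layout and names are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): per-variable simulation error,
# i.e. the percentage of patients whose simulated discretized state differs
# from the observed state in a given follow-up year.
from collections import defaultdict

def simulation_error(observed, simulated, variables, years):
    """observed, simulated: dicts keyed by (patient_id, year); each value is a
    dict mapping variable name -> discretized state."""
    errors = defaultdict(dict)  # errors[variable][year] = % of patients mispredicted
    patients = {pid for pid, _ in observed}
    for year in years:
        for var in variables:
            wrong = total = 0
            for pid in patients:
                key = (pid, year)
                if key in observed and key in simulated:
                    total += 1
                    if observed[key][var] != simulated[key][var]:
                        wrong += 1
            if total:
                errors[var][year] = 100.0 * wrong / total
    return errors
```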
Abstract: Today, advances in medical informatics brought on by the increasing availability of electronic medical records (EMR) have allowed for the proliferation of data-centric tools, especially in the context of personalized healthcare. While these tools have the potential to greatly improve the quality of patient care, the effective utilization of their techniques within clinical practice may encounter two significant challenges. First, the increasing amount of electronic data generated by clinical processes can impose scalability challenges on current computational tools, requiring parallel or distributed implementations for these tools to scale. Second, as technology becomes increasingly intertwined with clinical workflows, these tools must operate not only efficiently but also in an interpretable manner. Failure to identify areas of uncertainty or provide appropriate context creates a potentially complex situation for both physicians and patients. This paper presents a case study that first investigates the issues associated with scaling a disease prediction algorithm to the dataset sizes expected in large medical practices. It then analyzes the diagnosis predictions, attempting to provide contextual information that conveys the certainty of the results to a physician. Finally, it investigates latent demographic features of the patients themselves, which may have an impact on the accuracy of the diagnosis predictions. (C) 2015 Elsevier Inc. All rights reserved.
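The scaling challenge described above is commonly addressed by partitioning the patient cohort across worker processes and scoring each partition in parallel. The sketch below is a generic illustration of that pattern, not the case study's implementation; the prediction function and cohort layout are assumptions for the example.

```python
# Generic illustration of data-parallel scoring of a disease prediction model
# over a large patient cohort; not the case study's implementation.
from multiprocessing import Pool

def score_chunk(chunk):
    # `predict` maps a feature vector to a risk score; it is a stand-in for
    # whatever prediction model is being scaled, and must be picklable
    # (e.g., a module-level function) to cross process boundaries.
    predict, patients = chunk
    return [(patient_id, predict(features)) for patient_id, features in patients]

def score_cohort(predict, cohort, n_workers=8, chunk_size=10_000):
    """cohort: list of (patient_id, feature_vector) pairs."""
    chunks = [(predict, cohort[i:i + chunk_size])
              for i in range(0, len(cohort), chunk_size)]
    with Pool(n_workers) as pool:
        results = pool.map(score_chunk, chunks)
    return [scored for chunk in results for scored in chunk]
```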
Abstract: Objective: The purpose of this study was to describe a workflow analysis approach and apply it in emergency departments (EDs) using data extracted from the electronic health record (EHR) system.
Abstract: Objectives: The increase in potential medical demand in China has threatened population health, medical equity, and the accessibility of medical services, and has impeded the development of the Chinese health delivery system. This study aims to understand the mechanism behind the increasing potential medical demand and to identify possible solutions.
Abstract: HL7 (Health Level 7) International is an organization that defines health information standards. Most HL7 domain information models have been designed with a proprietary graphical language whose models are based on the HL7 metamodel. Many researchers have considered using HL7 in the MDE (Model-Driven Engineering) context. A limitation has been identified: all MDE tools support UML (Unified Modeling Language), which is a standard modeling language, but most do not support the HL7 proprietary modeling language. We want to support software engineers without HL7 experience, so that they can model real-world problems by defining system requirements in UML that are transparently compliant with HL7 domain models. The objective of the present research is to connect HL7 with software analysis using a generic model-based approach. This paper introduces a first approach to an HL7 MDE solution that considers the MIF (Model Interchange Format) metamodel proposed by HL7, making use of a plug-in developed in the EA (Enterprise Architect) tool. (C) 2015 Elsevier Inc. All rights reserved.
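To give a flavor of the kind of mapping such a model-driven approach performs, the sketch below converts a simplified UML class description into a simplified HL7-style class element. The element structures are hypothetical illustrations only, not the MIF metamodel or the plug-in's actual transformation.

```python
# Hypothetical illustration of a UML-class to HL7-style class mapping step,
# the kind of transformation an MDE tool chain might perform. The structures
# below are simplified stand-ins, not the actual MIF metamodel.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class UmlClass:
    name: str
    attributes: List[Tuple[str, str]]  # (attribute name, UML type)

@dataclass
class Hl7Class:
    name: str
    attributes: List[Dict[str, str]] = field(default_factory=list)

def uml_to_hl7(uml: UmlClass) -> Hl7Class:
    hl7 = Hl7Class(name=uml.name)
    for attr_name, attr_type in uml.attributes:
        hl7.attributes.append({"name": attr_name, "datatype": attr_type})
    return hl7

# Example: a hypothetical Patient class defined in UML by a software engineer.
patient = UmlClass("Patient", [("id", "String"), ("birthDate", "Date")])
print(uml_to_hl7(patient))
```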
Abstract: Background: Traditional approaches to pharmacovigilance center on signal detection from spontaneous reports, e.g., the U.S. Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS). In order to enrich the scientific evidence and enhance the detection of emerging adverse drug events that can lead to unintended harmful outcomes, pharmacovigilance activities need to evolve to encompass novel complementary data streams, for example the biomedical literature available through MEDLINE.
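The abstract does not specify the detection statistic used, but a common disproportionality measure in spontaneous-report signal detection is the proportional reporting ratio (PRR); the sketch below is purely an illustration of how a drug-event signal can be quantified from a 2x2 contingency table of reports.

```python
# Illustration only: the proportional reporting ratio (PRR), a standard
# disproportionality statistic for spontaneous-report signal detection.
#   a: reports mentioning the drug and the event of interest
#   b: reports mentioning the drug but not the event
#   c: reports not mentioning the drug but mentioning the event
#   d: reports mentioning neither the drug nor the event
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    if a + b == 0 or c + d == 0 or c == 0:
        raise ValueError("contingency table too sparse to compute a PRR")
    return (a / (a + b)) / (c / (c + d))

# Example: 20 event reports among 1,000 reports for the drug, versus
# 100 event reports among 50,000 reports for all other drugs -> PRR = 10.0
print(proportional_reporting_ratio(20, 980, 100, 49_900))
```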
Abstract: Objective: Literature database search is a crucial step in the development of clinical practice guidelines and systematic reviews. Even in the age of information technology, the process of literature search is still conducted manually; it is therefore costly, slow, and subject to human error. In this research, we sought to improve the traditional search process using innovative query expansion and citation ranking approaches.
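The paper's specific expansion and ranking algorithms are not described in the abstract; as a generic illustration of query expansion, the sketch below broadens a Boolean query with synonyms drawn from a hypothetical term map.

```python
# Generic illustration of synonym-based query expansion for a Boolean
# literature search; the synonym map and terms are hypothetical and are not
# the method proposed in the paper.
SYNONYMS = {
    "myocardial infarction": ["heart attack", "MI"],
    "hypertension": ["high blood pressure"],
}

def expand_query(terms):
    clauses = []
    for term in terms:
        variants = [term] + SYNONYMS.get(term.lower(), [])
        clauses.append("(" + " OR ".join(f'"{v}"' for v in variants) + ")")
    return " AND ".join(clauses)

print(expand_query(["myocardial infarction", "hypertension"]))
# ("myocardial infarction" OR "heart attack" OR "MI") AND ("hypertension" OR "high blood pressure")
```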
Abstract: National syndromic surveillance systems require optimal anomaly detection methods. For method performance comparison, we injected multi-day signals stochastically drawn from lognormal distributions into time series of aggregated daily visit counts from the U.S. Centers for Disease Control and Prevention's BioSense syndromic surveillance system. The time series corresponded to three different syndrome groups: rash, upper respiratory infection, and gastrointestinal illness. We included a sample of facilities with data reported every day and with median daily syndromic counts of at least one over the entire study period. We compared seven anomaly detection methods: five control chart adaptations, a linear regression model, and a Poisson regression model. We assessed the sensitivity and timeliness of these methods for the detection of multi-day signals. At daily background alert rates of 1% and 2%, sensitivity ranged from 24% to 77% and timeliness from 3.3 to 6.1 days. Overall sensitivity and timeliness increased substantially after stratification by weekday versus weekend and holiday. Adjusting the baseline syndromic count by the total number of facility visits consistently improved sensitivity and timeliness even without stratification, and performed better still when combined with stratification. The daily syndrome/total-visit proportion method did not improve performance. In general, alerting based on linear regression outperformed control-chart-based methods. A Poisson regression model obtained the best sensitivity in the series with high-count data. Published by Elsevier Inc.
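As a concrete example of a control-chart style detector of the kind compared above, the sketch below flags days on which the observed count exceeds a recent baseline mean by a chosen number of baseline standard deviations. It is a generic illustration, not necessarily one of the five chart adaptations evaluated in the study, and the window lengths and threshold are arbitrary.

```python
# Generic control-chart style detector for a daily syndromic count series:
# flag day t when its count exceeds the mean of a preceding baseline window
# by more than `threshold` baseline standard deviations. The baseline window
# is separated from day t by a short guard band so that an emerging outbreak
# does not inflate the baseline. Parameters here are illustrative only.
from statistics import mean, stdev

def control_chart_alerts(counts, baseline_len=28, guard=2, threshold=3.0):
    alerts = []
    for t in range(baseline_len + guard, len(counts)):
        baseline = counts[t - baseline_len - guard : t - guard]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and counts[t] > mu + threshold * sigma:
            alerts.append(t)  # index of the flagged day
    return alerts
```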
Abstract: The National Cancer Institute (NCI) Cancer Biomedical Informatics Grid (caBIG) program established standards and best practices for biorepository data management by creating an infrastructure to propagate biospecimen resource sharing while maintaining data integrity and security. caTissue Suite, a biospecimen data management software tool, has evolved from this effort. More recently, caTissue Suite has continued to evolve as an open-source initiative known as OpenSpecimen. The essential functionality of OpenSpecimen includes the capture and representation of highly granular, hierarchically structured data for biospecimen processing, quality assurance, tracking, and annotation. Ideal for multi-user and multi-site biorepository environments, OpenSpecimen permits role-based access to specific sets of data operations through a user interface designed to accommodate varying workflows and unique user needs. The software is interoperable, both syntactically and semantically, with an array of other bioinformatics tools through its integration of standard vocabularies, thus enabling research involving biospecimens. End-users are encouraged to share their day-to-day experiences of working with the application, providing the community board with insight into the needs and limitations that must be addressed. Users are also asked to review and validate new features through group testing environments and mock screens. Through this user interaction, application flexibility and interoperability have been recognized as essential development focuses for accommodating diverse adoption scenarios and biobanking workflows and for catalyzing advances in biomedical research and operations. Given the diversity of biobanking practices and workforce roles, consistent efforts have been made to maintain robust data granularity while supporting user accessibility, data discoverability, and security within and across applications, and lowering the learning curve for using OpenSpecimen. Iterative development and testing cycles provide continuous maintenance and up-to-date capabilities for this freely available, open-access, web-based software application, which is globally adopted at over 25 institutions. (C) 2015 Elsevier Inc. All rights reserved.