“Incentive Design for Human and Artificial Agents”
Friday, December 13 | 1 – 1:30 p.m.
Sera Linardi, Associate Professor, Graduate School of Public and International Affairs
Abstract: In applied AI we often face the problem of understanding how the incentives of participants (human or artificial agents) can be used to improve the behavior of the overall system. This relates to a field in economic theory called Mechanism Design, which is “reverse game theory”: instead of starting with a game and solving for the outcome, we start from a desired outcome (for example, social welfare maximization) and design an institution that would accomplish it. The mechanism then takes into account strategic participants in allocating resources or pricing goods and services. Advances in mechanism design have implications for many areas in computing, including search and recommendation. Similarly, progress can be made on many fundamental questions in Mechanism Design by adopting computational approaches. For example, the focus on complexity and approximation and the use of simulations provide a necessary bridge between mechanism design in theory and its implementation in practice.
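The “reverse game theory” framing has a textbook instance that may help ground it (this example is illustrative, not from the talk): the sealed-bid second-price (Vickrey) auction, a mechanism designed so that truthful bidding is every participant's dominant strategy.

```python
def vickrey_auction(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> bid amount (needs >= 2 bidders).
    Returns (winner, price): the highest bidder wins but pays the
    second-highest bid, which is what makes truthful bidding a
    dominant strategy -- the incentive property mechanism design
    starts from.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # second-highest bid
    return winner, price

winner, price = vickrey_auction({"alice": 10, "bob": 7, "carol": 5})
# alice wins and pays 7 (bob's bid), so overbidding gains her nothing
```

Here the desired outcome (allocate to the bidder who values the item most) is fixed first, and the pricing rule is engineered so that strategic participants reveal their true values.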
“NETLAND: Visualization of the Geometry and Dynamics of Hidden Unit Space”
Friday, December 13 | 12:30 – 1 p.m.
Paul Munro, Associate Professor, Department of Informatics and Networked Systems
Abstract: Classification by multilayer neural networks depends on the existence of appropriate features in the early hidden layers, so that the representations are linearly separable in the penultimate layer. By using hidden layers with just two or three units, the representational structure of the intermediate layers can be visualized. Time courses of the evolution of the hidden layer representations are visualized as animations. The visualizations reveal a tendency for the hidden unit image of the input space to collapse into a non-linear (warped) manifold of lower dimensionality; i.e., the weight matrix is (nearly) singular. A task that is not linearly separable in the input space is rendered linearly separable by the warping of the manifold. Because of the matrix singularity, deeper layers of the network do not have sufficient information to reconstruct the input in its original form. Thus, the deep layers are incapable of discriminating certain distinct stimulus patterns.
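A minimal sketch of the kind of setup described (illustrative, not the NETLAND code): train a network with a two-unit hidden layer on XOR, a task that is not linearly separable in the input space, and record the hidden-layer image of the four input patterns every epoch — exactly the per-frame data such an animation would plot.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)   # 2-unit hidden layer
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

snapshots = []  # hidden-unit image of the input space, one frame per epoch
for epoch in range(5000):
    H = sigmoid(X @ W1 + b1)   # 4 input patterns -> 4 points in 2-D hidden space
    out = sigmoid(H @ W2 + b2)
    snapshots.append(H.copy())
    # plain backpropagation with squared error
    d_out = (out - y) * out * (1 - out)
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_H;   b1 -= 0.5 * d_H.sum(axis=0)

# Each snapshot is a 4x2 array: the four inputs mapped into the 2-D
# hidden space, ready to scatter-plot frame by frame as an animation.
```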
“Breast Cancer Risk Assessment via Symmetrical and Temporal Changes”
Friday, November 22 | 1 – 1:30 p.m.
Saba Dadsetan, Graduate Student, Intelligent Systems Program
Abstract: In the United States, a woman is diagnosed with breast cancer every two minutes on average. Identifying significant risk factors for developing breast cancer has long been an active research area and can aid prevention and early detection. Mammograms efficiently allow physicians to detect suspicious fibroglandular tissue and, if necessary, order the next screening step, such as a biopsy. Developing asymmetry is a finding that has recently caught the attention of physicians for diagnosing malignant tissue: by comparing current and prior mammograms, they can locate abnormalities in the tissue. Although some studies have used prior mammograms to predict cancer 1 to 5 years in advance, no specific investigation has examined the association between developing asymmetry and breast cancer risk prediction. Inspired by how radiologists use prior mammograms, along with advances in video temporal analysis, we propose an end-to-end deep learning method to investigate this relationship. Our architecture computes the difference between the left and right breasts in prior mammograms and then finds the temporal correlation between these differences to assess the cancer risk of a specific patient. Final results, compared against previous risk assessment methods, demonstrate an association between developing asymmetry and cancer status (normal vs. cancerous patients).
“Dynamic Knowledge Modeling for Adaptive Textbooks”
Friday, November 22 | 12:30 – 1 p.m.
Khushboo Thaker, Graduate Student, Intelligent Systems Program
Abstract: Adaptive textbooks use student interaction data to infer the current state of student knowledge and recommend the most relevant learning materials. A challenge of student modeling for adaptive textbooks is that conventional student models are built from performance data (quizzes or problem-solving), whereas students’ interactions with online textbooks produce a large volume of reading data but only a limited amount of performance data. In this work, we propose a dynamic student knowledge modeling framework for online adaptive textbooks that uses student reading data combined with the few available quiz activities to infer the students’ current state of knowledge. Our evaluation shows that the proposed model learns a more accurate student knowledge state than Knowledge Tracing.
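For context, the Knowledge Tracing baseline mentioned above maintains a per-skill probability that the student knows the skill and updates it from quiz outcomes. A minimal sketch of the standard Bayesian Knowledge Tracing update (the parameter values here are illustrative, not from the paper):

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One quiz observation updates P(student knows the skill).

    slip:  P(wrong answer | skill known)
    guess: P(right answer | skill unknown)
    learn: P(skill acquired during this step)
    """
    if correct:
        posterior = (p_know * (1 - slip)) / (
            p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = (p_know * slip) / (
            p_know * slip + (1 - p_know) * (1 - guess))
    # the student may also learn the skill while working on the step
    return posterior + (1 - posterior) * learn

p = 0.3  # prior probability the skill is known
for outcome in [True, True, False]:
    p = bkt_update(p, outcome)
```

The proposed framework differs in that most observations are reading events rather than graded answers, which is exactly why a quiz-only update rule like this one is a limited fit for textbook data.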
“Decomposing Response Time to Give Better Predictions of Student Performance”
Friday, November 8 | 1 – 1:30 p.m.
Deniz Somnez Unal, Graduate Student, Intelligent Systems Program
Abstract: In educational systems, response time is defined as the time between when students see a problem step and when they first react to it. Response time has been used as an important predictor of student performance in various kinds of models. Much of this work is based on the hypothesis that if a student responds to a step too quickly or too slowly, they are likely to be unsuccessful on that step. Less explored, however, is the possibility that students cycle through different kinds of states within a single response time, and that the time spent in those states could have separate effects on performance. We hypothesize that identifying the different states and estimating how much time is devoted to each within a single response time period will yield more accurate predictions of student performance. In this talk, I present our methods for decomposing response time into meaningful sub-categories in a reading application, and a predictive model that uses the derived sub-categories of response time as predictors instead of raw response time.
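To make the decomposition idea concrete, here is a hypothetical sketch (the paper's actual sub-categories, event types, and thresholds are not given in the abstract): interaction events between seeing a step and responding to it are binned into coarse states whose durations become separate predictors in place of one raw response time.

```python
def decompose_response_time(events):
    """Split the interval between seeing a step and responding to it.

    events: list of (timestamp_seconds, kind) tuples, where kind is
    e.g. "step_shown", "scroll", "hover", "response".  The time between
    consecutive events is attributed to a coarse (hypothetical) state.
    """
    parts = {"reading": 0.0, "interacting": 0.0}
    for (t0, kind0), (t1, _) in zip(events, events[1:]):
        state = "interacting" if kind0 in ("scroll", "hover") else "reading"
        parts[state] += t1 - t0
    parts["total"] = events[-1][0] - events[0][0]  # raw response time
    return parts

log = [(0.0, "step_shown"), (4.0, "scroll"), (6.0, "hover"), (9.0, "response")]
features = decompose_response_time(log)
# {'reading': 4.0, 'interacting': 5.0, 'total': 9.0}
```

The returned sub-durations, rather than `total` alone, would then be fed to the predictive model.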
“What do you Think of Vaping? Machine Learning Methods for Twitter Stance Detection”
Friday, November 8 | 12:30 – 1 p.m.
Sanya Taneja, Graduate Student, Intelligent Systems Program
Abstract: Electronic nicotine delivery systems (ENDS), commonly known as e-cigarettes or vape devices, are widely used for smoking cessation. However, as evidenced in current news media, vaping may pose serious health hazards. The popularity of vape devices, particularly Juul, among young adults has given rise to a vaping epidemic, calling for a need to measure and understand the risks, behaviors and outcomes related to vaping. The social media platform Twitter has a large user base and has been successfully used in the past for tracking health conditions and outbreaks. This talk presents the work done by the presenter as a Research Assistant at the Center for Research on Media, Technology and Health (CRMTH) at the School of Medicine. We compare machine learning methods for classification of Twitter messages based on relevance to vaping, promotional content and user stance toward vaping. Our goal is to use Twitter as a surveillance system to identify changes in content over time as well as assess unique user perspectives related to e-cigarettes.
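As a concrete (and deliberately tiny) illustration of the classification task, here is a minimal multinomial Naive Bayes stance classifier. The example tweets, labels, and the choice of Naive Bayes are all illustrative; they are not the study's data or necessarily among its compared methods.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Bag-of-words multinomial Naive Bayes with Laplace smoothing."""

    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc):
        def log_prob(label):
            total = sum(self.word_counts[label].values())
            lp = math.log(self.label_counts[label])
            for w in doc.lower().split():
                # add-one smoothing over the shared vocabulary
                lp += math.log((self.word_counts[label][w] + 1)
                               / (total + len(self.vocab)))
            return lp
        return max(self.label_counts, key=log_prob)

clf = NaiveBayes().fit(
    ["juul flavors are amazing", "vaping helped me quit smoking",
     "vaping destroyed my lungs", "e-cigs are a health hazard"],
    ["pro", "pro", "anti", "anti"])
print(clf.predict("vaping is a hazard"))  # prints "anti"
```

A real surveillance pipeline would layer relevance and promotional-content filters before stance classification, as the abstract describes.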
“Improving Causal Discovery with Data Integration”
Friday, October 25 | 12:30 – 1 p.m.
Sofia Triantafillou, Assistant Professor, Department of Biomedical Informatics
Abstract: Causal modeling is important in biomedicine because it describes a system’s behavior not only under observation but also under intervention. Graphical causal models connect causal properties of a system to probabilistic properties under observation and intervention. My research integratively analyzes data sets collected under different experimental conditions, possibly measuring different variables, to reverse-engineer causal models that are consistent with all of the data. Integrating multiple datasets improves causal discovery and leads to novel inferences.
Bio: Sofia Triantafillou is an Assistant Professor in the Department of Biomedical Informatics at the University of Pittsburgh. After earning her PhD at the University of Crete, she was a postdoc at Northwestern University and the University of Pennsylvania. Her research focuses on designing causal discovery algorithms and applying them to make novel inferences in biomedicine.
“Exploring Middle School Mathematics Help-Giving Behavior and Modeling Factors to Provide Collaboration Support in a Cross-Platform Environment”
Friday, September 27 | 1 – 1:30 p.m.
Ishrat Ahmed, Graduate Student, Department of Computer Science
Abstract: Existing adaptive collaborative learning support attempts to improve student learning outcomes by providing the personalized support students need to collaborate more effectively. These systems have focused on a single platform. However, recent technology-supported collaborative learning platforms allow students to collaborate in different contexts: computer-supported classroom environments, network-based online learning environments, or virtual learning environments with pedagogical agents. Our goal is to better understand how middle school students participate in collaborative behaviors across different platforms, i.e., an interactive digital textbook, an online asynchronous environment, and a teachable agent. We focus on a specific type of collaboration: help-giving. We use a Design-Based Research approach to design, develop, and implement our cross-platform technology in a middle school math classroom context. We also conducted semi-structured interviews to understand students’ perceptions of help-giving on different platforms. Our initial analysis shows that students’ help-giving behavior across the platforms is influenced by a combination of motivational, contextual, and opportunity factors rather than solely determined by their individual differences. Our ultimate goal is to build a model that will predict student help-giving quality and improve student collaboration across the platforms.
“Expanding the Reach of AIED Systems: Adapting to Social Learning Processes”
Friday, September 27 | 12:30 – 1 p.m.
Erin Walker, Associate Professor, Department of Computer Science
Abstract: Artificial intelligence in education (AIED) systems are technologies that use artificial intelligence and machine learning to understand how students learn, and adapt the learning experience to each individual's cognitive, metacognitive, and motivational needs. In some cases, these systems have been demonstrated to be nearly as effective as human tutors and better than traditional forms of classroom instruction. However, they have primarily been used to improve individual problem-solving in well-defined domains such as math and science, through the explicit modeling and support of domain-related procedures and knowledge. My research explores ways to expand these systems to a broader set of learning processes and outcomes. In this talk, I present examples of novel applications of AIED systems, including EMBRACE, an intelligent tutoring system drawing from an embodied cognition theory of reading comprehension, and Nico, a teachable robot for mathematics learning. I discuss the implications of this work for the development of AIED systems that can better adapt to a student's social learning context.
“Predicting Falls in the Nursing Home: A Comparison of a Long Short Term Memory (LSTM) Recurrent Neural Network with Other Methods”
Friday, September 13 | 12:30 – 1:30 p.m.
Richard Boyce, Associate Professor, Department of Biomedical Informatics
Abstract: Fall events are one of the most common and dangerous adverse events that occur in the nursing home (NH) setting. The mean incidence of falls is estimated to be 1.7 falls per bed per year, with 10–25% resulting in fracture or laceration. A validated model that uses NH data to predict the probability of a NH patient falling in the near future would be a valuable component of an overall safety monitoring system that provides clinicians with informative, patient-specific, and actionable alerts.
The purpose of this study was to develop and validate such a model using patient-specific factors recorded in electronic data available in most of the 15,000 nursing homes in the United States: specifically, drug dispensing data and Long-Term Care Minimum Data Set assessments. Our goal was to develop a model that uses these data to accurately predict the probability of a given patient experiencing a fall up to three months after qualified staff complete a Minimum Data Set assessment for that patient. In this study we compared a variety of machine learning methods including logistic regression, support vector machines, tree algorithms, and Long Short Term Memory (LSTM) Recurrent Neural Networks. Our hypothesis was that an LSTM model would significantly outperform a broad range of other methods because LSTM networks can model complex relationships that unfold over time.
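The comparison implies two views of the same records, sketched below with invented field names and values: sequence models such as an LSTM consume each resident's assessments in temporal order, while logistic regression, SVMs, and tree methods see each assessment as an independent row and so discard the temporal structure.

```python
from collections import defaultdict

# (resident_id, assessment_date, feature_vector, fell_within_90_days)
# -- all values are invented for illustration
assessments = [
    ("r1", "2019-01-05", [0.2, 1.0, 3.0], 0),
    ("r1", "2019-04-02", [0.4, 1.0, 2.0], 1),
    ("r2", "2019-02-10", [0.1, 0.0, 5.0], 0),
]

# Sequence view for the LSTM: ordered assessments per resident,
# with a fall label at each step.
sequences = defaultdict(list)
for rid, date, x, label in sorted(assessments):
    sequences[rid].append((x, label))

# Flat view for the baseline models: one independent row per assessment.
X = [x for _, _, x, _ in assessments]
y = [label for _, _, _, label in assessments]
```

The hypothesis in the abstract amounts to a claim that the per-resident sequences carry signal that the flat rows lose.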
Faculty Research Presentations
Friday, August 30 | 1 – 1:30 p.m.
Various ISP Faculty
“Improving Sentence Retrieval from Case Law for Statutory Interpretation”
Friday, August 30 | 12:30 – 1 p.m.
Jaromir Savelka, Graduate Student, Intelligent Systems Program
Abstract: Statutory texts employ vague terms that are difficult to understand. Here we study and evaluate methods for retrieving useful sentences from court opinions that elaborate on the meaning of a vague statutory term. Retrieving sentences instead of whole cases may spare a user the need to review long lists of cases in search of useful explanations. We assembled a data set of case law sentences that were responses to statutory queries and labeled them in terms of their usefulness for interpretation. We have run a series of experiments on this data set, which we have made public, assessing different techniques to solve the task. These include techniques that measure the similarity between the sentence and the query, utilize the context of a sentence, expand queries, or assess the novelty of a sentence with respect to a statutory provision from which the interpreted term comes. Based on a detailed error analysis we propose a specialized sentence retrieval framework that mitigates the challenges of retrieving case law sentences for interpreting statutory terms. The results of evaluating different implementations of the framework are promising.
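The simplest of the technique families evaluated, query–sentence similarity, can be sketched as bag-of-words cosine ranking. The sentences below are invented, and a real implementation would add TF-IDF weighting plus the context, query-expansion, and novelty signals described above.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_sentences(query, sentences):
    """Rank case-law sentences by similarity to a statutory query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(s.lower().split())), s) for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)]

ranked = rank_sentences(
    "common business purpose",
    ["The court found no common business purpose between the entities.",
     "The appeal was dismissed as untimely."])
print(ranked[0])
```

Ranking sentences this way is the baseline against which the specialized framework's context- and novelty-aware components would be measured.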
Dissertation Defense: Jeya Balaji Balasubramanian
Tuesday, August 27 | 9 – 11 a.m.
“Knowledge Discovery with Bayesian Rule Learning Methods for Actionable”
Abstract: Discovery of precise biomarkers is crucial for improved clinical diagnostic, prognostic, and therapeutic decision-making. Biomarkers help improve our understanding of the underlying physiological (and pathophysiological) processes within an individual. To discover precise biomarkers, we must take a personalized medical approach that accounts for an individual's unique clinical, genetic, omic, and environmental information. Molecular-level omic information provides an opportunity to understand complex physiological processes at an unprecedented resolution. Falling costs and improvements in the high-throughput technologies that collect omic data from an individual have now made it feasible to include a person's omic information as a standard component of their medical record. This information can only be clinically actionable if it is understandable to a clinician and applicable in the correct medical context. Biomarker discovery from omic data is challenging because: 1) the data are high-dimensional, which increases the chance of false positive discoveries from traditional data mining methods; 2) most diseases are multifactorial, with many factors influencing the disease outcome, which makes them difficult to model with most data mining algorithms while keeping the model interpretable to a clinician; and 3) traditional data mining methods discover only statistically significant biomarkers and do not account for clinical relevance, so they do not translate well into clinical practice.
In this dissertation, I formulate the problem of learning both statistically significant and clinically relevant biomarkers as a knowledge discovery problem. In computer science, knowledge discovery in databases is "a non-trivial process of the extraction of valid, novel, potentially useful, and ultimately understandable patterns in data". Clinical practice guidelines in decision support systems are often presented as explicit propositional logic rules because they are easy for a clinician to understand and are often actionable instructions themselves. Bayesian rule learning (BRL) is a rule-learning classifier that learns patterns as a set of probabilistic classification rules. I develop BRL to efficiently learn from high-dimensional data and obtain a robust set of rules by identifying context-specific independencies in the data. To help model multifactorial diseases, I study various ensemble methods with BRL, collectively called Ensemble Bayesian Rule Learning (EBRL). I also develop a novel ensemble model visualization method called Bayesian Rule Ensemble Visualization tool (BREVity) to make EBRL more human-readable for a researcher or a clinician. I develop BRL with informative priors (BRLp) to enable BRL to incorporate prior domain knowledge into the model learning process, thereby further reducing the chance of discovering false positives. Finally, I develop BRL for knowledge discovery (BRL-KD) that can incorporate a clinical utility function to learn models that are clinically more relevant. Collectively, I use these BRL methods, developed for the task of biomarker discovery, as the knowledge engine of an intelligent clinical decision support system called Bayesian Rules for Actionable Informed Decisions or BRAID, a concept framework that can be deployed in clinical practice.
“Linguistic Entrainment in Multi-Party Spoken Dialogues”
Tuesday, May 28 | 3 – 5 p.m.
Zahra Rahimi, Graduate Student, Intelligent Systems Program
- Rebecca Hwa
- Kevin Ashley
- Louis-Philippe Morency
Dissertation Defense: Gaurav Trivedi
Monday, April 29 | 3 – 5 p.m.
“Interactive Natural Language Processing for Clinical Text”
Abstract: Clinicians use free text to conveniently capture rich information about patients. Care providers are likely to continue using narratives and first-person stories in Electronic Medical Records (EMRs) due to their convenience and utility, which complicates information extraction for computation and analysis. Despite advances in Natural Language Processing (NLP) techniques, building models is often expensive and time-consuming. Current approaches require a long collaboration between clinicians and data scientists: clinicians provide annotations and training data, while data scientists build the models. With the current approaches, the domain experts (clinicians and clinical researchers) do not have provisions to inspect these models and give feedback. This forms a barrier to NLP adoption in the clinical domain by limiting the power and utility of real-world applications.
Building models interactively can help narrow the gap between clinicians and data scientists (Figure 1). Interactive learning systems may allow clinicians without machine learning experience to build NLP models on their own and also reduce the need for upfront annotations. These systems make it feasible to extract understanding from unstructured text in patient records: classifying documents against clinical concepts, summarizing records, and performing other sophisticated NLP tasks. Interactive systems enable end-users to review model outputs and make corrections, building model revisions within an interactive feedback loop.
Interactive methods are particularly attractive for clinical text due to the diversity of tasks that need customized training data. In my dissertation, I demonstrate this approach by building and evaluating prototype systems for both clinical care and research applications. I built NLPReViz as an interactive tool for clinicians to train and build binary NLP models on their own for retrospective review of colonoscopy procedure notes. Next, I extended this effort to design an intelligent tool that identifies incidental findings from radiology notes as clinicians review patient notes during their regular workflow. I follow a two-step evaluation with clinicians as study participants: a usability evaluation to demonstrate the feasibility and overall usefulness of the tool, followed by an empirical evaluation of model correctness and utility. Lessons learned from the development and evaluation of these prototypes will provide insight into the generalized design of interactive NLP systems for wider clinical applications.
“Monitoring Mortality Risk with Long Short-Term Memory Recurrent Neural Network”
Friday, April 12 | 1 – 1:30 p.m.
Ke Yu, Graduate Student, Intelligent Systems Program
Abstract: In intensive care units (ICU), mortality prediction is a critical factor not only for effective medical intervention but also for allocation of clinical resources. Structured EHR data contains valuable information for resolving this task, but current solutions usually require human-engineered features, which are both laborious and sensitive to missing values.
Inspired by language-related models, we design a new framework for dynamic monitoring of patients’ mortality risk. Our framework relies on feature representations automatically extracted from all relevant medical events in the most recent history. Specifically, our model uses latent semantic analysis (LSA) to encode the patients’ states into low-dimensional embeddings, which are further fed to long short-term memory networks for mortality risk prediction. We observe that bidirectional long short-term memory demonstrates competitive performance, probably due to its successful capture of both forward and backward temporal dependencies.
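The encoding step can be sketched as follows (the event vocabulary and counts are invented): each patient time-window is a bag of medical event counts, and a truncated SVD, the core of latent semantic analysis, compresses it into the low-dimensional state embedding that the recurrent network would consume in sequence.

```python
import numpy as np

# toy event vocabulary for the columns of the count matrix
events = ["lactate_high", "vasopressor", "intubation", "hr_normal"]

# rows: consecutive patient time-windows; columns: event counts
counts = np.array([
    [2., 1., 0., 0.],
    [1., 2., 1., 0.],
    [0., 0., 0., 3.],
])

# LSA = truncated SVD of the count matrix
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                              # embedding dimension
embeddings = U[:, :k] * s[:k]      # one dense k-dim state per time-window

# The ordered rows of `embeddings` for one patient form the sequence
# a (bidirectional) LSTM would take as input for risk prediction.
```

In practice the counts would typically be TF-IDF weighted and the SVD fit on the whole training corpus, but the shape of the computation is the same.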
“Automatic Extractive Summarization of Trade Secret Misappropriation Cases”
Friday, April 12 | 12:30 – 1 p.m.
Huihui Xu, Graduate Student, Intelligent Systems Program
Abstract: Automatic extractive summarization is a process of selecting the most important passages from a document using a computer program. Given the rise of a large number of legal documents in electronic format, there is an increasing demand for effective information retrieval tools that present important information in a suitable user-friendly format. A team of CMU students and we carried out an experiment with a corpus of human-created summaries of trade secret misappropriation cases paired with their corresponding full text decisions. The team employed sentence classification, relevance/redundancy-based selection, and specialized word embeddings. The team applied both automatic evaluation metrics and expert grading to evaluate the automatically generated summaries along with human standard summaries. As the CMU students have now graduated, I will assist the instructor in rerunning the experiments and generating new results.
“Predicting Drug Sensitivity Based on Omics Data with Self-Attention Mechanism and Multi-Label Learning”
Friday, April 5 | 1 – 1:30 p.m.
Shuangxia Ren, Graduate Student, Intelligent Systems Program
Abstract: Precision oncology has achieved great success through its capability to prescribe personalized treatments targeting aberrations specific to an individual patient’s tumor. However, current single-gene-based therapeutic indication (STI) precision oncology has significant limitations. To overcome the limitations of STI-based approaches, we propose combining contemporary AI methodologies with omics data to predict drug sensitivity. The model I chose uses a multi-head self-attention mechanism to identify which omics data of a cell line, including gene expression, gene mutation status, and somatic copy number alteration, likely determine its drug sensitivity status. This model learns a vector as an abstract representation (omics embedding) of the functional impact of each omics data type, then generates an abstract representation (cell-line embedding) of the functional impact for each cell line, thus instantiating the states of the hidden layer. The cell-line embedding is the attention-weighted sum of the omics embeddings and can be used as the input to a multi-layer perceptron that predicts the cell line’s sensitivity to different drugs via multi-label learning.
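A single-head sketch of the pooling just described (the dimensions, the random stand-ins for learned weights, and the five drugs are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# one embedding per omics source: expression, mutation status, copy number
omics = rng.normal(size=(3, d))

q = rng.normal(size=(d,))                # stand-in for a learned attention query
scores = omics @ q / np.sqrt(d)          # scaled dot-product attention scores
weights = np.exp(scores) / np.exp(scores).sum()   # softmax over the 3 sources

cell_line = weights @ omics              # weighted sum -> cell-line embedding

# multi-label head: an independent sigmoid per drug (5 hypothetical drugs)
W = rng.normal(size=(d, 5))
probs = 1.0 / (1.0 + np.exp(-(cell_line @ W)))
# probs[i] = predicted sensitivity probability for drug i
```

The real model uses multiple attention heads and learned projections, but the essential move is the same: a softmax over the omics sources yields weights whose weighted sum becomes the cell-line representation.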
“Estimating Facial Action Unit Intensity on Larger Training Sets”
Friday, April 5 | 12:30 – 1 p.m.
Yaohan Ding, Graduate Student, Intelligent Systems Program
Abstract: Facial expressions are important for people to express themselves and interact with each other in social life. Facial expression analysis helps researchers better understand the underlying emotion, intention, physical pain, and psychopathology of the performer. Automatic facial expression analysis is important in affective computing, social robotics, marketing, tutoring, and mental health, among other applications. Researchers (Ekman, Friesen, & Hager, 2002; Cohn & Ekman, 2005) broke facial expressions down into facial action units (AUs), which represent specific actions of one or more facial muscles. Ekman and colleagues developed the Facial Action Coding System (FACS) to annotate anatomically based facial actions (AUs) that can describe nearly all possible facial expressions. Action units may vary in both occurrence and intensity. While most research has emphasized AU occurrence, variation in intensity is critical to emotion communication. Social smiles, for instance, have lower intensity and more rapid onset than felt smiles (Ambadar et al., 2009). Since previous work on AU intensity used different databases of relatively small size, it is possible that the limited number of subjects in each database attenuated model performance (Corneanu, Simón, Cohn, & Guerrero, 2016). Moreover, Girard et al. (2015) found that the accuracy of AU occurrence detection increased substantially as training set size grew. Inspired by this work, we trained different models on both a large and a small dataset and compared their performance.
“Scoring Ancestral Graphs”
Friday, March 22 | 1 – 1:30 p.m.
Bryan Andrews, Graduate Student, Intelligent Systems Program
Abstract: Maximal ancestral graphs (MAGs) provide a natural extension of directed acyclic graphs to the case where a subset of the vertices are latent. Accordingly, MAGs may be used to model and perform causal reasoning on systems with latent (confounding) variables. In this work, we introduce m-connected sets, a novel representation of MAGs with several useful theoretical properties. Using m-connected sets, we derive a consistent, score-equivalent, and computationally efficient score for MAGs and illustrate its practicality for learning MAGs from data in a simulation study.
“KARL: Knowledge Augmented Rule Learning for Biological Pattern Discovery”
Friday, March 22 | 12:30 – 1 p.m.
Mahbaneh Torbati, Graduate Student, Intelligent Systems Program
Background: Ongoing molecular profiling studies enabled by advances in biomedical technologies are producing vast amounts of `omic' data for early detection, monitoring, and prognosis of diverse diseases. A major common limitation is the scarcity of biological samples from which biomarker measurements are made and evaluated using case-control designs, necessitating integrative modeling frameworks that can make optimal use of all available data for any particular disease classification task. Related data sets are often available from different studies within and across laboratories, but may have been generated using different technology platforms. There is thus a critical need for flexible modeling methods that can handle data from diverse sources to facilitate discovery of robust biological patterns that underlie disease regulatory processes.
Results: This paper develops and evaluates a novel framework called Knowledge Augmented Rule Learning (KARL), which is based on transfer learning of classification rules and incorporates lookup tables to augment the use of prior knowledge when learning interpretable predictive models from data. Classification rules facilitate the extraction of robust biological patterns characterized by their statistical evidence, given both knowledge and data. In this work, KARL models are generated on twenty-five publicly available gene expression data sets, five each for cancers of the brain, breast, colon, lung, and prostate. These are evaluated for completeness and consistency, along with the positive or negative impact of knowledge transfer, using ten-fold cross-validation measures of Balanced Accuracy (BAcc), which captures the sensitivity-specificity trade-off.
Conclusions: Our results show that knowledge augmented rule learning with KARL produces, on average, rule models that are more robust classifiers than baseline RL without any background knowledge, using 25 publicly available gene expression datasets. Moreover, KARL produces biologically interpretable rule patterns with complementary classification rules, and detects unique and consistent behavior for gene families that are discriminative for the cancer datasets studied herein. Future work would involve extensions to KARL to handle hierarchical knowledge to derive more general hypotheses to drive biomedicine.
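The evaluation metric used above is simple enough to state in code: balanced accuracy averages sensitivity and specificity, so a classifier cannot score well by always favoring the majority class. (The example labels below are made up.)

```python
def balanced_accuracy(y_true, y_pred):
    """BAcc = (sensitivity + specificity) / 2 for binary 0/1 labels.

    Assumes both classes are present in y_true.
    """
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    pos = sum(y_true)
    neg = len(y_true) - pos
    sensitivity = tp / pos   # true positive rate
    specificity = tn / neg   # true negative rate
    return (sensitivity + specificity) / 2

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1]   # all positives right, one negative wrong
print(balanced_accuracy(y_true, y_pred))  # prints 0.75
```

Plain accuracy on the same example would be 5/6 ≈ 0.83, which illustrates why BAcc is the stricter choice for the imbalanced case-control data sets described above.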
“A Radiomics Approach to Microvascular Invasion Prediction in Hepatocellular Carcinoma from Pre-Operative Multiphase MRI”
Friday, March 8 | 1 – 1:30 p.m.
Giacomo Nebbia, Graduate Student, Intelligent Systems Program
Introduction: Microvascular invasion (mVI) is the most significant independent predictor of recurrence for hepatocellular carcinoma (HCC), but its pre-operative assessment is challenging. In this study, we investigate the use of multi-phase MRI to predict microvascular invasion before surgery.
Methods: We retrospectively gathered pre-operative multi-phasic MRI scans for 99 patients who were diagnosed with HCC and received surgery, thus allowing mVI diagnosis by pathological examination. We extracted radiomics features from the manually segmented HCC regions in each MRI sequence and we built Machine Learning classifiers to predict mVI. We investigated the use of features extracted from the tumor region only, the peritumoral marginal region only, and the combination of the two.
Results: By combining information extracted from different MRI sequences, we were able to achieve AUCs of 86.69%, 84.62%, and 84.19% when considering features extracted from the tumor only, the peritumoral region only, and the combination of the two, respectively.
Conclusions: Our results indicate that mVI prediction may be feasible from pre-operative MRI scans. In addition, information from different MRI sequences is complementary in identifying mVI. From our experiments, marginal information does not improve prediction, possibly because automatic computation of the margin may include extra-hepatic areas that introduce noise.
“Using Bayesian Hierarchical Models to Predict Relevant Patient Data”
Friday, March 8 | 12:30 – 1 p.m.
Mohammadamin Tajgardoon, Graduate Student, Intelligent Systems Program
Abstract: We are investigating the use of Bayesian hierarchical models (HMs) to predict information-seeking behavior of physicians in electronic medical record (EMR) systems. Intensive care unit (ICU) physicians reviewed patient cases and identified patient data that were relevant in the context of a specific clinical task. Since each physician reviewed multiple cases, the annotations for these cases are not statistically independent. HMs are known to be suitable in situations where the independence assumption among data samples does not hold. I will introduce an ongoing research project on the development of a Learning EMR (LEMR) system that uses models of information-seeking behavior of physicians to draw a physician’s attention to the right data, at the right time, for the right patient. I will present preliminary results of our experiments using Bayesian HMs to model information-seeking behavior.
“Novel Approaches to Health Information Technology”
Friday, February 22 | 12:30 – 1:30 p.m.
Yalini Senathirajah, Visiting Associate Professor, Department of Biomedical Informatics
Abstract: Health information technology has great promise to save lives, reduce costs, and increase efficiency. However, the healthcare domain is also fraught with challenges not found in other sectors. These include the great complexity of both the medical domain and healthcare institutions, historical developments, financial considerations, the high-stakes, high-stress collaborative nature of the work, and other factors. This has led to technology that often does not meet healthcare needs, resulting in protests by clinicians (doctors and nurses) nationally and internationally due to concerns about safety, usability, and interoperability.
We discuss research into a different ‘composable’ approach that gives nonprogrammer clinician end-users greater control over some aspects of electronic health record design and use, with implications for improvements in safety, fit to the task, human-computer interaction and cognitive load, communication/collaboration, efficiency, and the costs of software creation and deployment.
Bio: Dr. Senathirajah is Visiting Associate Professor in the Department of Biomedical Informatics at the University of Pittsburgh, and previously held positions at Northwell Health, Downstate Medical Center, and Columbia University. She conducts research into the design and use of healthcare information technology, including provider-facing EMR design and patient-facing applications such as mobile health interventions for underserved or minority populations.
“ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging”
Friday, February 8 | 12:30 – 1:30 p.m.
James Hays, Associate Professor, Georgia Institute of Technology; Staff Scientist, Argo AI
Abstract: Grasping and manipulating objects is an important human skill. Since contact between hand and object is fundamental to grasping, measuring it can lead to important insights. However, observing contact through external sensors is challenging because of occlusion and the complexity of the human hand. We present ContactDB, a novel dataset of contact maps for household objects that captures the rich hand-object contact during grasping, enabled by use of a thermal camera. Participants in our study grasp 3D printed objects with a post-grasp functional intent. ContactDB includes 3750 3D meshes of 50 household objects textured with contact maps and 375K frames of synchronized RGB-D+thermal images. To the best of our knowledge, this is the first large-scale dataset that records detailed contact maps for functional human grasps. Analysis of this data shows the influence of functional intent and object size on grasping, the tendency to touch/avoid "active areas" on the object surface, and the importance of palm and lower finger contact. Finally, we learn to predict diverse contact patterns for unseen objects by using state-of-the-art image translation and 3D convolution algorithms.
Bio: James Hays has been an associate professor of computing at the Georgia Institute of Technology since fall 2015. Since 2017, he has also worked with Argo AI to create self-driving cars. Previously, he was the Manning Assistant Professor of Computer Science at Brown University. He received his Ph.D. from Carnegie Mellon University and was a postdoc at the Massachusetts Institute of Technology. His research interests span computer vision, computer graphics, robotics, and machine learning. He studies fundamental computer vision problems such as object detection and place recognition, and their applications to domains such as robotics, biology, and graphics. His research often involves finding new data sources to exploit (e.g., geotagged imagery) or creating new datasets where none existed (e.g., human sketches). He is the recipient of an NSF CAREER award and a Sloan Fellowship.
“Causal Inferences with Applications in Systems Medicine”
Friday, January 25 | 12:30 – 1:30 p.m.
Panayiotis (Takis) Benos, Professor, Department of Computational and Systems Biology
Abstract: The advancement of technologies for high-throughput collection of personal data, including lifestyle, clinical, and biomedical data, has transformed biology and medicine. Integrating and co-analyzing these different data streams has become the research bottleneck and, in all likelihood, will be a central research topic for the next decade. My group has historically worked on the development of statistical and computational methods to identify key molecules (genes, microRNAs, etc.) that affect disease onset and progression. More recently, we became interested in how we can combine the power of genomics with the rich medical data that are available. In this talk, I will present some of our recent efforts on causal modeling over mixed data types (continuous and discrete variables) and how to apply them to address important biomedical and clinical questions.
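Constraint-based causal discovery of the kind mentioned above rests on conditional-independence tests. The toy sketch below (continuous variables only, not the speaker's mixed-data method) tests whether X is independent of Y given Z via partial correlation on a simulated chain X → Z → Y, where conditioning on Z should make X and Y nearly independent.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)      # X -> Z
y = 0.8 * z + rng.normal(size=n)      # Z -> Y

def partial_corr(a, b, c):
    # Correlate the residuals of a and b after regressing out c
    design = np.column_stack([np.ones_like(c), c])
    ra = a - design @ np.linalg.lstsq(design, a, rcond=None)[0]
    rb = b - design @ np.linalg.lstsq(design, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X, Y)      =", round(float(np.corrcoef(x, y)[0, 1]), 3))
print("pcorr(X, Y | Z) =", round(float(partial_corr(x, y, z)), 3))
```

The marginal correlation between X and Y is substantial, but the partial correlation given Z is close to zero; a causal discovery algorithm would use many such tests to rule out a direct X-Y edge and orient the graph.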
Friday, January 11 | 12:30 – 1:30 p.m.