Past Events - 2022

“HealthPrompt: A Zero-Shot Learning Paradigm for Clinical Natural Language Processing”

Friday, April 8 | 1 – 1:30pm

Sonish Sivarajkumar, graduate student, Intelligent Systems Program

Abstract: Deep learning algorithms depend on the availability of large-scale annotated clinical text datasets, and the lack of such publicly available datasets is the biggest bottleneck for the development of clinical Natural Language Processing (NLP) systems. Zero-Shot Learning (ZSL) refers to the use of deep learning models to classify instances from new classes for which no training data have been seen before. Prompt-based learning is an emerging ZSL technique in which we define task-based templates for NLP tasks. We developed a novel prompt-based clinical NLP framework called HealthPrompt and applied the paradigm of prompt-based learning to clinical texts. In this talk, I’ll go through the challenges of clinical NLP development and how the novel HealthPrompt framework can resolve some of these challenges.
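To make the idea of prompt-based zero-shot classification concrete, here is a minimal sketch of the general technique: a clinical note is wrapped in a task template and a masked language model scores candidate label ("verbalizer") words at the mask position. This is an illustration of the paradigm only, not the HealthPrompt code; the model name, template, and label set below are assumptions.

```python
# Illustrative sketch of prompt-based zero-shot text classification
# (not the HealthPrompt implementation; model, template, and labels are assumptions).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # a clinical PLM could be substituted here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

# Task template: the [MASK] position is filled by a candidate label word.
note = "Patient reports chest pain radiating to the left arm."
template = f"{note} This clinical note is about [MASK]."

labels = ["cardiology", "oncology", "neurology"]  # hypothetical label set
inputs = tokenizer(template, return_tensors="pt")
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_idx]

# Score each label by the logit its (first) subword token receives at [MASK].
scores = {
    label: logits[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(label)[0])].item()
    for label in labels
}
print(max(scores, key=scores.get))
```

No task-specific training data is used; only the template and the label words steer the pretrained model, which is what makes the approach zero-shot.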

“Complementary cues from audio help combat noise in weakly-supervised object detection”

Friday, April 8 | 12:30 – 1:30pm

Cagri Gungor, graduate student, Intelligent Systems Program

Abstract: We tackle the problem of learning object detectors in a noisy environment, one of the significant challenges for weakly-supervised learning. We use multimodal learning to help localize objects of interest, but unlike other methods, we treat audio as an auxiliary modality that helps combat noise in detection from visual regions. First, we use the audio model to generate new "ground-truth" labels for the training set to remove noise between the visual features and the noisy supervision. Second, we propose an "indirect path" between audio and class predictions, which combines the link between visual and audio regions with the link between visual features and predictions, and improves object classification. Third, we propose a sound-based "attention path" which uses complementary audio cues to identify important visual regions, boosting object classification and detection performance. Our framework uses contrastive learning to perform region-based audio-visual instance discrimination, incorporating information from both audio and video frames and capturing relationships between audio and visual regions. We show that our methods, which use sound to update the noisy ground truth and to provide the indirect and attention paths, greatly boost performance on the AudioSet and VGGSound datasets compared to single-modality predictions, even when contrastive learning is used.
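As background for the contrastive component mentioned above, the sketch below shows a generic InfoNCE-style loss between paired audio and visual embeddings, the standard pattern for audio-visual instance discrimination. It is not the paper's exact objective; the shapes and temperature are assumptions.

```python
# Minimal InfoNCE-style contrastive loss between audio and visual embeddings,
# as a generic sketch of audio-visual instance discrimination
# (not the paper's exact objective; dimensions and temperature are assumptions).
import torch
import torch.nn.functional as F

def audio_visual_infonce(visual_emb, audio_emb, temperature=0.07):
    """visual_emb, audio_emb: (batch, dim) embeddings of paired regions/clips."""
    v = F.normalize(visual_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    logits = v @ a.t() / temperature      # pairwise cross-modal similarities
    targets = torch.arange(v.size(0))     # matching pairs lie on the diagonal
    # Symmetric loss: visual-to-audio and audio-to-visual directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random features
loss = audio_visual_infonce(torch.randn(8, 128), torch.randn(8, 128))
```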

“Contrastive Pretext Task Considerations for Robust Object Detection”

Friday, April 1 | 12:30 – 1:30pm

Kyle Buettner, graduate student, Intelligent Systems Program

Abstract: Contrastive learning has recently emerged as a competitive self-supervised pretraining strategy and is increasingly being tailored to downstream object detection scenarios. Despite this progress, there has been minimal investigation into the robustness of detection-focused contrastive representations with respect to downstream distribution shifts. In the work discussed in this talk, we study and enhance the robustness of such representations to downstream distribution shifts through changes to the Instance Localization pretext task. Specifically, we propose a novel set of robust pretext task variants based on (1) geometric changes to crops (% used, IoU constraints, contrastive CAM-based object focus) and (2) appearance-based data augmentations (Poisson blending, elastic deformation, and texture flattening). Evaluation of our pretext task variants shows AP gains on various targeted domain-shift datasets, including abstract cartoons, common corruptions, and a novel set of splits that capture object co-occurrence bias.
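To illustrate what an IoU constraint on pretext-task crops can look like in practice, here is a small rejection-sampling sketch that keeps only crops with sufficient overlap with an object box. The thresholds and sampling strategy are assumptions for illustration, not the authors' code.

```python
# Sketch of sampling an image crop subject to an IoU constraint with an object
# box, illustrating the kind of geometric pretext-task change described above
# (thresholds and sampling strategy are assumptions, not the authors' code).
import random

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def sample_crop(img_w, img_h, obj_box, min_iou=0.3, max_tries=50):
    """Rejection-sample a crop whose IoU with obj_box meets the constraint."""
    for _ in range(max_tries):
        w, h = random.randint(img_w // 4, img_w), random.randint(img_h // 4, img_h)
        x, y = random.randint(0, img_w - w), random.randint(0, img_h - h)
        crop = (x, y, x + w, y + h)
        if iou(crop, obj_box) >= min_iou:
            return crop
    return obj_box  # fall back to the object box itself

print(sample_crop(640, 480, (100, 100, 300, 300)))
```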

Bio: Kyle is a first-year PhD student in the Intelligent Systems Program at the University of Pittsburgh. His research interests broadly encompass computer vision and AI robustness, with a particular focus on object detection, robust representation learning, and self-supervision.

“Team Structure and Scientific Advance”

Friday, March 25 | 12:30 – 1:30pm

Dr. Lingfei Wu, Assistant Professor, Department of Informatics and Networked Systems

Abstract: With teams growing in all areas of scientific and scholarly research, we explore the relationship between team structure and the character of knowledge they produce. Drawing on 89,575 self-reports of team member research activity underlying scientific publications, we show how individual activities cohere into broad roles of (1) leadership through the direction and presentation of research and (2) support through data collection, analysis, and discussion. The hidden hierarchy of a scientific team is characterized by its lead (or L-) ratio of persons playing leadership roles to total team size. The L-ratio is validated through correlation with imputed contributions to the specific paper and to science as a whole, and it can be used to effectively extrapolate the L-ratio for 16,397,750 papers where roles are not explicit. Relative to flat, egalitarian teams (high L-ratio), we find that tall, hierarchical teams (low L-ratio) produce less novelty and more often develop existing ideas; increase productivity for those on top and decrease it for those beneath; and increase short-term citations but decrease long-term influence. These effects hold within person: the same person on the same-sized team produces science much more likely to disruptively innovate when working on a flat, high-L-ratio team. These results suggest the critical role of flat teams not only for sustainable scientific advance, but also for the training and advancement of scientists.
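As a quick illustration of the measure defined above, the L-ratio is simply the fraction of team members playing leadership roles; the sketch below computes it for a hypothetical flat team and a hypothetical tall team (the field names are illustrative, not the study's data format).

```python
# A minimal sketch of the L-ratio as described above: the share of team
# members playing leadership roles (field names here are hypothetical).
def l_ratio(team):
    """team: list of dicts with a boolean 'leader' flag per member."""
    leaders = sum(member["leader"] for member in team)
    return leaders / len(team)

flat_team = [{"leader": True}, {"leader": True}, {"leader": False}]
tall_team = [{"leader": True}] + [{"leader": False}] * 5
print(l_ratio(flat_team), l_ratio(tall_team))  # ~0.67 (flat) vs ~0.17 (tall)
```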

Bio: Lingfei Wu is a computational social scientist interested in measuring innovation in science and technology, discovering effective team mechanisms to accelerate innovation, and designing AI tools to scale innovation.

“ISP AI Forum”

Friday, March 18 | 12:30 – 1:30pm

Dr. Nathalie Baracaldo, Manager, AI Security and Privacy Solutions, IBM

Abstract: The adoption of AI systems in daily life and critical applications is becoming ubiquitous. This wide availability has at the same time raised questions about the trustworthiness, security, and privacy implications of using these systems. While novel technologies and methodologies have been emerging to protect the privacy and security of AI systems, there are still open challenges that need to be addressed by the community. Over the past years, my research has focused on the creation of defenses to protect the machine learning pipeline and on the design of privacy-aware methodologies that enable the training of accurate machine learning models without transmitting the training data to a central place. In this talk, I will first provide an overview of the challenges and threats inherent to the machine learning pipeline in traditional setups, where all the training data are available in the same place, and some mitigation techniques to deter these attacks. In the second part of the talk, I will cover a game-changing, privacy-by-design paradigm known as federated learning (FL), where data owners do not need to share or transfer their data to collaboratively train a model. During this part of the talk, I will present multiple cutting-edge approaches, interesting aspects of making FL available in a product, and some open research directions.
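For readers unfamiliar with federated learning, the sketch below shows the basic federated averaging (FedAvg) pattern behind the paradigm: clients train locally on their own data and only model parameters are aggregated. This is a generic illustration only, not the IBM Federated Learning framework or its API.

```python
# A generic sketch of federated averaging (FedAvg): each client trains locally
# and only its model parameters, never its data, are shared and averaged.
# Illustrative only; not the IBM Federated Learning framework or API.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (lists of arrays)."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two clients with a single weight matrix each, weighted by dataset size
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
print(fedavg(clients, client_sizes=[100, 300])[0])  # closer to client 2's weights
```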

Bio: Nathalie Baracaldo leads the AI Security and Privacy Solutions team and is a Research Staff Member at IBM’s Almaden Research Center in San Jose, CA. Nathalie is passionate about delivering machine learning solutions that are highly accurate, withstand adversarial attacks, and protect data privacy. She led her team in designing the IBM Federated Learning framework, which is now part of the Watson Machine Learning product. In 2020, Nathalie received the IBM Master Inventor distinction for her contributions to IBM intellectual property and innovation. She also received the 2021 Corporate Technical Recognition, one of the highest recognitions given to IBMers for breakthrough technical achievements that have led to notable market and industry success for IBM; this recognition was awarded for her contributions to the Trusted AI Initiative. Nathalie has received multiple best paper awards and has published in top-tier conferences and journals. Her research interests include security and privacy, distributed systems, and machine learning. She received her PhD from the University of Pittsburgh in 2016.

“A Practical Guide to Robust Multimodal Machine Learning and Its Application in Education”

Friday, February 25 | 12:30 – 1:30 pm

Dr. Zitao Liu, Head of Engineering, Xueersi 1 on 1, TAL Education Group

Abstract: Recently we have seen a rapid rise in the amount of education data available through the digitization of education. This huge amount of education data usually comes as a mixture of images, videos, speech, text, and so on. It is crucial to consider data from different modalities to build successful applications in AI in education (AIED). This talk targets AI researchers and practitioners who are interested in applying state-of-the-art multimodal machine learning techniques to tackle some of the hard-core AIED tasks, such as automatic short answer grading, student assessment, class quality assurance, and knowledge tracing.

In this talk, I will share some recent developments in successfully applying multimodal learning approaches in AIED, with a focus on classroom multimodal data. Beyond introducing recent advances in computer vision, speech, and natural language processing in education, I will discuss how to combine data from different modalities and build AI-driven educational applications on top of these data. Participants will learn about recent trends and emerging challenges in this topic, representative tools and learning resources for obtaining ready-to-use models, and how related models and techniques benefit real-world AIED applications.

Speaker Bio: Zitao Liu is the Head of Engineering, Xueersi 1 on 1 at TAL Education Group (NYSE: TAL), one of the leading education and technology enterprises in China. His research is in the area of machine learning and includes contributions in artificial intelligence in education, multimodal knowledge representation, and user modeling. He has published over 70 papers in highly ranked conference proceedings, such as NeurIPS, AAAI, WWW, and AIED, and his applied research has resulted in more than 20 technology transfers and patents. Zitao serves on the executive committee of the International AI in Education Society and on organizing and program committees of top-tier AI conferences and workshops. He won 1st place in the NeurIPS 2020 Education Challenge (Task 3), 1st place in the UbiComp 2020 time series classification challenge, 1st place in the CCL 2020 humor computation competition, and 2nd place in the EMNLP 2020 ClariQ challenge. He is an ACM and CCF Distinguished Speaker and a recipient of the Beijing Nova Program 2020. Before joining TAL, Zitao was a senior research scientist at Pinterest and received his Ph.D. degree in Computer Science from the University of Pittsburgh.

“Graphical Causal Models for Integrative Analysis of Biomedical and Clinical Data”

Friday, February 11 | 12:30 – 1:30 pm

Dr. Takis Benos, Professor, School of Medicine

Abstract: The advancement of technologies for high-throughput collection of personal data, including environmental, lifestyle, clinical, and biomedical data, has transformed biology and medicine. Integrating and co-analyzing these different data streams has become the research bottleneck and, in all likelihood, will be a central research topic for the next decade. Machine learning has shown promise in addressing biomedical and clinical problems, especially regarding classification. Causal graphical models allow for inferring potential cause-effect relations over the whole dataset and are by nature interpretable. My group has worked on extending the causal learning framework to mixed data types, incorporating prior information, and learning causal graphs with latent confounders. In this talk, I will present an overview of causal inference and some of our results on important biomedical and clinical questions in cancer diagnosis and therapy.

“Legal Question Answering on Private International Law”

Friday, January 28 | 12:30 – 1:30 pm

Francesco Sovrano, PhD student, University of Bologna

Abstract: Private International Law (PIL) is a complex legal domain that frequently presents conflicting norms across the hierarchy of legal sources, other legal domains, and the adopted procedures. This clearly presents (even in civil law) a daunting challenge for humans whenever regulations change frequently or are large enough in size, making evident the need for support from emerging AI technologies that can automate question answering and data analysis. In the AI literature, the task of answering questions using a large collection of documents on diverse topics (such as PIL documents) is called open-domain Question Answering (QA). A system for open-domain QA usually consists of a combination of traditional information retrieval techniques and neural reading comprehension models. But neural reading comprehension of legal texts is not a trivial task, because legalese is rarer, more mercurial, and in many ways different from ordinary natural language. Hence, the difference between legal and ordinary language creates technical issues when applying or fine-tuning general-purpose language models for open-domain question answering on legal resources. This is especially true when the meaning of a legal document is encoded in its (discourse) structure differently from the spoken language. For example, longer sentences may be preferred in laws (e.g., the Brussels I bis Regulation, EU 1215/2012) to reduce potential ambiguities and improve comprehensibility, but the noise introduced by the excessive length of such sentences can distract a language model trained on ordinary English, pushing it to commit more errors.

Starting from the hypothesis that ordinary language normally uses different discursive patterns from its legal counterpart, in this research we investigate mechanisms to isolate and capture these patterns within a legal document in order to perform “zero-shot” question answering, improving the performance of an open-domain QA system pre-trained on ordinary English. Specifically, we use open-domain QA systems based on information retrieval and neural reading comprehension, and we study what happens when changing the type of information considered for retrieval. Indeed, by cherry-picking only those pieces of information that are deemed to be the most important parts of the discourse, we should be able to help the information retriever and the QA system by partially hiding the noise within answers.

To this end, we performed several experiments. First, we experimented with TF-IDF to understand the role of syntagmatic relations in legalese. Then we analysed how discourse structure can affect QA, using the Penn Discourse Treebank (PDTB) framework to identify the Elementary Discourse Units (EDUs) of a legal text, and the theory of Abstract Meaning Representations (AMRs) to identify factual clauses. The results of our experiments show that properly filtering and capturing the discourse structure of legalese texts allows us to perform decent “zero-shot” question answering without fine-tuning existing language models.
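To ground the retrieve-then-read pipeline described above, here is a minimal open-domain QA sketch: TF-IDF retrieval over passages followed by a neural extractive reader. The toy passages and the reader model name are assumptions for illustration, not the system evaluated in the talk.

```python
# Generic open-domain QA sketch in the spirit described above: TF-IDF
# retrieval over passages followed by a neural reading-comprehension model.
# Toy passages and model name are assumptions, not the speaker's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

passages = [
    "A person domiciled in a Member State shall be sued in the courts of that Member State.",
    "This Regulation shall apply in civil and commercial matters.",
]
question = "Where shall a person domiciled in a Member State be sued?"

# 1) Retrieve the most relevant passage with TF-IDF
vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(passages)
q_vec = vectorizer.transform([question])
best = cosine_similarity(q_vec, doc_vecs).argmax()

# 2) Extract the answer span with a reading-comprehension model
reader = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
print(reader(question=question, context=passages[best])["answer"])
```

The research described above intervenes at step 1, changing which pieces of discourse are exposed to the retriever and reader rather than fine-tuning the language model itself.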

“Better biomarkers based on quality, reproducible and open science”

Friday, January 21 | 12:30 – 1:30 pm

Dr. Pradeep Reddy Raamana, Assistant Professor, School of Medicine

Abstract: Mental health is a major public health challenge, currently costing trillions of dollars. Neuroscientific approaches are key to understanding the underlying risk factors as well as to developing biomarkers and treatments. The rapid adoption of data sharing and open science practices presents an unprecedented opportunity to offer better care at a lower cost for mental health issues. However, the quality and efficacy of these potential biomarkers and treatments depend critically on the quality of the multiple stages of data science they are based on. These stages include, but are not limited to, preprocessing, quality control, feature extraction, model building, and performance evaluation. Often overlooked, inaccuracies at these stages can multiply to produce suboptimal and/or irreproducible final results (so-called “garbage-in, garbage-out” and butterfly effects). In this talk, I present an outline of a few projects at the Open MINDS lab that address these challenges in two key areas: neuroimaging quality control and biomarker performance evaluation. I encourage you to visit the following websites to learn more: niQC SIG and crossinvalidation.com.

Short Bio: Dr. Raamana is an Assistant Professor in the Department of Radiology at the University of Pittsburgh. He leads the Open MINDS lab @ Pitt and is interested in developing multimodal biomarkers for brain disorders, and the necessary data science frameworks and tools to realize personalized medicine. He founded the special interest group on neuroimaging quality control (niQC) at the International Neuroinformatics Coordinating Facility to improve quality and reproducibility of neuroimaging analyses. He is passionate about bridging the gap between the clinic and computer science, by removing the barriers and building assistive tools for predictive modelling and quality control. He is also a passionate advocate for open science.

“Modeling Human Behaviors and Designing Adaptive Systems in Human-Automation Interaction”

Friday, January 14, 2022 | 12:30 – 1:30 pm

Na Du, Assistant Professor, School of Computing and Information

Abstract: Autonomous systems have the potential to improve human performance and reduce workload in a number of safety-critical environments, such as driving, medicine, and the military. This talk will discuss several projects that address the gaps between increasingly complex autonomous systems and restricted human capabilities in information processing. The projects apply human factors, cognitive psychology, and data analytics techniques to improve human performance, safety, and well-being in human-automation interaction. First, I will introduce how we use wearable technologies to investigate human behaviors and performance in automated vehicles. Second, I will present the computational models and alert systems we develop to help humans interact with automated vehicles safely. Third, I will showcase a series of design ideas we propose and evaluate to facilitate human trust in and acceptance of automation. I will conclude my talk with an overview of future research directions.

Short Bio: Na Du is an Assistant Professor in the Department of Informatics and Networked Systems, School of Computing and Information, University of Pittsburgh. Her research interests include human factors in smart cities, computational modeling of human behaviors, and human-centered design. She received her PhD in Industrial & Operations Engineering from the University of Michigan and a Graduate Certificate in Data Science from the Michigan Institute for Data Science.