Week of AI Schedule


Select titles marked ‘Watch’ to view a recording of the session!
Monday, March 31, 2025
Session
9 to 10 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff;
Embracing the Future: Opening Session for the Week of AI
Join us for the opening session of the Week of AI at University of Texas Dallas, where we will explore the transformative potential of artificial intelligence. This session will set the stage for an exciting week of discussions, presentations, and hands-on activities, highlighting the latest advancements and future possibilities in AI. Don’t miss this opportunity to embrace the future and be part of the AI revolution!
Workshop
10:30 to 11:30 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Staff;
Empowering Every Learner with Accessibility and AI, Presented by Dr. Elka Walsh
This workshop will explore how we can enhance accessibility and remove barriers so all learners can thrive. We’ll explore how Universal Design of Learning and Inclusive Design intersect so that technology can be used to enhance student engagement. Participants will explore a variety of tools that support increased accessibility.
Objectives:
- Identify ways to utilize Universal Design of Learning and Inclusive Design principles to create more accessible learning models.
- Understand the current landscape of accessibility technologies and how these tools can remove barriers to learning.
Structure:
- Overview of how UDL and Inclusive Design can deepen accessibility. (10 minutes)
- Case studies:
- How AI has transformed accessibility in higher education: a review of case studies from across the country and globe. (15 minutes)
- Interactive World Café Session: Identifying gaps and opportunities to increase accessibility. Participants will identify where technology can be used within their institutions and share ideas for how action can be taken to realize accessibility in learning and services for all students. (35 minutes)
Session
noon to 1 p.m.
Virtual
Audience: Faculty; Staff;
10 Ways to Use AI In Your Workflow, Presented by Sarah Moore & Amanda Pritchard
Join Provost Teaching Fellow Dr. Sarah Moore and Amanda Pritchard from OIT for a webinar designed to transform your daily tasks with artificial intelligence—or at least provide you with a few new ways to use AI. In “10 Ways to Use AI In Your Workflow,” you’ll learn practical strategies to streamline administrative duties, enhance communication, and optimize decision-making. This session offers actionable tips and demonstrations on integrating AI tools into your routine, regardless of your technical expertise.
Session
1 to 1:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Staff; Undergraduate Students; Graduate Students;
Watch: Empowering UTD with Enterprise AI Tools, Presented by Kishore Thakur
Join us for an insightful event where we explore the latest advancements in AI across our campus and introduce the enterprise AI tools available to our community.
Session
2 to 2:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Academic and Administration Leadership;
Watch: Success Formula for Institutionalizing AI, Presented by Frank Feagans
Working with EAB and Gartner, I have designed a success formula with What, Why, and How components. It proves accurate when compared against successful cases such as Florida, Arizona State, Ohio University, and Elon.
Watch: AI Use Cases with Gartner, Presented by Rachel McCloney
An overview of the use cases Gartner offers for education and for different business units, available through campus access.
Session
4:15 to 5:15 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Graduate Students; Faculty
Watch: AI Health Care Hub
AI HealthCare Hub is an open-source initiative designed to revolutionize healthcare through Artificial Intelligence (AI) and Machine Learning (ML). This project aims to enhance diagnosis, treatment recommendations, and patient management by leveraging AI-driven solutions such as disease prediction models, medical imaging analysis, NLP-powered virtual assistants, and automated electronic health record (EHR) management.
This event will feature an Innovation Showcase and Hackathon, where participants develop AI-powered solutions to address real-world healthcare challenges. Emphasizing ethical AI implementation, data security, and regulatory compliance, the project fosters collaboration between AI researchers, healthcare professionals, and software developers.
Through AI HealthCare Hub, the team contributes to the advancement of AI-driven healthcare solutions, improving accessibility, efficiency, and accuracy in medical decision-making while promoting interdisciplinary innovation.
Session
4 to 5 p.m.
Cecil H. Green Hall (GR), 2.326
Audience: Undergraduate Students; Graduate Students;
Student Roundtable Discussion: AI Ethics and Policy
Artificial intelligence is transforming the way we work, learn, and live—faster than we ever imagined. From ChatGPT to facial recognition, AI is already shaping decisions in education, business, and even the legal system. But with rapid advancement comes uncertainty: How do we ensure fairness, transparency, and accountability?
Join us for a facilitated roundtable discussion as part of the campus-wide Week of AI. This is a space to engage in real conversations about the ethical and policy challenges surrounding AI, from privacy concerns to academic integrity and beyond. All majors welcome!
This session does not require registration.
Session
5:30 to 6:30 p.m.
Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
Watch: Generative AI Today and Tomorrow in Multiple Disciplines Panel Discussion, Presented by Nathaniel Walker
AI in business from a practitioner’s viewpoint: members of the first Doctor of Business Administration cohort discuss the current implementation of AI in their respective businesses and the opportunities for future integration.
Tuesday, April 1, 2025
Transforming Career Readiness with AI, Presented by Dr. Elka Walsh
This workshop will delve into how the future of work is transforming with AI and how we can evolve career development opportunities for students using AI tools. Participants will examine the in-demand skills and employer expectations of the AI-empowered workforce, and identify opportunities to elevate learning, expand AI literacy, personalize experiences, and assist in career planning so that students are better prepared to thrive in the future of work.
Objectives:
- Examine the impact of AI on the workforce including in-demand skills and employer expectations
- Identify opportunities to help students be ready to use and create with AI tools in the world of work
Structure:
- Overview of the research on the future of work and the impact of AI on workforce skills, hiring and employer expectations. (10 minutes)
- Case study:
- Participants will analyze an example from the University of Waterloo, which has transformed the way it prepares students for the world of work with innovative skilling opportunities, AI tools for job matching, and productivity tools that let staff focus on helping students. (10 minutes)
- Interactive session: Using a mini-design sprint participants will work together to identify creative ways to take action and use AI to help students be ready to thrive in the world of work. (40 minutes)
Session
10 to 10:50 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Graduate Students
Watch: Transforming Learning in Elementary and Middle School Education Using AI – Insights from AI and Neuroscience Research, Presented by Abdulrahman Abou Dahesh
AI is reshaping the landscape of education, providing innovative ways to enhance learning experiences for young students. This session will explore how AI can be effectively integrated into elementary and middle school education, highlighting its potential to personalize learning, foster engagement, and support educators.
Key Topics:
- AI-powered tutoring systems and personalized learning pathways
- The student brain on AI
- Gamification and interactive storytelling in AI-driven education
- Ethical considerations and responsible AI use in classrooms
- The role of AI in supporting teachers and reducing workload
- Challenges and limitations of AI in early education
- Future prospects: AI and the evolving role of K-12 educators
Objectives:
- Provide insights from neuroscience on the use of AI in classrooms.
- Demonstrate how AI tools can enhance learning experiences for young students.
- Discuss the balance between AI assistance and human instruction.
- Highlight ethical and practical challenges in AI adoption for schools.
- Inspire educators, students, and researchers to explore AI-driven educational solutions.
- Explore how the use of AI affects student cognitive and brain development.
Expected Audience:
This session is designed for educators, AI enthusiasts, policymakers, EdTech developers, and students interested in the intersection of AI, neuroscience, and education.
Session
11 to 11:50 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Undergraduate Students; Faculty; Graduate Students; Staff;
Watch: AI Frontiers – Exploring Multimodal LLMs & NeRFs – The Future of Intelligent Systems, Presented by Khushi Shah
Overall Theme: Exploring Transformative AI: From Multimodal Understanding to Immersive 3D Worlds
This AI Week proposal explores two transformative AI technologies: Multimodal Large Language Models (LLMs) and Neural Radiance Fields (NeRFs).
Key Topics Covered:
- Multimodal Large Language Models (LLMs): Architecture, capabilities (text, image, audio, video integration), real-world applications (content creation, medical diagnosis, human-computer interaction), exceeding traditional chatbots, ethical considerations (bias, hallucination).
- Neural Radiance Fields (NeRFs): Fundamental principles (AI-driven 3D reconstruction and rendering), applications across industries (Virtual & Augmented Reality, autonomous navigation, digital asset creation), advantages over traditional 3D modeling, challenges and future directions (computational costs, real-time processing, AI ethics).
Overall Objectives:
- Introduce participants to cutting-edge advancements in AI beyond traditional applications.
- Demonstrate the transformative potential of Multimodal LLMs in understanding and interacting with diverse data.
- Explain the revolutionary principles behind NeRFs and their ability to generate photorealistic 3D models.
- Showcase real-world applications of both technologies across various industries.
- Highlight the advantages of these AI approaches over existing methods.
- Foster awareness of the ethical considerations and challenges associated with these powerful technologies.
- Provide participants with insights into the future directions of research and development in these fields.
Expected Audience:
This AI Week is designed for a diverse audience interested in the forefront of artificial intelligence, including:
- Tech Enthusiasts: Anyone curious about the latest breakthroughs in artificial intelligence and their potential to shape the future.
- Technology Professionals: AI/ML engineers, software developers, researchers, data scientists.
- Industry Leaders and Innovators: Executives, managers, and strategists seeking to understand the potential impact of these technologies on their respective fields (e.g., media, healthcare, automotive, gaming, design).
- Academics and Students: Researchers, professors, and students in computer science, artificial intelligence, and related disciplines.
- Investors and Entrepreneurs: Individuals interested in the investment and business opportunities emerging from these AI advancements.
Session 1: Beyond Chatbots – The Power of Multimodal LLMs
This session will delve into how Multimodal LLMs integrate text, images, audio, and video, surpassing traditional chatbots in contextual awareness and decision-making. We will explore real-world applications, including:
- Automated Content Creation – AI generating diverse media formats.
- Advanced Medical Diagnosis – AI analyzing textual and visual medical data.
- Human-Computer Interaction – AI understanding emotions, speech, and gestures.
We will discuss Retrieval-Augmented Generation (RAG) and its role in enhancing AI’s ability to retrieve and generate contextually relevant responses. Ethical concerns like bias, hallucination risks, and responsible AI deployment will also be addressed.
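The retrieval step that RAG adds can be sketched in a few lines. The toy example below substitutes word overlap for the dense vector embeddings and actual LLM call a production system would use; all function names and documents here are illustrative, not part of the session materials:

```python
# Toy retrieval-augmented generation (RAG) sketch: rank documents by
# word overlap with the query, then assemble the top hits into a prompt
# that grounds the model's answer in retrieved context.

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents, k=2):
    """Prepend retrieved context so the model answers from it."""
    context = "\n".join(retrieve(query, documents, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "NeRFs reconstruct 3D scenes from 2D images.",
    "Multimodal LLMs integrate text, images, audio, and video.",
    "Bias and hallucination are key risks of generative AI.",
]
prompt = build_prompt("How do multimodal LLMs handle images?", docs, k=1)
```

Real deployments replace the overlap score with embedding similarity and pass the assembled prompt to an LLM.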
Session 2: NeRFs and the Future of 3D Vision
This session will explore how NeRFs create photorealistic 3D models from 2D images, revolutionizing industries such as:
- Virtual & Augmented Reality – Building immersive digital environments.
- Autonomous Navigation – Enhancing self-driving car perception.
- Digital Asset Creation – Streamlining game and VFX production.
We will compare NeRFs to traditional 3D modeling, discuss computational challenges, and explore future directions in real-time processing and AI ethics.
Participants will gain deep insights into the capabilities, diverse applications, and critical challenges associated with Multimodal LLMs and NeRFs, fostering a broader and more informed perspective on the ever-evolving landscape of artificial intelligence.
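For readers curious how NeRFs turn per-point densities and colors into a pixel, the core volume-rendering step can be sketched as follows; this is a simplified grayscale, single-ray version of the standard compositing formula, not any particular implementation:

```python
import math

def render_ray(sigmas, colors, delta):
    """NeRF-style volume rendering along one ray: composite per-sample
    densities (sigmas) and grayscale colors into one pixel value using
    transmittance weights w_i = T_i * (1 - exp(-sigma_i * delta))."""
    transmittance = 1.0  # fraction of light surviving to this sample
    pixel = 0.0
    for sigma, color in zip(sigmas, colors):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        pixel += transmittance * alpha * color
        transmittance *= 1.0 - alpha
    return pixel
```

A fully opaque first sample dominates the pixel; empty space (sigma near zero) contributes nothing, which is what lets NeRFs learn scene geometry from images alone.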
Research Round-Up
noon to 12:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Session
A Perspective on AI in Astrophysics, Publishing and the Humanities, Presented by Roger Malina
As a former professional NASA astrophysicist, I have experience in data analytics and algorithmic analysis, now often called AI, and, with 40 years’ experience as Executive Editor of the Leonardo Publications at MIT Press, I have experience with peer review and fake publishing. At UT Dallas, I am part of a team developing, with humanities author Fred Turner, AI software called Fred the Heretic. Its results are based on a database of everything Turner has written in the 80 years of his life. I will compare and contrast the use of AI in astrophysics, the publishing industry, and the humanities. I will argue that we should start developing AI that fails the Turing Test, because we need a variety of ‘intelligences’ to solve future problems. I am on the editorial board of the special issue of Leonardo with Tsinghua University in China on “Augmented Wisdom.” I will elaborate on Fred Turner’s insight that his best poetry has been heavily influenced by his ‘private life,’ something we have not yet designed into AI. Prompts often rely on private information not available on the internet, e.g., the dreams a human has while asleep, or the level of trust one develops with people one has worked with for decades. AI is not yet an ‘adult.’ The intended audience is people struggling to integrate AI into their daily life at UT Dallas. I will summarize my experience and try to develop a new focus area for Phase 2 of development of Fred the Heretic AI at the Off-Center for Emergence Studies at UT Dallas.
Audience: Graduate Students; Undergraduate Students; Faculty; Staff; Visitors;
Session, Research Presentation
Transcribing In-Home Natural Language Recordings with an Automatic Speech Recognition Model (ASR), Presented by Rowan Soo Levick
This presentation aims to inform researchers and students interested in language research methods about the advantages and disadvantages of automatic speech recognition models, compared to human transcription, for studying natural language recordings.
Description:
Natural language recordings are a crucial tool for language and language processing research. Having access specifically to in-home recordings also increases the reach of study populations to more rural communities and to families that are not comfortable in a laboratory setting. In-home self-recordings also promote truly naturalistic language, since there is no need for a researcher to be present. However, studying natural language remains difficult and not widely accessible due to the barriers presented by standard hand-transcription methods, which are very time-consuming and laborious. In exploring possibilities for making this transcription process more efficient, we compared the accuracy and approximate speed of WhisperX, an automatic speech recognition and transcription model (see Bain et al., 2023 for specifics), to those same WhisperX transcriptions after subsequent in-depth hand-editing, on 42 in-home natural language recordings of multiple family members at mealtime. We found that WhisperX alone captured, on average, about 82.0% of the words that were recorded post-editing and about 74.6% of the utterances. We also tested inter-rater reliability for the editing on 17 of those videos and found that the average words and utterances captured between transcribers were 99.3% and 100.8%, respectively. In regard to speed, the hand transcriptions that we previously completed using software like CLAN would take approximately 1 hour to transcribe 1-3 minutes of audio, whereas raw WhisperX transcriptions run offline took about 1/60th of that time. WhisperX is not able to accurately identify speakers for in-home recording transcriptions, but speaker identification can be added by hand at about 1 hour per 30 minutes of video. In-depth hand-editing on top of this takes approximately 1 hour per 10 minutes of audio, overall taking about 20-25% of the time that pure hand transcription takes.
So, while WhisperX does not achieve the same level of accuracy as hand transcription, it does pick up a large portion of the words spoken, and on a greatly reduced timescale. Depending on the research application, this alone may be enough. Otherwise, hand-editing on top of it achieves very reliable accuracy, still in a fraction of the time that transcription would take without WhisperX. Additionally, WhisperX can be used offline, as it was in this study, to protect participants’ data, or, with the consent of future participants, online to help improve the WhisperX model by contributing to the diversification of its language samples.
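As a back-of-the-envelope illustration of the word-capture figures reported above, here is a minimal sketch (not the study’s actual scoring code) of one way such a rate could be computed:

```python
from collections import Counter

def word_capture_rate(asr_transcript, edited_transcript):
    """Fraction of words in the hand-edited transcript that the raw ASR
    output also produced, counting repeated words, analogous to the
    82.0% word-capture figure reported above."""
    asr = Counter(asr_transcript.lower().split())
    gold = Counter(edited_transcript.lower().split())
    captured = sum((asr & gold).values())  # per-word min of the two counts
    return captured / sum(gold.values())
```

For example, an ASR output of "the dog ran" against an edited "the big dog ran" captures 3 of 4 words, a rate of 0.75.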
Audience: Faculty; Undergraduate Students; Graduate Students; Staff;
Session
High-Throughput Chemical Screening Using Machine Learning, Presented by Meghraj Magadi Shivalingaiah
Artificial intelligence and machine learning have revolutionized scientific discovery, enabling data-driven approaches for material and chemical property predictions. This proposal focuses on leveraging machine learning techniques to predict and design molecules with high optical absorption, enabling advancements in biomedical imaging, materials science, and optoelectronics. By integrating experimental data from UV-Vis-NIR spectroscopy and cheminformatics databases such as ChEMBL and PubChem, we aim to develop predictive models that can accurately correlate chemical structures with their optical properties. This will significantly accelerate molecular discovery, replacing labor-intensive trial-and-error approaches with AI-driven design.
Objectives:
The primary objective of this project is to build a robust dataset and train machine learning models—including deep neural networks, support vector machines, and random forests—to predict key optical properties such as absorption spectra and refractive indices. Molecules will be represented using the Simplified Molecular Input Line Entry System (SMILES) format and processed through cheminformatics tools like RDKit. Feature extraction will incorporate molecular descriptors, including molecular fingerprints and electronic structure features, to develop a reliable predictive framework. The models will be trained using an 80/20 train-test split and optimized through hyperparameter tuning techniques such as Bayesian optimization to enhance prediction accuracy.
Once trained, these models will be used to predict novel molecules with enhanced optical absorption properties, guiding experimental synthesis and validation. This will significantly reduce the need for exhaustive chemical screening and allow researchers to focus on a smaller set of high-potential candidates. To further improve the predictive power of the framework, we aim to incorporate transformer-based architectures in the future, which have shown promise in learning richer molecular representations from chemical structure data.
Looking ahead, we plan to extend our approach by integrating generative machine learning models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). These models will allow us to generate entirely new molecular structures, optimizing their absorption characteristics while ensuring synthetic feasibility. Additionally, we will explore the use of graph neural networks (GNNs), which treat molecules as graph-based structures rather than simple linear fingerprints, enabling a more nuanced understanding of molecular connectivity and functional groups. The combination of predictive modeling and generative AI will pave the way for inverse molecular design, allowing us to propose novel molecules tailored for applications in biomedical imaging, optical transparency, and next-generation optoelectronics.
This research will be of particular interest to scientists working at the intersection of AI, cheminformatics, and materials science. AI/ML researchers focusing on chemistry and material discovery, computational chemists exploring predictive modeling for molecular properties, and biomedical scientists interested in designing molecules for optical tissue transparency will find value in this work. Additionally, physicists and engineers working in optoelectronics and materials research will benefit from an AI-driven discovery pipeline that identifies high-performance molecules efficiently. By bridging the gap between computational chemistry and AI, this project aims to develop a scalable, data-driven framework for molecular discovery. Through machine learning, generative modeling, and cheminformatics, we seek to uncover new chemical compounds with optimized optical properties, offering solutions for real-world challenges in imaging, sensing, and optoelectronics.
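As a hedged illustration of the training protocol described above, the sketch below shows a stdlib-only 80/20 split and a deliberately crude SMILES-based descriptor; the real pipeline uses RDKit fingerprints, electronic-structure features, and Bayesian hyperparameter tuning, none of which are reproduced here:

```python
import random

def train_test_split(data, test_frac=0.2, seed=42):
    """Shuffle and split a dataset into train/test portions (80/20 here),
    mirroring the evaluation protocol described above."""
    items = list(data)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n_test = int(len(items) * test_frac)
    return items[n_test:], items[:n_test]  # (train, test)

def crude_descriptor(smiles):
    """Toy molecular descriptor from a SMILES string: counts of carbon
    and oxygen symbols plus string length. A stand-in for the molecular
    fingerprints a real cheminformatics pipeline would compute."""
    return [smiles.count("C"), smiles.count("O"), len(smiles)]
```

Descriptor vectors like these would then be fed to the regression models (neural networks, SVMs, random forests) the abstract names.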
Audience: Undergraduate Students; Graduate Students;
Session
Nanosheet LDMOS Study Through TCAD Simulation and Bayesian Optimization, Presented by Elham Mashhadi
This work aims to dramatically reduce the specific on-resistance RSP(on) in low-voltage applications, achieving up to a fivefold reduction in RSP(on) compared to conventional LDMOS structures [Ali] while maintaining, or even slightly enhancing, a competitive breakdown voltage (BV). Consequently, the results show an impressive increase in the Baliga figure of merit (FOM) [Baliga], a vital measure of efficiency for power devices. We combine TCAD simulations with Bayesian optimization to explore a large design space and tune device parameters, demonstrating a novel process for designing highly efficient power devices.
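The Baliga figure of merit is conventionally defined as BV² / RSP(on). The sketch below, using hypothetical example values rather than the study’s measurements, shows how a fivefold RSP(on) reduction at constant BV translates directly into a fivefold FOM gain:

```python
def baliga_fom(bv_volts, rsp_on_mohm_cm2):
    """Baliga figure of merit for a power device: BV^2 / R_SP(on)."""
    return bv_volts ** 2 / rsp_on_mohm_cm2

# Hypothetical example values, not the paper's data:
conventional = baliga_fom(20.0, 5.0)   # conventional LDMOS
nanosheet = baliga_fom(20.0, 1.0)      # fivefold R_SP(on) reduction, same BV
improvement = nanosheet / conventional  # 5.0
```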
Audience: Graduate Students; Faculty; Undergraduate Students; Staff;
Student Engagement, Deep Learning and Empowering Faculty with AI Tools, Presented by Dr. Elka Walsh
AI is transforming many aspects of higher education, and how we adapt teaching and learning models to engage students and deepen learning is critical. In this workshop, participants will explore how AI can revolutionize pedagogical models and will critically analyze when and how to utilize the tools to deepen learning. We will focus on leveraging Responsible AI Frameworks, as well as deepen understanding of how Generative AI tools like Copilot Chat and Copilot 365 can help students expand their AI literacy skills so they are ready to use and create with the technology in productive ways. Faculty will benefit from this session as we utilize a community-of-practice approach to explore how to leverage AI tools in support of learning, guided by Responsible AI Frameworks.
Objectives:
- Explore current research on the impact of Generative AI tools on student learning.
- Identify teaching and learning practices that can be utilized to enhance student engagement and deepen learning.
- Utilize Responsible AI Frameworks and good practice in prompting to extend AI Literacy Skills for students.
Structure:
- Introduction to the current literature on Generative AI in teaching and learning, including Responsible AI Frameworks and the art of writing good prompts. (20 minutes)
- Case studies: Overview of how universities across the country and globe have been utilizing AI tools for teaching and learning and preliminary impacts. (10 minutes)
- Interactive session: Developing learning experiences for engagement and deeper learning (30 minutes)
Workshop
2 to 2:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Undergraduate Students; Graduate Students; Staff;
Watch: Using AI To Improve Productivity in IT
Description: This demo will showcase how AI was used to assist in the production of an application in the UTD OIT department. The demo will include using AI to convert code from one language to another and to gain insight into solving various coding problems.
Session
3 to 3:50 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Staff; Business Unit Leaders on campus;
GEN AI in Higher Education – Use Cases and Demos from AWS, Presented by Amazon Web Services
AWS is excited to bring you Gen AI use cases and live demos from higher education institutions around the country. Target audience: faculty, staff and all business units on campus.
Session
4 to 4:50 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Undergraduate Students; Graduate Students;
Microsoft Copilot Chat
Description: The primary goal of this initiative is to create an engaging and educational environment where students can watch videos about Microsoft Copilot, ask questions, and interact with relevant staff members. This will not only increase awareness of Microsoft Copilot but also provide students with the opportunity to receive personalized assistance and support.
Video Content: The video series will cover various aspects of Microsoft Copilot, including:
- Introduction to Microsoft Copilot and its functionalities.
- How Microsoft Copilot can assist with writing, research, and organization.
- Tips and tricks for maximizing the use of Microsoft Copilot.
- Real-life examples and success stories from students who have benefited from using Microsoft Copilot.
Wednesday, April 2, 2025
Session
9 to 9:50 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
M365 Copilot Agents
Microsoft 365 Copilot Agent and Studio empower users to create custom AI assistants tailored to specific business needs. Copilot Agents are specialized assistants that can perform tasks, retrieve data, and interact with enterprise systems using natural language prompts. These agents can be customized with actions, knowledge, and starter prompts to enhance their functionality. Copilot Studio is a low-code tool that allows users to build and manage these agents, integrating them with Microsoft 365 Copilot. It supports automation through Power Automate and connects agents to enterprise data and scenarios. Together, Copilot Agent and Studio aim to streamline workflows, improve productivity, and provide personalized AI-driven solutions for various business processes.
Workshop
11 to 11:50 a.m.
Virtual
Audience: Faculty; Staff
Watch: PowerPoint in a Minute
In this hands-on workshop, participants will learn how to leverage the power of AI to create professional and engaging PowerPoint presentations in one minute or less. Microsoft 365 Copilot uses advanced AI to streamline the presentation creation process, making it accessible and efficient for users of all skill levels.
Smart Studying: Leveraging AI for Academic Success – Comet Calendar
Smart Studying: Leveraging AI for Academic Success is designed to empower students with the knowledge and skills to effectively utilize Microsoft Copilot as a powerful study tool. The Office of Information Technology will host a two-hour window on April 2nd where students can stop by the TechKnowledgy Bar and receive guidance on how to use AI to enhance their study habits. During this window, students will be introduced to Microsoft Copilot and its capabilities, learning how AI can assist in studying and productivity. To encourage students to become comfortable with the platform, an OIT representative will guide students through creating prompts that are commonly utilized during schoolwork, such as creating flashcards, summarizing notes, designing study schedules, and developing quizzes. Additional OIT representatives will be nearby for students to ask questions and receive real-time assistance on how to use Microsoft Copilot for their specific study needs.
This session does not require registration.
Research Round-Up
1 to 1:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
- Large Language Models for Market Research: A Data-Augmentation Approach – Presented by Selene M. Wang
LLMs have revolutionized AI and market research, particularly in conjoint analysis. Traditional surveys face scalability and cost issues, making LLM-generated data a promising alternative. However, biases exist between LLM-generated and human data. This paper proposes a statistical data augmentation approach to integrate LLM-generated data with real data, using transfer learning to debias the LLM data with a small amount of human data. Validated through studies on COVID-19 vaccine preferences and sports car choices, this method reduces estimation error and saves data and costs by 24.9% to 79.8%. LLM-generated data complements human responses within a robust statistical framework.
- From Data to Dialogue: Using Artificial Intelligence and Natural Language Processing Tools to Teach Wildfire Risk Management – Presented by Dr. Steven Haynes
This presentation explores using AI, particularly NLP, to enhance property risk management education. By analyzing public comments on wildfire risks in Texas, the study used sentiment analysis and topic modeling to convert qualitative data into educational content. Sentiment analysis revealed public emotions on wildfires, insurance, governance, climate change, and preparedness. Topic modeling identified themes like political influence and financial issues. Integrating AI insights into classroom presentations connected theoretical concepts with real-world perceptions, engaging students and promoting critical discussions. This project demonstrates AI’s potential as a transformative educational tool, linking theory to practice and enhancing student comprehension.
- ConfliBERT: An LLM for Political Event Detection – Presented by Patrick Brandt
Most conflict event data are expensively coded by humans from news reports. This project relies on recent advances in artificial intelligence (AI) and large language models (LLM) to address that problem, and builds on earlier NSF efforts that created a publicly available large language model to study inter- and intra-state conflict, called ConfliBERT (Hu et al., 2022). This project expands ConfliBERT to multilingual settings, including Arabic and Spanish. It also focuses on creating network data for individuals, groups, locations, and events.
- Using ML for Cell Segmentation – Presented by Nikhil Inturi & Hemanth Mydugolam
This presentation explores the application of YOLO v11 for cell segmentation in Xenium transcript images, focusing on detecting neurons and delineating cell boundaries with high precision. We will discuss the challenges of segmenting cellular structures in transcriptomic data, the advantages of using YOLO v11’s real-time object detection capabilities, and its performance in comparison to traditional segmentation methods. The talk will highlight model training, evaluation metrics, and potential applications in neuroscience research and single-cell analysis.
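As an aside for readers unfamiliar with the evaluation metrics the cell-segmentation talk mentions, intersection-over-union (IoU) is the standard score for comparing a detector’s output against ground truth; a minimal sketch for axis-aligned boxes (not tied to any code from the talk):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2),
    the standard metric for scoring detections such as predicted cell
    boundaries against hand-labeled ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Detections are typically counted as correct when IoU exceeds a threshold such as 0.5.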
Audience: Faculty; Graduate Students; Undergraduate Students; Staff
Research Round-Up
2 to 2:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Session
- Intrusion Detection System Using AI Agents – Presented by Kunal Mukherjee
Advanced Persistent Threats (APTs) infiltrate organizations through stealthy, prolonged attacks with catastrophic consequences. System Provenance-based Intrusion Detection Systems (PIDS) are crucial for comprehensive surveillance and defense against APTs. They capture information about system resources and their interactions, forming a provenance graph that models dynamic behaviors. PIDS are uniquely positioned to detect and thwart stealthy attacks. Ensuring transparency, reliability, and accountability in PIDS is essential. A new agentic framework for PIDS uses large language models (LLMs) to analyze threat reports, query provenance databases, and provide explainability, scalably detecting attacks and summarizing explanations from threat reports.
- Tafsiri: Revolutionizing Podcast Accessibility with AI – Presented by Sir Mbwika
Tafsiri is an AI-powered software developed to transcribe and translate podcasts from ArtSciLab’s Creative Disturbance Publishing project into over 100 languages, enhancing global accessibility. It uses the seamlessM4T V2 model for speech-to-speech translation, ensuring non-English speakers can access CDP’s content. Tafsiri democratizes knowledge, promoting inclusivity and diversity. It highlights AI’s role in content accessibility, showcases technological integration, and encourages cross-disciplinary collaboration. Tafsiri aims to inspire industry professionals, educators, students, and university staff to explore AI applications in solving global challenges. This project exemplifies AI’s potential to make digital content more inclusive.
- MedCeptor: How AI Can Revolutionize Medical First Responder Training – Presented by Ivan Tong
MedCeptor is an AI-powered virtual EMT field training officer that bridges the gap between textbook knowledge and real-world application. It offers 24/7 access to realistic medical emergencies, dynamic skill assessments, and personalized feedback through professionally curated practice scenarios. MedCeptor ensures effective learning by requiring students to perform medical calls from start to finish, including interventions, differential diagnosis, and gathering medical history. This practical and accessible training method does not require expensive VR simulations or in-person FTOs. MedCeptor aims to revolutionize EMT training and the healthcare industry, with potential expansion to paramedic, nursing, and medical schools.
- Beyond the Gaze: Rethinking the Representation of Genocide in Media and AI – Presented by Maruf Rahman
My dissertation proposal studies the discourses and scholarship on photographic and documentary representation of the Holocaust and conducts a comparative study of how the perpetrator’s (Nazi) gaze is still predominant in the visual representation of the Rohingya genocide in both mainstream and digital media. For my dissertation, I am creating a six-to-eight-minute co-created documentary using video clips recorded by victims of the Rohingya genocide. My second project examines the representation of the Holocaust and the Rohingya genocide in digital media, as well as the future of Holocaust studies and the challenges posed by generative AI for Holocaust and Rohingya genocide testimony.
Audience: Faculty; Graduate Students; Undergraduate Students; Staff
Workshop
3 to 4 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Staff; Data Governance Teams – everyone involved in data strategy and/or AI.
SageMaker Unified Studio: Implement a comprehensive Data, Machine Learning and AI strategy within a single unified environment
Your AI strategy is only as good as your data. Join AWS to learn how you can leverage SageMaker Unified Studio to develop a centralized data strategy, build machine learning models and develop Gen AI applications in a single web-based interface.
Workshop
3 to 4 p.m.
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
Microsoft 365 Copilot: Enhancing Productivity with AI
Microsoft 365 Copilot is an AI-powered tool designed to boost productivity within Microsoft 365 applications such as Word, Excel, PowerPoint, Outlook, and Teams. By integrating seamlessly with these apps, Copilot streamlines workflows and enhances efficiency. Copilot assists in creating documents, generating formulas, building PowerPoint presentations, summarizing emails, and more, all within the familiar environment of Microsoft 365 apps. It also provides personalized assistance through Microsoft Graph, tailoring responses based on individual work emails, chats, and documents while ensuring users only see data they have permission to access. Copilot further offers real-time, AI-generated information and suggestions, helping users complete tasks efficiently throughout the M365 suite. The M365 Copilot suite provides many benefits for users: by automating routine tasks and providing intelligent suggestions, it boosts overall productivity, and it improves collaboration through enhancements to Microsoft Teams. Under the UT Dallas and UT System agreements, M365 Copilot provides security, compliance, and enterprise data protection, ensuring secure handling of sensitive information.
Thursday, April 3, 2025
Session
9 to 10 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Staff; All Business Units on campus; Graduate Students; Undergraduate Students
Watch: Deep Dive on Amazon Bedrock – Managed Service To Build Gen AI Applications with Foundation Models
Join us for a deep dive to see how you can build on Amazon Bedrock – the easiest way to build and scale generative AI applications with foundation models, with enterprise-grade security and privacy. You will also dive into Amazon Bedrock multi-agent collaboration, which coordinates multiple AI agents working together to solve complex tasks. *More technical. This session is for those who want to build their own Gen AI agents using the latest LLMs, including DeepSeek.
Research Round-Up
Session
10 to 10:45 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
- Fred The Heretic (FTH) AI – Presented by Venkatesh Prasad Ravichandran
AI has advanced in creative fields like poetry. Paul Fishwick’s chatbot, FredTheHeretic, mimics Dr. Frederick Turner’s style. Evaluating AI-generated poetry is subjective, making comparisons with human-written poetry difficult. Traditional methods rely on subjective impressions, complicating meaningful comparisons.
- Deep Optical Sensing – Presented by Fan Zhang
Optical sensors measure properties of light, such as intensity, polarization state, and wavelength or spectrum. They are traditionally mono-functional, bulky, and inefficient. By integrating advances in moiré quantum materials and deep learning, we demonstrate an all-in-one intelligent sensing scheme for light in the mid-infrared regime at 79 K. By leveraging the top-gate and bottom-gate tunability of two-dimensional materials, each incoming light field produces a unique nonlinear response map, which encodes all the light information. A trained convolutional neural network is shown to be able to decode the power, wavelength, and polarization state of the light simultaneously and instantaneously with a high precision level. This new scheme not only identifies a pathway for future intelligent sensing technologies in an extremely compact, on-chip manner but also opens a new horizon for deep quantum sensing schemes that can be generalized to other frequency regimes.
- How Do NGOs Really Work? – Presented by Elizabeth A.M. Searing
Nonprofits, or NGOs, have been in the news recently. Despite public records, our understanding has been limited to finances for 45 years. AI now allows access to written records, revealing how nonprofits deliver services without owners or taxes. Projects include categorizing US nonprofits by SDGs, learning UK nonprofit investment rules, and mapping funding in US and Canadian nonprofits. Understanding nonprofits is crucial before applying data science.
- The Art of Intelligence: Integrating Lessons from AI’s Commonsense Knowledge Problem – Presented by Lisa Whitsett
This talk examines the commonsense knowledge problem in AI history, showing how early assumptions shape current research. Early AI assumptions prioritized abstract reasoning over perception. Melanie Mitchell identifies a fallacy in AI: assuming easy tasks for humans are easy to program. Biomimicry shifts focus to embodied, ecological intelligence but retains anthropocentric assumptions. Revisiting the commonsense knowledge problem reveals AI’s narrow views of cognition and anthropocentric bias, suggesting a more ethically informed approach engaging diverse intelligences. Insights from biomimicry, embodied cognition, and multispecies studies invite us to see AI as part of a larger ecological and cognitive web.
- Leveraging AI to Assess Social Attention in Young Autistic Children – Presented by Erin E. Kosloski
This presentation discusses using AI to analyze social attention in autistic toddlers. An AI model was trained on over 10,000 video frames from the Rollins Autism Corpus. Preliminary results showed limited success due to “neurotypical bias,” but the new model, fine-tuned with autism-specific data, significantly improved gaze detection accuracy. This research highlights the need for a more inclusive reference database. Supported by a UT Dallas Seed Program, the presentation is designed for a general audience.
Workshop
11 to 11:50 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
Research Using AI – Examples of the Present
Artificial intelligence has significantly impacted academia, transforming research methodologies and adding efficiency to the process. As new applications are developed with AI tools to assist in research, many resources are available to researchers on database platforms and in our Library catalog. These advancements have revolutionized traditional research methods but can be overwhelming for new users. I propose to show what AI options are available through our Library catalog, databases such as JSTOR, and database platforms like EBSCO, to give examples of current capabilities and potential for the future.
Watch: 5 Ways Copilot Enhances Professors’ Workflows
Discover how Microsoft 365 Copilot can revolutionize the way professors at the University of Texas at Dallas manage their academic responsibilities. This demonstration showcases five practical scenarios where Copilot assists professors in creating lecture materials, grading assignments, scheduling office hours, conducting research, and communicating with students. By leveraging Copilot’s capabilities, professors can streamline their workflows, enhance productivity, and focus more on teaching and research. Join us to see how Copilot can make a significant impact on your academic journey.
Session
1 to 2 p.m.
Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
Watch: How Are You Preparing for a Human+AI world?
Presented by Bethany AuHoy and CEO of Quinncia, Himal Ahuja
Session
2 to 2:50 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
Watch: Amazon Q – Your Intelligent AI Assistant for Business and Development
Join us to discover how Amazon Q, an enterprise-ready generative AI assistant, can transform your organization’s productivity and innovation. Explore the power of Amazon Q’s suite of AI assistants: Q for business productivity, Q Developer for coding assistance, Amazon QuickSight Q for data analytics, and Q in Amazon Connect for customer service enhancement. Learn how these intelligent tools securely integrate with your enterprise systems to deliver personalized, context-aware support across your organization. The session includes live demonstrations and practical implementation guidance for your organization.
Workshop
3 to 4 p.m.
SU 1.204 (TechKnowledgy Bar) & Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
Local LLMs – Can You Run Them and Why Are They Useful?
This seminar is designed to introduce participants to the process of running a large language model (LLM) locally on their own machines with Ollama and OpenWebUI. Open to individuals of all skill levels, this session will provide a step-by-step walkthrough, ensuring that everyone can follow along and successfully run an LLM. During the presentation we will go through the technologies and what’s required to successfully run local LLMs, including hardware and software requirements. Running an LLM locally offers several benefits, including enhanced privacy and security, as data remains on the user’s machine rather than being sent to external servers. It also allows for greater control and customization of the model, enabling users to tailor it to their specific needs. We may even get a sneak peek at how to use Stable Diffusion to generate images locally! Best of all, it’s all free!
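As a rough taste of what the walkthrough covers: once Ollama is installed and a model has been pulled, it exposes a local HTTP API (by default on port 11434) that any program on your machine can call. The sketch below is a minimal, hedged example of that pattern; the model name llama3 is just a placeholder for whichever model you have pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model, prompt):
    """Build the JSON payload for a single, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model, prompt):
    """Send a prompt to the locally running model and return its reply text."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example call (requires a running Ollama server with the model already pulled):
# print(ask_local_llm("llama3", "In one sentence, why run an LLM locally?"))
```

Because everything stays on localhost, the prompt and the reply never leave your machine, which is exactly the privacy benefit described above. OpenWebUI simply wraps this same API in a browser interface.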
Session
3 to 3:45 p.m.
Virtual
Audience: Faculty
Writing an AI Policy: Assignments & Syllabi
This workshop guides faculty in drafting AI policies for assignments and syllabi, balancing responsible use, effective assessment, and innovation. Participants will explore best practices, real examples, and practical implementation strategies. This is a hands-on online workshop as part of the Week of AI.
Workshop
4 to 4:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Undergraduate Students; Graduate Students
AI & AI: Writing Your Research Papers with Integrity
This workshop reviews the lesson I use with my students on academic integrity and AI, including how to use Grammarly in line with academic standards. I also spotlight how AI fails to do what it is asked, and how this can be harmful for communication and can inhibit accessibility and inclusion goals.
Workshop
4 to 6 p.m.
Virtual
Audience: Staff; Undergraduate Students; Graduate Students; Faculty
Watch: Learn & Apply – Prompt Engineering for GenAI Powered Video Games, Hosted by Sharif Abrar Labib
In the first part of the event, I will guide participants through practical exercises, demonstrating how to craft effective prompts to create interactive game elements. In the later part, participants will put their skills to the test in a hackathon, developing their own AI-powered games using the concepts learned and leveraging these AI tools for coding and content generation.
Friday, April 4, 2025
Workshop
9 to 10 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Graduate Students; Undergraduate Students
Watch: Understanding Artificial Intelligence (AI), Generative AI & MS Copilot from Student’s Perspective
This presentation covers a wide range of topics related to AI, including the basics of AI, machine learning, and generative AI, as well as their benefits and use cases for students. It also delves into large language models, prompt engineering, and the features of Microsoft Copilot. Additionally, it includes practical examples, quizzes, and a demo to engage the audience.
AI in the Classroom – Weaving Use of AI into Teaching
This 10-15 minute talk describes how I weave the current state of AI tools relevant to each class I teach into its syllabus (four courses in Bioengineering: Intro to Machine Learning, Advanced Computational Tools, Image Processing, and Bioinstrumentation). I will focus on a course in machine learning that requires students to program in Python without requiring prior proficiency. The statistical models of machine learning are the topic of the course, and Python is a tool for implementation; AI copilots are used to fill in the gaps in a student’s knowledge of Python. I will also cover general guidelines for how I educate our students about the ways AI might be used in industry, currently and in the future.
This session does not require registration.
Enhancing Business Analytics Education Through AI and Prompt Engineering: A Hands-On Approach
As part of efforts to enhance the Business Analytics curriculum, I have been integrating AI and prompt engineering into my graduate-level course in Business Analytics. The main objective is to help students understand the power of AI prompts in guiding model development and producing specific, relevant, and high-quality models. By teaching students the importance of prompt engineering, we can help them iterate effectively and achieve better results in their data mining projects. The CLEAR framework (Context, Limitations, Examples, Audience, Requirements) will be used to create specific prompts for each phase, ensuring clarity and precision.
To minimize students’ frustration and hesitation, I incorporate hands-on exercises geared towards their class projects. These examples emphasize clear instructions and iterative refinement. I will share initial results of this experiment and demonstrate how this approach can not only enhance students’ understanding of AI but also prepare them for real-world applications in business analytics.
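To make the CLEAR framework concrete, a prompt can be assembled mechanically from its five parts. The helper below is our own illustration, not course material; all of the example text (the churn-model scenario, the library constraint, and so on) is hypothetical.

```python
def clear_prompt(context, limitations, examples, audience, requirements):
    """Assemble a prompt following the CLEAR framework
    (Context, Limitations, Examples, Audience, Requirements)."""
    sections = [
        ("Context", context),
        ("Limitations", limitations),
        ("Examples", examples),
        ("Audience", audience),
        ("Requirements", requirements),
    ]
    # One labeled paragraph per CLEAR component, in order.
    return "\n\n".join(f"{label}: {text}" for label, text in sections)

prompt = clear_prompt(
    context="We are building a churn model on a telecom customer dataset.",
    limitations="Use only scikit-learn; no deep learning libraries.",
    examples="Feature ideas we like: tenure buckets, contract type, support calls.",
    audience="Graduate business-analytics students new to Python.",
    requirements="Return commented Python code plus a one-paragraph rationale.",
)
print(prompt)
```

Iterative refinement then amounts to editing one labeled section at a time and re-running the prompt, which keeps each revision specific and easy for students to reason about.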
This session does not require registration.
Workshop
11 to 11:50 a.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Staff; Graduate Students; Undergraduate Students; Faculty; Anyone interested in AI agents
Watch: AI Investment Agent for Stock Day Trading
This AI investment agent provides day-trading recommendations based on a comprehensive analysis of historical, technical, and real-time data. The agent integrates (1) a predicted stock price generated by an ML model, (2) key technical indicators, and (3) the latest news headlines. Upon receiving a stock ticker as input from the user, the agent processes the data it has access to and outputs a trading recommendation for the following trading day.
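A minimal, self-contained sketch of that three-signal design is below. Everything here is a stand-in: the "predicted price" replaces the real ML model, a moving average stands in for the technical indicators, and a crude keyword score replaces real news analysis; the prices, headlines, and voting threshold are all invented for illustration.

```python
def moving_average(prices, window):
    """Simple moving average over the last `window` closing prices."""
    return sum(prices[-window:]) / window

def headline_sentiment(headlines, positive=("beats", "record", "up"),
                       negative=("miss", "probe", "down")):
    """Crude keyword sentiment in [-1, 1]; a real agent would use an NLP model."""
    score = 0
    for h in headlines:
        text = h.lower()
        score += any(w in text for w in positive) - any(w in text for w in negative)
    return score / max(len(headlines), 1)

def recommend(predicted_price, prices, headlines):
    """Combine (1) model prediction, (2) a technical indicator, (3) news sentiment."""
    last = prices[-1]
    signals = [
        predicted_price > last,             # model expects the price to rise
        last > moving_average(prices, 5),   # trading above its 5-day average
        headline_sentiment(headlines) > 0,  # news flow is net positive
    ]
    votes = sum(signals)
    return "BUY" if votes >= 2 else ("HOLD" if votes == 1 else "SELL")

prices = [101, 102, 103, 104, 106]  # toy closing prices
headlines = ["ACME beats earnings estimates", "Sector up on strong demand"]
print(recommend(predicted_price=108, prices=prices, headlines=headlines))  # BUY
```

The majority-vote design means no single noisy signal dictates the recommendation, which mirrors the agent's idea of integrating historical, technical, and real-time inputs before committing to a call for the next trading day.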
JSOM Faculty Teaching Workshop, Presented by Ashim Bose
Sharing of AI-enhanced use cases in business education
To enroll please email: Ashim.Bose@UTDallas.edu
Watch: AI Showcase
1 to 1:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Session
Audience: Staff; Graduate Students; Undergraduate Students; Faculty
- Leveraging Cursor.ai for Rapid Web Development and Data Processing – Presented by Tarun Raghu
- Enhancing Student Success with AI: A Live Demo of Campus Connect – Presented by Daniel Febres
Campus Connect is an AI-driven platform that personalizes student success by recommending campus resources, scholarships, and support services. The interactive demo will show how AI can revolutionize student engagement with data-driven recommendations and a chatbot for instant answers. Attendees will see how AI-driven personalization can improve student retention, academic support, and campus experience. Key topics include finding campus resources faster, personalized scholarship recommendations, and data-driven engagement strategies. Objectives are to demonstrate AI’s impact on student success, improve campus resource utilization, gather feedback, and explore partnerships with universities and AI researchers.
- The Role of AI in Revolutionizing the Entertainment / Casino Industry – Presented by Srikanth Srinivas
The integration of Artificial Intelligence (AI) in the casino industry is driving significant transformation, enhancing both the gaming experience and operational efficiency. This presentation explores how AI is reshaping various aspects of the casino environment, from personalized player experiences to advanced security measures. AI-driven systems are improving customer service through intelligent chatbots, providing targeted marketing strategies, and offering real-time player behavior analytics. In terms of security, AI-powered facial recognition and fraud detection tools are safeguarding both physical and online casinos.
- Bubble-Bursting and Hype-Busting – Navigating the Realities of AI Profitability – Presented by Paul Nichols
Student Secrets: Is AI Lowkey Doing Our Homework?
As AI reshapes education and the workplace, how do students perceive its impact? What skills do they believe will define success in an AI-driven future? And how are they using AI? In this student-led panel, UTD students will share their experiences with AI in learning, job preparation, and daily life. They’ll discuss their expectations and hopes for faculty, institutions, and employers, and challenge faculty, staff, and TAs to rethink how we teach, mentor, and prepare students for a world where AI is not just a tool but a transformational force.
This session does not require registration.
Research Round-Up
Session
2 to 2:45 p.m.
Amistad Conference Room (SP2 – 12.216), Virtual
Audience: Staff; Graduate Students; Undergraduate Students; Faculty
- AI in Sustainability – Machine Learning enabled Trash Collectors – Presented by Areeb Iqbal
Waste management poses a significant ecological threat due to non-recyclable trash in landfills. This presentation explores AI/ML applications in waste management, focusing on robots and object detection models. It covers the problem statement, types of waste, disposal methods, and the role of robots in waste collection. The research proposes energy-efficient robots for deployment on campuses or large events, simplifying complex ideas for both beginners and experienced individuals.
- Advancing AI in Surgery: Real-Time Computer Vision for Decision Support and Training – Presented by Sara Juneja
At the Schumacher lab at Tufts University, we are developing AI algorithms to analyze real-time surgical videos. Our goal is to train models to recognize key anatomical structures, track workflows, and identify high-risk scenarios, enhancing patient safety and outcomes. Our research has shown AI can accurately identify critical structures like the pulmonary artery and inferior pulmonary vein. This innovation aims to improve surgical precision, reduce complications, and support trainee education through AI-assisted feedback.
- Modeling Social Attributes of Dynamic Faces with Deep Neural Networks – Presented by Suvel Muttreja
This study investigates whether deep convolutional neural networks (DCNNs) can replicate human-like social attribute judgments from dynamic faces. A behavioral experiment with 12 students showed consistent human trustworthiness judgments but almost no correlation with DCNN representations. This suggests that while DCNNs excel at identity recognition, they fail to capture human-like social inferences. The findings highlight the need for models incorporating human-like cognitive frameworks, especially for ethical AI use in hiring or law enforcement.
- Research Presentation (Faculty/Student AI Research Review) – Presented by Wangzhou, Andi
This proposal explores the application of machine learning (ML) and deep learning techniques to uncover the mechanisms of pain using medical and transcriptomic data from human donor tissues. By analyzing large-scale biological and neurophysiological datasets, our approach aims to identify novel pain biomarkers and potential drug development targets. This research will enhance our understanding of pain processing in humans, facilitating the discovery of more effective, targeted therapies for pain management.
- CBTI (Cognitive Behavioral Therapy for Insomnia) with AI – Presented by Mohsin Maqbool, MD.
AI can revolutionize insomnia treatment by enhancing Cognitive Behavioral Therapy for Insomnia (CBTI) with AI-powered chatbots and personalized machine learning models. This approach offers real-time, adaptive therapy, expanding CBTI’s reach and optimizing patient outcomes. I propose presenting this at UT Dallas Week of AI to highlight the integration of neuroscience, behavioral therapy, and technology.
Session
3:30 to 4:30 p.m.
Virtual
Audience: Faculty; Undergraduate Students; Graduate Students; Staff
AI Trends and Job-ready Skills
AI is evolving at a frenetic pace, making it harder for employees, students, and educators to keep themselves continually skilled and job-ready. This presentation and/or panel will focus on key trends, skills, and strategies that are relevant for job readiness in AI.