Research Symposium Program - Individual Details
5th Annual Undergraduate Research Symposium, April 17, 2025
Joe Muldowney

BIO
My journey into software development started at a trade school, where I learned to fix cars. As a kid, I was always fascinated by how things work, and I enjoyed taking things apart to either fix them or understand their mechanics. This curiosity eventually led me to an automotive apprenticeship at a local gas station, where I worked in the shop and at the front desk while attending trade school. There I first encountered the intersection of technology and problem-solving, often troubleshooting issues with the shop's computer system and network. During this apprenticeship, I also learned to code by building web forms that replaced paper forms and inserted the data directly into a database.
I am focused on mastering C++ and Python, two languages central to modern software development, especially with the rise of AI. In addition to my academic studies, I work on personal projects to deepen my understanding of artificial intelligence: I clean datasets and build machine-learning models for sentiment analysis and text summarization. These side projects keep me hands-on with AI technologies and help me continue improving my skills outside the classroom.
Fine-Tuning an Open-Source Model for Summarization
Authors: Joe Muldowney, Dr. Karen Works
Student Major: Computer Science
Mentor: Dr. Karen Works
Mentor's Department: Computer Science
Mentor's College: Florida State University
Co-Presenters:
Abstract
Natural Language Processing has seen rapid advancements in recent years, with large language models increasingly integrated into mainstream applications—such as Google’s Gemini, which now summarizes emails and chat logs in Gmail and Google Chat. This research project uses a dataset of CNN daily news articles to fine-tune a pre-trained, open-source model from the Hugging Face platform. The process involved extensive data cleaning, preprocessing, and model training. This presentation highlights the model architecture, the use of the Hugging Face platform, key challenges faced during fine-tuning, and performance metrics resulting from various training configurations.
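The workflow described above maps naturally onto the Hugging Face `transformers` and `datasets` libraries. The sketch below is illustrative only: the abstract does not name the exact checkpoint, dataset version, or training configuration, so the `facebook/bart-base` checkpoint, the `cnn_dailymail` dataset identifier, and all hyperparameters here are assumptions.

```python
# A minimal fine-tuning sketch, assuming the facebook/bart-base checkpoint
# and the "cnn_dailymail" dataset on the Hugging Face Hub; all values are
# illustrative, not the study's actual configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/bart-base"  # assumed; any seq2seq checkpoint works
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# CNN/DailyMail pairs each news article with human-written highlight
# sentences, which serve as reference summaries.
dataset = load_dataset("cnn_dailymail", "3.0.0")

def preprocess(batch):
    # Tokenize articles as encoder inputs and highlights as decoder targets.
    inputs = tokenizer(batch["article"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["highlights"], max_length=128,
                       truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="bart-cnn-summarizer",
    learning_rate=2e-5,              # illustrative hyperparameters
    per_device_train_batch_size=4,
    num_train_epochs=1,
    predict_with_generate=True,      # generate summaries during evaluation
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

Summarization quality is typically scored with ROUGE, for example via the Hugging Face `evaluate` library's `evaluate.load("rouge")`; the specific performance metrics reported in the presentation are not detailed in the abstract.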
Keywords: LLM, open-source, fine-tuning, AI, NLP, Hugging Face, BART