Loyola Marymount University

08/07/2024 | News release

BCLA Task Force Explores Dimensions of AI Course

Inevitable. Incorrigible. Revelatory. Reviled. Expansive. Expensive. Liberating. Treacherous.

A lot has been said about artificial intelligence and its role in higher education. No doubt a lot more will be said.

Acknowledging the importance of the topic - if not simply the tools themselves - a Loyola Marymount University task force has been meeting with a dual purpose: to go beyond the surface of the AI discussion by examining the phenomenon from various scholarly viewpoints, and to build a framework for an undergraduate minor course of study, or a major, in AI. The endeavor will complement the AI courses offered in the Frank R. Seaver College of Science and Engineering by creating a holistic understanding - and analysis - of AI and its impacts.

"While artificial intelligence may seem like a sudden and complex phenomenon dominating current discourse, the ethical dilemmas it presents are neither novel nor exclusive to this technology," said Astrid Floegel, a graduate of the Bioethics Institute and an AI privacy consultant. "At their core, AI-related concerns are deeply rooted in fundamental questions about human existence, aspirations, and moral imperatives. These issues have been the subject of extensive deliberations for centuries, across various disciplines such as philosophy, sociology, psychology, law, religion, and ethics. Rather than starting from scratch, we would be wise to draw upon this rich tapestry of existing knowledge and insight as we grapple with the ethical challenges posed by modern AI developments." 

At the outset of their work, Dean Richard Fox of LMU Bellarmine College of Liberal Arts told task force members to "dream big." The group is led by Professor Roberto Dell'Oro, O'Malley Chair of Bioethics and director of the Bioethics Institute.

"Professor Dell'Oro has enlisted a group of faculty members, working in philosophy, bioethics, theology, and computer science to help develop BCLA programs that might address and prepare students for a future where AI plays a central role in how we live," said Fox. "This AI Task Force is having three-hour seminars every week this summer and ultimately hopes to culminate by putting forward a new program for students in 'AI, Ethics, and Society.'  I am very excited to see what they come up with and our students will be too!"

Dell'Oro said the group will touch on most aspects of a humanitarian approach to artificial intelligence with an eye toward how students would benefit. Task force members present papers and lead discussions on topics in the humanities that apply to AI. "Our task force, composed of faculty from BCLA and a few graduate students, has one important goal, i.e., to contribute to the creation of undergraduate programs that address the relation between AI and the humanities," said Dell'Oro, who is also a professor of theological studies. "We decided to devote the summer to work toward the development of such a program by dividing the work of the task force into two phases. In the first, we study the most relevant literature on AI in a seminar-style format … and discuss different aspects of the problem in an academic fashion through presentations. In the second phase, we will work on the more administrative dimensions, detailing the various components of the minor, the type of classes we envisage for it, the faculty that will teach in it, etc. We hope to have a program proposal by Aug. 15."

The subjects of the presentations are impressively broad and deep. Dell'Oro talked with the group about the philosophical underpinnings that would bear on artificial intelligence and the possibility of considering robots as having agency; Daniel Speak, professor of philosophy and graduate director, connected AI to virtue education and intellectual character; Carissa Phillips-Garrett, associate professor of philosophy, discussed AI and virtue ethics; Nicholas Brown, clinical associate professor and director of Bioethics, presented on moral enhancement and AI; Robin Wang, professor of philosophy, connected AI to Dao; Alexander Zambrano, an affiliated faculty member of the Bioethics Institute, discussed moral status and AI; Jennifer Gumer, a part-time faculty member of the Bioethics Institute and a law partner at CGL LLP, delved into the legal issues surrounding AI; Juliet Szatko, a graduate philosophy student, presented on the social bias concerns raised by AI; Eryn Leong, an attorney and graduate student in bioethics, connected theological concerns to AI; and Floegel explored privacy issues and AI.

Speak connected the task force's work to the core of an LMU education. "Philosophy is central to Jesuit education because it gives us the widest framing for reflection on the human condition," he said. "In philosophy, our goal is to think to the very bottom of things to see how everything hangs together. So, the benefit for students of approaching AI from the widest philosophical point of view is largely to avoid technological shallowness. To be technologically shallow is to accept each new technological trend without scrutinizing it, without asking what values underlie it and support it, without considering the possible dangers to human flourishing that it may bring on. Put positively, then, the value to students of this approach is a matter of their becoming truly technologically sophisticated by way of developing a critical mind for the structure, claims, and actual promise of artificial intelligence." 

Wang said, "AI challenges what it means for humans to be humans, what our moral society means, and how our societal values are shifting. Our values, society, and laws are centered around humans, and under the current revolution of AI technology, we must ask what impacts it will have on each of these aspects." She added, "Daoism (300 B.C.) made two distinctions that might be helpful: a distinction between natural intelligence and artificial stupidity, which warns human beings to avoid the fake intelligence (cleverness) that humans are creating; and a distinction between a quest for that which is genuine and mere satisfactions. The ultimate human pursuit is the search for genuineness, which is quite different from satisfying mere desires."

There have also been presentations by invited guests, including Andrew Forney, associate professor of computer science, who described the basics of AI operations, its potential, and its limitations. In addressing how important it is for students to understand how AI works, Forney said, "It depends on the use-case: if you plan on being a consumer of tools like ChatGPT, it helps to know its limitations and at least a vague sense of how it's made: these are associative intelligences that will mimic as closely as possible what they've seen in the past (whether right or wrong) and will answer confidently, however correct the advice. As developers/researchers, you really do have to get further into the weeds to know how and where there are problems for any hope to fix them."

Forney added, "Like any new era of technology, change will bring harm at first before lasting benefit: many jobs may be lost before people of the next era realize those weren't jobs that people want to be doing anyways - I don't think anyone pines for the days of pre-industrial factory work either, even though at the time, the change brought inequity and resultant anxiety. I dream of a future where AI saves us from the tedium that we've accepted as necessary because of our current social context - a future where people are free to pursue what they really want to, not just because it pays them." He cautioned against the hype involved in the development of AI: "To contextualize, this is a warning against the current era of associative intelligences like ChatGPT: beware implicitly trusting the smart-sounding parrot."

The BCLA task force is not the only venue for these discussions on campus: The Provost's Office, Student Affairs, and an administration team attended a retreat in early June to discuss AI and learn more about it and its applications, said Kat Weaver, vice provost for Faculty, Research, and Strategy. Also, Heather Tarleton, associate provost for Faculty Affairs and Professional Development, noted that the Center for Teaching Excellence this year sponsored an AI faculty learning community that included faculty from across campus. In addition, a group of university leaders and faculty, led by José Badenes, vice provost for Academic Programs, and Máire Ford, associate vice provost for Faculty Development, has had its application accepted to participate in an upcoming AAC&U Institute on AI.

Dell'Oro said, "I think it is important to include the ethical dimension in discussions about AI for different reasons. First, there is the issue of how AI will be used, either with respect to the fairness of its applications or the distribution patterns of its potential benefits. Second, I also believe ethical mindfulness must define AI at the level of its development, i.e., what is AI for? Here the problem of the purpose or the goals for building machine-learning models becomes prominent."