As artificial intelligence becomes more prominent in higher education, experts in the field are paying greater attention to its ethical use. Dr. Terence Ow, WIPLI Fellow in AI and Professor of Information Systems and Analytics in the College of Business Administration, has thought extensively about how institutions of higher education can ensure that artificial intelligence is used in responsible ways.
The past predicts the future
Ow describes artificial intelligence, particularly large language models, as tools for pattern recognition. AI can detect patterns in input data, compare them to instances in its training data, and then predict the most likely output based on that information.
However, an AI’s ability to do this well relies on a solid, unbiased data set.
“If you have bias or any other flaws in your past data, your end result will be inaccurate and will need to be corrected,” says Ow. “It will take time for the people who work on these things to refine the data set and correct the errors.”
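The mechanism Ow describes can be seen in miniature with a toy next-word predictor. This is a deliberately simplified sketch (the corpus, function names, and the skewed word pairs are all hypothetical): a model that simply echoes the most frequent pattern in its training data will reproduce whatever imbalance that data contains.

```python
from collections import Counter

def train_bigrams(corpus: str) -> dict:
    """Count which word follows each word in the training text."""
    words = corpus.split()
    follows: dict[str, Counter] = {}
    for prev, nxt in zip(words, words[1:]):
        follows.setdefault(prev, Counter())[nxt] += 1
    return follows

def predict_next(follows: dict, word: str) -> str:
    """Predict the follower seen most often in training -- pure pattern matching."""
    return follows[word].most_common(1)[0][0]

# Hypothetical skewed corpus: "doctor" is paired with "he" nine times out of ten.
corpus = "doctor he " * 9 + "doctor she "
model = train_bigrams(corpus)
print(predict_next(model, "doctor"))  # prints "he" -- the bias in the data becomes the prediction
```

Nothing in the model is wrong in a mechanical sense; it faithfully learned its data. The inaccuracy Ow points to enters through the data itself, which is why refining the data set, not just the model, is where the correction has to happen.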
Distinguish facts from opinions
Major language models currently produce “hallucinations”: answers presented as fact that are nonetheless inaccurate. This reflects the limits of artificial intelligence. For example, AI has difficulty putting its results into context.
“Artificial intelligence has a hard time determining whether something is a fact or an opinion, for example, and if you reproduce something a million times and it’s wrong, the AI will call it fact because that is the completion it has seen most often. This is a big flaw at the moment,” says Ow.
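Ow's "reproduce something a million times" point can be sketched with a few lines of code. This is a hypothetical illustration, not how any production model works internally: if a system answers with whichever claim appears most often in its data, frequency, not truth, decides the answer.

```python
from collections import Counter

def most_common_claim(claims: list[str]) -> str:
    """Answer with the claim seen most often -- repetition, not accuracy, wins."""
    return Counter(claims).most_common(1)[0][0]

# Hypothetical data: a false statement repeated widely outnumbers the correction.
claims = ["the Great Wall is visible from space"] * 1000 \
       + ["the Great Wall is not visible from space"] * 10
print(most_common_claim(claims))  # prints the widely repeated (and false) claim
```

A system built this way has no notion of fact versus opinion, only of how often a pattern recurs, which is exactly the flaw Ow identifies.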
People who can use the right AI tools to strengthen their own critical thinking and independent judgment will be best positioned for tomorrow’s job market.
Ethical application
While artificial intelligence opens up broad possibilities for positive change, unethical actors have access to these same tools. For example, companies hoping to increase cigarette sales can more precisely target people who are prone to smoking or trying to quit. Deepfake videos allow fraudsters to imitate the faces and voices of victims’ loved ones.
In this world, it is more important than ever that students are educated on the limitations of AI and its appropriate use cases.
“We need to think about the societal impact of artificial intelligence; who gets this data, what it is used for and how we direct people towards value-creating activities,” says Ow. “The use of AI has the potential to improve your life and provide insights and opportunities to the individual, community and society. It levels the playing field and gives hope for greater social mobility; you come to Marquette because you want to use technology for these purposes.”
To learn more, join us on November 21 at Marquette University for the first AI Ethics Symposium, “From Policy to Practice,” sponsored by the Northwestern Mutual Data Science Institute and the Marquette Center for Data, Ethics and Society.
