Three professors from Iowa’s public universities are working to raise awareness of the importance and contradictory nature of artificial intelligence in higher education, pointing to concerns about privacy, bias and academic integrity.
The professors, speaking to the Board of Regents on June 14, described both the benefits and drawbacks of AI use in classrooms, noting that the technology is essential preparation for some occupations while hindering learning in others.
“It’s important that we are, in all cases, educating our faculty, staff and students on the use of these technologies, both from the perspective of the opportunity they offer, but also the challenges and concerns that they present,” Barrett Thomas, professor and senior associate dean of the Tippie College of Business at the University of Iowa, said.
Abram Anders, an associate professor of English and the interim associate director of the Student Innovation Center at Iowa State University, said the impact of AI is being witnessed by “pioneers” at higher education institutions across the world. He said large language model technology, which enables computers to learn and generate human language, is raising the bar for what’s possible in the classroom, but it comes with limitations.
“Even though we can see magical-like performances of these tools, it’s really important to know they have limitations,” Anders said. “It’s not like they’re sentient; they don’t think and feel like a human does. They’re not objective, they are likely to have some of the same biases of the human language that they’re trained on. They are not authoritative. Like a human author, they cannot be responsible for the consequences of their texts and they are not ethical.”
Thomas agreed with Anders about the detriments of the newer AI generator technology, including bias.
“More broadly, all AI technologies have questions of bias and that bias comes in algorithmic design, it comes in how we sample the data that is used to train these models,” he said. “It comes from the way the data is generated. This is, in these cases, human-generated data and so the data you get depends on who has access to that human generation.”
He also pointed to AI responses that are simply untrue, which spread misinformation and harm individual users. Thomas cited a “now infamous case” of a lawyer using ChatGPT and citing case law that doesn’t exist.
Academic integrity questions and classroom needs
Jim O’Loughlin, professor and head of the University of Northern Iowa’s Languages and Literature Department, showed the regents several headlines about academic integrity and the use of ChatGPT. He said questions of plagiarism are not new and that Iowa’s universities already have policies on academic integrity violations.
“There’s already some mechanism for dealing with electronic text,” he said while showing the regents a copy of UNI’s Academic Ethics Violation policy. “But we are — in the section in red — working on what modest changes may need to be made to account for generative AI.”
O’Loughlin said these policies must remain flexible enough to allow appropriate use in different classroom settings, since some courses may encourage understanding AI for future occupational application. Some students will need extensive understanding of generative AI, he said, while others may need only a little knowledge of it.
He pointed to the job of prompt engineers, who develop, refine and optimize AI text prompts for accuracy and relevant responses. Some current students at Iowa’s universities will go into these jobs, he said, and they will need several classes on how to use and improve AI.
Those aren’t the only cases, though, O’Loughlin told the board.
“Clearly, there are going to be some circumstances and some classes where the use of AI would be detrimental and would need to be prohibited and faculty would need to have the leeway for that,” he said.
Another issue is the limited infrastructure professors currently have for determining whether student work is plagiarized, O’Loughlin said.
“There are some concerns that a lot of faculty have right now,” he said. “Electronic plagiarism checkers that are already in place, they’ve actually struggled to accurately identify AI-produced text — particularly, a lot of false positives come up for students for whom English is not their first language.”
Needing new assignments
O’Loughlin said the assignments the regents and some current professors at UNI, ISU and the University of Iowa would have encountered in their own educational journeys will likely be rendered obsolete by generative AI.
“We are also finding, now, that some standard forms of assessments, things that we all would’ve done — the take-home exam, the annotated bibliography, the research paper — these are going to become less reliable indicators of student performance because ChatGPT can be used with them so easily,” he said.
Written communication, argumentation and basic computer coding are skills that can be easily assisted, or even fully performed, by generative AI, he said. Discernment — understanding whether something is good, bad or well argued — is becoming more important in higher education, he said, and it is taught largely in humanities courses.
New courses are also being offered surrounding AI, Anders said, pointing to a class he’s teaching at ISU entitled “Artificial Intelligence and Writing.” He will teach literacy tools for students to understand and develop effective prompts and find accurate information using AI.
O’Loughlin pointed to an epidemiology class at UNI where students analyze what ChatGPT has to say on public health issues and check it for accuracy. There are also creative writing courses that use AI to explore original story ideas.
Opportunities for AI use exist everywhere and in every discipline, Thomas said, including UI classes in entrepreneurship and AI as well as hands-on experiences in the Commercializing New Technology Academy.
“It’s going to impact all of the research across campus and then also all of our students as they go into the workforce,” he said. “And it’s important that we’re preparing them for that space.”
Privacy concerns
Thomas said one of the major issues with using ChatGPT and similar software is that students may not realize it stores data.
Generative AI retains the information people input in order to train its next version, including any sensitive data.
“There are changes that are coming, particularly in ChatGPT, to allow you to keep your data private, but I think there are still concerns and it requires education to make sure that people understand these and, probably in certain circumstances, prohibition against using these technologies with certain data,” he said.
The time is now
Anders said the disruption of AI is happening now.
“These technologies, unlike other technologies, are not emergent in the sense that we don’t have to wait five years to see what they can do,” he said. “They can already do it now and if we had no further progress they would already be transforming our world.”
AI won’t replace workers, he said, but a human using AI will, as the technology is focused on “ramping up” human talent.
“The last point, that I think we all three agree on, is the question is not to ban or not to ban,” Anders said. “That’s already gone. This is here for good. But how can we assume leadership for inventing ethical futures, ones that mitigate harms in our learning communities and prepare our students to use these tools moving forward.”
Iowa Capital Dispatch is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich for questions: [email protected]. Follow Iowa Capital Dispatch on Facebook and Twitter.