WILMINGTON — There’s a common phrase: “If you can’t beat them, join them.”
No one really knows for certain where it originated.
It is sometimes referred to as an old proverb and is included in Random House’s 1996 “Dictionary of Popular Proverbs and Sayings.” Some attribute the earliest known use to Atlantic Monthly magazine, which listed it as one of Senator James Watson’s favorite maxims. Per the ask-and-you-shall-receive forum Quora, it could have come from Quentin Reynolds’s assertion “if you can’t lick ’em, jine (join) ’em,” in his 1941 book on London in World War II, “The Wounded Don’t Cry.”
According to ChatGPT, OpenAI’s latest addition to the artificial intelligence landscape, the phrase is often attributed to American businessman and author James Corbett. Known as “Gentleman Jim,” Corbett was a famous heavyweight boxer and used the phrase as shorthand for his belief that if an opponent could not be beaten, he would adapt and learn from them.
In its four-paragraph answer, the chatbot goes on to say the phrase was in existence well before Corbett, stating “the underlying concept of joining forces with one’s adversaries or adapting to their methods has likely been a part of human strategic thinking for centuries.”
A Google search, though, shows an array of opinions on how much humanity should join forces with artificial intelligence.
Some writers have called AI a waste of time, saying it’s mostly used as a toy to mimic songs by popular artists or generate funny images of friends.
Many point to the harms of developing the technology, such as its potential to make social media more toxic, to weaken public entities’ incentive to solve public needs once private inventions fill the gap, and to change the workforce in ways that make work more soulless and machine-like.
But many people hail the technology as a way for humans to save time now wasted on menial tasks and dedicate more of it to creative endeavors. AI can help track down child pornography on the web, increase the accessibility and performance of healthcare technology, and assist in just about any industry.
Despite conversations happening in major, albeit high-brow, forums, adaptation is moving slowly compared with the rapid pace of AI advances. This can be attributed in part to a lack of understanding among the general public and, thus, legislators (see the much-mocked TikTok congressional hearing).
Couple that with the arms race of AI development in Silicon Valley, and people often fall on either extreme of the AI thought spectrum: those who think AI is the harbinger of humanity’s doom and those who think it could bring about a golden era of human existence. But no one can really deny that changes are coming.
Higher education may be at the forefront of developing a way to manage AI.
Across the country, universities are rethinking the way education is delivered and the assignments completed by students, while also formulating action plans to deploy AI across campus and mitigate negative impacts.
Now, a cohort of professors is pushing UNCW in that same direction.
Shifting sands
Karl Ricanek is a professor in UNCW’s Department of Computer Science and director of its Institute for Interdisciplinary Identity Sciences. The latter is a center for complex research projects, including AI and cybersecurity, and leverages resources from higher education, the federal government, and strategic partners to support national interests.
Ricanek, who studied AI and machine learning in the 1980s, told Port City Daily on Wednesday that AI is nothing new. He has been conducting research on the technology since joining UNCW in 2003.
“We’ve been using AI for more than 70 years,” he said. “They’ve been used in all kinds of systems. And you didn’t realize it because no one let you know, right? So it’s out there now.”
The current buzz is due to improvements in generative AI, such as the release of ChatGPT, which can produce quality text, images, video and audio of people that closely mimic the real thing.
UNCW is one of the state’s leading institutions on AI research, dedicating an academic track within the computer science department to AI and hosting an applied research lab. It also recently launched a new program, Intelligent Systems Engineering, the first in North Carolina to blend AI and engineering for undergraduates.
The program pairs mechanical engineering with the study of AI algorithms, with applications in anything from robotics to the Internet of Things (IoT).
“Internet of Things — so think about, you know, your technology for your refrigerator, that understands what you have in the refrigerator,” Ricanek said. “You can think about medical IoT, where you’ll create a device that can, you know, instantly read someone’s blood pressure by looking at their face.”
Leveraging the technology doesn’t mean computer science is immune from AI’s classroom disruptions. Ricanek described an incident last year when he gave his class a programming assignment and they all used ChatGPT to complete it; he could tell because the code was written in a format the class hadn’t been taught yet. When he pointed that out in class, he said he got “crickets” in return.
“I went on to explain to them, you know, I’m not looking to give you a failure on an assignment and say that you cheated,” Ricanek said. “What I want you to do is understand the tools that you’re using and when these tools can be used, how they can be used, and when you should trust them.”
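The article doesn’t say which constructs tipped Ricanek off, but the tell is easy to picture. As a purely hypothetical illustration in Python (the assignment and code below are invented, not Ricanek’s): an intro course typically teaches the explicit loop before more compact idioms, while a chatbot often reaches for the compact form unprompted.

```python
# Hypothetical illustration only -- the article doesn't say which
# constructs gave the students away.

# What an early-semester student would likely write:
def sum_of_squares_loop(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# What a chatbot tends to generate -- correct, but built on a
# generator expression the class may not have covered yet:
def sum_of_squares_genexp(numbers):
    return sum(n * n for n in numbers)

print(sum_of_squares_loop([1, 2, 3]))    # 14
print(sum_of_squares_genexp([1, 2, 3]))  # 14
```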
Rick Olsen, chair of the Department of Communication, has facilitated similar conversations in his department, trying to develop a few standard policies on AI use and determine which courses it’s well-suited for.
“You’re not gonna be able to ban it,” Olsen said. “It’s like trying to ban spellcheck.”
What he has discovered, along with many professors across the country, is that mitigating AI’s disruptions in a liberal arts education means shifting the methods of testing student knowledge. Professors often gather writing samples to check whether a student has read something or understands basic concepts. Now they will be challenged to assess that in a different way, reserving essays and papers for in-depth critical-thinking exercises where students have to explain how and why they wrote what they did.
Olsen said this could mean a move away from written take-home assignments toward in-class work with laptops and cellphones off, or toward oral exams, already used by foreign language departments.
This is not to say AI cannot be a tool for brainstorming. Olsen said it could be useful for giving students examples of how to format a paper, a jumping-off point for original thought.
Over in the Department of English, professor Lance Cummings has incorporated AI writing tools into the curriculum and hopes more professors will do the same. He teaches a course, AI in Digital Storytelling, where students use ChatGPT to generate written dialogue.
“You actually spend a lot of time constructing a prompt that’s going to get you anything that’s actually useful for real-life purpose,” Cummings said. “In the process of putting that prompt together, I think students actually learn more about what makes a good dialogue than if they just sat down and tried to write the dialogue.”
By the end of the class, Cummings said students say it’s easier to write the story than craft a prompt that will deliver a quality result.
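The article doesn’t reproduce the students’ prompts, but a rough sketch of the exercise might look like the following, using OpenAI’s Python client; the model name and prompt wording here are assumptions, not course material.

```python
# A minimal sketch of prompting for dialogue, assuming the OpenAI
# Python client (openai>=1.0). The model name and prompt wording are
# illustrative assumptions, not the course's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Much of the craft is in the constraints: who is speaking, what each
# character wants, and what the scene must reveal without stating it.
prompt = (
    "Write a 10-line dialogue between two estranged siblings meeting "
    "at their childhood home. One wants to sell the house; the other "
    "does not. Neither may say the word 'money'. Favor subtext over "
    "exposition."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```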
Cummings hopes the evolution of writing tools, particularly ones with predictive text, will be able to assist writers when they reach a roadblock in the writing process.
“So it’s actually built for fiction writers, like you can write, and then if you’re having trouble describing something in the scene, the AI can generate different descriptions,” Cummings said.
The purple job eater
Universities will also need to shift the way students view and prepare for the world outside their walls.
“When I first started, I was very much focused on, ‘We’ve got to teach the students how Congress works, we’ve got to teach them all the stuff,’ and over time, I’ve definitely shifted more of, like, we just need to teach them: they want to get a job; let’s help them get a job,” said Aaron King, a professor in the Department of Public and International Affairs.
King stressed the importance of critical thinking, through the lens of whichever major a student chooses, to contend with a changing workforce in the face of generative AI.
Data shows AI replaced 4,000 jobs in May, from copywriting to clerical work to call centers. This followed Goldman Sachs’ March prediction that AI could eventually replace 300 million full-time jobs globally, including white-collar jobs long thought to be beyond automation’s grasp.
“So there’s a lot of negative conversations about how AI is going to achieve or increase plagiarism or take people’s jobs,” Cummings said. “Part of that is grounded in not truly understanding the technology and how it works.”
As for writing careers, he said most jobs will be impacted — but that doesn’t mean someone will be fired because of it.
“It’s not going to be computer engineers; it’s going to be tech writers that do that,” Cummings said.
Ricanek said that as AI progresses, it becomes even more imperative for humans to monitor the technology: systems built by companies may contain “backdoor” code that allows businesses to steal trade secrets, consumer data, or even top-secret information.
Walmart and Amazon have reportedly told employees to avoid sharing confidential information on ChatGPT; J.P. Morgan and Verizon have allegedly banned their employees from using the tool. In response, some tech leaders are creating blockers that would scan employee activity and prevent secret information from being entered into the chatbot.
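The reporting doesn’t name those blocking tools, but the underlying mechanism is straightforward to sketch. Below is a hypothetical Python example of a pre-send filter that flags sensitive patterns before a prompt leaves a company network; the patterns and policy are assumptions for illustration, not any vendor’s actual rules.

```python
# A minimal sketch of a pre-send "blocker": scan outbound prompt text
# for sensitive patterns before it reaches a chatbot. The patterns and
# policy are illustrative assumptions, not any vendor's actual rules.
import re

SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "confidentiality marking": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this CONFIDENTIAL memo: card 4111-1111-1111-1111 ..."
hits = screen_prompt(prompt)
if hits:
    print("Blocked:", ", ".join(hits))
else:
    print("Prompt allowed")
```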
For Ricanek, it’s about adaptation, something humanity has been doing for centuries.
“Every technology now has displaced jobs,” Ricanek said. “But people aren’t out of work. Why? Because we develop new jobs.”
Take the Industrial Revolution, brought forth by steam-powered engines that did away with laborers’ jobs. Work that had been done by hand, in a time-consuming process, could suddenly be manufactured much faster in factories. But those factories created different jobs: people were needed to operate the equipment and make repairs.
A more modern example is the emergence of the internet, which necessitated whole careers dedicated to cybersecurity, gig work, social media management, and more.
Often, the rise of one technology does not completely eclipse the previous way of doing things; Amazon’s two-day shipping of just about any good did not eradicate in-person stores.
Olsen said he hopes the changes could bring about serious conversations about the way we work, namely normalizing a four-day work week.
“It would be nice if radical ideas could be put on the table, to say: there’s a lot of billionaires in [this]; how might their revolutions actually lift humanity? Because the 40-hour work week was an arbitrary, argued settlement between unions and industrial barons,” Olsen said. “Maybe it’s time a similarly large conversation, a brave conversation, could happen.”
‘A brave conversation’
To bring about a responsible AI future where everyone can benefit and be protected, many data scientists and other thinkers agree there should be some regulation of AI development. AI remains a “black box” in that even its developers cannot explain how the technology reaches the answers it does.
Some researchers argue that trying to create interpretable models would reduce the technology’s effectiveness. And in a competitive market — not just in the United States but internationally, where delivering results to shareholders often involves billions of dollars — thoroughness and transparency aren’t always top of mind.
Some AI researchers advocate giving users impacted by a given system a bigger role in the development process. They also argue for more localized AI systems to counteract biases, among them the fact that the internet — AI’s data source — overrepresents young people and those in “developed” countries who use the internet more.
“Those involved cannot just be technologists because we only see one aspect of the problem,” Ricanek said.
But what incentive is there for tech companies to develop AI ethically? Some say that’s where the government comes in. But the legislative process is notoriously slow, more often reactive than proactive — qualities incompatible with each new phase in the AI revolution. As soon as lawmakers try to pin down AI with parameters, the technology will have escaped their grasp.
“There’s just no way for government to be out ahead,” King said. “So it’ll always be catch up. I mean, that’s how it was with the internet in general. I don’t think that that means that you shouldn’t try to have some regulation, that’s needed.”
Ricanek’s opinion was that any regulation will need the flexibility to be continuously reviewed and modified.
King also noted regulation should address bad actors’ ability to leverage the technology for misinformation. Imagine a fake image of a politician committing a crime circulating on social media, and the effort it would take to prove the photo was AI-generated.
Cummings, generally an optimist about the technology, said he fears exactly such situations — the ability to mass-produce fake news and propaganda. He stressed the importance of regulation, but also of AI literacy among the general population.
But if the majority of the public only views AI as a parlor trick, Congress may not have the impetus to strike a balance between helpful and harmful AI.
Ricanek said he would like to see broad guidelines set forth by the federal government with more specific laws defined by states, and even more specifics decided by smaller groups, like universities.
“There is great concern about, you know, the possibility of AI sort of taking over the world — I don’t think that we’re remotely close to that,” Ricanek said. “We need to have great concern about the technology and start to look at it and study it and understand where we may go off track with what’s going to start creating and producing technology.”
Have tips or comments? Email info@portcitydaily.com