On Feb. 12, Clark alumnus Sid Dani, an Artificial Intelligence (AI) product manager at Paramount+ and AI content creator, connected the Office of Graduate Admissions to an opportunity offered to multiple universities: a year of free Perplexity Pro for every student and faculty member.
According to the Graduate Admissions Office, Perplexity Pro, a large language model generative AI service, will assist in improving graduate employability by granting students experience in the usage of AI. Students can sign up for the service until Dec. 23 and receive a one-year free trial.
Alyssa Orlando, Senior Director of Graduate & Professional Enrollment, says, “We weren’t searching for it when the partnership happened.” Orlando compares the offer to “Clark swag,” a way to say, “We’re so excited to have you; here are some things you can take advantage of.”
The Graduate Admissions Office says the Perplexity membership will support the newly launched Applied Artificial Intelligence Master’s and Concentration, as well as the Clark graduate program’s growing focus on future employability.
Joe Kalinowski, the Interim Chief Information Officer and Assistant Vice President for the Information Technology (IT) department, supported the addition. He explained that generative AI programs like Perplexity Pro learn and train on what a user inputs, so users should not share personal data or classified information with the service, as Clark has no contractual language or legal protection preventing Perplexity AI from using that data.
“The reason they give [Perplexity Pro] away is so it can learn; it wants your information…for the majority of cases, and if you’re sharing public information with it, it’s not a concern,” Kalinowski said. “It’s when you start to share restricted or confidential information [that] you need to be a little more careful.”
Kalinowski referenced Clark’s data classification policy as being relevant to privacy concerns relating to generative AI. He added that people should rely on their intuition, humanity, and intelligent thought when using large language models for studying or research.
Currently, the University has no overarching AI policy; instead, professors choose how or if to implement generative AI in their classrooms.
Marc Jacobs, a professor, PhD student, and ’89 alumnus of Clark University, said he will start implementing AI in his classroom. Over the summer, Jacobs will teach “AI and Personhood” and “AI and Government Regulation and Ethics.” He plans to task students “to interact with an AI in a particular way, and then provide a transcript,” then have them analyze and comment on the conversation to encourage transparency and critical thinking skills. He encourages others to use AI as a “thought partner” to think through difficult topics rather than letting the service think for the user.
In a similar vein, Associate Professor of Language, Literature, and Culture Eduard Arriaga-Arango advises AI users not to replace writing and language-learning with something like Perplexity Pro to simply translate words.
Clark’s addition of Perplexity Pro to its offered services signals a larger phenomenon of schools and universities embracing AI for both students and faculty.
In 2024, Perplexity started a different initiative, “Race to Infinity”: if 500 or more students signed up for Perplexity Pro at a participating university, all students at that university would receive a year of Perplexity Pro for free. Forty-five schools, such as MIT, Penn State, Harvard, and Brigham Young University (BYU), have reached the 500-student target. Other schools have added applied artificial intelligence programs, including UC Berkeley’s Master’s in AI Law.
With the growing implementation of AI on college campuses, concerns about plagiarism, inherent biases, and potential limitations on learning and critical thinking are on the rise.
“I think we’re advancing very quickly without understanding the consequences,” Paul Cotnoir, Dean of the Becker School of Design, remarked. “It’s so easy and it’s so ubiquitous that we’ve been taking it for granted. We’ve been moving forward without really understanding it.”
“The intellectual property we use to create the platforms, is it really ours?” Cotnoir continued. “And once we use it, is what we’ve created open for others to plagiarize?”
Arriaga-Arango raises additional concerns over biases embedded within the technology against certain races, socio-economic statuses, and genders. “They are trained by existing information perceived by us,” Arriaga-Arango explained, and that information holds inherent biases against marginalized communities.
Arriaga-Arango says people should develop AI with “social interest in mind, [as] opposed to AI with economic interest in mind [and] include marginalized communities in the process.” He goes on to say we must “give voice to these communities that have produced data, have produced knowledge and contributed to the ways these tools work.”
“AI with a social benefit in mind would look more like we use this to empower communities,” Arriaga-Arango concludes, helping those communities “live a life that is full of accomplishments, and I’m not talking about money. The possibility of having services, the possibility of having basic rights, to having access to information without that information being filtered by certain interests.”