University of Mississippi professor Mark Watkins warns that marketing messages for the Comet browser portray an “agent” capable of completing assignments and exams – forcing academia to quickly change assessment methods
“If we don’t prepare now and deal with AI agents, we will lose online teaching,” warns Mark Watkins, associate dean for academic innovation at the University of Mississippi, in an article in The Chronicle of Higher Education. He argues that the new generation of “AI agent” browsers is making plagiarism in online courses easier, faster, and, most importantly, much harder to detect – to the point of threatening trust in online degrees.
When a browser starts “executing” instead of the student
Watkins focuses on a capability sometimes called “agentic AI” – a combination of a language model with browsing and action tools that allows a system to operate on websites “as if it were a human user”: open pages, click, fill out forms, and carry out a sequence of tasks to completion. In academia, the implication is clear: an AI agent in an online course can log into the LMS (learning management system), open an assignment or quiz, answer it, and submit it – with almost no human involvement beyond the initial instruction.
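The loop described above – observe the page, choose an action, act, repeat until the task is done – can be sketched abstractly. Everything in the sketch below is hypothetical (the `MockPage` class, the scripted policy, the state names); it illustrates the control loop, not Comet's or any real product's implementation, and a real agent would use a language model and browser automation instead of the hard-coded stand-ins here.

```python
# Conceptual sketch of an "agentic" browsing loop: a policy (standing in
# for a language model) repeatedly observes the page state and picks the
# next action until the task is complete. All names are hypothetical.

class MockPage:
    """Stand-in for a real browser page driven by automation tooling."""
    def __init__(self):
        self.state = "login"

    def perform(self, action):
        # Scripted transitions: login -> assignment -> answered -> done.
        transitions = {
            ("login", "log_in"): "assignment",
            ("assignment", "fill_answers"): "answered",
            ("answered", "submit"): "done",
        }
        self.state = transitions.get((self.state, action), self.state)

def scripted_policy(state):
    """Stand-in for an LLM choosing the next action from the page state."""
    return {"login": "log_in",
            "assignment": "fill_answers",
            "answered": "submit"}.get(state)

def run_agent(page, policy, max_steps=10):
    """Run the observe-decide-act loop until done or out of steps."""
    steps = []
    for _ in range(max_steps):
        action = policy(page.state)
        if action is None:
            break
        steps.append(action)
        page.perform(action)
        if page.state == "done":
            break
    return steps

page = MockPage()
print(run_agent(page, scripted_policy))  # -> ['log_in', 'fill_answers', 'submit']
print(page.state)                        # -> done
```

The point of the sketch is how little human involvement remains once the loop starts: the "student" supplies only the initial goal, and the policy drives every subsequent step.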
The move that raised the red flag: Perplexity's Comet
The trigger, for Watkins, is a marketing message surrounding Perplexity’s Comet browser. According to his description, the offer to students included free access for a period, while emphasizing that the browser could “complete” assignments, quizzes, and tests for them – a message he sees not as a “product experiment,” but as a direct incentive to violate academic integrity.
Perplexity itself has presented Comet as a browser that would eventually be available to all users for free, following a limited launch in the summer of 2025. (Perplexity AI)
The Real Risk: An Online Degree Without Learning
At the heart of Watkins’ argument is an important distinction: the problem is not “more cheating on the test,” but a loss of trust in the entire model. If it is possible to run an agent that creates an artificial presence in a course, writes forum comments, submits papers, and takes exams – then the line between “online learning” and “learning automation” becomes blurred. The result, he warns, could be an institutional and public push to return to face-to-face teaching only, a move that closes the door to populations for whom the online track has allowed access to education. In other words: AI agents in online teaching are not just a technological challenge, but a threat to the legitimacy of online degrees.
Why agents are hard to “catch” – and what other organizations are saying
The discussion already extends beyond the classroom. On November 4, 2025, it was reported that Amazon had filed a lawsuit against Perplexity, claiming, among other things, that “shopping agent” activity in the Comet browser could be disguised as human activity and create security risks and a poor user experience. This is not an academic case, but it illustrates the same point: agents blur the line between human and software on platforms that were not built for them.
There are also voices in the educational arena warning that the solution will not be a “magic button.” In October 2025, the Modern Language Association (MLA) issued a statement calling on lawmakers, LMS providers, and AI developers to collaborate to prevent a “fully automated loop,” in which assignments are created, completed, submitted, and even graded with little or no real student involvement. (Modern Language Association)
On the other hand, there is also a cooling of technological expectations: Anthology (Blackboard's parent company) has written that, with the technologies available today, the use of AI agents cannot be reliably detected, because the system “does not see” what is happening in the user interface itself. (anthology.com)
So what do we do? Build “friction” instead of chasing perfect detection
Watkins doesn’t propose declaring war on AI, nor building assessment around “traps” or sweeping commercial surveillance. Instead, he suggests a practical shift: restoring the advantage to real learning by adding “friction” that makes automation harder. This could include using analytics to spot unlikely patterns (e.g., implausibly short completion times), releasing course content in phases rather than all upfront, structuring assignments with checkpoints along the way, and adding elements that are harder to automate, such as short video discussions, structured peer feedback, or documentation of the process (not just the final product).
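The analytics idea mentioned above – flagging implausibly fast completions – can be sketched with a simple statistical check. The function name, thresholds, and data below are invented for illustration; a real LMS would tune such thresholds against its own historical data rather than use fixed constants.

```python
# Hypothetical sketch: flag quiz submissions whose completion time looks
# too fast to be human. Thresholds and sample data are invented.
import statistics

def flag_fast_completions(times_seconds, min_fraction_of_median=0.2,
                          absolute_floor=60):
    """Return indices of suspiciously fast submissions.

    A submission is flagged only if it is both under `absolute_floor`
    seconds AND under `min_fraction_of_median` of the class median,
    so a uniformly fast (easy) quiz does not flag everyone.
    """
    median = statistics.median(times_seconds)
    return [i for i, t in enumerate(times_seconds)
            if t < absolute_floor and t < min_fraction_of_median * median]

# Example: most students take ~25-40 minutes; two finish in under a minute.
times = [1800, 2100, 1500, 45, 1950, 30, 2400]
print(flag_fast_completions(times))  # -> [3, 5]
```

Such a check cannot prove an agent was used – it only creates the kind of friction Watkins describes, surfacing submissions that merit a human look.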
His message is stark: If academia wants to keep online teaching a reliable and accessible option, it will need to quickly update assessment methods—not rely on promises of automated detection or an arms race of surveillance tools.