Elana Zeide: Hi, I'm Elana Zeide. I'm an associate professor of law at the University of Nebraska. These are three pitfalls of emerging artificial intelligence used in higher education.

The first is automated writing assistance. You may have seen the automated grammar and composition tools I'm talking about. There are now applications like Grammarly, Hemingway, and ProWritingAid, and these tools have also been integrated into many word processors. They're like spell checkers on steroids, right? They don't just check spelling; they check grammar, and now some composition as well. They try to detect things like passive phrasing or repeated words. In some cases these are great. They can be powerful learning tools, if students pay attention to the suggestions and try to incorporate that learning into their own work, but they can also become a crutch, and in some cases they do more than merely copyedit. For example, there are systems that essentially remix learning materials and write student essays. You also see students over-relying on these tools and accepting pretty much every suggestion, even when that makes their work nonsensical. It's a matter of being attentive to the line between providing help and reducing students' motivation to improve their own skills.

The next artificial intelligence tool is the increasing use of AI for grading and evaluation. The problem here is the famous human-in-the-loop problem: teachers over-rely on automated suggestions without sufficiently understanding, or taking the time to evaluate, AI-generated assessments. Part of this is the human tendency toward automation bias, the inclination to believe that computer-generated results are accurate. But educators are also not always in the best position to review automated outcomes and assessments, because the results are often complicated. They're based on data that is unknown, and it's difficult to go behind the output into the analytics because of the black-box nature of AI. So it's hard for teachers acting as human evaluators to know when something might have gone wrong, and often there's no good, visible signal that something has gone wrong. Generally, the reason people put these tools in place is that they save time. So if an educator has to go through everything again, it defeats much of the point of adopting an AI tool in the first place.
The last AI tool to worry about is something I suspect you all know about already: automated online proctoring tools. I'm specifically concerned about the artificial intelligence used in the automated monitoring systems that detect when students cheat. Companies refer to this as monitoring students and test takers, but it's more accurately described as surveillance, often involving audiovisual input, some digital input, and then subsequent AI analysis that is supposed to detect cheating. Although, actually, the vendors are explicitly clear that they are not trying to detect cheating; that is the teacher's job. Rather, they detect suspicious behaviors. However, these tools don't have sufficient evidence supporting their accuracy or their efficacy. They're based on the questionable assumptions that there is some normal profile of student behavior, that artificial intelligence can identify that normal profile, and that deviations from that baseline indicate something suspicious is happening, rather than just the foreseeable diversity of student physiology and environments. During the pandemic, this led to unacceptably high rates of false flags for innocent behavior. And in many cases, schools implemented these tools without giving teachers enough training or oversight, and without providing ways for students to understand what's going on, whether there is a safety net in place in case they're falsely flagged for innocent behavior, and how they might challenge problems they run into with the technology. The main problems with these tools are over-reliance on the automation, minimal transparency and explainability, and in some cases insufficient evidence supporting their accuracy and efficacy, both overall and across marginalized populations. These tools really do have great promise.
It's partly just a matter of being aware of their limitations, and not ignoring those limitations in the ambition to try something new and the hopefulness of implementing some really exciting, or exciting-sounding, tools.