As of now, using AI to complete assignments is considered an infringement of the TBS Honor Code unless explicit permission is granted. Still, its integration is a possibility. So should it be incorporated into Benjamin’s program, and why or why not?
Not long ago, the Middle School’s faculty was given a briefing on the role of artificial intelligence in an academic setting. Vanderbilt University professor Andrew Van Schaack spoke on the subject. “AI doesn’t replace the person but gives them more ability,” the professor explained. Artificial intelligence was not created to replace humans but to aid them in discovery and research. The same idea applies to academics, where AI tools should be used in moderation to build on a student’s knowledge and offer inspiration. It’s safe to assume that many TBS students have already experimented with these tools, which makes it all the more important that teachers guide their use correctly. This is not only to guarantee students are capable of completing work independently, but also to prepare them for a future where artificial intelligence will play a huge part in society. AI is a very broad term, so where does one start in understanding it?
One of the most commonly used tools among students is ChatGPT, an AI chatbot developed by OpenAI. With its extensive capabilities, ChatGPT and other AI tools can do anything from solving simple math equations to writing entire paragraphs, all based on a few user-given prompts. But this isn’t necessarily the way it should be employed by TBS students. A better approach would be to allow students to utilize the tool, but in moderation. As quoted above, AI should assist humans, not replace them. With its advanced intelligence, AI can be used to generate recommendations: by entering detailed prompts, students can receive a library of concepts to build on. Instead of using AI to write paragraphs, students can use tools such as Grammarly to fix mistakes. This method will not only continue to foster students’ creativity, one of TBS’s main goals, but also educate them on AI’s purposes. Before entrusting students with this privilege, however, educators must gauge AI’s reliability.
Like anything else, AI has its flaws. In this case, the biggest is accuracy. It’s essential for educators to investigate the tool before implementing it. Despite its undeniable competence, there are plenty of reports of artificial intelligence’s imprecision. A fault like this is problematic, especially given how much trust humans place in AI. However, there are several reasons for these errors, and understanding them helps users get better results. One of the most common mistakes people make when using AI is writing bad prompts. Without detailed instructions, AI can’t generate proper answers; after all, the tool is modeled after the human brain, which requires the same. An easy solution is to write more thoughtful prompts. The more considerate and specific the prompt, the higher-quality the answer will be. For example, “Suggest three possible topics for an essay on the causes of World War I, and explain each in one sentence” will produce a far more useful response than “Write my history essay.” Moreover, weak prompts can also result in AI hallucinations, in which the AI begins to make up answers, unintentionally misleading users into believing they are fact. This creates a problem for students trying to gather research and complete homework accurately. There is little that can be done about this from a student’s position, but there are some ways to avoid mistakes: the prompt-reworking mentioned above, along with keeping AI software up to date. AI is improving every day, and updates are bound to include bug fixes that decrease the probability of hallucinations. With this in mind, it is important to update AI programs whenever possible; the quality and accuracy of answers will vary depending on the version being used, and newer versions will have more information than previous ones. Still, problems like hallucinations are not yet fully resolved. Developers are working hard to prevent these errors, but their constant recurrence makes it quite difficult. AI is advancing rapidly, but people must make do with the versions available now.
These issues will affect teachers, too. Beyond hallucinations and similar errors, there are other glitches that may trip teachers up. One of the most significant is false positives. Even though AI is becoming an increasingly common tool, many institutions, including TBS, have policies against it. Of course, many students can bypass these rules with loopholes as simple as rewording AI’s writing. This creates an issue for teachers when grading work. While many teachers are able to differentiate between the work of their students and that of an AI, many actually turn to AI itself for help. There are countless sites designed to help teachers detect AI use, but make no mistake: as of now, none of them are 100% trustworthy. The closest thing would be Grammarly’s plagiarism check tool, but that caters more to students in this case. This is not to say AI detection is completely unreliable, but these detectors frequently produce false positives, wrongly flagging a person’s own writing as AI-generated. This leads to students being wrongly accused of cheating, something that may result in undeserved consequences and lasting effects on their academic careers. Students are left feeling wronged, while teachers become skeptical of the student’s true integrity. However, students always have the right to advocate for themselves. If you’re wrongly accused of using AI, it’s a good idea to consult with the teacher and your parents; a conference can help students prove themselves and earn back their teacher’s trust. Of course, teachers should not be relying on AI detectors in the first place. No matter how promising they appear, all current detectors are full of imperfections. A more reliable method for teachers to check students’ work would be to do so themselves. Nothing compares to a teacher’s knowledge of a student and of their writing style. AI detectors will never have access to the connection formed between a student and teacher. It’s easy for most teachers to tell whether a student’s work is really original because they’ve read and graded previous pieces and know each student’s unique style of writing. An AI detector cannot make those links between a student’s past and present work, and therefore it makes assumptions. It’s best for teachers to trust their own judgment, for now at least.
While AI continues to evolve, people must learn how to properly use it. By the time most of TBS’s current students graduate, AI will be far more advanced than it is today, and hallucinations and other errors may be largely solved, or close to it. As artificial intelligence continues to change, we must adapt. AI is bound to become an even bigger part of daily life as it improves, so it’s wise to begin educating students on its benefits and problems early on. And chatbots like the ones described here are only one form of AI; there’s plenty more to explore, and students have plenty of time to do so. Artificial intelligence is complex, but if teachers can properly guide students, it will greatly benefit them in the future. So, how will TBS work to help its students master this tool?