Cool, so pay teachers more and give them ample time and resources so they don’t need to cut corners.
Also, using ChatGPT is fine; not checking the results afterward is not.
Agreed on the teachers getting more pay and time.
And I agree that checking for objective accuracy in teaching and testing is necessary.
But… ChatGPT is not a credible source. So using it in the classroom is not exactly fine (outside of showing it as an example of a source that isn’t credible). It is in its infancy, and any educator who relies on it in the classroom is doing a considerable disservice to those they educate. It’s like teaching from Wikipedia: I get that it contains information that is often accurate, but it should never be cited as a source.
As a general commentary (about society at large, not you): far too often in this modern world, people see something that may be 50 or 75 percent accurate and claim it as fact. This is how entertainment news organizations chase ratings. And if kids are to be taught critical thinking, they must be taught how to discern what is or isn’t credible.
Otherwise we’re lost. And perhaps we already are.
You’re only considering one narrow use of LLMs (one they’re bad at). They’re great for things like idea generation, formatting, restructuring text, and other uses.
For example, I tend to write at too high a writing level. I know this about myself, but it’s still hard (with my ADHD) to remain mindful of that while also focusing on everything else that crowds my working memory when doing difficult work. I also know that I tend to focus more on what students can improve instead of what they did well.
So ChatGPT is a great tool for me to get a first pass of feedback for students. I can then copy/paste the parts I agree with for praise, then “turd sandwich” my suggestions for improvement in the middle. Or I can use ChatGPT to lower the writing level for me.
For tests, it’s great for generating a list of essay questions. You can feed GPT-4 up to 50 pages of text, too, so the content is usually quite accurate if you actually know how to write good prompts.
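For anyone curious, here’s roughly what that workflow looks like if you script it instead of using the chat window. This is a minimal sketch using the OpenAI Python client; the model name, prompt wording, and file name are all illustrative assumptions, and the output still needs a human review pass:

```python
# Minimal sketch of the workflow described above, using the OpenAI
# Python client. Model name, prompts, and file name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical source material: the unit's reading, as plain text.
with open("unit_reading.txt", encoding="utf-8") as f:
    source_text = f.read()

response = client.chat.completions.create(
    model="gpt-4-turbo",  # a large-context model; use whatever you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping a teacher draft assessment items. "
                "Use ONLY the provided source text; do not add outside facts."
            ),
        },
        {
            "role": "user",
            "content": (
                "Write 10 essay questions at a 9th-grade reading level, "
                "based strictly on the following text:\n\n" + source_text
            ),
        },
    ],
)

# A draft, not a finished test -- a human still has to check it.
print(response.choices[0].message.content)
```

The system message pinning the model to the provided text is doing most of the work there; it’s the same “feed it the source material” idea, just automated.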
I could go on. LLMs are a great tool, and teachers are professionals who (I hope) are using them appropriately. (Not just blindly copying and pasting like our students do… but that’s a whole other topic.)
They may eventually be useful in this space. But for now, they are more work than they’re worth and thoroughly discredited for proper fact-based research. When my kid’s teachers used one to generate tests, it produced completely wrong answers that the teacher didn’t bother to check.
Yes, that is the teacher’s fault, but so is using it to generate a test in the first place.
I will die on this hill. Right now, LLMs of any kind are not something to be trifled with in a critical-thinking-based curriculum. In time, perhaps. But not yet, not when LLMs are so easily manipulated (whether trained on public or private data). The various implementations haven’t earned credible trust, despite CEOs drooling over them.
It’s a shame your child’s teacher used the tool incorrectly. That was unprofessional of them.
If it helps, there are people like me running training sessions for educators to let them know what LLMs are (and are not) capable of. The main point I was pushing this year was that LLMs don’t know or understand anything. “The I in LLM stands for Intelligence.”