Oh sweet baby Jesus no, burn AI to the ground, humanity can't be trusted with it

Chimpzy

Simian Abomination
Legacy
Escapist +
Apr 3, 2020
12,872
9,309
118
I don't know about the rest of you, but if the AI known for confidently spouting bs claims it wrote ... every ... single ... assignment of an entire class, I'd get a bit suspicious.

Then again, having a PhD does nothing to preclude someone from being an idiot.
 

Ag3ma

Elite Member
Jan 4, 2023
2,574
2,208
118
I don't know about the rest of you, but if the AI known for confidently spouting bs claims it wrote ... every ... single ... assignment of an entire class, I'd get a bit suspicious.

Then again, having a PhD does nothing to preclude someone from being an idiot.
Indeed. At minimum, you could just Google for an AI-detection tool. Although, honestly, any halfway competent university should be on top of this already and have provided its staff with guidance on how to check for AI-assisted scripts.

In fact, I just checked through a coursework assignment, and about a sixth of the class were identified as having a significant proportion of their work written by AI. However, AI checkers can also turn up false positives, so flagged work has to be reviewed by staff to see whether anything actually looks wrong. In the end, I've submitted only a fraction of those flagged for further investigation and possible disciplinary action.
 
  • Like
Reactions: Chimpzy

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
If the work is written by AI, isn't it likely to be pretty bad and full of inaccuracies anyway?
 

Ag3ma

Elite Member
Jan 4, 2023
2,574
2,208
118
In all seriousness, the best use I've seen for AI writing is to create bad fanfics faster than humans can. And even then they struggle. Now, maybe in the future they will be a concern, but right now they seem borderline useless.
I disagree.

I think for relatively high-level work (final year and postgraduate), AI might struggle. However, if someone's smart about using it, it's extremely hard to tell. For instance, you can constrain the AI: tell it to write an essay on a certain topic using certain references. It'll get the references right, and the information will probably be accurate. There are things in terms of detail and sophistication the AI is not good at, but in the first couple of years of a bachelor's degree, it'll probably generate something that will at minimum pass, and could even be marked as good.

There are giveaways of AI text. You can get odd repeats of information. The sentences are usually short and in very plain English, with minimal to no errors in spelling, grammar and sentence construction, so if you have examples of the student's written work in a different style, it might stick out. AI also defaults to US spelling, so if you're in a country that doesn't use US spelling, that can be a giveaway. And so on.
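Just as a toy illustration of those giveaways (the word list and the whole idea of scoring are made-up assumptions on my part, not a real detector):

```python
import re

# Toy sketch of the stylistic heuristics above: short, uniform sentences
# and US spellings as weak signals. The US_SPELLINGS list is illustrative
# only -- real AI checkers need human review because of false positives.

US_SPELLINGS = {"color", "favor", "organize", "analyze", "center", "labor"}

def style_signals(text):
    # Split into sentences and measure their length in words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    avg_len = sum(lengths) / len(lengths) if lengths else 0.0
    # Collect any US spellings that appear in the text.
    words = {w.lower().strip(".,;:()\"'") for w in text.split()}
    us_hits = sorted(words & US_SPELLINGS)
    return {"avg_sentence_words": avg_len, "us_spellings": us_hits}

sample = "We analyze the data. The color scheme is clear. Results organize well."
print(style_signals(sample))
```

Obviously nothing like this would be evidence on its own; it's the kind of weak signal that only makes sense alongside a marker actually reading the work.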

There's also an element of structure. If you look at two AI-generated essays on the same assignment, they tend to follow almost exactly the same structure - and I mean very, very similar. However, whether you can take action against a student is hard to say: the risk is that you penalise a student who simply has a very formulaic style, and I'm not sure misconduct processes will support finding a student guilty just because the style and structure of their essay is very similar to another's (and just like what an AI would produce).
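For illustration, that structure comparison could be sketched like this (a crude outline-similarity toy of my own devising, not anything you'd hang a misconduct case on):

```python
import difflib

# Illustrative sketch only: compare the paragraph-level "shape" of two
# essays with a plain sequence-similarity ratio. The outline
# representation is a crude assumption, and a human writer with a
# formulaic style could score just as high as an AI.

def structure_similarity(essay_a, essay_b):
    # Represent each essay by the opening words of each paragraph,
    # a rough stand-in for its structural outline.
    def outline(essay):
        return [" ".join(p.split()[:3]) for p in essay.split("\n\n") if p.strip()]
    return difflib.SequenceMatcher(None, outline(essay_a), outline(essay_b)).ratio()

a = "In this essay\n\nFirstly, the topic\n\nIn conclusion, we"
b = "In this essay\n\nFirstly, the topic\n\nIn conclusion, the"
print(round(structure_similarity(a, b), 2))
```

A high ratio across many submissions is exactly the pattern you notice when marking, but as I said, it's suggestive rather than proof.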
 

Ag3ma

Elite Member
Jan 4, 2023
2,574
2,208
118
If the work is written by AI, isn't it likely to be pretty bad and full of inaccuracies anyway?
No: the AI can be "directed" so it gets facts and references right. Or, a student can manually check for obvious problems and fix them.

One can argue that if a student uses a sort of "blended" approach it's potentially not a problem: they have to know the material in order to check and correct what the AI has produced, which sort of indicates they have learnt it. However, how would you tell a student who knew nothing and lucked out that the AI got it right from one who actually did a proper check? That is a major problem for whether the assessment is robust and reliable.

Basically, the conventional coursework essay is dying and will soon be dead. Universities will either have to find a replacement form of assessment, or they will have to do them as a sort of invigilated, open book exam.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
Ah, ok. I've not used AI for any academic (or pretending-to-be-academic) purpose, but I've usually struggled to get useful information from it when asking questions with objective, factual answers.
 

Terminal Blue

Elite Member
Legacy
Feb 18, 2010
3,923
1,792
118
Country
United Kingdom
One can argue that if a student uses a sort of "blended" approach it's potentially not a problem: they have to know the material in order to check and correct what the AI has produced, which sort of indicates they have learnt it. However, how would you tell a student who knew nothing and lucked out that the AI got it right from one who actually did a proper check? That is a major problem for whether the assessment is robust and reliable.
I mean, to be fair I think it's often already extremely difficult to accurately assess the quality of academic work.

We like to pretend academic writing is some neutral, objective medium for conveying factual information, but the truth is that it is also a literary style. A knowledge of academic writing can already be used to cover weak research or to give the appearance of credibility. Hence why Jordan Peterson somehow got a job.

My personal feeling, particularly as a neurodiverse person, is that anything that minimizes the barriers between brain and text is potentially a positive asset. Technology has completely changed a lot of things already, and an older person could easily say that using an automated citation manager or online research platform is "cheating" compared to what research used to entail. I'm not sure there's any reason to cry foul at people using computers to synthesize writing in an academic style. In fact, I tend to see that kind of democratization of the knowledge economy as broadly a good thing.
 
Last edited:

Kwak

Elite Member
Sep 11, 2014
2,343
1,873
118
later it was rebranded as an AI chat companion (an autonomous character designed to learn which answers sound more empathic towards the user) with a customizable animated avatar. It became very popular during the pandemic (as opportunities for social interaction with real people became limited and people still needed to interact with someone).
Hmm, I wonder if my life and mental health could be improved by engaging an artificial intelligence to talk to instead of just rambling to myself?
Should I take the final step in embracing our dehumanising dystopia of technological marvels?
 
  • Like
Reactions: RhombusHatesYou

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
If you lose your case because your lawyer is a total clueless muppet who got ChatGPT to do their work for them, is there anything you can do?
 
  • Like
Reactions: Leg End

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,145
3,888
118
On a related note, reliance on calculators to get the right answer means people don't recognise an obviously wrong answer as readily, such as when they hit the wrong button without realising it.
 

CaitSeith

Formely Gone Gonzo
Legacy
Jun 30, 2014
5,374
381
88
Hmm, I wonder if my life and mental health could be improved by engaging an artificial intelligence to talk to instead of just rambling to myself?
Should I take the final step in embracing our dehumanising dystopia of technological marvels?
I doubt it. The AI would probably end up rambling back at you (no offense intended; the AI would just learn it from you).