On Reddit, joke comments (usually a bad joke, a low-effort meme, or a pop culture reference) were the absolute worst kind of spam, destroying the readability of comment threads.
That sort of content belongs in its own space, not polluting places that are still worth reading.
It’s definitely possible, but such an AI would probably be good enough to take over every other field too. So it’s not like you can avoid it by choosing something else anyway.
And the disruption would be large enough that governments would have to react in some way.
The first step would be tagging posts/comments to clearly separate those meant as pure opinion from those meant as factual claims. Then tagging claims as sourced, unsourced, disputed, misleading, omitting crucial details, etc. Then tagging things like how confident the poster is in what they’re saying (e.g. from “I heard it somewhere” to “I’ve seen it with my own eyes on multiple occasions”).
Then you would need easy-to-inspect metadata showing the sourcing chain all the way back to the origin, and the ability to comment on it (e.g. when a source’s claims are misinterpreted and the source doesn’t actually claim the thing).
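To make the idea concrete, here is a minimal sketch of what such tagging metadata and a sourcing chain might look like as a data structure. All the names (`TaggedClaim`, `SourceLink`, etc.) are hypothetical, invented for illustration; nothing here is an existing platform’s schema.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Hypothetical schema — a sketch of the tagging scheme described above.

class ClaimKind(Enum):
    OPINION = "opinion"
    FACTUAL = "factual"

class Sourcing(Enum):
    SOURCED = "sourced"
    UNSOURCED = "unsourced"
    DISPUTED = "disputed"
    MISLEADING = "misleading"

class Confidence(Enum):
    HEARSAY = 1      # "I heard it somewhere"
    SECONDHAND = 2   # read it in a source they trust
    FIRSTHAND = 3    # "seen it with my own eyes on multiple occasions"

@dataclass
class SourceLink:
    url: str
    # Each link can point at its own source, forming a chain to the origin.
    parent: Optional["SourceLink"] = None
    # Readers can annotate a link, e.g. "source doesn't actually claim this".
    annotations: list[str] = field(default_factory=list)

    def origin(self) -> "SourceLink":
        """Walk the chain of parents back to the original source."""
        node = self
        while node.parent is not None:
            node = node.parent
        return node

@dataclass
class TaggedClaim:
    text: str
    kind: ClaimKind
    sourcing: Sourcing
    confidence: Confidence
    source: Optional[SourceLink] = None

# Example: a factual claim traced back two hops to its origin.
origin = SourceLink(url="https://example.org/original-study")
repost = SourceLink(url="https://example.org/news-article", parent=origin)
claim = TaggedClaim(
    text="X causes Y",
    kind=ClaimKind.FACTUAL,
    sourcing=Sourcing.SOURCED,
    confidence=Confidence.HEARSAY,
    source=repost,
)
print(claim.source.origin().url)  # → https://example.org/original-study
```

The point of the `origin()` walk is that a reader could inspect the whole chain rather than trusting the last repost, and attach annotations at whichever hop the misinterpretation happened.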
Then you would need people to actually care about facts, even when the facts go against their existing beliefs or preferences.
People would also need to be able to think with varying degrees of uncertainty built in, not just “this is definitely true”/“this is definitely false” (unless there is enough evidence to back that up).