This isn’t a Large Language Model, it’s an image generation model. And given that these models merely reflect humans’ biases and stereotypes, doesn’t it follow that humans should also be kept far away from decision-making processes?
The problem isn’t the tool, it’s the lack of auditable accountability. We should have auditable accountability in all of our important decision-making systems, whether it’s a biased machine or a biased human making the decision.
Voting creates a signal about the quality of a post so that other users can rank posts based on the collective perspective. You don't vote for yourself; you vote to help other users.