Every week there’s a better AI model that gives better answers. But a lot of questions don’t have better answers, only ‘right’ answers, and these models can’t reliably give them. So what does ‘better’ mean, how do we manage these things, and should we change what we expect from computers?
More practically, you can try these models on your own workflows. Does this model do a better job? Here, though, we run into a problem: there are some tasks where a better model produces better, more accurate results, but others where there’s no such thing as a ‘better’ or ‘more accurate’ result, only a right or wrong one.
Some questions don’t have ‘wrong’ answers; the quality of the output is subjective and ‘better’ is a spectrum.

[Images: the same prompt applied to Midjourney versions 3, 4, 5, and 6.1]

Better!