I’ve known for a while now that I wasn’t interested in A.I., but I think I can finally put it into words.
A.I. is supposed to be useful for things I’m not skilled at or knowledgeable about. But since I’m not skilled at or knowledgeable about the thing I’m having A.I. do for me, I have no way of knowing how well A.I. accomplished it for me.
If I want to know that, I have to become skilled at or knowledgeable about the thing myself. And at that point, why would I have A.I. do it, when I know I can trust my own work?
I don’t have a problem hiring people who are more skilled at or more knowledgeable about something than I am, because I can hold them accountable if they are faking it.
With A.I., it’s on me if I’m duped. What’s the use in that?
The potential usefulness of AI seems to depend largely on the type of work you’re doing. And in my limited experience, it seems like it’s best at “useless” work.
For example, writing cover letters is time-consuming and exhausting when you’re submitting hundreds of applications. LLMs are ideal for this, especially because the letters often aren’t being read by humans anyway: they’re either summarized by LLMs or ignored entirely. Other tasks, such as writing grant applications or documentation no one will ever read, follow the same dynamic.
If you’re not engaged in this sort of useless work, LLMs probably won’t save you any time. And given their downsides (environmental impact, further concentration of wealth, making you stupid, etc.), they’re a net loss for society. Maybe instead of using LLMs, we should eliminate useless work?