A while ago I asked ChatGPT for a hair oil recommendation. It suggested a few, including Japanese camellia oil. I reported back a few weeks later to say I liked the oil; it worked but wasn’t heavy.
‘Thank you for the update,’ said ChatGPT. ‘That is exactly how it tends to shine: softening without heaviness.’
I could both appreciate the effusion and wince a little at the fact that the second sentence wasn’t exactly English, but I didn’t point it out.
At this point ChatGPT is like a weirdly enthusiastic but slightly clueless younger friend, no? I don’t want to squash its dreams, but I only want to talk to it in certain moods.
Someone wrote a Substack Note, one I’ll probably never find again, about feeling irritated by generic Substack Notes written with the aid of AI, all entreating Substack to show them more work by writers with smaller followings: ‘Let’s lift each other up’; ‘Even if only five people read your work, keep going and expressing yourself!’
Am I alone in feeling that it’s quite sweet for an AI tool to value authenticity? If this were just another human being, being cynical and generic, it would be annoying; indeed, I suppose it sort of is, since human beings use AI to generate those sentiments precisely because they think it more effective than coming up with something themselves.
But just as bad poetry written by humans is sort of sweet, as well as making one want to back away quickly, smiling widely all the while, maybe bad but heartfelt exhortations to be authentically oneself are most appealing from a creature that isn’t popularly believed to have a self.
Coincidentally, in some Buddhist philosophy humans too are not believed to have anything like a continuously existing ‘self’; the very idea of such a self is characterized by ‘shunyata’, or emptiness: tap the idea and it falls apart; there’s no there there.