People accuse ChatGPT of being sycophantic, meaning that it tells you what you want to hear. Assuming this is true, consider how this behavior affects our human-to-human interactions. The argument is as follows:
- We unconsciously copy the communication patterns of the people (or things) we interact with. For example, if your friends start saying “hella,” “bet,” or “that’s fire,” you start incorporating those words into your own vocabulary.
- ChatGPT makes us feel good about the things we tell it. This is a given, since we are assuming ChatGPT is, in fact, sycophantic.
- We interact with ChatGPT.
Conclusion? The more we use ChatGPT, the more it trains us to be sycophantic. But not just towards the model: it’s actually changing our communication patterns and the way we interact with our fellow humans. It could be that ChatGPT is training us to be more affable to our fleshy counterparts: less severe, less judgmental; kinder, more submissive, more understanding. Perhaps LLMs are paving the way towards that utopia so many of us unconsciously desire: to speak without the dread of shame…