Something odd has happened in the last two years.
When we pitch to a new client, we have to earn trust. It’s a slow, layered process: showing up consistently, creating familiarity long before the first conversation, then backing it up with our website, our work, our track record and the way we speak about our craft. It takes time to develop trust, and it’s important that it does: time for you to be sure we’re aligned with how you want to do business, time to do your due diligence and quietly watch from the sidelines, gathering the evidence that tells you why you’d want to work with us.
Even when we read an article, most of us instinctively audit the source. Who wrote it? For which publication? What angle are they known for? We weigh the information before we let it shape an opinion.
Yet, when people use AI, that critical instinct seems to vanish.
More and more, I’m seeing people accept machine-generated answers without hesitation, without scepticism, and without even a basic sense-check. The same people who would question a journalist’s bias or a marketer’s intentions seem perfectly happy to accept an output simply because it arrived wrapped in an algorithm.
AI is extraordinary. It is useful. It is transformative. But it isn’t neutral, and it isn’t omniscient.
If anything, the rise of AI should make our own judgement sharper, not duller. We should question why a result appears, as much as what it says. We should analyse information from a machine with the same scrutiny we apply to a human.
We know humans can be persuasive, that they vary in how well they communicate ideas, and that they interpret facts through their own lens. We need to remember that AI is simply scooping up all of that human input, mixing it with other AI-generated output, and serving it back according to what you want to hear and what you have heard before.
Trust still has to be earned, even when it comes from silicon.