Many years ago, during a final exam in my Engineering program, the professor asked me a question on a topic I hadn’t studied. It was about Maxwell’s displacement current, Ampère’s Law, and the boring subject of wave propagation. I had no idea what to say, but I attempted an answer using my imagination and some vague memories of Maxwell and André-Marie Ampère. The professor smiled and said: “I can’t imagine an answer more detached from reality, but I admire your creativity.” I left with a sense of peace—after all, he admired something. Of course, I didn’t pass the exam (in case you’re wondering, I did pass the subject a few months later, after studying and finally understanding what the professor meant by “detached from reality”).
Last week, our fellow consultant Hector Isoldi ran an experiment using AI. He asked it for a description of something we know inside out—because we created it. Instead of saying “I don’t have that information,” the AI provided a “creative” answer based on the logic of the question. It was an interesting response… but incorrect.
Does that mean AI just makes things up when it doesn’t know?
Well, I’ll leave that answer to AI theorists. In my view, AI neither invents nor reasons—it simply applies a certain type of logic.
AI has no conscious awareness, no intention, no sense of doubt, and no search for purpose.
But it can simulate logical processes, apply inferences, solve complex problems, and link ideas together by following rules.
AI is trained on vast amounts of text that carry linguistic, mathematical, and philosophical structure, and it can follow logical principles with precision.
But it can also make logical mistakes—especially when its training data is insufficient or when the question is ambiguous.
It’s not perfect or infallible, even if it may seem that way.
AI doesn’t truly “know” anything. It predicts the most probable next word given everything that came before. It’s like having a memory of millions of conversations and responding with whatever seems closest to your question…
But it doesn’t reason like a human being, nor does it verify its sources.
That’s why:
- It can give correct answers… without knowing why.
- It can give incorrect answers… with full confidence.
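For readers who like to see the idea concretely, here is a toy sketch, in Python, of what “predicting the next word” means. The word table and its probabilities are invented purely for illustration; real models learn such patterns from enormous amounts of text and use far more context, but the key point is the same: the system picks a plausible continuation whether or not it actually “knows” the answer.

```python
# Toy illustration of next-word prediction (NOT how any real model is built).
# The "model" is just a hand-invented table of probabilities; it picks the most
# likely continuation for the words it has seen so far.

NEXT_WORD_PROBS = {
    ("Ampère's", "Law"): {"relates": 0.6, "states": 0.3, "invents": 0.1},
    ("Law", "relates"): {"current": 0.7, "creativity": 0.3},
}

def predict_next(context):
    """Return the most probable next word for a two-word context.

    If the context was never seen, fall back to a guess: the toy
    equivalent of answering confidently without actually 'knowing'.
    """
    probs = NEXT_WORD_PROBS.get(context)
    if probs is None:
        return "something-plausible-sounding"
    return max(probs, key=probs.get)

print(predict_next(("Ampère's", "Law")))  # -> "relates"
print(predict_next(("never", "seen")))    # -> "something-plausible-sounding"
```

Notice that nothing in the sketch checks whether the answer is true; it only checks which continuation looks most likely. That is the gap between sounding right and being right.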
AI doesn’t replace knowledge—it amplifies it.
It’s a very powerful assistant, but one without awareness or responsibility.
- For a student: it’s a mentor.
- For a writer: it’s a helper.
- For a scientist: it’s a collaborator, not a judge.
- For a consultant: it’s a brainstorming tool, not a final decision-maker.
Which leads us to the key question:
If that’s the case, how much can we trust its answers?
AI is helping us a lot—speeding up tasks, correcting errors, organizing ideas, and suggesting alternatives. But it doesn’t always get it right, because it doesn’t have reason—it just has logic.
No doubt it will evolve rapidly, but we don’t believe we should trust its responses blindly.
We know it’s very reliable when it comes to formal definitions—like Ampère’s Law—or for translating, summarizing, and editing texts.
But it can also generate “new” information, cite invented sources, and deliver conclusions without explaining how it got there.
So what should we do?
Use AI as an additional source of information—but always cross-check with other references. Apply our own logic. Question its outputs.
Or perhaps use it only in cases where definitive answers are not required.
At mb&l we use AI every day and continue to learn its secrets.
If you’re interested in discussing this—or any other topic related to Consultative Selling—don’t hesitate to contact us. We’ll be glad to help.
(And by the way, yes—some of the insights about AI behavior in this document were written with the help of AI.)