…wait, what, that’s not true, how can you even go about claiming that?
Of course I didn’t create LEGO Serious Play.
But ChatGPT claims I did…
It’s just so damn eager to please sometimes, it’ll say anything.
Not that I’m sitting all day asking LLMs who I am (now there’s an identity crisis waiting to happen).
Instead, I was rebuilding the Artefact Shop, and just as an experiment, was using ChatGPT as my copilot to do so. Initially, of course, I was just using it as a sub-editor – how would you improve this text, please – and getting back the sort of wind-tunnelled prose you might expect. It’s fine. I’m going to leave it up for a while, see how folk respond.
Then I idly wondered how far I could take things. What sort of response might I get if I asked about Artefact Cards generally?
It starts well enough. There’s clearly enough online for it to get a sense of the core idea.
Then the drift begins, and it starts assuming some things Artefact Cards might do, perhaps based on what other design card decks do? Finally, it disappears into an alternative dimension, where an agency called More Than Minutes created them (they are real, I checked, but mostly do conference visualisations and the like).
The glaring errors in ChatGPT, and any other LLM, are easy to spot.
It’s the small ones that are harder.
If you just give it free rein to make associations, you can only expect it to make connections freely, and you need to double- and triple-check what it produces (like the first diagram on the left). Whereas if you give it a bit more structure, bound by connections you know exist, maybe there’s less wiggle room for it to go off elsewhere.
Perhaps that’s a useful way to think about it. It’s not presenting you with a paragraph or two of opinion and facts; it doesn’t know anything. Instead, it’s bringing you back a cluster of proximate things which could be stitched together in a particular way, and which can pass you by if you don’t know any better. Sometimes it gets lucky. Often it doesn’t. And the onus is on you to know the difference.