"The Creator." Current centre of gravity.

Trained on enormous text, image and code corpora, it produces fluent new content in response to a prompt. Every brand-name AI tool sits here.

Prompt in. Fluent content out.

A transformer model tokenizes, embeds, attends. 175 billion to over a trillion parameters. It predicts the next token in a loop until the response is complete. Text, code, structured data, images, plans. All from the same underlying mechanism.
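The predict-one-token, feed-it-back loop can be sketched in a few lines. Everything here is a toy stand-in: the vocabulary, the fake scoring function, and the `<eos>` stop token are illustrative assumptions, not how any real model is implemented; a real transformer scores its whole vocabulary with attention over the full context.

```python
import random

# Toy vocabulary and a fake "model" (assumption: real models compute a
# probability distribution over tens of thousands of tokens).
VOCAB = ["the", "model", "predicts", "next", "token", "<eos>"]

def toy_next_token(context: list[str]) -> str:
    # Deterministic fake scoring so the demo is reproducible.
    random.seed(len(context))
    weights = [len(context) % (i + 2) + 1 for i in range(len(VOCAB))]
    return random.choices(VOCAB, weights=weights, k=1)[0]

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    context = list(prompt)
    for _ in range(max_tokens):
        token = toy_next_token(context)  # predict the next token
        if token == "<eos>":             # model signals it is done
            break
        context.append(token)            # feed it back in: the loop
    return context

out = generate(["prompt", "in"])
```

The point is the shape, not the scoring: the same loop yields text, code, or structured data depending only on what the model was trained to predict.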

Strengths and limits, plainly.

Strengths

Speed, breadth, fluency in any domain it has seen training data for. Drafts, summaries, translations, code, analysis. In seconds, in any tone.

Limits

No real-time knowledge unless given tools. Confabulates when uncertain (hallucination). Billed cost scales linearly with tokens generated. Cannot truly reason from first principles.

Governance need

Human-in-the-loop review on anything customer-facing, regulated, or financial. Audit trail of prompts. Output verification before consequential action.
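An audit trail can be as simple as append-only records pairing each prompt with its output, the reviewer, and the decision. A minimal sketch; the field names are assumptions, not any standard schema:

```python
import hashlib
import time

def audit_record(prompt: str, output: str, reviewer: str, approved: bool) -> dict:
    # Hash prompt + output so later tampering with the record is detectable.
    digest = hashlib.sha256((prompt + output).encode()).hexdigest()
    return {
        "ts": time.time(),
        "prompt": prompt,
        "output": output,
        "reviewer": reviewer,   # the human in the loop
        "approved": approved,   # verification outcome before action
        "sha256": digest,
    }

log = []
log.append(audit_record("Draft refund email", "Dear customer ...", "j.doe", True))
```

In production this would go to append-only storage, but the principle is the same: every consequential output has a reviewer and a verdict on record.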

Hallucination.

The model produces text that sounds correct but is not grounded in fact. Mitigations: RAG (give the model your documents), explicit "I do not know" instructions, and verification against authoritative sources before acting.
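The first two mitigations combine naturally: retrieve a relevant document, then build a prompt that pins the model to it and gives it permission to decline. A minimal sketch under loud assumptions: the documents are invented, and real RAG systems retrieve with embedding search rather than the word-overlap scoring used here.

```python
# Invented knowledge base (assumption: stand-in for "your documents").
DOCS = [
    "Refunds are processed within 14 days of the return arriving.",
    "Premium support is available on weekdays from 9 to 5.",
]

def retrieve(question: str) -> str:
    # Crude relevance score: count words shared with the question.
    q = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        f"Answer using only this context: {context}\n"
        "If the context does not contain the answer, say 'I do not know'.\n"
        f"Question: {question}"
    )

prompt = build_prompt("How long do refunds take?")
```

The grounded context and the explicit "I do not know" escape hatch attack the same failure from both sides: the model has the fact, and it has a sanctioned way out when it does not.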

What stops a bad answer from shipping?

If the answer is "the user", you have an audit problem waiting to happen. Build the verification step into the workflow, not the human reading habit.
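Building the check into the workflow can be as small as a gate the answer must pass before it ships. A sketch; both checks are hypothetical placeholders for whatever verification your domain actually requires:

```python
def ship(answer: str, checks: list) -> str:
    # Run every check; the pipeline, not the reader, blocks failures.
    failed = [name for name, check in checks if not check(answer)]
    if failed:
        raise ValueError(f"blocked by checks: {failed}")
    return answer

# Illustrative checks only; swap in citation checks, schema validation,
# policy filters, or a human sign-off step.
checks = [
    ("non_empty", lambda a: bool(a.strip())),
    ("no_placeholder", lambda a: "TODO" not in a),
]

shipped = ship("The refund window is 14 days.", checks)
```

A bad answer now raises an error with a named reason, which is also exactly the audit trail an examiner will ask for.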

Want the boardroom version of this?