You Can’t Trust What You Can’t Verify
It’s been wild watching the AI boom unfold in real time. Whether I’m on X, attending events, or just following the money, it’s clear: AI is dominating the tech landscape.
If you’ve noticed I’ve been posting less lately, it’s because I’ve been head-down — working on real-world implementations and trying to make a difference where it counts. I’ve also been spending time thinking through the deeper shifts underway, and this post is my attempt to reflect and share some of those thoughts. For those asking about new episodes of the SSI Orbit Podcast — don’t worry, there are some great things in the works. But building takes time, and that’s where I’ve been focused for most of this year.
The current energy reminds me of the 2017–2018 crypto wave, when blockchain started popping up in every investor pitch deck. Slap “decentralized” on your homepage and funding followed. But this time, it feels different. Not only is AI moving faster — it’s already landed in enterprise, hard. And beyond productivity apps and copilots, AI is becoming critical infrastructure: fuelling the next generation of robotics and automation, defence systems, pharma breakthroughs, and national R&D priorities.
From Big Tech to banks, AI is being embedded into every layer of operations. Employees are using tools powered by models trained on their own company’s data. Productivity apps like Microsoft 365, Google Workspace, and Zoom have generative features now baked in by default. Even design tools and customer service platforms have shipped copilots. Distribution isn’t just solved — it’s ubiquitous.
And with that speed comes a fundamental shift in how value is created, distributed, and consumed across the internet. It’s exciting — full of potential — but also deeply transformative. We’re watching new models emerge in real time, and many of them are here to stay.
Disruption Is Already Here
We’re not talking about theoretical futures. The fallout is happening now. Entire sectors are being redefined — or quietly sunsetted — in real time.
Take SEO. I was recently speaking to someone at a boutique SEO firm who said, bluntly, “This industry’s already dead — it just doesn’t know it yet.” And I get it. If search results are being bypassed in favour of direct, AI-generated answers, then the value of ranking strategies and keyword optimization drops fast. Add to that the shifting economics of online advertising, and you’ve got a business model teetering on the edge.
Another area where we’re already seeing pressure is in what I’d call “process ownership” roles — the kind of work typically assigned to junior employees or associates in large enterprises. These roles often aren’t strategic; they’re about keeping things moving — compiling market research, preparing reports, running checklists. And frankly, most of it can now be done better, faster, and cheaper by AI.
If I need a recurring market scan on trends in a sector, I don’t need to assign that to someone anymore. I can spin up an AI agent that monitors new filings, product announcements, funding rounds, and news — and delivers me an executive-ready brief.
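To make that concrete, here’s a minimal sketch of such an agent in Python, assuming the feedparser library for pulling sources. The feed URLs are invented, and summarize() is a hypothetical placeholder for whatever LLM client you’d wire in; this is a sketch of the pattern, not a finished product.

```python
# Toy "market scan" agent: pull recent items from a few feeds,
# then hand them to a summarizer that produces a short brief.
import feedparser  # pip install feedparser

FEEDS = [
    "https://example.com/sector-news.rss",    # hypothetical sources
    "https://example.com/funding-rounds.rss",
]

def collect_items(feeds: list[str], per_feed: int = 10) -> list[str]:
    """Gather recent headlines and links from each feed."""
    items = []
    for url in feeds:
        for entry in feedparser.parse(url).entries[:per_feed]:
            items.append(f"- {entry.title} ({entry.link})")
    return items

def summarize(items: list[str]) -> str:
    # Placeholder: in practice you'd call an LLM here to turn raw
    # headlines into an executive-ready brief.
    return "Weekly market scan:\n" + "\n".join(items)

if __name__ == "__main__":
    print(summarize(collect_items(FEEDS)))
```

Put that on a schedule and the recurring market scan stops being anyone’s job.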
It raises a question: What happens to that broad layer of the org chart — the wide base of the pyramid? Most large companies are structured with many people at the bottom and fewer as you go up. But what if the decision-makers at the top can now directly interact with intelligent systems that give them the answers, insights, and summaries they need — without layers of manual interpretation?
We’re already seeing hiring freezes, restructured teams, and a shift in what “entry level” even means. It’s not that these jobs are going away overnight — but the path into an organization, the type of work you do early on, and the number of people needed to do it — all of that is changing.
And that organizational shift is just one part of a much larger realignment. Beyond how teams are staffed, we’re seeing major changes in how information flows, how audiences are reached, and how people decide what to trust. AI might be reshaping industries and disrupting existing models — but one thing it’s not doing is slowing down our use of digital services. In fact, demand is growing.
People want faster, simpler, more seamless access to services — from shopping and payments to government services. The push for low-friction, real-time digital interactions is only accelerating. But as more and more of life moves online, we still lack a fundamental tool: the ability for individuals to verify what they’re seeing — to evaluate the authenticity of a message, a source, or a request.
Most authentication systems today are designed for enterprises to verify their users. But when it comes to the reverse — giving everyday people the ability to verify the legitimacy of an organization, a website, or a payment request — we’ve made very little progress. And that asymmetry is where much of today’s online harm originates.
If a person gets tricked by a fraudulent site or scam ad, they pay the price — not the platform, not the brand, and not the attacker. And with AI making it cheaper and faster to generate false representations at scale, we need to equip individuals with tools to make their own trust decisions, not rely on platforms to police the problem after the fact.
We Still Rely on Representations
Every digital interaction involves a claim — or a representation of something. It might be a person asserting their identity, a file suggesting who authored it, a website representing itself as a legitimate business, or even a message requesting a payment or transfer of value. Whether structured or implicit, these are all forms of digital assertions — and without a verifiable link to something trustworthy, they’re just noise.
Throughout the rest of this post, I’ll use the word “representation” instead of “claim.” I think it’s a more intuitive frame: every time we receive a request, interact with content, or make a decision online, it helps to ask — what representation is being made here, and do I trust it?
When Representations Can’t Be Trusted
The internet has never been great at separating signal from noise. And with AI accelerating the volume of content and interactions — while simultaneously lowering the cost of production — the noise is growing exponentially.
We’re already seeing an explosion in:
- AI-generated content impersonating real people.
- Phishing websites using cloned branding to steal credentials.
- Fake job offers, invoices, and social accounts distributed at scale.
This isn’t speculation; it’s happening. In Canada, reported fraud cases have nearly doubled over the past decade, with losses expected to exceed $500 million this year alone¹. Across the EU, attempted payment fraud rose by 43% in 2024², with deepfakes and AI-generated phishing playing a growing role. As the cost of making convincing digital representations drops, the volume of false ones is only going up.
And yet, our collective strategy to combat this still hinges on after-the-fact detection. We blacklist domains. We try to verify deepfakes after they’re viral. We analyze behaviour post-transaction to flag fraud. But these approaches are losing the race.
If a representation isn’t signed at the moment it’s made, it’s already too late. That’s not just a nice-to-have — it’s a foundational principle we need to build around.
What Signing Really Means
To trust any representation, it needs to be signed. Not metaphorically. I mean literally — it needs to be tied to a signature that can be resolved against a trust framework. Whether it’s a cryptographic signature or another verifiable mechanism, the point is the same: without a link to an authority or context that the person on the receiving end considers trustworthy, the representation floats in a sea of doubt.
Let’s make that concrete. Here are examples of everyday digital representations:
- An email presents itself as being authored by a government department.
- A website presents itself as a legitimate bank.
- A piece of content claims to be human-generated or to depict real people.
- An ad on a social media platform claims to represent a known brand.
- A message requests a payment or transfer of value, claiming to come from a trusted source.
In all of these, what matters is whether that representation is backed by something more than just pixels on a screen. Has someone we trust staked their reputation — or infrastructure — on this being true?
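To make the mechanics concrete, here’s a minimal sketch using Ed25519 signatures from Python’s cryptography library; the payment-request payload is invented for the example. Note what verification does and doesn’t give you: it proves the bytes are intact and bound to a key, but not yet whose key it is. That second step is where trust frameworks come in.

```python
# Sign a representation (here, a payment request) and verify it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

representation = b'{"type": "payment_request", "payee": "Acme Corp", "amount": "1500.00 CAD"}'

private_key = Ed25519PrivateKey.generate()  # held by the claimed sender
public_key = private_key.public_key()       # shared through some resolvable channel

signature = private_key.sign(representation)

try:
    public_key.verify(signature, representation)  # raises if tampered or mis-signed
    print("Signature valid: the representation is intact and key-bound.")
except InvalidSignature:
    print("Signature invalid: treat the representation as noise.")
```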
Without that backing, we’re forced to rely on surface-level indicators. And as I argued in my TLD post earlier this year, even foundational internet markers like domain names have become unreliable trust signals. A .com doesn’t mean anything if phishing kits can spoof Fortune 500 brands with ease.
You Can’t Fix Trust After the Fact
Another comparison I often make is to the world of anti-money laundering (AML). Financial institutions spend over $274 billion per year globally on compliance, much of it trying to detect suspicious activity after it has occurred³. Yet the UN still estimates that less than 1% of laundered money is ever recovered⁴.
We’re applying the same flawed logic to the internet. We’re building huge analytical infrastructures to detect fraud, spam, impersonation, and abuse after the representation or transaction is out in the wild. And with AI ramping up the volume, velocity, and believability of false representations, that model breaks down.
It’s like trying to bail out a sinking boat while someone drills new holes in the hull every second.
The Solution: Trust Must Be Built Into the Representation
Here’s the idea I keep coming back to — across sectors, use cases, and formats:
Every representation must be signed and resolvable to a trust framework.
That’s the foundation. That’s the infrastructure we’ve been building and maturing at Northern Block. And that’s what will allow the digital economy to scale with integrity, especially in an AI-dominated future.
Whether it’s a verifiable credential, a signed message, or a structured authority statement, the principle holds: trust isn’t something you can layer on later. It has to travel with the representation.
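Here’s a hedged sketch of what “resolvable to a trust framework” could look like on the verifier’s side: a representation only passes if its signature verifies and the signing key resolves to an issuer the verifier has chosen to trust. The registry structure and issuer identifiers below are invented for illustration; in practice this could be a published trust list or registry rather than a hard-coded dict.

```python
# Verifier-side check: a valid signature is necessary but not sufficient;
# the signing key must also resolve to an authority this verifier trusts.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical trust registry mapping issuer identifiers to public keys.
issuer_key = Ed25519PrivateKey.generate()
TRUST_REGISTRY: dict[str, Ed25519PublicKey] = {
    "did:example:supplier-123": issuer_key.public_key(),
}

def verify_representation(issuer_id: str, payload: bytes, signature: bytes) -> bool:
    """Accept a representation only if the issuer is in our chosen registry
    and the signature checks out against that issuer's key."""
    key = TRUST_REGISTRY.get(issuer_id)
    if key is None:
        return False  # unknown issuer: no trust path, reject
    try:
        key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # tampered or mis-signed: reject

invoice = b'{"type": "invoice", "supplier": "did:example:supplier-123", "total": "980.00"}'
print(verify_representation("did:example:supplier-123", invoice, issuer_key.sign(invoice)))  # True
```

The ordering is the point: the trust decision happens before the content is acted on, against a registry the verifier, not the platform, has chosen.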
Think about:
- Verifying that an invoice is from the right supplier before it hits your accounting system.
- Knowing that a customer support agent is actually affiliated with the company before you hand over personal data.
- Ensuring that an article or video came from a known creator before you reshare it or embed it.
This is not about centralizing trust. It’s about distributing authority in a way that lets people act as their own verifiers — making decisions based on verifiable signals, not blind faith. What matters is that the authority behind a representation is one they choose to trust, not one imposed on them.
Sources
¹ BioCatch, “Generative AI and the Fight Against Fraud,” 2024.
² Tietoevry Banking, “2024 Payment Fraud Report.”
³ LexisNexis Risk Solutions, “True Cost of AML Compliance,” 2023.
⁴ UNODC, “Global Report on Money Laundering,” 2021.
