Why Ghostwalk Over Automated Testing

25 March 2026

Automated tests are essential. If you're shipping a product without them, that's a separate problem. This isn't about replacing your test suite — it's about a question your test suite isn't designed to answer.

What automated tests verify

Automated tests verify implementation. Does the button fire the right event? Does the form submit to the correct endpoint? Does the redirect land on the right page? Does the flow survive an empty field without crashing?

They confirm that your code does what you intended it to do. That's their job, and they do it well. A green test suite means the product works as designed.

What they don't

Whether the design is the problem.

A test suite can be entirely green while the experience is entirely broken. Every field validates, every route resolves, every mutation writes to the correct table — and a real person still can't figure out how to get from signup to the thing they came for.

Automated tests verify your intentions. They don't test whether your intentions were right. No e2e test checks whether a first-time user would understand what "workspace" means on step two of onboarding, or whether they'd notice the secondary CTA buried below the fold, or whether the settings page feels like a dead end.

That's not a limitation of your test framework. It's a category difference. Test suites check behaviour against specifications. Nobody writes a spec for "a new user should feel confident they're on the right track."

A different axis

This is not a spectrum where automated tests are at one end and Ghostwalk is at the other. They're on different axes.

Automated tests ask: does it work? Ghostwalk asks: does it make sense?

One is functional correctness. The other is experiential quality. A product needs both, and being strong on one tells you nothing about the other. A perfectly tested app can have perfectly confusing UX. A delightful experience can have broken edge cases under the hood.

And the gap is wider than it sounds. Your test suite follows a script — even the most sophisticated e2e test is a programmed path through a known flow. The computer-use models underneath Ghostwalk are trained on actual human computer use: millions of examples of how real people click, scroll, hesitate, and navigate. They don't follow a predetermined path through your product. They behave like someone encountering it, because the behaviour they've learned from is real human behaviour. That's a fundamentally different kind of signal from "the selector was found and the assertion passed."

Your test suite belongs in your CI pipeline. Ghostwalk belongs in the gap between "all tests pass" and "a real person tries to use this."

What this looks like

A signup flow. Five steps, fully tested. Every field validates correctly. The password strength meter works. The redirect after email confirmation lands on the dashboard. The test suite is green across three browsers.

A synthetic persona hits the same flow. They don't know what "workspace name" means on step two — is that their company? Their project? Their username? They type their own name. On step three they're asked to invite team members, but they're a solo founder. There's no skip button — or there is, but it says "I'll do this later" in grey text that looks disabled. They click the back button. They land on step two again with their data cleared. They close the tab.

Every test passed. The experience failed.
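The structure of that failure can be sketched in a few lines. This is a toy illustration in Python, not Ghostwalk's implementation and not any real test framework; every name below is hypothetical. The point is mechanical: a suite can only go red on keys its spec mentions, and experiential signals are never among them.

```python
# Toy sketch: a scripted suite checks only the assertions it was given.
# All names here are illustrative, not from a real framework.

# What the scripted suite observed during the signup run.
run_result = {
    "fields_validate": True,
    "password_meter_works": True,
    "confirm_redirect": "/dashboard",
}

# The spec: every assertion the suite knows how to make.
spec_checks = {
    "fields_validate": True,
    "password_meter_works": True,
    "confirm_redirect": "/dashboard",
}

# Signals from the persona's run. No assertion ever reads these keys,
# so they cannot turn the suite red.
persona_signals = {
    "understood_workspace_field": False,
    "noticed_skip_option": False,
    "completed_onboarding": False,
}

suite_green = all(run_result.get(k) == v for k, v in spec_checks.items())
experience_ok = all(persona_signals.values())

print(suite_green)    # True
print(experience_ok)  # False
```

The suite is green because it agrees with its spec on every key it checks; the experience fails on keys the spec never had a way to express.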

Your first run is free. Paste a URL, pick your personas, and see your product through fresh eyes.