Methodology & Transparency
How this works — and what it can and can't tell you
This survey is designed to be useful, not impressive. Here is exactly how it works, what it measures, and where the limits are.
What the survey actually measures

The survey measures how you work with AI today, not how smart you are, not how technical you are, and not how productive you are. It looks at five specific things: how well you understand and evaluate what AI produces, how deep your expertise runs in your domain, how much accountability you keep for AI output, how well you stay focused on what matters, and how steady your relationship with AI remains under pressure.

It also looks at your job: what types of work you do, how exposed those tasks are to automation, and which parts are likely to stay distinctly human for a long time. The report is built on these two things together: your capability and your job situation.


How the reports are generated

Your report is generated by AI — specifically by Claude, Anthropic's model. When you complete the survey, your answers are scored, and your role description is analysed to assess job exposure. Then Claude combines both and writes the narrative text you see in the report.

We give Claude specific instructions to write in plain language, combine your capability position with your job exposure, and say something that only makes sense when you put both together. We also tell it to be honest — not to soften real problems or overclaim strengths.

What this means in practice
The scoring is deterministic: fixed formulas, applied consistently. The narrative text is AI-generated from those scores and your role description. No human has read your specific answers before the report is generated. That is why the debrief call matters: a person will read it with you.

We are talking about the future — and the future is uncertain

The job exposure analysis is based on what AI can do today and where current capability trajectories are heading. It is not a certainty. AI is moving fast, and we update our models as things change. But we cannot tell you exactly what your job will look like in three years — nobody can.

What we can do is give you an honest read of which parts of your work are most at risk of being automated, which parts are likely to be supported by AI without being replaced, and which parts are likely to remain distinctly human for a long time. That distinction is useful for making decisions today, even if the exact timeline is not fixed.

On prediction
Job exposure ratings (High, Medium, Low) reflect current AI capability trends applied to your role description. They are a starting point for thinking, not a definitive forecast. Use them as a provocation: does this ring true for your experience? Where does your own judgment disagree?

What this report is good for

The report is designed to be food for thought: a structured look at yourself in relation to AI that most people have never taken. It is useful if you want to understand where you actually stand, not where you think you stand. It is useful as preparation for a real conversation about what to do next. And it is useful as a baseline, something to come back to after six months to see what has changed.


What this report is not

It is not a performance review. It does not say whether you are good at your job. It does not measure your intelligence, your creativity, or your potential. A low score on any capability means you have room to develop that area — not that you are behind or at risk.

It is not a scientific assessment in the academic sense. There is no peer-reviewed validation study behind the scoring system. It is built on real experience working with people navigating AI change, and it is refined continuously — but it is a practical tool, not a clinical instrument.


How to use it well

Read it with curiosity, not defensiveness. When something rings true, note it. When something does not, push back on it — your disagreement is useful information too. Bring it to the debrief conversation and tell the coach what landed and what didn't. The report is the preparation. The conversation is where it becomes real.

If you come back to it in six months, the most interesting thing will not be the score — it will be what has changed in how you think about the questions.