The survey measures how you work with AI today — not how smart you are, not how technical you are, and not how productive you are. It looks at five specific things:
- how well you understand and evaluate what AI produces
- how deep your expertise is in your domain
- how much accountability you keep for AI output
- how well you stay focused on what matters
- how emotionally stable your relationship with AI is under pressure
It also looks at your job — what types of work you do, how exposed those tasks are to automation, and what will stay distinctly human for a long time. These two things together — your capability and your job situation — are what the report is built on.
Your report is generated by AI — specifically by Claude, Anthropic's model. When you complete the survey, your answers are scored, and your role description is analysed to assess job exposure. Then Claude combines both and writes the narrative text you see in the report.
We give Claude specific instructions to write in plain language, combine your capability position with your job exposure, and say something that only makes sense when you put both together. We also tell it to be honest — not to soften real problems or overclaim strengths.
The job exposure analysis is based on what AI can do today and where current capability trajectories are heading. It is not a certainty. AI is moving fast, and we update our models as things change. But we cannot tell you exactly what your job will look like in three years — nobody can.
What we can do is give you an honest read of which parts of your work are most at risk of being automated, which parts are likely to be supported by AI without being replaced, and which parts are likely to remain distinctly human for a long time. That distinction is useful for making decisions today, even if the exact timeline is not fixed.
The report is designed to be food for thought — a structured way of looking at yourself in relation to AI, which most people have never done. It is useful if you want to understand where you actually stand, not where you think you stand. It is useful as preparation for a real conversation about what to do next. And it is useful as a baseline — something to come back to after six months to see what has changed.
- A starting point for thinking about your AI readiness — not a final verdict
- A provocation — do the results ring true? Where does your own experience push back?
- Preparation for a coaching conversation — the debrief is where the report becomes real
- A snapshot in time — your situation will change, and you can reassess
It is not a performance review. It does not say whether you are good at your job. It does not measure your intelligence, your creativity, or your potential. A low score on any capability means you have room to develop that area — not that you are behind or at risk.
It is not a scientific assessment in the academic sense. There is no peer-reviewed validation study behind the scoring system. It is built on real experience working with people navigating AI change, and it is refined continuously — but it is a practical tool, not a clinical instrument.
- The AI-generated narrative is based on patterns in your answers — it will not always be perfectly calibrated to your specific situation
- The job exposure analysis is based on your role description, which may not capture the full nuance of how you actually work
- The report is more useful the more honestly you answered — if you answered what you thought you should say rather than what is true, the results will be less useful
- Like any self-reported assessment, it reflects your self-perception — which may differ from how others experience you
Read it with curiosity, not defensiveness. When something rings true, note it. When something does not, push back on it — your disagreement is useful information too. Bring it to the debrief conversation and tell the coach what landed and what didn't. The report is the preparation. The conversation is where it becomes real.
If you come back to it in six months, the most interesting thing will not be the score — it will be what has changed in how you think about the questions.