I've been paying attention to who's actually thriving with AI. Since we started rebuilding Method Garage on AI and working with clients doing the same, a pattern keeps showing up.
The people who thrive share certain underlying attributes. And those attributes don't always correlate with who's crushing it in their current role.
When I talk to people in companies about this, they describe surface behaviors. "They experiment on weekends." "They can't get enough of it." "They take to it like a duck to water."
But what's underneath that? What causes people to act that way in the first place?
Technical Skills Help. But.
Little things like familiarity with the command line, or having lived in a terminal on Linux or Unix at some point, all accelerate picking up tools like Claude Code and help in the later AI maturity stages.
But being technical doesn't make you an AI rockstar. There are tons of highly technical coders and engineers who are complete laggards when it comes to actually leveraging AI. The skills that made them great at their craft don't automatically translate.
So what does predict success?
Three Attributes
Abstract thinkers who are comfortable with ambiguity.
These tools are abstract. Andrej Karpathy, former Tesla AI director and OpenAI co-founder, called them "alien tools without a manual." In a late 2025 post, he explained that modern AI tools are "fundamentally stochastic, fallible, unintelligible and changing entities." That's not hyperbole. The capabilities are broad. The best use cases aren't obvious. You have to think deeply and non-linearly to make use of them in a big way.
When someone says "AI can't do X," the abstract thinker asks "what if we approached it differently?" They're comfortable with weird. They actually enjoy exploring the edges.
Someone from a client we're working with summed it up: "You need to have different skill sets and different mindsets when you're using these tools to be prepared to deal with some very weird stuff."
Alien tools without a manual. That's exactly right.
Risk-tolerant experimenters.
Willing to try, fail, iterate. Not paralyzed by "what if this doesn't work?"
This isn't recklessness. It's the willingness to try something, see what happens, refine. The AI-native workflow is inherently experimental. People who need certainty before starting struggle with this loop.
Had I worried about failing, I would never be where I am today. I've failed far more times than I've succeeded in my experiments with AI. That's actually the point.
John Jimenez, our technical advisor, summed it up best: "Struggle is where all the learning happens."
He's right. It's the four hours late at night, banging my head against the wall, trying to figure something out. Not solving it. Going to bed frustrated. Then burning another several hours the next day before finally cracking it. Not the successes. Not the things that just worked. It's the failures, and finding ways around them, that create real understanding.
If you're not struggling and failing, you're not learning.
People who find or make time to explore.
Some people carve out this time no matter what, even when buried at work. But let's be honest: when you're in back-to-back meetings, finding another 40 hours a week to experiment is really hard.
The people with room to explore have an advantage. Often they're slightly bored in their current situation. Between jobs. On parental leave. Current role isn't stimulating them. They have cycles and motivation to go deep.
There's another type here too: the people who hate routine. Always looking for ways to hack the system, to make their life easier, to avoid doing the same boring task twice. Some would call them lazy. Actually, they're the ones who will automate their boring tasks so they have more time for exciting exploratory work. That instinct is gold in an AI-native world.
The Performance Trap
Here's what we're not saying: that low performers magically become AI rockstars. That's not the pattern.
Here's what we are saying: being a top performer today doesn't automatically make you an AI rockstar. And being an average performer doesn't mean you can't become one.
Why do some top performers struggle? They're optimized for the old way of working. Their identity is tied to doing things the way they've always done them. They're rewarded for the current system. Why would they want to blow it up?
But this isn't about performance level. It's about the underlying attributes. A top performer with curiosity, risk tolerance, and time to explore will thrive. An average performer with those same attributes will also thrive. It's the attributes that matter, not where someone sits on the current performance curve.
Finding Your AI People
The old hiring caution: "Don't hire someone who's currently out of work."
This may be flipping.
People between jobs are in a different situation. They're actively working on improving themselves. They know they're not getting their next job without AI experience. So they can actually dedicate serious time to exploring AI fully.
Here's the full circle part. If AI is the reason they were laid off in the first place, in a weird way it's doing them a favor. For those who actually lean in during their time away from work, that's the very skill that lands them their next job. There's no interview now where candidates won't be asked to demonstrate what they've accomplished with AI.
The people who use that time to go deep? They come out ahead.
Three interview questions worth asking:
What have you actually built with AI tools? Not "what have you tried" or "what have you experimented with." What have you built? Building requires commitment, iteration, finishing something.
Walk me through your last struggle or failure when working with AI. Not a hypothetical. Your last AI failure. Specificity forces honesty.
How did you approach [Claude Code / ChatGPT / whatever tool they mentioned] when you didn't know everything it could do? You're looking for how they explore unfamiliar capabilities. The answer reveals how they learn.
For finding AI champions internally, here's the thing: you're not picking someone and blessing them to experiment. The people who fit this mold are already experimenting. They're already doing. You just may not be aware of it. It's happening in pockets. On weekends. In side projects no one asked for.
Your job isn't to anoint AI champions. It's to find the ones who already are.
Ask around. What are they doing with AI? What have they learned this week? How has it changed their approach? The right people will light up at these questions. They'll have stories. They'll have failures. They'll have opinions about which tools work for what.
The Bottom Line
If you're building an AI-native team, cast a wider net than your usual hiring profile. Look for experimentation evidence, not credentials. Consider people with time gaps who've been exploring. Don't assume your current stars will automatically lead the transformation.
If you're trying to become AI-native yourself, give yourself permission to experiment without outcomes. Make time for exploration. This won't happen in the margins. Let go of the identity tied to doing things the old way.
The people who thrive with AI are the ones who approach it with curiosity, tolerance for uncertainty, and space to explore. Everything else can be learned.
Method Garage is a design and engineering firm building AI agents for B2B Customer Success teams. We spent 10 years mapping workflows. Now we automate what we mapped.
