I Check My Agents Before Email

Six months ago, I wrote about feeling paralyzed by AI. My co-founder was rebuilding our company with agents while I watched, unsure how to contribute. The fog lifted faster than I expected. But I didn't anticipate what came next.

Nowadays, I check my agents before my email.

That's not a metaphor. Every morning, I open three agentic systems - systems I built myself - before I open Gmail. Each functions like a specialized team: status updates, task recommendations, deliverable reviews, working sessions on whatever needs attention. The interactions feel more like managing direct reports than using software.

This isn't a flex about being technical. Six months ago, I couldn't have built any of this. The point is: I'm a business leader who now operates this way. And if I can get here, so can you.

My Three AI Teams

Team 1: Marketing Strategy & Ops Agent

Here's a confession: my co-founder and I have been pretty awful at marketing for ten years. It's not our core strength. We're consultants who know how to deliver, but getting the word out? We've tried and failed more times than I can count.

So I built an agent to make us better marketers.

It knows our GTM plan, our content calendar, our target audience, and our voice. When I start a session, it pulls the latest from our shared repo, shows me what's been done since my last session (my co-founder works with it too), and tells me exactly what to focus my limited time on.

This morning's check-in:

The prioritization alone is worth everything. I have maybe 2-4 hours a week for marketing. The agent makes sure those hours go to the highest-leverage activities, not whatever feels urgent.

But it does more than prioritize. When I'm drafting content like this post, it runs critiques through different lenses. It pushes back when my ideas conflict with our stated strategy. It remembers decisions we made three weeks ago that I've already forgotten.

We're not suddenly marketing geniuses. But we're consistently executing a coherent plan for the first time in a decade.

Team 2: Consulting Project Agent

Our consulting project agent helps manage client engagements. It tracks deliverables, maintains interview notes, synthesizes findings, and drafts outputs. Think of it as a project manager who also happens to be a junior consultant.

On a recent client assessment:

  • Conducted 18 stakeholder interviews (I ran them, the agent synthesized)
  • Generated a 40-page findings deck from interview transcripts
  • Flagged contradictions between what executives said and what practitioners reported
  • Drafted recommendations that built on patterns from our previous engagements

The synthesis that used to take me 4-6 hours now takes 15 minutes of review and refinement. More importantly, the agent catches things I miss. It remembers what the VP of CS said in interview 3 when I'm reviewing interview 11.

Team 3: Strategy & Framework Agent

As a custom AI design and build shop, our "offerings" run on robust maturity models, frameworks and diagnostic tools. The strategy & framework agent helps me develop and refine these intellectual assets.

Working with this agent in particular has taught me an important lesson about staying in command.

For example, I asked this agent to do deep research, scan our client engagement data, and come up with a foundational AI maturity framework. It came back with something that sounded like it was written by Deloitte circa 2015. Jargon-heavy. Generic stages. Corporate consultant-speak that wouldn't help any real practitioner.

I had to push back hard. "This is way too academic. Make it more relevant to how actual CS leaders talk about their problems." We went back and forth for hours. I fed it real quotes from client interviews and podcast episodes with real business operators sharing their AI maturity journeys. Eventually, we landed on something grounded in practitioner language and real-world examples.

The lesson: AI will default to patterns it's seen before - and it's seen a lot of generic business frameworks. You have to push it toward specificity and authenticity. You have to stay in command.

Here's a high-level view of one of our framework iterations.

What This Actually Feels Like

The closest analogy: managing direct reports who have perfect memory.

Last Tuesday, I walked into a client presentation convinced I knew what to recommend. My consultant agent had synthesized 18 interviews and quietly surfaced a pattern I'd missed: the executives and practitioners were telling completely different stories about the same process. I would have walked in there with a recommendation that solved the wrong problem.

That's the real value. Not just speed - though yes, it's faster. The agent didn't just save me time. It saved me from looking, or at least feeling, incompetent.

But they're not autonomous. I'm still making the decisions. They're extending my capacity to gather information, synthesize complexity, and execute with consistency. The judgment is mine. The leverage is theirs.

What I'm Not Showing You

These three are my daily drivers. But they're not our only agentic systems.

We also have agents for finance and accounting, development and engineering, security and privacy review, sales research and outreach. Beyond that, my co-founder has built an entire reference architecture for how we create, maintain, and coordinate these systems.

I focus on these three because they represent my core responsibilities: marketing our firm, delivering client work, and building our products. They're where I spend my cognitive cycles. And they're where AI assistance has most transformed how I operate.

For the Skeptical Leader

If you're reading this thinking "that sounds great but I could never build that," I want to push back.

Six months ago, I couldn't either. What changed wasn't just a technical skill - it was a shift in how I thought about the problem.

I stopped thinking "I need to learn to code" and started thinking "I need to clearly articulate how I work." Once I could explain my workflows, my decision criteria, my context - the AI could actually help. The main barrier wasn't technical. It was clarity.

You don't need to build three teams at once. Start with one area where you (or your team) spend significant time on synthesis, coordination, or repeated decisions. Write down how you'd brief a smart new hire on that area. That document is 80% of what you need.

This is what AI-first leadership looks like. Not "I use AI tools." Rather: "AI has changed the structure of my thinking and my work."

The clarity comes faster than you think.


If you're a leader wondering how to actually work this way - not just read about it - shoot me a note. Happy to share more about how we set this up.

AI Refactored 7,800 Lines While I Was Away

Last Friday, I ran an experiment I'd been building toward for months. I typed one command, walked away from my computer, and came back to find 7,808 lines of code refactored across 36 files. The whole thing took 21 minutes.

This wasn't a demo. It was real code for a client project. And it worked.

Well, mostly. There was one bug. Took about 30 seconds to fix.

How I Got Here

If there's a 101 class for AI-assisted coding, the first lesson is: actively manage your context window. Don't rely on auto-compacts. Don't let the conversation grow until the model forgets what you're building. Plan upfront. Break work into context-window-sized chunks. Burn the tokens on planning so execution stays sharp.

I learned this the hard way. The advanced nuance: AI gets worse even within a single context window. The first 50% of a conversation is sharp. After that, quality degrades, even for the best of the bunch (Opus 4.5). For real work, this is a disaster.

So I became ruthless about planning. And planning means more than just planning the build. I spend my cycles, and my tokens, disproportionately on architecture and planning before any coding begins. I pit AI against AI during the architectural stages, using different models to critique and compare approaches. I run security reviews upfront. I run plans through our Pattern Police, a reference architecture agent that evaluates for consistency and adherence to our standards. Only after all of that do I break the work into phases. Each phase gets fresh context: self-contained prompts with everything included, no "see previous conversation" references. I think through sequencing and identify what can run in parallel. Progress is tracked in files that survive restarts.

Plan, plan, plan. Then build.

Here's what I'd already automated: the architectural design agents that pit models against each other. The build planning agents that break work into phases. The Pattern Police that enforce our standards. All of that runs without me in the middle. Those were game-changing improvements.

But when it came time to actually execute the build, I was still the bottleneck. For a five-phase build, I'd spend an hour opening and closing sessions, manually launching each phase, babysitting the process. The planning was agentic. The build was not.

Then I found Geoffrey Huntley's Ralph.

The Missing Piece

Ralph's insight was embarrassingly simple: write a shell script that does what I was doing manually.

Loop through the phases. Invoke Claude for each one. Check if it succeeded. If it failed, retry with more reasoning. Track progress in a YAML file. That's it.
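
Here's roughly what that loop looks like - sketched in Python for readability (the real script is a couple hundred lines of bash), with placeholder file names, phase fields, and a prompt "nudge" standing in for the retry-with-more-reasoning step:

# Simplified sketch of a Ralph-style orchestrator. File names, phase fields,
# and the retry nudge are placeholders, not the actual run-build.sh.
import subprocess
from pathlib import Path

import yaml  # PyYAML

PLAN = Path("build-plan.yaml")      # phases, dependencies, validation commands
PROGRESS = Path("progress.yaml")    # survives restarts

def load_yaml(path, default):
    return yaml.safe_load(path.read_text()) if path.exists() else default

def run_phase(phase, nudge=""):
    prompt = nudge + Path(phase["prompt_file"]).read_text()
    # Non-interactive Claude Code run. Skipping permission prompts is what
    # makes unattended execution possible - use with care.
    result = subprocess.run(
        ["claude", "-p", prompt, "--dangerously-skip-permissions"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return False
    # Each phase declares its own validation command (installs, imports, lints).
    return subprocess.run(phase["validate"], shell=True).returncode == 0

def main():
    plan = load_yaml(PLAN, {"phases": []})
    progress = load_yaml(PROGRESS, {})
    for phase in plan["phases"]:
        name = phase["name"]
        if progress.get(name) == "done":
            continue  # finished on a previous run; skip it
        # One retry with a stronger prompt stands in for "retry with more reasoning."
        ok = run_phase(phase) or run_phase(phase, nudge="Think harder and re-check each step.\n\n")
        progress[name] = "done" if ok else "failed"
        PROGRESS.write_text(yaml.safe_dump(progress))
        if not ok:
            raise SystemExit(f"Phase '{name}' failed twice; stopping here.")

if __name__ == "__main__":
    main()

That's the whole job: read the plan, run each phase in a fresh non-interactive session, validate it, record progress, retry once, and stop loudly if a phase fails twice.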

I'd read Anthropic's post on effective harnesses for long-running agents and absorbed some of those ideas already. But Ralph showed me the execution layer I was missing. Not a framework. Not infrastructure. Just a bash script that replaces me as the orchestrator.

Here's a peek behind the curtain: we've built our entire company on Claude Code. We have a reference architecture that documents our standards, automates our processes, and ensures everything is done according to our best practices. When it came time to incorporate Ralph into how we work, I updated that reference architecture and the associated agents we've already built: the architectural design agent, the build planning agent, the build execution agent. I added a run-build.sh script that implements Ralph's approach. The script handles sequential execution, failure detection, context exhaustion recovery, all the edge cases I was handling manually.

This experiment was the first real test. Could the script actually replace me?

The Experiment

I had a working prototype: about 4,700 lines of Python spread across a flat src/ directory. Well-written code, but we'd decided to refactor it into a modular, component-based architecture for reuse across additional use cases down the road.

BEFORE: src/*.py (monolithic codebase)
AFTER:  core_lib/{models,processing,analysis,output}/*.py
        agents/review/*.py

This refactoring would touch the entire application. Not super complicated work, but time-consuming. Even babysitting AI through the build was going to eat up a couple hours. I'd been postponing it for days.

This time, I wanted to walk away.

The Plan

I already had a detailed refactoring plan: 570 lines of markdown covering five sessions of work. My Ralph-enabled build agent converted this into a machine-readable YAML file with phases, dependencies, and validation commands. Self-contained prompts for each phase. A progress tracker that would survive interruptions.
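
For a sense of what "machine-readable" means here, a plan in that spirit might look like the sketch below. The field names, module paths, and validation commands are illustrative, not our actual schema; the Python at the bottom is just a quick check that the phase order respects the dependencies.

# Illustrative build plan: phases, dependencies, validation commands.
import yaml  # PyYAML

EXAMPLE_PLAN = """
phases:
  - name: extract-core-models
    prompt_file: prompts/phase-1.md
    depends_on: []
    validate: "python -c 'import core_lib.models'"
  - name: move-processing-layer
    prompt_file: prompts/phase-2.md
    depends_on: [extract-core-models]
    validate: "ruff check core_lib && python -c 'import core_lib.processing'"
  - name: dockerize-and-smoke-test
    prompt_file: prompts/phase-5.md
    depends_on: [move-processing-layer]
    validate: "docker compose build && docker compose up -d && curl -f localhost:8000/health"
"""

plan = yaml.safe_load(EXAMPLE_PLAN)

# Sanity check before an unattended run: every phase may only depend on
# phases that appear earlier in the list.
seen = set()
for phase in plan["phases"]:
    missing = [d for d in phase["depends_on"] if d not in seen]
    assert not missing, f"{phase['name']} depends on phases that haven't run yet: {missing}"
    seen.add(phase["name"])
print(f"{len(plan['phases'])} phases, dependency order checks out")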

So, I typed ./run-build.sh and walked away.

First run failed. The terminal said "BUILD COMPLETE!" but the output directory was empty. Every phase had asked for file permissions and then exited. Claude Code needs approval to write files. In unattended mode, there's no human to approve. The fix: --dangerously-skip-permissions. One flag I'd forgotten to add.

Second run: 21 minutes later, actually complete. Each phase ran its validation checks: package installs, import smoke tests, architectural lints, Docker builds, health endpoint checks. 5,005 lines of Python, organized into a clean module structure.

One bug remained. Docker Compose crashed on startup: a Pydantic forward reference issue that only surfaces when all models load together at runtime. The validation was import-level, not runtime-level. 30 seconds to fix.
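
If you haven't hit this class of bug before, here's a minimal Pydantic v2 illustration (the model names are stand-ins): the module imports cleanly, so an import smoke test passes, but the schema isn't fully resolved until the models are actually used together.

# Illustrative only - model names are stand-ins for the real ones.
from typing import TYPE_CHECKING

from pydantic import BaseModel

if TYPE_CHECKING:                  # import exists for the type checker only,
    from findings import Finding   # so "Finding" is never in scope at runtime

class Report(BaseModel):
    title: str
    findings: list["Finding"]      # deferred (forward) reference

# Importing this module works fine. But the first real use:
#   Report(title="Q4", findings=[])
# raises "`Report` is not fully defined ... call `Report.model_rebuild()`."

# The 30-second fix: make the real type available and rebuild the schema.
class Finding(BaseModel):          # stand-in for the real model
    text: str

Report.model_rebuild()             # Pydantic v2
print(Report(title="Q4", findings=[Finding(text="ok")]))

A runtime-level check - literally instantiating the models once - would have caught it.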

My Rolling Epiphany

This keeps happening. Almost weekly. I have a holy-smokes breakthrough, and when I reflect on it, the breakthrough is rarely because the technology just got better. It's because I wondered: could AI do this too?

Two years ago, the models were the limitation. They hallucinated. They forgot instructions mid-task. That's not the limiting factor anymore. Models and their harnesses are so good that the limitation is almost always our imagination.

I thought I was out in front. Phased builds. Parallel execution. Pattern libraries. And yet. I hadn't considered automating myself out of the build loop. It took 30 seconds of a YouTube video. Someone saying, "What if a bash script did the orchestration instead of you?" Once I heard it, building it was trivial. There's no IP here. The script is maybe 200 lines of bash. Anyone could write it. The capability was there. I just hadn't wondered.

What else am I not doing because I haven't wondered the right thing yet?

Who Thrives with AI (It's Not Always Who You'd Expect)

I've been paying attention to who's actually thriving with AI. Since we started rebuilding Method Garage on AI and working with clients doing the same, a pattern keeps showing up.

The people who thrive share certain underlying attributes. And those attributes don't always correlate with who's crushing it in their current role.

When I talk to people in companies about this, they describe surface behaviors. "They experiment on weekends." "They can't get enough of it." "They take to it like a duck to water."

But what's underneath that? What causes people to act that way in the first place?


Technical Skills Help. But.

Little things - familiarity with the command line, having lived in a terminal on Linux or Unix at some point - all accelerate picking up tools like Claude Code and help in the later AI maturity stages.

But being technical doesn't make you an AI rockstar. There are tons of highly technical coders and engineers who are complete laggards when it comes to actually leveraging AI. The skills that made them great at their craft don't automatically translate.

So what does predict success?


Three Attributes

Abstract thinkers who are comfortable with ambiguity.

These tools are abstract. Andrej Karpathy, former Tesla AI director and OpenAI co-founder, called them "alien tools without a manual." In a late 2025 post, he explained that modern AI tools are "fundamentally stochastic, fallible, unintelligible and changing entities." That's not hyperbole. The capabilities are broad. The best use cases aren't obvious. You have to think deeply and non-linearly to make use of them in a big way.

When someone says "AI can't do X," the abstract thinker asks "what if we approached it differently?" They're comfortable with weird. They actually enjoy exploring the edges.

Someone from a client we're working with summed it up: "You need to have different skill sets and different mindsets when you're using these tools to be prepared to deal with some very weird stuff."

Alien tools without a manual. That's exactly right.

Risk-tolerant experimenters.

Willing to try, fail, iterate. Not paralyzed by "what if this doesn't work?"

This isn't recklessness. It's the willingness to try something, see what happens, refine. The AI-native workflow is inherently experimental. People who need certainty before starting struggle with this loop.

Had I worried about failing, I never would be where I am today. I've failed far more times than I've succeeded in my experiments with AI. That's actually the point.

John Jimenez, our technical advisor, summed it up best: "Struggle is where all the learning happens."

He's right. It's the four hours late at night, banging my head against the wall, trying to figure something out. Not solving it. Going to bed frustrated. Then burning another several hours the next day before finally cracking it. Not the successes. Not the things that just worked. It's the failures, and finding ways around them, that create real understanding.

If you're not struggling and failing, you're not learning.

People who find or make time to explore.

Some people carve out this time no matter what, even when buried at work. But let's be honest: when you're in back-to-back meetings, finding another 40 hours a week to experiment is really hard.

The people with room to explore have an advantage. Often they're slightly bored in their current situation. Between jobs. On parental leave. Current role isn't stimulating them. They have cycles and motivation to go deep.

There's another type here too: the people who hate routine. Always looking for ways to hack the system, to make their life easier, to avoid doing the same boring task twice. Some would call them lazy. Actually, they're the ones who will automate their boring tasks so they have more time for exciting exploratory work. That instinct is gold in an AI-native world.


The Performance Trap

Here's what we're not saying: that low performers magically become AI rockstars. That's not the pattern.

Here's what we are saying: top performer today doesn't automatically equal AI rockstar. And average performer doesn't mean you can't be one.

Why do some top performers struggle? They're optimized for the old way of working. Their identity is tied to doing things the way they've always done them. They're rewarded for the current system. Why would they want to blow it up?

But this isn't about performance level. It's about the underlying attributes. A top performer with curiosity, risk tolerance, and time to explore will thrive. An average performer with those same attributes will also thrive. It's the attributes that matter, not where someone sits on the current performance curve.


Finding Your AI People

The old hiring caution: "Don't hire someone who's currently out of work."

This may be flipping.

People between jobs are in a different situation. They're actively working on themselves, self-improving. They know they're not getting their next job without AI experience. So they can actually dedicate serious time to exploring AI fully.

Here's the full circle part. If AI is the reason they were laid off in the first place, in a weird way it's doing them a favor. For those who actually lean in during their time away from work, that's the very skill that lands them their next job. There's no interview now where candidates won't be asked to demonstrate what they've accomplished with AI.

The people who use that time to go deep? They come out ahead.


Three interview questions worth asking:

What have you actually built with AI tools? Not "what have you tried" or "what have you experimented with." What have you built? Building requires commitment, iteration, finishing something.

Walk me through your last struggle or failure when working with AI. Not a hypothetical. Your last AI failure. Specificity forces honesty.

How did you approach [Claude Code / ChatGPT / whatever tool they mentioned] when you didn't know everything it could do? You're looking for how they explore unfamiliar capabilities. The answer reveals how they learn.

For finding AI champions internally, here's the thing: you're not picking someone and blessing them to experiment. The people who fit this mold are already experimenting. They're already doing. You just may not be aware of it. It's happening in pockets. On weekends. In side projects no one asked for.

Your job isn't to anoint AI champions. It's to find the ones who already are.

Ask around. What are they doing with AI? What have they learned this week? How has it changed their approach? The right people will light up at these questions. They'll have stories. They'll have failures. They'll have opinions about which tools work for what.


The Bottom Line

If you're building an AI-native team, cast a wider net than your usual hiring profile. Look for experimentation evidence, not credentials. Consider people with time gaps who've been exploring. Don't assume your current stars will automatically lead the transformation.

If you're trying to become AI-native yourself, give yourself permission to experiment without outcomes. Make time for exploration. This won't happen in the margins. Let go of the identity tied to doing things the old way.

The people who thrive with AI are the ones who approach it with curiosity, tolerance for uncertainty, and space to explore. Everything else can be learned.


Method Garage is a design and engineering firm building AI agents for B2B Customer Success teams. We spent 10 years mapping workflows. Now we automate what we mapped.

We Automated 90% of Our Own Business Before Selling AI to Clients

If you're going to sell AI transformation, you'd better live it first.

Once we committed to building AI for clients, we made a decision that shaped everything after: be 100% AI-native from day one.

Not "adopt some AI tools." Not "experiment with automation." Full commitment. Every process. Every workflow. If it could be automated, it would be.

We weren't going to sell transformation we hadn't lived ourselves.

The Mindset Shift

Here's the thing about pivoting a 10-year-old consultancy: you have a lot of "how we've always done it" baked in. Proposals we write a certain way. Client onboarding steps we follow. Reporting templates we've used for years.

All of it was up for questioning.

We adopted a simple rule: challenge yesterday's assumptions. What was impossible six months ago might be trivial today. "Best practices" from last year might now be unnecessary constraints. The conventional wisdom about how long things take? Probably wrong.

AI requires constant re-evaluation of what's possible.

The Four-Layer Discipline

Every task at Method Garage now follows four layers of work, happening simultaneously:

Layer 1: Deliver. Use AI to accelerate the immediate work. This is table stakes.

Layer 2: Automate. While doing it, build the automation for next time. Create the template, the prompt, the agent, the pipeline. So this work never has to be done manually again.

Layer 3: Capture. Document what worked, what failed, what patterns emerged. This becomes training material - for the next person and for the AI itself.

Layer 4: Patternize. Evaluate whether components should become reusable patterns. Not everything qualifies, but when something does, extract it into the library.

The key: all four happen concurrently. You're not doing the work, then automating, then documenting. You're doing all four at once. Building the machine while doing the work.

What We Automated

Here's what 90% automation actually looks like:

Interview Synthesis

Before: Record client interview. Take notes during. Spend 4-6 hours afterward transcribing, identifying themes, writing up findings.

After: Record interview. Run audio through transcription with speaker identification. Feed transcript to AI with structured synthesis prompt. Output: themed summary, key quotes, friction points, opportunity areas. 15 minutes, mostly waiting for processing.
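
Roughly, the synthesis step looks like the sketch below. The prompt wording, the model id, and the JSON shape are placeholders rather than our production setup; the transcript is whatever the transcription step produced.

# Hedged sketch: structured prompt in, themed synthesis out.
import json

import anthropic

SYNTHESIS_PROMPT = """You are synthesizing a customer interview transcript.
Return JSON with these keys:
- themes: recurring topics, each with 1-2 supporting quotes
- friction_points: where the process breaks down, in the speaker's words
- opportunities: concrete areas where automation or process change would help
Quote speakers verbatim; do not paraphrase quotes."""

def synthesize(transcript: str) -> dict:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-sonnet-4-5",          # substitute whatever model you use
        max_tokens=4000,
        system=SYNTHESIS_PROMPT,
        messages=[{"role": "user", "content": transcript}],
    )
    # In practice you'd harden this - models don't always return clean JSON.
    return json.loads(message.content[0].text)

if __name__ == "__main__":
    with open("interview_03_transcript.txt") as f:   # output of the transcription step
        print(json.dumps(synthesize(f.read()), indent=2))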

Lead Qualification

Before: Prospect fills out contact form. We schedule a call. Qualification/Discovery conversation. If qualified, spend 2-3 days building a business case and proposal.

After: Prospect uses our pricing calculator. Selects their use case. Inputs their context. Calculator generates instant estimate. If they want the full analysis, AI generates a personalized CFO-ready business case and emails it within minutes.

The prospect gets a ready-to-share business case before we've even had a call.

Client Decks

Before: Manually build kickoff deck in Google Slides. Copy-paste client info. Adjust formatting. 3-4 hours for a polished output.

After: Structure client data as JSON. Hit Google Slides API. Template controls all styling and layout. Output: branded, formatted deck in minutes.
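
Under the hood this is the standard placeholder-replacement pattern in the Slides API. A simplified sketch - the template id, placeholder names, and credentials setup are illustrative:

# Sketch: fill a branded Slides template from structured client data.
import json

from googleapiclient.discovery import build
from google.oauth2.service_account import Credentials

SCOPES = ["https://www.googleapis.com/auth/presentations"]
TEMPLATE_COPY_ID = "your-copied-template-id"   # a copy of the branded template

def fill_deck(deck_id: str, client_data: dict) -> None:
    creds = Credentials.from_service_account_file("service-account.json", scopes=SCOPES)
    slides = build("slides", "v1", credentials=creds)
    # Each JSON field replaces a matching {{placeholder}} in the template.
    requests = [
        {"replaceAllText": {
            "containsText": {"text": f"{{{{{key}}}}}", "matchCase": True},
            "replaceText": str(value),
        }}
        for key, value in client_data.items()
    ]
    slides.presentations().batchUpdate(
        presentationId=deck_id, body={"requests": requests}
    ).execute()

if __name__ == "__main__":
    with open("client.json") as f:            # structured client data
        fill_deck(TEMPLATE_COPY_ID, json.load(f))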

Sales Pipeline

Before: New lead comes in. Research the company manually. Hop on discovery call. Take notes. Synthesize afterward. Draft follow-up email.

After: New lead triggers our sales agent. Before the call: automated market research, company briefing, suggested questions. After the call: transcription, synthesis, risk/opportunity identification, nurture strategy, drafted follow-up email. The agent tracks every interaction and updates recommendations as the relationship develops.

Proposals and SOWs

Before: Pull from old templates. Customize for this client. Review for consistency. Half a day minimum.

After: Structured inputs feed automation. Proposal and statement of work generated from templates with client-specific context injected. Review and refine, not create from scratch.

The Math That Changed Everything

Here's what surprised us: even on the first use, building the automation plus using it was faster than doing the work manually the old way.

Read that again.

We built the engine and delivered the client output in less time than it would have taken to just deliver the output.

That's the counterintuitive math of AI-native work. You're not just saving time on this project. You're building the machine that saves time on every future project. The investment pays off immediately and compounds.

The Credibility It Created

When we talk to prospects about AI transformation, they ask: "Have you actually done this?"

We can show them. Our own sales process. Our own interview synthesis. Our own proposal generation. Our own beautiful workflow visuals. Our own internal training built by AI.

We're not selling theory. We're demonstrating practice.

The best proof that transformation works is living it yourself first.

What We Learned

For companies considering this kind of internal automation:

Start with the pain. What takes the most time? What do you dread? What's repetitive but requires context? Those are your first targets.

Build while doing. Don't finish the project, then automate. Build the automation as part of the project. First use is slowest. Every use after is faster.

Document as you go. Every automation becomes training material. Every pattern becomes reusable. The documentation isn't separate work. It's embedded in the work.

Challenge your assumptions constantly. What you "know" about how long things take is probably outdated. Test it.

Expect the identity shift. This isn't just operational change. It's psychological. The instinct has to shift from "I'll spend the afternoon on this" to "I'll set up the automation and run it." From doing to orchestrating.

The Bigger Opportunity

Here's what we didn't expect: the operating system itself became intellectual property.

We're not just using AI tools. We're building an entire company operating system on Claude Code. Onboarding, automations, reference architecture, pattern libraries, decision rules. All encoded. All reusable.

Long term, we see a consulting opportunity here: helping other companies rebuild themselves AI-native. Not just adopting AI tools, but fundamentally retooling their operating system and their people.

Every company will face this transformation. Most don't know where to start.

We do. Because we did it ourselves first.

Next up: Who actually thrives with AI? It's not who you'd expect. The persona profile that's emerging from our work.

Method Garage is a design and engineering firm building AI agents for B2B Customer Success teams. We spent 10 years mapping workflows. Now we automate what we mapped.

If your team is stuck between "AI strategy" and "AI in production," let's talk: saul@methodgarage.com

How We Accidentally Demoed Our Way Into AI Services

We weren't pitching AI. We were using it. Clients noticed.

For ten years, Method Garage has run design workshops for B2B SaaS companies. Journey mapping. Customer onboarding. Digital customer success. The kind of work where you fill rooms with sticky notes, synthesize customer pain points, and help teams see their business clearly.

Somewhere along the way, we started building tools for ourselves.

The Tools We Built

It started with interviews.

Before every workshop, we conduct dozens of interviews with employees and customers. The synthesis used to take forever: listen to recordings, pull quotes, identify themes, surface the biggest problems. Hours and hours of work before we could even plan the session.

So we automated it. Transcription, speaker identification, theme extraction, problem prioritization. What used to take days now took minutes. We walked into workshop planning with synthesized insights already in hand.

Then we got more ambitious.

We built a workshop agent with a RAG database pre-loaded with all that pre-workshop synthesis. The agent already knew the painful problem areas. When participants brainstormed solutions, the agent had context. It could help them craft powerful problem statements and concept narratives grounded in real customer and employee pain points. A set of agent tools took those statements and automatically generated presentation slides.
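
The retrieval piece is less exotic than it sounds. A toy sketch of the idea - the embedding model, the in-memory index, and the example pain points are stand-ins, not our production stack:

# Pre-workshop synthesis is embedded ahead of time; each brainstormed idea
# pulls back the most related pain points as grounding context.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Example entries standing in for the real pre-loaded synthesis.
pain_points = [
    "Onboarding handoffs from sales lose key context about customer goals",
    "CSMs rebuild the same health report manually every quarter",
    "Support tickets about configuration spike in the first 30 days",
]
pain_vectors = model.encode(pain_points, normalize_embeddings=True)

def grounding_context(idea: str, top_k: int = 2) -> list[str]:
    """Return the pain points most relevant to a brainstormed idea."""
    idea_vec = model.encode([idea], normalize_embeddings=True)[0]
    scores = pain_vectors @ idea_vec            # cosine similarity (normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [pain_points[i] for i in best]

# A participant's raw idea gets paired with the evidence behind it before
# the agent drafts a problem statement or concept narrative.
print(grounding_context("automate quarterly business review prep"))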

These weren't demos. These were tools we used every day to run our own business faster.

The Moment in the Room

We started bringing these tools into client workshops.

Picture this: a room full of cross-functional stakeholders, executives down to the front line. Sales, Engineering, Support, CS, Marketing, Finance. They're halfway through a design sprint. They've just finished a brainstorming exercise. Sticky notes everywhere. Ideas scattered across whiteboards.

We had them feed their outputs into the agent themselves. Thirty people in the room, all generating pitch decks in real time. Workshops that used to span several days now took half a day. And we walked out of the room with concept finalists, already beautifully crafted in slides.

The reaction was always the same.

"Wait. What just happened?"

"I had no idea this could be automated."

"Can you build this for our team?"

The Pattern

This kept happening. Workshop after workshop. Different clients, different industries, same reaction.

They weren't impressed by a pitch deck about AI capabilities. They were watching automation work in real time, on their own content, solving their actual problem.

And then the asks started coming.

Past clients reached out. "We saw what you were using in our sessions. Can you build something like that for us?"

Current clients extended engagements. "Forget the next phase of journey mapping. Can you help us automate the insights we just uncovered?"

We said yes.

Why We Could Say Yes

Here's the thing: we'd already done the hard part.

Ten years of mapping customer journeys meant we knew exactly where the friction lived. We'd documented the handoffs, the tribal knowledge, the processes that only existed in someone's head. We knew which workflows were automation-ready and which ones were theater.

And we'd been building AI tools internally for over a year. Not experiments. Production tools we relied on daily. We knew what worked, what failed, and how to get from prototype to something people actually use.

When clients asked if we could build for them, we weren't starting from zero. We were extending what we'd already proven.

What Changed

We didn't make a strategic decision to pivot. Clients pulled us here.

In every brainstorm session, AI kept surfacing as "the solution" to whatever problem we were mapping. Meanwhile, we were using our own AI tools to run those very sessions. The gap between "advisor who recommends AI" and "builder who ships AI" was shrinking with every workshop.

So we closed it.

Method Garage still does design work. We still map journeys and run workshops (now with better tools). But increasingly, clients want more than the map. They want the thing. The agent. The automation. Working code that delivers outcomes.

That's what we build now.

The Lesson

If you're a consultancy wondering whether to add AI capabilities: stop wondering. Start building for yourself first.

Use your own tools in client work. Let them see automation in action. The best pitch for AI services isn't a pitch at all. It's the moment a client watches something happen that they didn't know was possible.

They'll ask. And when they do, you'll be ready.

Next up: Once we committed to building AI for clients, we made a decision: be 100% AI-native from day one. Automate everything. No exceptions. Here's how we rebuilt Method Garage from the ground up.

Method Garage is a design and engineering firm building AI agents for B2B Customer Success teams. We spent 10 years mapping workflows. Now we automate what we mapped.

If your team is stuck between "AI strategy" and "AI in production," let's talk: saul@methodgarage.com

Streamline your customer onboarding through early evaluation

Saul co-authored this article with Darlene Kelly at TSIA. It was originally published here.

We all know that to successfully onboard a new customer, it's crucial to meet them where they are and guide them to their desired business outcomes. However, companies often overlook the critical steps that need to take place before the onboarding process begins. Skipping them can lead to slower value realization, lower adoption, and, ultimately, a decline in retention.