I was on a call last week with a founder I've been working with. Smart guy, building something genuinely interesting - an AI-powered product with some real upside if it ships clean. He's got a developer on the team he found on Fiverr, paying a couple hundred dollars, and by most accounts the guy is cooperative, communicates well, shows up to calls. Good signs.
The call was supposed to be a quick status check. Zapier integration - done or not done? Slack integration - done or not done? UI bug on the chatbot - fixed or not fixed?
Forty-five minutes later, we were still answering those questions.
Not because the developer was incompetent. Not because the founder was disorganized. But because in software, the word "done" is doing way too much work, and nobody on either side of that relationship has sat down to define what it actually means.
What "Done" Actually Means in a Dev Relationship
Here's what I heard on that call. The Zapier integration? "Done" - meaning the triggers exist. A new thread trigger, a new message trigger, a new lead trigger. Set up in isolation. Not connected to the app in Bubble. Not tested end-to-end. Not usable by a single real customer.
The UI bug - a chatbot widget floating over the page content instead of sitting behind it, a z-index problem? "Done by tonight" - which, when I pushed, became "I need to look at it in a specific browser" and "I need to inject some CSS and see what's happening."
The Slack integration? "Done" - meaning the structure was built on the Slack side. The connection to Bubble? Not done. The ability for a user to authenticate, install the bot in their workspace, and actually receive messages? Not done.
So nothing was done. Everything was almost done. And almost done, in software, is the same as not done. You cannot charge a customer for almost. You cannot close a sales call with almost. You cannot build a business on a product that is perpetually one more Saturday away from working.
What made this conversation so recognizable to me is that I've lived it. Multiple times. From both sides.
The "Almost Done" Tax
When I was building at a higher budget - we had five or six developers at one point, paying between $10,000 and $14,000 a month - we built less than we build now on this leaner flat-fee structure. I said that on the call and I meant it. More developers, more meetings, more status updates, more "done" that wasn't done. The overhead of managing ambiguous completion created a compounding drag on everything.
The problem isn't headcount. The problem is that "done" becomes a social contract that nobody enforces. A developer says "done" because partial progress feels like progress. It's not a lie - it's a misalignment of definitions. The trigger is set up. In their mental model, that's a unit of work completed. In your mental model as a founder, done means a customer can use this feature today and nothing breaks.
Those two definitions are not even close to the same thing, and if you never explicitly close that gap, you will keep having the same forty-five minute call every week until you run out of runway.
I told the founder on this call: a feature is not done until a non-developer can use it end-to-end and nothing breaks. That's the definition. Write it down. Put it in your dev agreements. Make it the default meaning of the word on your team.
No partial credit. No "done on my end." No "just needs one more thing to connect." Done is done. Anything else is in progress.
Why Founders Accept This
The uncomfortable part of this pattern is that it takes two people to maintain it. Developers offer "almost done" as done because it feels like momentum. But founders accept it because pushing back feels like micromanaging.
I get that instinct. You hired someone, you trust them, you don't want to be the boss who questions every update. If you spent any time as a freelancer yourself, you know how demoralizing it is to have a client second-guess work that's genuinely complete.
But there's a difference between micromanaging and having a clear definition of done. One is you standing over someone's shoulder asking why a button is 2 pixels off. The other is you saying: before I mark this off the board, show me a user going through the full flow from start to finish, in a browser, on the real environment, and completing the task without hitting an error or a broken edge.
That's not micromanaging. That's the minimum bar for shipping software that actually works.
The founder I was talking to had been chasing the Slack integration for more than a week. He'd had calls about it. He'd gotten updates. Every update said progress was happening. But when we sat down and traced through the actual state of the feature - what a real user could actually do with it today - the answer was nothing. The trigger existed. The authentication flow didn't. The Bubble connection didn't. The end-to-end experience didn't.
That's a week of "progress" that moved the feature zero percent closer to being usable.
The Rule That Fixes This
The fix is simple to say and slightly uncomfortable to implement, which is probably why most people don't do it.
Every feature on your roadmap needs a done condition written before development starts. Not a description of what the feature does. A specific, testable scenario that a non-technical person can walk through and either pass or fail.
For the Zapier integration in this case, a real done condition would look like this: A user connects their Zapier account through the app, creates a Zap using the new lead trigger, and when a new lead is added in the app, a task is created in their connected tool without any manual intervention or error. Tested in a real environment, not localhost. Confirmed working by someone who is not the developer who built it.
That's a done condition. "Triggers have been set based on what you said" is not.
For the Slack integration: A user clicks the connect button, gets redirected to authorize the Slack app in their workspace, installs it, and the chatbot sends and receives messages in the correct channel. Tested end-to-end. Confirmed by a non-developer.
When you write done conditions that way, two things happen. First, your developer can't accidentally tell you something is done when it isn't - they either walked through the scenario or they didn't. Second, you stop having status calls about ambiguous progress and start having binary conversations: did this pass the done condition or not?
The answer is either yes or no. There is no almost.
The Communication Pattern That Hides the Problem
One thing I noticed on that call - and I want to be direct about this because I think it's the piece most founders miss - is that a lot of the status conversation was happening through a chain of intermediaries. The founder talking to the developer, the developer referencing conversations with another developer on the team, updates coming through Slack in ways that were hard to trace, follow-ups that had been sent but not responded to seriously.
That structure guarantees "almost done" will stay invisible longer than it should. When information travels through three people before it gets to you, every handoff is an opportunity for precision to get smoothed over. "We're working on the Slack integration" sounds like the same sentence whether it means 10% done or 90% done.
I told the founder I don't want to be on this many calls. And I meant that as a systems observation, not a complaint. If you're building something that requires weekly multi-person status calls just to figure out where things stand, the process is broken. The goal is a system where you can look at a board and know exactly what's done, what's in progress, and what's blocked - without a meeting.
That doesn't mean you never talk to your developers. It means that when you do talk to them, the conversation is about decisions and blockers, not status. Status lives in the done conditions. Decisions live in the calls.
When the Pressure Is Real
What made this founder's situation more acute - and what probably makes it feel familiar to a lot of people reading this - is that the product wasn't in a comfortable pre-launch holding pattern. The emails were going out in a couple of days. The closer was in place. The expectation was two, three, four clients closed in the near term, building toward a meaningful MRR target fast enough to justify the next phase of the build.
That's real pressure. And when you're under that kind of pressure, the temptation to let "almost done" become "done" gets stronger, not weaker. Because if you hold the line on the definition, you might have to delay the launch. You might have to tell the closer the product isn't ready. You might have to have an uncomfortable conversation with someone who's excited and ready to go.
All of that is true. And it's still the right call.
Shipping a product to paying customers where the Slack integration is "almost done" isn't a small risk - it's a churn factory. The customer signs up. They try to connect Slack. It doesn't work. They email support. You scramble. Your developer is now fixing bugs in production while simultaneously trying to build the next feature. The quality compounds downward and the relationship sours fast.
The short-term pain of holding the launch until done actually means done is always smaller than the cost of shipping something that isn't ready.
This Is Not Just a Developer Problem
I want to be clear about something because I've seen founders use this framework to blame their developers and then go hire a new team and repeat the exact same pattern six months later.
The done condition problem is a founder problem. You are the one accepting the ambiguous update. You are the one who hasn't written the acceptance criteria. You are the one who needs to be the non-developer running through the end-to-end test before you sign off.
Your developer - especially if they're a contractor, especially if they're remote, especially if they're working across a time zone or a language gap - is operating on whatever definition of done exists in their head. If you haven't given them yours, they'll use theirs. And theirs will almost always be narrower than what you actually need.
This is not a character flaw. This is how software gets built when the expectations aren't made explicit. The fix is on you.
On that call, I made the point that I want to be able to trust the work is progressing without needing to be the one constantly chasing it down. That's not an unreasonable ask. But it requires a structure where "I'm working on it" is never an acceptable status. The acceptable statuses are: in progress (with a specific blocker or a specific next step), and done (meaning the done condition has been verified). That's it.
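The two-status rule above can be sketched as a simple validation check. A hypothetical illustration, with function and field names of my own choosing: "in progress" must carry a specific blocker or next step, "done" must carry verification, and anything else is rejected.

```python
def accept_update(status: str, detail: str = "", verified: bool = False) -> bool:
    """Return True only for the two acceptable statuses described above.

    Sketch only; the rule comes from the article, the signature is mine.
    """
    if status == "done":
        return verified                  # done means the done condition was verified
    if status == "in progress":
        return bool(detail.strip())      # must name a specific blocker or next step
    return False                         # "working on it" is not a status

print(accept_update("in progress"))                                   # prints False: no blocker named
print(accept_update("in progress", "blocked on Slack OAuth scopes"))  # prints True
print(accept_update("done"))                                          # prints False: not verified
print(accept_update("done", verified=True))                           # prints True
```

The point of the sketch is that "I'm working on it" fails validation by construction, so ambiguous updates can't accumulate on the board.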
The Confidence Test
There's a moment on most of these calls where I ask something like: are you confident you can do this work without constant oversight? And the answer is almost always yes. Everyone is confident. That's not the data I'm looking for.
The data I'm looking for is the pattern of what has shipped versus what was promised. If the Slack integration was supposed to be done last week and it isn't, that's a data point. If the UI bug has been "getting fixed" for multiple sessions and the chatbot is still floating over the page content, that's a data point. Confidence is cheap. Completion history is what tells you whether the system is working.
That's not cynical. That's just how you manage product development without getting burned. You watch what ships, not what's promised.
A Simple Framework You Can Use Today
If you're in a build right now and any of this sounds familiar, here's what I'd do starting today.
Go through your current feature list. For every item marked "done" or "in progress," write a one-paragraph done condition in plain English. Not a technical spec. A scenario. User does X, system does Y, result is Z, tested in environment W. If you can't write that scenario, the feature isn't ready to be built - you haven't defined it clearly enough yet.
For anything currently marked done, walk through the done condition yourself. Don't ask the developer to demo it. You run through it. You are the non-technical user. Does it pass? If it doesn't, move it back to in progress and tell your developer specifically what needs to happen before it gets marked done again.
Going forward, nothing moves to done on the board until you've personally verified the done condition. This will feel slow at first. It will feel like you're doing your developer's job. You're not - you're doing your job as the founder, which is to make sure the product is real before you sell it.
After a couple of cycles of this, your developer will internalize what done actually means in your world. The updates will get more precise. The surprises will get fewer. The calls will get shorter.
And eventually you'll get to the place every founder wants to be: a product that works, a team you trust, and status updates that are actually true.
If you're still early in figuring out your outbound system and want the frameworks I've used to build and scale - from cold email scripts to the full outbound infrastructure - the free resources page is the best place to start. And if you want me in your corner directly, Galadon Gold is where that happens.
But first, go define what done means. Everything else depends on it.