Dogfooding—the unappetizing, and not entirely accurate, term for a team using its own product before releasing it to others—is a common way for startups to test what they build. The basic reasoning is this: if you don't like your own product, who else will? It makes a ton of sense, but it's not enough.
But before we launch into that, let's start by talking about what dogfooding is good for. It's good for ensuring you have a usable product. When your team uses your product—and it's important to distinguish “using” from “testing”—you get a sense of how usable it is. You'll catch immediate issues. You'll find out if the entire thing is a complete mess and needs to be scrapped.
Dogfooding your own product is also a good way to understand the market you're trying to serve. We don't all build products for markets we're intimately familiar with. It'd be great if we could, but that's not reality. So, by using your product internally, you will start to see how the market might use your tool. You'll learn the market better.
All in all, dogfooding is a good thing. A necessary thing.
But here's the problem: It's simply not enough.
Even if you know the market, even if you know the competitors, even if you know what your customers want before they even know they want it, dogfooding your product will only get you so far. As my team and I built SimpleID—a developer toolkit to add authentication and storage to applications in just a couple minutes—we discovered three main blind spots that limit the effectiveness of dogfooding a product. Those blind spots are:
- Familiarity with the code
- Familiarity with the design
- Believing edge cases are edge cases when they're really center cases
I'll step through each of these, and then I'll talk about the layer that should be added to dogfooding your product to help address these concerns.
Familiarity with the code
My team is entirely technical. We all write code. We all know the code. We all understand what it's supposed to do. This is great for development efficiency, but it's less great for uncovering the user experience issues your product might have before releasing it to customers.
As developers, we have a built-in tendency to take the happy path, and we don't always know we're taking it. It's a subconsciously lit pathway that tells us "don't click that button, dummy" or "if I enter the right text here, I know I'll proceed." Writing tests helps force us off the happy path, but when actually using the products we build, it's hard to escape the curse of knowledge.
We know what the product should do, so we end up using it exactly as it was designed and bypass any of the poor UX that others might find. Even worse, we miss bugs that others might find in the first five minutes of using the product.
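To make the happy-path point concrete, here's a minimal sketch in Python. The validator and its rules are hypothetical, not SimpleID's actual code: a developer dogfooding the product will only ever type a valid username, while a test can deliberately exercise the messy inputs real users send.

```python
def validate_username(username: str) -> str:
    """Return the normalized username, or raise ValueError on bad input.

    Hypothetical sign-up rules for illustration: non-empty after
    stripping whitespace, at most 32 characters, lowercased.
    """
    cleaned = username.strip()
    if not cleaned:
        raise ValueError("username is required")
    if len(cleaned) > 32:
        raise ValueError("username too long")
    return cleaned.lower()

# Happy path: the input the developer always types while dogfooding.
assert validate_username("Alice") == "alice"

# Unhappy paths: inputs real users will actually send on day one.
for bad in ["", "   ", "x" * 33]:
    try:
        validate_username(bad)
        raise AssertionError(f"expected rejection of {bad!r}")
    except ValueError:
        pass  # rejected as expected
```

The point isn't the validator itself; it's that the unhappy-path assertions encode behavior a familiar user would never stumble into by hand.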
Familiarity with the design
It's very similar to the problem of knowing the code: when we understand the design, we know what the next screen looks like before it ever appears. We know that button A was designed to get the user to page D. So, when it's supposed to happen, we click button A. We don't click it early. We know the design, we know what should happen, and we obey the design we worked so hard to implement.
But regular users won't do that.
If you know that a form field in your product expects a username, you may completely miss the fact that you forgot to label it. Your users won't know what the field expects, but you do, because you designed the process. You designed the flow.
That, of course, is a simple example. But it illustrates how easy it is for us to miss the problems in our applications when we are intimately familiar with the design.
Believing edge cases are edge cases when they're center cases
Long heading aside, this is probably the hardest blind spot to overcome when dogfooding your product. Put simply, this means assuming people won't do what people do. This can look different for each product, but the way we get here is pretty much the same across the board.
We build for efficiency. We want to get our products to market, so we tell ourselves, "We'll handle the edge cases when they arise." What we don't know, though, is what those edge cases are. This is especially true when the product is new to the market, innovative, or simply hasn't been tried before. So we decide for ourselves what the edge cases are, based on our own knowledge of the product. Then we dogfood the product, and our own confirmation bias proves us right: "See guys, we didn't hit any of those edge cases. We're good!"
Then the product hits the public, and on day one someone hits what we thought was an edge case. Then another person. Suddenly, after one day, what we thought was an edge case turns out to be a core use of the product… and it doesn't work.
How do we solve these problems? Do we give up internally testing our products? Do we jump straight to user testing?
The exact answers will vary by product, but my team found a balance: build the product, dogfood the product, release it to a few early adopters (note: this is not a public alpha or beta program), then release widely. Call it a private beta; call it testing among friends. Whatever you name it, just do it. For SimpleID, this meant reaching out to developers we already knew and asking them to use both the outward-facing application and the developer tools.
You should be doing this while you dogfood your product. Not before, not after. By letting a few trusted colleagues use your product while you are also using it, the feedback you receive resonates so much more. While we may have missed that broken link on Page Z of the application, we know exactly where it is and how our early adopters would have arrived there because we're using the product too. We may have missed an error message that gets returned from our API, but because we are also using the product, when our early adopters tell us about it, we find the problem and fix it so much faster.
Some of this article may seem obvious, especially to seasoned bootstrappers. But there's currently such a strong emphasis on dogfooding in the startup world that some teams forget to do anything else. The biggest takeaway is this: keep dogfooding and internally testing your product, but never use it as a substitute for early feedback from the people who might one day pay to use that product. There is virtually no way your team can find all of your product's problems by itself. Covering your blind spots with trusted outside testers, and avoiding the most common pitfalls among bootstrappers, is vital to creating a reliable and exciting product.
Any tips of your own? Have any questions about your own process for internal testing? Leave a comment below and start a conversation with your fellow indie hackers!