
Software engineers: How do you approach testing your code for a pre-launch MVP?

Do you guys go all out with TDD, integration, and E2E testing, or ditch it altogether for manual testing?

I'm currently working on a pre-launch MVP for https://codepusher.io/ and, in the effort to move fast, I've had to cut back on certain software engineering best practices that I feel very strongly about. At the top of that list is a full suite of automated test cases.

More often than not, I find myself manually testing critical paths after each new feature is added, while fighting the urge to implement test cases for every single module I write.

I'm interested to hear how other developers on here have approached testing their code for a pre-launch MVP.

  1.

    100% ditch.

    An MVP is not supposed to be an engineered end product.

    Find traction, get a budget, then be smart and don't make the mistake of building on top of your MVP. Engineer when you have the traction and growth to need it.

    1.

      Completely agree with @Skullclown. I'd add that, at first, your MVP is more like a hypothesis and it's 90% likely that your product will radically change (10% chance that you will just abandon the product). Since it's so likely to change, it's hard to justify engineering best practices as if you were building a real product. Just assume that, if all goes well, you'll have to ditch your codebase and data structures and start from scratch with more domain knowledge.

  2.

    I'm in the same situation, and my approach is: just do functional testing.

    I want to go to market ASAP, but at the same time I want to make sure users' data is handled properly and securely. So I'm just writing functional tests for my API endpoints, to make sure that all of them behave as expected for both regular and edge cases. At the end of the day, that's how it will really be used in production: the endpoints receiving requests.

    Postman is a cool tool that lets you write a collection of requests that are executed in a given order, and you can use the response of one request as a parameter in the next one. You can also write assertions after each request, so it's an easy way to set up testing workflows for each part of the API. It's faster than trying to hit 100% coverage with unit and integration tests, and it also speeds me up: I just run the collection and check that everything is OK, with no need to manually write requests and try different parameters (see the sketch below).
    For front-end testing I'll just do manual testing.
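
    To make that concrete, here's a minimal sketch of a test script attached to a hypothetical POST /login request in a collection. The endpoint, the `token` field, and the `authToken` environment variable are made up for illustration; the `pm.*` calls are Postman's actual scripting API (test scripts run as JavaScript in Postman's sandbox).

    ```javascript
    // "Tests" tab of a hypothetical POST /login request.
    // This runs after the response comes back.
    pm.test("login returns 200", function () {
        pm.response.to.have.status(200);
    });

    pm.test("login returns an auth token", function () {
        const body = pm.response.json();
        pm.expect(body.token).to.be.a("string");
        // Save the token so the next request in the collection
        // can reference it as {{authToken}}, e.g. in an
        // Authorization header.
        pm.environment.set("authToken", body.token);
    });
    ```

    You can then run the whole collection in order from the Collection Runner, or from the command line with `newman run my-collection.json`, and get a pass/fail report for every assertion.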

  3.

    We reached out to a testing team for $10/hr.

  4.

    Yep, I skip the stuff I know is the right thing to do. Building an app is hard enough. Building a site is hard enough. Trying to get users is hard enough. Testing is another layer that's hard to justify, especially when you don't even know if the idea has any wings.

    But you will embarrass yourself occasionally. I almost did a push yesterday after upgrading a library, and only just noticed that it had broken some critical functionality. But nobody except a couple of beta testers (whom I already know and talk with anyway) is using the app, so it wouldn't have been a major problem.
