May 2, 2019

What tools do you use for end-to-end testing?

ConsoleFreak

It's considered good practice to test various flows through your app/site to make sure things behave as expected when users interact with them. But even so, when hacking together an early version of an idea (or sometimes even a "polished" product), it's too often the case that we build first and worry about automated testing later, if at all.

Do you use any automated testing tools or testing services on your projects? If so, which ones? Or do you live in blissful ignorance that something might break in your beloved work of heart?

Would love to hear from you!

  1. 3

    I use Cypress for my indie projects because I use it at work as well. Just a couple basic tests to validate whatever core scenarios my project covers.
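    For anyone curious, a spec like that can be very short. A minimal sketch, where all the routes and selectors are hypothetical placeholders for your own app (this runs under the Cypress runner, not plain Node):

```javascript
// cypress/e2e/smoke.cy.js — paths and selectors below are
// hypothetical; adjust them to whatever your app actually serves.
describe('core flows', () => {
  it('loads the home page', () => {
    // baseUrl is assumed to be set in cypress.config.js
    cy.visit('/');
    cy.contains('Sign up').should('be.visible');
  });

  it('logs in and reaches the dashboard', () => {
    cy.visit('/login');
    cy.get('input[name=email]').type('test@example.com');
    cy.get('input[name=password]').type('correct-horse');
    cy.get('form').submit();
    cy.url().should('include', '/dashboard');
  });
});
```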

    1. 1

      Ooo, Cypress looks great. I might give it a try. Thanks for mentioning it!

    2. 1

      I just came across Cypress yesterday and I really like the look of it. Had a little play around and think it'll be very useful.

  2. 2

    I still rely heavily on just writing boring old unit and integration tests. Obviously the tech depends on the language I'm working with, but the tools exist w/o any overhead costs.

    Since these are the gifts that keep on giving, I find them to be the best long-term investment of my time. With over 40k asserts covering a lot of ground in one of my main projects, I feel really good about any code change I make.

    I like to keep things pretty simple (and fast), so I try to avoid any additional JavaScript on the page unless it's 100% necessary (not a hater, but things like Sentry or New Relic do add to the weight of the page).

    It's terribly simple and inexpensive, but I get a ton of value out of using a service like UptimeRobot just to check that my stuff is online (100% uptime right now, knock on wood). I have monitors set up for home pages (simple) and for API endpoints (more complex) to help ensure things are running as smoothly as possible.

  3. 2

    One more alternative: headless Chrome + Puppeteer in Docker.
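    For reference, a minimal Dockerfile sketch of that setup — the base image, package names, and env vars here are assumptions to adapt, and note that Chrome typically needs the `--no-sandbox` flag when launched as root in a container (i.e. `puppeteer.launch({ args: ['--no-sandbox'] })` in your test script):

```dockerfile
# Hypothetical sketch: run Puppeteer against the distro's Chromium
# instead of downloading a browser at npm install time.
FROM node:18-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends chromium \
    && rm -rf /var/lib/apt/lists/*
# The skip-download env var name varies by Puppeteer version
# (newer releases use PUPPETEER_SKIP_DOWNLOAD).
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "e2e.js"]
```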

    1. 1

      Puppeteer has been on my radar for some time, but I've still not had an opportunity (or made the time) to use it. Going this route for testing, though, it seems like something such as Cypress lets you do it with less code.

  4. 2

    Hi, I'm gonna toot my own horn here, as my SaaS https://checklyhq.com does pretty much exactly this.

    You can set up API monitoring and scripted browser transaction monitoring. We run the checks on a schedule, e.g. once every 10 minutes, but you can also trigger these checks from a CI/CD tool using the trigger functionality. The scripting uses Puppeteer and runs an actual Chrome browser.

    I monitor key functionality for Checkly with Checkly, like login, navigation to key parts and the dashboard functionality. You can see these checks on our public dashboard https://status.checklyhq.com/

    1. 1

      That looks interesting, might give it a try.

  5. 2

    I used Selenium for a while but I found it a pain to set up. It assumes that you have a graphical interface, so it was always a pain to get it to run on everyone's machine and to make it work in continuous integration. Plus, I don't love the semantics of manipulating a web page with Python or Java code.

    I recently switched to Cypress. The syntax is nice because it's JavaScript, so it feels like a natural way to manipulate the DOM. It's Docker-friendly, so you don't need to do a lot of work to make it run in continuous integration.
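    Concretely, a CI step along those lines can be a single command using Cypress's all-in-one image (the tag below is illustrative; this assumes the app under test is already listening on the host):

```shell
# cypress/included bundles the runner and browsers; its entrypoint
# is `cypress run`, so no further arguments are strictly needed.
docker run --rm \
  -v "$PWD":/e2e -w /e2e \
  --network host \
  cypress/included:13.6.0
```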

    Yesterday, I published a tutorial that shows a flexible way to add Cypress to any web app (including projects that don't use npm):

    1. 2

      Oh hey, I came across your tutorial already actually, found it very informative.

      1. 1

        Oh, cool. Thanks!

  6. 1

    I've used Ghost Inspector (https://ghostinspector.com) and think it's a great tool. You just install a Chrome extension and record your testing steps. I believe it uses Selenium under the hood.

  7. 1

    I haven't written a single test, and I think my product is on the more complicated side. Instead, I have focused on setting up monitoring and analytics. I use New Relic, Sentry, and Amplitude.

    Pros:

    • It catches the major issues

    • It catches issues quickly after a deploy (assuming the site has steady traffic)

    • Super easy to set up, and once it is set up, I don't have to think about it

    Cons:

    • It does not catch everything, especially logic errors

    • It does not catch issues before the deploy (i.e. it is a reactive approach, not proactive)

    1. 1

      How much do you pay for New Relic, relative to your other costs? It seems expensive.

      1. 1

        I pay $50/mo. But it has saved me hours of productivity/debugging, so it's well worth it.

        1. 1

          We used New Relic in one of my previous roles for a large publisher. I can attest to the time saved debugging; it was very valuable to us at the time.

  8. 1

    You can always focus on end-to-end testing once you're scaling feature development and have enough customers who would truly be affected by bugs.

    That said, for my day job we use selenium through a tool called Gauge - https://gauge.org

    1. 1

      Yeah, I guess it depends on the nature of the product, as well as the person developing it. If you're building a more complex product, then having tests early might make sense (hopefully you're not doing that without having already validated the idea). For simpler offerings, not so much.

  9. 1

    From my experience in the 'testing world', there is so much obsession over automation (as some kind of magical solution). Sure, it's useful, but that shouldn't lead us to underestimate the power of doing things from a manual, user-focused, or 'general analysis of the product' perspective.

    Indie hacker projects are perhaps better able to cope with test automation practices (compared to huge corporate environments), but even so, in most scenarios the indie hacker's focus is on validating the idea, and quality is often compromised. And if the product is constantly changing, then so should the test automation. This makes things hard.

    I've personally seen many scenarios/comments where the quality of the code/product has had a negative impact on customer acquisition, i.e. customers have not been acquired/converted due to bugs. But in other situations the quality has not mattered as much, and products have been allowed to flourish.

    Going back to your question...what tools to use...testing isn't about tools. I feel sad that expertise gets reduced to tools. You wouldn't expect design tools to solve your design needs; sure, they can help, but they won't churn out your ideal product design. You still need some kind of designer behind the scenes making the right things happen.

    Testing tools won't solve your testing needs. They can support them, but really you need people/testers looking into your product, analysing it, looking for flaws, understanding the customer, and so forth.

    Saying that...I'd love to know what indie hackers use or do from a testing perspective... :)

    [Sorry, my life has been surrounded by software testers for the past 12+ years!]

    1. 2

      Hi Rosie, thanks for your response. First off, let me clarify that I'm asking about automated testing tools or testing services. I think my wording was off, however, so will update :-)

      I agree automated tests aren't a replacement for manual testing or analysis of an application. I do consider automated testing to be an additional aid in catching problems before your users do.

      > ...but even so, in most scenarios the indie hacker's focus is on validating the idea...

      I think idea validation can and does happen before building starts. Obviously not always. This is something that varies, but if you happen to be an entrepreneur who has validated their idea and is working on v1.0 of the product, it's reasonable to consider measures to maintain stability of what you put in front of your users. Sure, some of your users will tell you about bugs, but as you pointed out, in some cases you'll lose them, because they lose faith/trust in — or patience with — the product.

      > Going back to your question...what tools to use...testing isn't about tools. I feel sad that expertise comes down to tools.

      I don't believe it's the case at all that expertise comes down to tools. I'm a developer by trade, but before that I was in QA. I greatly appreciate the work that the QA engineers I work with do, and I let them know it. The bugs that they catch are often not the ones my automated tests could catch.

      In my experience, automated tests allow a shift of effort. There are different reasons for testing a product (as you know), whether it is software or hardware. Testing that a product technically works is not the same as testing whether it works well for users, whether the UI is confusing, or the UX frustrating.

      To illustrate my point about shift of effort: if several different paths through an application need to be tested (i.e. they technically work), then automating that makes sense, especially if any of them are critical paths. Why? Because this type of testing is time-consuming, but necessary. And it's for the very reason of iteration on a product that these paths can break, due to frequent changes and not always being able to determine the effect of a change in one area on another. If you want to keep these critical paths working, you have to test at a rate equal to the frequency of the changes. That's a lot of testing and a lot of time and effort as a result. An automated test can save that time and effort. It won't necessarily catch peripheral bugs (e.g. UI glitches or device-specific problems), but it can save time by catching general breakages early, and prevent you unintentionally pushing a change live that significantly disrupts your users' experience.

      > Testing tools won't solve your testing needs. They can support them, but really you need people/testers looking into your product, analysing it, looking for flaws, understanding the customer, and so forth.

      Agreed, you need people, because we're building for people, and they're the ultimate test of whether or not a product works. But automated tests have their place too, because when applied properly they allow for the human effort to be redirected and better utilised. At least, I think so.