I have a SaaS product I'm about to launch and I'm worried about my obligations to provide support and a reliable service. I've tried to keep things as simple as possible, but the number of moving parts involved makes me nervous. The major worries holding me back are:
What do you do if you have a major outage such as your login system breaking? If you have many users, do you not get swamped with emails? Can some automated messages or a decent support page help here?
Do you have a data recovery plan if you suffer data loss?
Do you have automated tests to catch major functionality breakages when you do deploys? This can be very time-consuming to set up.
Do you do any monitoring or auditing to check the security of your database?
Once you have users and data is being stored, how painful is it to make changes later? One thing I like about not having launched yet is you can make big changes to how your app works without migrating anything or worrying about what happens when you deploy and people are still using the old version. This is holding me back a lot.
My worst-case scenario is that I'll want to travel for a few days without a laptop nearby and get swamped with emails about something major breaking (I'm the only person working on this).
I know people are going to say to worry when you have paying users, but even with a few users I'm worried it's going to be a constant mental drain having to keep an eye on whether the SaaS is working as intended so I don't get bad reviews. I've launched desktop apps before, but this feels different because you're responsible for the platform everyone is running your app from.
Sarah,
Getting paying users is not trivial, and there are technological answers to your questions: yet your issue, I suspect, is not a technological one; it's psychological.
Ultimately, you're conflicted in what you want.
Why are you launching a SaaS, what's your goal, what's your motivation?
For many of us a SaaS represents a dream: it enables us to earn monthly recurring income, largely passively, by leveraging code and servers. This income stream, if large enough, provides time and location independence: we can live anywhere and set our own hours.
You're absolutely right, running a SaaS is a responsibility: but this is why we charge customers monthly. This responsibility means we need to be available to fix the service as required.
Yet technology gives us a hand:
.) To solve the hassle of running bare metal, use hosting options like Heroku, Firebase, or AWS that scale automatically.
.) Worried about backing up your data? Use a managed solution and your data is backed up for you, e.g. Heroku Postgres.
.) Use Sentry to alert you to errors and service outages (a minimal setup sketch follows this list).
.) Getting swamped by user emails is actually a positive thing, and a problem many of us would like to have. Indeed, if it ever becomes a major problem, you will probably have the revenue at that stage to hire help.
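To make the Sentry point concrete, here is a minimal sketch for a Python service, on the assumption that you have created a Sentry project and have its DSN (the one below is a placeholder):

```python
# Minimal Sentry setup for a Python service (sketch; the DSN is a placeholder).
# pip install sentry-sdk
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # replace with your project's DSN
    traces_sample_rate=0.1,   # sample 10% of transactions for performance data
    environment="production",
)

def risky_operation():
    # stand-in for your own code; here it just simulates a failure
    raise RuntimeError("simulated failure")

# Unhandled exceptions are reported to Sentry automatically;
# handled ones can be reported explicitly:
try:
    risky_operation()
except Exception as exc:
    sentry_sdk.capture_exception(exc)
```

With something like that in place, errors reach you without anyone having to email you about them first.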
Ultimately, a SaaS owner needs to have access to their laptop and email constantly. There will be occasional crises to fix early on. Yet for most people, the rewards far outweigh the costs.
If you want to travel without your laptop for several days and you're on your own, then a SaaS may not be for you. It's a responsibility, and you can't have its benefits without the small duty it imposes: you can't have your cake and eat it.
Some of these problems can be outsourced either for free (with limitations) or at scalable cost.
So, for example, paid-for solutions exist for website monitoring and database monitoring, so that you are informed as soon as your domain cannot be accessed or your database goes down.
While you are just starting out, you can manually copy your database off to somewhere else. Eventually, of course, you will want a system whereby every transaction is saved in the master database and in a copy on a different server somewhere.
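As a sketch of the manual approach, assuming a PostgreSQL database (the connection URL and backup directory are placeholders), a short script that shells out to pg_dump will do:

```python
# Timestamped PostgreSQL backup (sketch; DATABASE_URL and BACKUP_DIR are placeholders).
# Requires the pg_dump client tool to be installed locally.
import os
import subprocess
from datetime import datetime, timezone
from pathlib import Path

DATABASE_URL = os.environ["DATABASE_URL"]           # e.g. postgres://user:pass@host:5432/mydb
BACKUP_DIR = Path(os.environ.get("BACKUP_DIR", "backups"))

def backup_database() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_file = BACKUP_DIR / f"db-{stamp}.dump"
    # --format=custom produces a compressed dump that pg_restore can restore selectively
    subprocess.run(
        ["pg_dump", "--format=custom", f"--file={out_file}", DATABASE_URL],
        check=True,
    )
    return out_file

if __name__ == "__main__":
    print(f"Backup written to {backup_database()}")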
Database changes can be harder than functional ones. If you need to make major changes, it may be easier to add tables rather than restructure existing ones. Less efficient from a pure performance point of view, but possibly safer from a business continuity point of view.
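To illustrate the "add tables rather than restructure" idea, suppose you later need per-user settings: you can hang a new table off the existing users table instead of altering it. A minimal sketch with SQLite (the table and column names are invented for the example):

```python
# Additive schema change: attach a new table to an existing one rather than restructuring it.
# Sketch using the standard-library sqlite3 module; names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- the table that already exists and already holds live data
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL
    );

    -- new feature: per-user settings live in their own table,
    -- so the users table (and the code that reads it) is untouched
    CREATE TABLE user_settings (
        user_id       INTEGER NOT NULL REFERENCES users(id),
        timezone      TEXT,
        weekly_digest INTEGER DEFAULT 1
    );
""")
conn.execute("INSERT INTO users (id, email) VALUES (1, 'sarah@example.com')")
conn.execute("INSERT INTO user_settings (user_id, timezone) VALUES (1, 'Europe/London')")

# Old rows without settings still read fine thanks to the LEFT JOIN.
row = conn.execute("""
    SELECT u.email, s.timezone
    FROM users u LEFT JOIN user_settings s ON s.user_id = u.id
""").fetchone()
print(row)  # ('sarah@example.com', 'Europe/London')
```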
Remember, too, that the technical gurus will always purse their lips and complain that you could improve performance a million times over if you did X rather than Y. You are running a business first and a technical paradise second, so you structure matters to suit your business needs, even if that means your data is not normalised to the nth degree.
Functional changes are made easier if you group functions logically and don't try to put all of your code into one or two massive files. Structuring your code base makes it easier to maintain and also reduces the extent of any migrations you have to make. If you have 300 functions in one file, the whole file has to be migrated each time you make a change. If you have a number of smaller files, each dealing with a specific aspect of your application, it not only helps to clarify your thinking at code-writing time but also means you are only changing small parts of the application, one at a time. Wait for a change to bed down before you introduce further changes. Also, maintain a version control system so that you can always revert to a known, fixed state.
And don't expect everything to go bang all at once. Before you launch, write down a list of the back-end, administrative features you need to develop, outsource, and deploy. Then work your way through them until you have the functionality and strategy in place to protect yourself.
The front-end application is only part of the story, but it is the revenue-earning part, so get that out of the door and then build the reinforcing buttresses as part of your continued business support role.
By the way, I read yesterday that FaceAche was down again for most of the day - so, if nothing else, you will be in good company if something does happen!
Priorities? Protect your data. Everything else is merely an outage or a bug which can be restored or fixed.
Almost everything you mentioned here is a side effect of running any business. The hard truth is that running any kind of business requires careful attention, planning, and staying one step ahead of issues that may arise.
Each point you mention here has something in common: they require work. Rolling up your sleeves and putting your nose to the grindstone is the reality of running your own business.
Here's my advice: don't try to come up with the perfect solution for each possible issue; instead, refer to the industry experts who have been generous enough to put their knowledge out there. Research database configurations for hardening your security. Write unit tests around your most important features (ideally every feature) to catch issues early. Establish a support email separate from your personal inbox to handle outage reports, support requests, etc.
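On the unit-test point, even a handful of tests around the features that earn you money (sign-up, login, billing) catches most deploy-breaking mistakes. A minimal pytest sketch, where authenticate() is a hypothetical stand-in for your own login logic:

```python
# Minimal tests around a critical feature (sketch; `authenticate` stands in for your own code).
# Run with: pytest test_auth.py
import pytest

# --- stand-in for application code --------------------------------------------
_USERS = {"sarah@example.com": "correct-horse-battery-staple"}

def authenticate(email: str, password: str) -> bool:
    """Return True if the credentials are valid."""
    return _USERS.get(email) == password

# --- tests ---------------------------------------------------------------------
def test_valid_credentials_log_in():
    assert authenticate("sarah@example.com", "correct-horse-battery-staple")

def test_wrong_password_is_rejected():
    assert not authenticate("sarah@example.com", "wrong-password")

@pytest.mark.parametrize("email", ["", "unknown@example.com"])
def test_unknown_users_are_rejected(email):
    assert not authenticate(email, "anything")
```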
I think you can handle it, and so should you!
Look at it this way - if they report bugs, it means they care :)
I'm a solo founder too, and I guess we can be hard on ourselves from time to time. You're probably juggling 8 jobs and have around 200 tasks that need to be completed. Give yourself some credit: you're doing the best you can, and it's not the end of the world if a user receives a response to their email a few hours later.
Good luck
This is why my SaaS project is a monitoring project
Things don't go wrong that often, and when they do, users are forgiving.
You don't want to have to keep an eye on whether your SaaS is working. There are many uptime monitoring services you can set up quickly to tell you when your site is down or when your latency is way up. At my day job we use New Relic for this; you can go a long way on the free tier. You can connect such services to e.g. PagerDuty to call / SMS / email / push notify you. Alternatively, a lot of monitoring services have SMS alerts built in. This gives you peace of mind: you will be notified when your SaaS isn't working, without you having to actively monitor it.
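For what it's worth, the check those services run is not magic. Here is a rough hand-rolled sketch of the same idea (the URL and the alert hook are placeholders); in practice a hosted monitor is still the better choice, precisely because it runs somewhere other than your own infrastructure:

```python
# Hand-rolled uptime/latency check (sketch). A hosted monitor is usually preferable;
# this just shows the underlying idea. pip install requests
import time
import requests

URL = "https://example.com/health"   # placeholder: your health-check endpoint
LATENCY_BUDGET_SECONDS = 2.0

def alert(message: str) -> None:
    # Placeholder: wire this to email, SMS, or a push service of your choice.
    print(f"ALERT: {message}")

def check_once() -> None:
    start = time.monotonic()
    try:
        response = requests.get(URL, timeout=10)
        latency = time.monotonic() - start
        if response.status_code != 200:
            alert(f"{URL} returned HTTP {response.status_code}")
        elif latency > LATENCY_BUDGET_SECONDS:
            alert(f"{URL} is slow: {latency:.1f}s")
    except requests.RequestException as exc:
        alert(f"{URL} is unreachable: {exc}")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(60)  # check once a minute
```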
Beyond basic uptime / latency monitoring, managed database services expose metrics that you can also set alerts on. E.g. I've configured AWS RDS to trigger an alarm whenever the average number of connections on the database exceeds 20 over a 5-minute window. This lets me react to performance problems while they're happening, before things get out of hand. Managed database services will also give you the ability to make regular automatic backups, and to restore from them. Yes, you need an idea of how to do this, but you don't need a detailed recovery plan. You can cross that bridge when you get there.
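For reference, an alarm like the one I described is only a few lines with boto3; this is a sketch, and the region, instance identifier, and SNS topic ARN are placeholders for your own:

```python
# CloudWatch alarm on RDS connection count (sketch; identifiers and ARNs are placeholders).
# pip install boto3; credentials come from your usual AWS configuration.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="rds-too-many-connections",
    Namespace="AWS/RDS",
    MetricName="DatabaseConnections",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
    Statistic="Average",
    Period=300,                  # average over a 5-minute window
    EvaluationPeriods=1,
    Threshold=20,                # alarm when the 5-minute average exceeds 20 connections
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:alerts"],  # SNS topic that notifies you
)
```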
You have to perform a "smoke test" of your service after you deploy. Automating this helps, but if your service is simple enough, a five-minute manual pass can be plenty. Automated tests can indeed take a long time to set up and maintain. If you don't have them yet, and you don't have a broad feature set, they might not be worth the effort. A checklist that you run through manually on each deploy can go a long way.
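You can also half-automate that checklist with a short script that hits the endpoints that must work; a rough sketch (the URLs stand in for your own pages):

```python
# Post-deploy smoke test: hit the endpoints that must work and fail loudly if any don't.
# Sketch; the URLs are placeholders for your own service. pip install requests
import sys
import requests

CHECKS = [
    ("home page",  "https://example.com/"),
    ("login page", "https://example.com/login"),
    ("API health", "https://example.com/api/health"),
]

def main() -> int:
    failures = 0
    for name, url in CHECKS:
        try:
            ok = requests.get(url, timeout=10).status_code == 200
        except requests.RequestException:
            ok = False
        print(f"{'OK  ' if ok else 'FAIL'} {name}: {url}")
        failures += 0 if ok else 1
    return failures

if __name__ == "__main__":
    sys.exit(main())  # non-zero exit code if anything failed, so a deploy script can abort
```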
I wouldn't worry about the security of the database. You'll probably end up looking at it periodically during development, and this is good enough to notice anything crazy going on.
Yes, maintaining backwards compatibility takes some effort. The good thing about a web-based SaaS is that you can update the client code at the same time as you update or migrate your database schema and contents. If you distribute a mobile app, you can also set a limit on how long you support old versions, and build a version check into the app that forces the user to upgrade when the app is too old to work.
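The version check can be as simple as the app asking the server for the minimum supported version on startup and comparing it to its own. A minimal sketch of the client side (the /api/min-version endpoint is made up for the example):

```python
# Minimal client-side version gate (sketch). The /api/min-version endpoint is a made-up
# example; your server would return the oldest app version it still supports.
# pip install requests
import requests

APP_VERSION = (1, 4, 2)  # this build's version

def min_supported_version() -> tuple[int, ...]:
    data = requests.get("https://example.com/api/min-version", timeout=10).json()
    # e.g. "1.3.0" -> (1, 3, 0)
    return tuple(int(part) for part in data["min_version"].split("."))

def must_upgrade() -> bool:
    # Tuple comparison orders dotted versions correctly: (1, 2, 9) < (1, 3, 0).
    return APP_VERSION < min_supported_version()

if __name__ == "__main__":
    if must_upgrade():
        print("This version is no longer supported; please update the app.")
    else:
        print("Version OK.")
```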
Your service won't start to break suddenly. You'll encounter most problems immediately after deploying a new version. The rest of the problems typically come with scale, but you don't need to worry about scale when starting out. Set up some basic monitoring and alerting, carry your laptop around, and don't worry about it.
I've been responsible for uptime at an enterprise sales SaaS startup for 4 years (now at ~$250,000 MRR!), and despite having very bad code and unnecessary complexity, the service rarely goes down. We get maybe one or two bad events a year, which we catch with our monitors, and typically resolve within an hour. You'll be fine -- launch!
Receiving constant "it's broken" emails from users about various bugs was a huge fear of mine as well when launching products. While I was building a website for a client once, he sent me about 30 emails in a one-hour period with bugs; it took me hours to go through all his feedback, and it was stressful.
So I built a tool to allow me to collect this feedback in a structured, actionable way: https://www.bugfeedr.com/
Now I just add a link in the footer for all my websites and client projects that point to the submit feedback form and I'm able to track things much easier and even push tickets into Asana, Trello, Jira, Slack, Microsoft Team Services, etc. for my developers to fix if needed.
One nice piece is that my clients and users don't have to create a new account and log in anywhere to submit feedback. They simply go to a web form and submit.
Providing a reliable and secure service is hard. For the first year of Moesif, it was just two people. Initially, only smaller companies were using us on the free plan; it took a few months after the initial release before we had some bigger companies using us and paying us. Now we have thousands of developers and some big names using us.
We went through many sleepless nights to stay reliable, since even some of the smaller companies pump millions of API calls through us daily. And since we boil down to a data company, security is super important.
I think it is about commitment. Do what it takes; you can do it. You have obviously thought about these things already. You are already ahead of others who haven't thought about them or don't care.
For the last 18 months I've worked as the only developer on an app that has 600K+ users, a portion of them paying customers.
Sometimes the server goes down, and within minutes there are dozens of customer support requests saying they are paying and it's not working. Some of them leave 1-star reviews on the app store that will stay there forever.
Shit happens. Even with giant companies, like Slack, Twitter, etc.
It's important that you set up tools to get notified instantly about any issue and that you (or another person) are able to fix it very quickly. Time is money and, in this case, reputation.
These situations tend to be very rare, though, so you don't need to worry about it THAT much. But be prepared for it. If you want to travel and not touch a computer at all, you may need another person helping you with this.
You don't want to leave a broken service for your customers for a long period of time. That would destroy your credibility.
Some examples of tools: Heroku alerts, New Relic, HyperPing.
Yes, this is a huge commitment. Maybe you can take some of the pressure off by calling it a "beta" and planning to keep that label for quite a while!? Also write a disclaimer saying that you provide the service "as is" for now and offer no guarantee that it will work flawlessly in a production scenario yet. Provide a way to report issues that is convenient for you and your users.
What you should not do is reduce the price because you do not feel confident. Start with a relatively high price even in beta ... and raise it once you leave beta.
You can read Site Reliability Engineering to get a big-picture overview of things like service level agreements, monitoring and alerting, and operating a service. You can probably ignore many of the topics ... but you seem to have a good enough understanding to filter out what is relevant.
Then derive a master plan for getting to a point where you can let great system admins take care of all the operations topics.