Real-time responsiveness has become a core expectation in mobile apps—from chat and presence indicators to collaborative tools and live dashboards. But building it has always come with trade-offs, especially on mobile: the tools exist, but wiring them together isn’t straightforward.
Over the last few months, we've been exploring what a real-time system could look like if it were built specifically for mobile environments and designed to avoid the usual infrastructure overhead. This post outlines some of what we learned, the architectural decisions we made, and how that shaped the real-time system now available in Calljmp.
WebSockets are the go-to tool for pushing updates from server to client. They work well in many cases, but they don’t naturally fit mobile realities. Unlike the desktop web, mobile apps contend with backgrounding, network switching, battery constraints, and more aggressive connection drops.
Maintaining a stable socket connection can be expensive—not just in terms of infrastructure, but in logic. You have to handle reconnects, buffering, retries, and failure states across diverse devices and OS versions. For most developers, that means spending significant time on problems unrelated to core product functionality.
At a basic level, "real-time" just means that clients receive updates the moment data changes, without needing to refresh or poll. This can apply to a wide range of features: messaging, dashboards, activity feeds, collaborative editing, and more.
But “real-time” also carries deeper UX expectations. Users assume that changes will propagate instantly and consistently. If messages arrive late, disappear temporarily, or show up out of order, the illusion of "live" breaks—undermining trust in the interface. That puts pressure on the backend to deliver more than just fast data; it has to be predictable and observable.
Instead of exposing raw WebSocket connections, we started thinking in terms of topics and events. The idea is simple: clients can subscribe to topics they care about—like `chat.room.42` or `presence.users.online`—and receive structured updates when something happens.
This model lends itself well to mobile. Each subscription is scoped, filtered, and field-projected. You don’t have to listen to a firehose and manually sort out relevance. For example:

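A minimal sketch of that subscription model, in TypeScript. Note that `TopicBus`, `SubscribeOptions`, and the option names below are illustrative assumptions, not the actual Calljmp SDK; in a real system the filtering and field projection would happen server-side, before anything reaches the device.

```typescript
// Hypothetical sketch of a scoped, filtered, field-projected subscription.
type Event = Record<string, unknown>;
type Handler = (event: Event) => void;

interface SubscribeOptions {
  // Deliver only events matching this predicate.
  filter?: (event: Event) => boolean;
  // Deliver only these fields of each event (field projection).
  fields?: string[];
}

class TopicBus {
  private subs = new Map<string, { options: SubscribeOptions; handler: Handler }[]>();

  subscribe(topic: string, options: SubscribeOptions, handler: Handler): () => void {
    const entry = { options, handler };
    const list = this.subs.get(topic) ?? [];
    list.push(entry);
    this.subs.set(topic, list);
    // Return an unsubscribe function so the app can drop the topic cleanly.
    return () => {
      const current = this.subs.get(topic) ?? [];
      this.subs.set(topic, current.filter((e) => e !== entry));
    };
  }

  publish(topic: string, event: Event): void {
    for (const { options, handler } of this.subs.get(topic) ?? []) {
      if (options.filter && !options.filter(event)) continue;
      const projected = options.fields
        ? Object.fromEntries(options.fields.map((f) => [f, event[f]] as [string, unknown]))
        : event;
      handler(projected);
    }
  }
}

// Usage: subscribe to one chat room, keeping only the fields the UI needs.
const bus = new TopicBus();
const received: Event[] = [];
bus.subscribe(
  "chat.room.42",
  { filter: (e) => e.type === "message", fields: ["text", "sender"] },
  (e) => received.push(e),
);
bus.publish("chat.room.42", { type: "message", text: "hi", sender: "ana", internalId: 7 });
bus.publish("chat.room.42", { type: "typing", sender: "ana" }); // filtered out
```

The unsubscribe function matters on mobile: when a screen unmounts or the app backgrounds, the client can drop exactly the topics it no longer needs instead of tearing down and rebuilding a whole connection.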
This reduces bandwidth and battery usage, while also giving the app more control over what it receives and when.
The same event-driven model applies to your data layer. Instead of polling for changes or setting up webhook triggers, clients can subscribe to inserts, updates, or deletes on a specific table.

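To make the idea concrete, here is a small self-contained sketch of table-level change subscriptions; the `Database` class, `onChange` method, and event shape are hypothetical stand-ins for illustration, not the actual Calljmp API.

```typescript
// Hypothetical sketch: subscribing to inserts on a specific table.
type Row = Record<string, unknown>;
type ChangeType = "insert" | "update" | "delete";

interface ChangeEvent {
  table: string;
  type: ChangeType;
  row: Row;
}

class Database {
  private listeners: ((e: ChangeEvent) => void)[] = [];
  private tables = new Map<string, Row[]>();

  // Subscribe to one change type on one table; returns an unsubscribe function.
  onChange(table: string, type: ChangeType, handler: (row: Row) => void): () => void {
    const listener = (e: ChangeEvent) => {
      if (e.table === table && e.type === type) handler(e.row);
    };
    this.listeners.push(listener);
    return () => {
      this.listeners = this.listeners.filter((l) => l !== listener);
    };
  }

  insert(table: string, row: Row): void {
    const rows = this.tables.get(table) ?? [];
    rows.push(row);
    this.tables.set(table, rows);
    // Push the change to subscribers immediately; there is no polling loop.
    for (const l of this.listeners) l({ table, type: "insert", row });
  }
}

// Usage: a chat screen reacts to new rows in `messages` as they land.
const db = new Database();
const feed: Row[] = [];
db.onChange("messages", "insert", (row) => feed.push(row));
db.insert("messages", { id: 1, text: "hello" });
db.insert("users", { id: 9, name: "ana" }); // different table, ignored
```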
When a new message is inserted, subscribers get notified immediately. No delay, no polling interval to tune. Just fresh data pushed directly to the app.
A lot of real-time systems fail quietly. Messages drop. Events misfire. Connections silently fail and reconnect without acknowledgment. For us, building real-time meant building observability from day one. That includes:
- Event logs per topic and per connection
- Metrics on message throughput and latency
- Debugging tools to replay and trace real-time flows
You can’t improve what you can’t see. In real-time systems especially, where bugs can be hard to reproduce, visibility is key.
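As a rough illustration of the kind of per-topic accounting described above, here is a minimal metrics recorder. The `RealtimeMetrics` class and its method names are assumptions for the sketch, not part of any real API.

```typescript
// Hypothetical sketch: per-topic throughput and latency counters.
interface TopicMetrics {
  messages: number;       // throughput: total messages delivered
  totalLatencyMs: number; // sum of delivery latencies, for averaging
}

class RealtimeMetrics {
  private byTopic = new Map<string, TopicMetrics>();

  // Record one delivered message and how long it took to reach the client.
  record(topic: string, latencyMs: number): void {
    const m = this.byTopic.get(topic) ?? { messages: 0, totalLatencyMs: 0 };
    m.messages += 1;
    m.totalLatencyMs += latencyMs;
    this.byTopic.set(topic, m);
  }

  averageLatencyMs(topic: string): number {
    const m = this.byTopic.get(topic);
    return m && m.messages > 0 ? m.totalLatencyMs / m.messages : 0;
  }
}

// Usage: record two deliveries on one topic, then read the average.
const metrics = new RealtimeMetrics();
metrics.record("chat.room.42", 12);
metrics.record("chat.room.42", 18);
```

Even counters this simple are enough to spot a topic whose latency is drifting, which is exactly the kind of signal that is invisible without instrumentation.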
One of the things we discussed early was whether to support native WebSocket access. We chose not to. While WebSockets are powerful, they’re also low-level and expose developers to a wide surface area of connection logic. Instead, everything flows through topic-based observers and event subscriptions.
Security was also a big factor. Because the platform already supports App Attestation (iOS) and Play Integrity (Android), we wanted to avoid the need for API keys altogether. Every real-time subscription is tied to a verified, attested mobile client.
Real-time features are often associated with hidden costs—particularly on platforms that charge per connection, per message, or per channel. That model tends to scale poorly for apps with lots of users but relatively low message volume per user.
In our case at Calljmp, we decided early not to meter real-time usage by connections or outbound messages. Instead, we focus on predictable pricing tiers with clear throughput limits. That makes it easier for teams to estimate usage, even during early-stage development.
Real-time isn’t about being flashy. It’s about giving users confidence that their actions are seen, their data is current, and their interactions matter. That requires more than just a socket connection—it requires an architecture that fits mobile conditions, development constraints, and user expectations.
We’ve only just begun rolling this out, and we’re learning from real-world usage every day. If you’re working on a mobile app where timing and presence matter, I’d be curious to hear what’s worked (or hasn’t) for your team.
You can learn more about our implementation at calljmp.com, or check out the real-time docs if you want to dive deeper.