
How are you handling memory and context across AI tools?

I keep running into the same problem with AI tools:

They're great at reasoning, but terrible at remembering. Important context gets lost across sessions and I keep having to re-feed it (I guess I'm not the only one).

That became painful enough that I ended up building Kumbukum — an open source memory infrastructure for teams and AI tools.

The idea is simple: make context persistent, searchable, inspectable, and editable, so assistants can pull the right information instead of starting from scratch every time. However, and this is key, I wanted to build something that's not just for AI tools, but for teams in general. So you get a clean UI to manage your team's collective knowledge, and an API that any tool can integrate with. I wanted something teams can actually read, manage, edit, and self-host if they want.

Right now it supports things like:

• notes
• memories
• URLs (with whole site indexing)
• relationships between them
• Git sync
• and I'm currently adding email too

It also includes a browser extension that can extract information from any webpage and send it to Kumbukum with one click.

I'm curious how others here are handling this.

Are you:

• just relying on chat history?
• summarizing manually between sessions?
• using RAG on top of docs?
• building your own internal memory system?
• using MCP-based setups already?

Would genuinely love to hear what's working and what still feels broken.

If useful for context:

https://kumbukum.com
https://github.com/kumbukum/kumbukum


UPDATE on Benchmark:

Since posting this, we've been optimizing the pipeline, and here are the latest numbers (since everyone loves those benchmarks). This is done with OpenAI Codex and GPT-5.5:

Great result.

| Tool | Before | After | Saved |
|---|---:|---:|---:|
| search_knowledge | 7,804 tokens | 2,293 tokens | 5,511 tokens, 70.6% |
| recall_memory | 2,394 tokens | 1,268 tokens | 1,126 tokens, 47.0% |
| search_notes | 4,074 tokens | 1,027 tokens | 3,047 tokens, 74.8% |

Combined retrieval payload:

14,272 -> 4,588 tokens

Saved: 9,684 tokens, 67.9%

Chars dropped from 57,086 -> 18,349, saving 38,737 chars.

Latency stayed basically noise-level: 300ms -> 321ms combined across the three retrieval calls.
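For anyone who wants to sanity-check the table, the savings arithmetic works out; a quick sketch using only the numbers above:

```python
# Sanity check of the benchmark table above: saved = before - after,
# and the percentage is saved / before.
rows = {
    "search_knowledge": (7804, 2293),
    "recall_memory": (2394, 1268),
    "search_notes": (4074, 1027),
}

total_before = sum(b for b, _ in rows.values())
total_after = sum(a for _, a in rows.values())

for tool, (before, after) in rows.items():
    saved = before - after
    print(f"{tool}: saved {saved} tokens ({saved / before:.1%})")

# 14,272 -> 4,588 combined, i.e. the 67.9% reduction quoted above.
print(f"combined: {total_before} -> {total_after}, "
      f"saved {total_before - total_after} "
      f"({(total_before - total_after) / total_before:.1%})")
```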


Update to the updated benchmark

In the words of Codex GPT-5.5 again:

Fantastic. Second-stage savings:

| Tool | Before metadata slim | After | Additional saved |
|---|---:|---:|---:|
| search_knowledge | 2,293 | 1,313 | 980, 42.7% |
| recall_memory | 1,268 | 325 | 943, 74.4% |
| search_notes | 1,027 | 326 | 701, 68.3% |

Combined:

4,588 -> 1,964 tokens

Additional saved: 2,624 tokens, 57.2%

From the original baseline:

14,272 -> 1,964 tokens

Total saved: 12,308 tokens, 86.2%

Chars went:

57,086 -> 7,853

Total char reduction: 49,233 chars, also 86.2%.

That is a genuinely excellent optimization: same retrieval intent, same semantic search, much cleaner context. Stored the exact production benchmark in Kumbukum.
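One way to read the two-stage result: the reductions compound multiplicatively, which a few lines confirm (numbers taken from the tables above):

```python
# The two optimization stages compound: stage one keeps 4,588 of 14,272
# tokens, stage two keeps 1,964 of those 4,588; the product is the fraction
# of the original payload that remains.
baseline, stage1, stage2 = 14272, 4588, 1964

kept = (stage1 / baseline) * (stage2 / stage1)  # fraction remaining after both stages
print(f"total reduction: {1 - kept:.1%}")       # matches the 86.2% above
```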

Posted to Developers on April 23, 2026
  1. 1

    I've been dealing with this exact problem. My current approach
    is keeping a structured briefing document that I paste at the
    start of each session. Not elegant, but it works until something
    better comes along. Curious if anyone has found a cleaner solution.

  2. 1

    Interesting! It's a great idea, especially since the leading models keep changing and different models are good for different things.

  3. 1

    I’ve been running into the same thing — AI is great at reasoning in the moment, but context just falls apart over time.
    What’s been tricky for me isn’t just “memory” in the sense of storing everything, but actually being able to reuse specific pieces of context across sessions without dragging the whole history along.
    A lot of setups I’ve tried end up either too heavy (full context, RAG, etc.) or too fragmented (notes, bookmarks).
    Curious how you’re thinking about that tradeoff — do you see this more as a “store everything and retrieve” problem, or something closer to selectively extracting and reusing smaller bits of context?

  4. 1

    This is one of the biggest friction points in AI workflows right now. Every tool starts fresh and you end up re-explaining context constantly. I've been thinking about this too — are you using any specific tools to bridge the context gap between sessions, or mostly manual workarounds?

  5. 2

    This is a real problem. The constant context reset gets frustrating fast.
    I like that you’re treating memory as something teams can actually see and manage, not just something hidden behind prompts.

    But, how are you thinking about relevance over time though, like what gets surfaced vs ignored as things grow.

    1. 1

      Great question. In fact, I was just working on this today, and I'm about to run some benchmarks to see whether my thinking was correct. Give me 30 minutes, and I'll get back to you with some numbers and an explanation of how I tackled it.

      1. 1

        Ha, that took a bit longer than 30 minutes :) But here are the results we have so far:

        search_knowledge:
        Default: ~7804 tokens
        Kumbukum: <1000-1500

        recall_memory:
        Default: ~2394
        Kumbukum: <500

        search_notes:
        Default: ~4074
        Kumbukum: <500

        Tested on the same number of documents, size, etc. One is the raw ".md" memory in Codex, and the other is the same memory documents in Kumbukum with optimized collections, MCP server, and semantic search.

        We continue to tweak it right now.

  6. 2

    Sharp problem to build around.
    A lot of AI tools compete on intelligence, while users quietly suffer from continuity loss between sessions. Would try it.

    1. 1

      Yep totally. Everything is built by developers for developers. Kumbukum's approach is a bit different. Users are at the center. Easy ways to get data in, edit it, and understand it. AI tools crunch it.

      1. 1

        Strong direction. A lot of AI workflows don’t fail because the model is weak, they fail because context retrieval is noisy, bloated, or missing. Better memory systems often create more value than switching to a better model.

        Those token reduction numbers make that pretty clear.

  7. 1

    I understand that pain perfectly, it's exactly what led me to design my own system. To solve that AI "amnesia", I built an infrastructure that I call OMEN, and it's basically the "hippocampus" that my tools were missing.
    Here I share with you how I am handling it, in case it helps you to contrast with Kumbukum:

    1. Persistent and Structured Memory
      Instead of re-feeding the AI each session, I designed a local database called Omen with an architecture of 10 relational tables. I don't just save text; I organize knowledge into Projects, Modules, and Input Types. Instead of starting from scratch, the assistant consults a structure where each piece of information has a logical and permanent place.
    2. "Logical Weight" and Hierarchy
      Something that seems vital to me, and that I integrated in my Stage 6, is the logic_weight (a weight from 0 to 100) and classification by Functional Type (Axioms, Evidence, and Hypotheses). Axioms are immovable technical truths; Hypotheses are temporary tests or ideas. This lets the AI know which information is "the law" and which is just an experiment, preventing it from getting lost in a sea of irrelevant data.
    3. Total Sovereignty (Offline First)
      Unlike many cloud tools, my priority is Digital Sovereignty. My system is 100% local and offline; in my network tests, the indicator shows 0 Kbps in and out. This lets me manage sensitive or technical knowledge without a single byte leaving my hardware.
    4. Massive Intake Capacity
      For memory to be useful, it has to be large. I developed a ZIM file importer (for archives like those from Wikipedia or Kiwix) and automatic PDF, Word, and Markdown extractors. The system strips junk markup (HTML/CSS) with a filter_data_zim function so the AI receives only pure, readable context.
    5. Future-Proof (Vectors)
      I've already left an embedding_vector column ready in my central table, so my database is prepared for an AI model (which I'm integrating at this moment) to perform semantic searches, finding information by its "meaning" and not just by keywords.

    Bottom line: my focus has been to build a sovereign memory infrastructure. It's not just a database; it's a system where I design the logic and architecture, and I use AI as an execution team that queries that technical memory so I never forget the context of my mission.
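    To make the logic_weight idea concrete, here is a minimal sketch following the description above; the table and column names are illustrative, not the actual OMEN schema:

```python
import sqlite3

# Illustrative logic_weight-style store (hypothetical names, not real OMEN).
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE omen_entries (
        content         TEXT NOT NULL,
        functional_type TEXT CHECK (functional_type IN ('axiom', 'evidence', 'hypothesis')),
        logic_weight    INTEGER CHECK (logic_weight BETWEEN 0 AND 100)
    )
""")
db.executemany(
    "INSERT INTO omen_entries VALUES (?, ?, ?)",
    [
        ("API keys live in the vault, never in code", "axiom", 100),
        ("Batching writes cut latency ~30% in the last test", "evidence", 60),
        ("Switching to gRPC might simplify the clients", "hypothesis", 20),
    ],
)

# Retrieval surfaces "the law" first and experiments last, so the AI knows
# which statements are immovable and which are tentative.
for content, ftype, weight in db.execute(
    "SELECT content, functional_type, logic_weight "
    "FROM omen_entries ORDER BY logic_weight DESC"
):
    print(f"[{ftype}/{weight}] {content}")
```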
  8. 1

    Most of my activity has been manual. I ask Claude to create an md file when I believe I'm at a key milestone, and due to my OCD I keep asking it to update the md more often than necessary (not sure if that's efficient!). Then, when I need to move to another thread (having run out of context window), I ask the new thread to read the relevant section of the md. This has been working for me so far. My fear with tools that claim to handle context across threads is that they will spit out something wrong (a silent failure), which is a different kind of problem: significant time and effort has to be spent on validation before gaining any confidence in them.

  9. 1

    Hitting this exact problem right now. I'm a researcher building a SaaS tool on the side — 1-2 hours a day after work — and the biggest overhead isn't coding, it's re-explaining context to AI every session.
    Currently using a shared project file as a manual "memory layer" — works for now, but definitely doesn't scale.
    The Git sync angle is interesting. Does Kumbukum handle non-dev context well too? My use case is more product/workflow decisions than code.

  10. 1

    This resonates a lot — the “great at reasoning, bad at remembering” issue is exactly what I keep running into as well.

    Right now I’m mostly relying on a mix of manual notes + re-feeding context when needed, which definitely doesn’t scale. I’ve looked into RAG setups, but for smaller projects it often feels like overkill compared to the actual problem.

    What I find tricky isn’t just storing context, but deciding what is worth remembering vs what should be ignored — otherwise it quickly turns into noise.

    Curious — how are you handling prioritization or filtering in Kumbukum?
    Is it mostly user-driven (manual tagging/structuring), or do you have any logic to surface “important” context automatically?

  11. 1

    Really interesting take — the pipeline benchmark numbers are wild (86% token reduction is no joke). One pattern I keep hitting while building a lightweight capture tool in a nearby space: the weakest link is almost never retrieval, it's the capture step. If logging a thought takes more than ~2 seconds, people stop doing it and the memory layer starves. I've been getting surprisingly good mileage out of piping captures into email as a "dumb" transport — every tool already parses it, dedupes for free, and survives stack changes. Curious whether you've explored non-UI capture paths for Kumbukum (email ingest, SMS, CLI pipe), or is the browser extension the primary inbox right now? Also, how are you handling stale or conflicting memories — last-write-wins, or is there a decay heuristic?

  12. 1

    Really like the approach of making context persistent + editable. The memory benchmark numbers are impressive.

    From building prompt libraries for indie makers, one pattern I keep seeing: the biggest context loss isn't between tools, it's between SESSIONS. Teams rewrite the same role setups, ICP definitions, and output specs every time. Having that layer as reusable, versioned prompts (with bracketed variables) fixes 70% of the problem before you even need a memory layer.

    Might be complementary to what you're building — Kumbukum for long-lived team memory, a solid prompt library for the deterministic parts.

  13. 1

    Treating memory as an app layer instead of a model feature has worked better here. A small per-user store with explicit facts, recent actions, and task summaries beats relying on each tool's chat history, especially once people switch models or devices. Shared memory across tools sounds nice, but in practice stale context causes weird behavior, so keeping it scoped and easy to reset matters a lot.
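    The app-layer pattern described above can be sketched roughly like this; every name here is illustrative:

```python
from dataclasses import dataclass, field

# Sketch of an app-layer, per-user memory: explicit facts, a short rolling
# window of recent actions, and a task summary, scoped per user and
# trivially resettable. All names are illustrative.
@dataclass
class UserMemory:
    facts: dict[str, str] = field(default_factory=dict)      # stable, explicit facts
    recent_actions: list[str] = field(default_factory=list)  # short rolling window
    task_summary: str = ""                                   # one summary per task

    def remember_action(self, action: str, keep_last: int = 10) -> None:
        self.recent_actions = (self.recent_actions + [action])[-keep_last:]

    def reset(self) -> None:
        """Scoped reset: stale context is wiped without touching other users."""
        self.facts.clear()
        self.recent_actions.clear()
        self.task_summary = ""

stores: dict[str, UserMemory] = {}  # one store per user id

mem = stores.setdefault("user-42", UserMemory())
mem.facts["preferred_stack"] = "Flutter + Firebase"
mem.remember_action("opened PR")
```

    Keeping the store scoped and easy to reset is what avoids the stale-context weirdness mentioned above.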

  14. 1

    It's a real problem. I often talk and research with AI, especially Gemini, and it has a very bad memory; I have to tell it the same thing again the day after tomorrow. This app can help me with that.

    I have one more problem: all these chatbots need a way to ask about part of a response without affecting the main chat. There are many times when I want to dig into a specific point or line the AI generated, but asking about it in place gets messy and breaks the flow of the main thread. There should be some kind of thread that branches from that particular point in the response, opening a new chat window where you can clarify it without compromising the main conversation. That is also a problem I think can be fixed!

  15. 1

    Yes, this feels very genuine.

    I've run into the same issue; the model is not bad per se, but context seems to escape it.
    I've tried the "documents and summaries" method as well. It works for a while, only to drift slowly, since it relies on you.

    Your point about deciding what deserves to be remembered may be the hardest part.
    Remember everything and it all becomes noise; be too choosy and you lose access to useful context later on.

    Thinking of memory in layers is interesting: some things are fleeting, some stay, some get reused.
    The step you describe is really a promotion step: at some point, something must be deemed worth keeping rather than merely tucked away in passive storage.
    Otherwise, you just replicate the same disorder somewhere else.

    I'm also interested in maintenance. I haven't seen people stick with these systems unless they're super lightweight; anything that doesn't clearly pay off won't get the extra effort.

  16. 1

    I've run into this exact problem. The context re-feeding overhead is real, and it compounds over time.

    On the voice side, I actually think the input modality matters too. When you're constantly switching between the keyboard and AI tool windows, you lose thread continuity.

    For me, tools that let you dictate directly into the workflow, without jumping to a browser or document, cut down the friction before the memory problem even starts. You get the thought down faster and cleaner, which means less cleanup work downstream.

    Curious if you've looked at input latency as a factor too? Even 200-300ms of friction per input event adds up when you're doing hundreds per day.

    1. 1

      Yes, input is important as well. Here is a breakdown that hopefully makes sense:

      First and foremost, Kumbukum is not a memory tool; it's an infrastructure for AI and people. The main goal is to get data into one place and provide humans the option to feed it.

      To solve this:

      Retrieve (AI):

      • MCP
      • API

      Feed (AI and humans):

      • nice UI
      • Drag and drop any file to your projects (extract it, store it in AI-friendly format)
      • Browser extension to add notes, URLs, and now also emails
      • Git sync: currently just adds all readable text formats
      • Use the API to automatically feed data
      • Use the MCP to feed data

      Make sense (AI):

      • tags
      • links
      • knowledge graph

      All of the above preformat, categorize, and link content in a way that makes sense to AI. Less searching, faster AI, fewer tokens used, etc.

      Simple :) https://kumbukum.com

      1. 2

        yeah that framing resonates - infrastructure not a document pile. I've been hitting the retrieval problem hard in my own agent setup, that's usually where these things fall apart. curious whether Kumbukum handles cross-session continuity or just same-session context?

        1. 1

          I have Claude in VS Code, Codex GPT-5.5, OpenClaw in Telegram, and then tons of MCP calls from customers all going to Kumbukum at the same time.

          I've built Kumbukum as a fault-tolerant, load-balanced system. Our production setup includes multiple Caddy instances, a MongoDB ReplicaSet, a Typesense cluster, over 400 containers in Docker Swarm, Cloudflare, etc.

          AMA ? :)

  17. 1

    UPDATE on Benchmark:

    Since posting this, we've been optimizing the pipeline, and here are the latest numbers (since everyone loves those benchmarks). This is done with OpenAI Codex and GPT-5.5:

    Great result.

    | Tool | Before | After | Saved |
    |---|---:|---:|---:|
    | search_knowledge | 7,804 tokens | 2,293 tokens | 5,511 tokens, 70.6% |
    | recall_memory | 2,394 tokens | 1,268 tokens | 1,126 tokens, 47.0% |
    | search_notes | 4,074 tokens | 1,027 tokens | 3,047 tokens, 74.8% |

    Combined retrieval payload:

    14,272 -> 4,588 tokens

    Saved: 9,684 tokens, 67.9%

    Chars dropped from 57,086 -> 18,349, saving 38,737 chars.

    Latency stayed basically noise-level: 300ms -> 321ms combined across the three retrieval calls.

    1. 1

      And we have another benchmark result from our latest extensive metadata slimming without compromising quality.

      In the words of OpenAI Codex GPT-5.5:

      Fantastic. Second-stage savings:

      | Tool | Before metadata slim | After | Additional saved |
      |---|---:|---:|---:|
      | search_knowledge | 2,293 | 1,313 | 980, 42.7% |
      | recall_memory | 1,268 | 325 | 943, 74.4% |
      | search_notes | 1,027 | 326 | 701, 68.3% |

      Combined:

      4,588 -> 1,964 tokens

      Additional saved: 2,624 tokens, 57.2%

      From the original baseline:

      14,272 -> 1,964 tokens

      Total saved: 12,308 tokens, 86.2%

      Chars went:

      57,086 -> 7,853

      Total char reduction: 49,233 chars, also 86.2%.

      That is a genuinely excellent optimization: same retrieval intent, same semantic search, much cleaner context. Stored the exact production benchmark in Kumbukum.

  18. 1

    Hey Nitai!
    The retrieval problem is real, and I'd add one layer to it: context quality degrades not just across sessions, but across tools in the same session.
    I run a SaaS email rebuild tool where Claude generates production HTML from newsletter content. The context I need to preserve isn't just "what the user said" — it's brand voice, conversion patterns, structural decisions made 3 steps back. Right now I handle it by injecting a structured context block at every API call. Works, but it's manual and brittle.
    What you're describing with semantic search + tagged collections is essentially what I'm rebuilding by hand on every request. The difference is yours persists and compounds. Mine resets.
    The Git sync angle is interesting for dev context. Curious whether you see a use case for creative/marketing workflows too — or is the current focus primarily dev teams?

    1. 1

      Ha... I know exactly what you mean -> https://helpmonks.com

      Honestly, add Kumbukum. Since it's not AI-dependent, you can create links to URLs and emails (about to be released), and then the AI knows the connections between your data (that's what the memory is about).

      Use the API, MCP, and be happy :)

  19. 1

    RAG over docs helps, but it doesn’t fully solve the living knowledge problem. Notes, decisions, links, and relationships need a cleaner system around them.

    1. 1

      Yep, and that's exactly what https://kumbukum.com is. It's not another memory tool. It's a complete infrastructure.

  20. 1

    The persistent context problem is actually two distinct problems. Storage is the easier half. The harder part is retrieval precision: which context is relevant now, not which context exists. RAG approaches solve the first and often fail the second. The MCP pattern is interesting here because it moves the retrieval decision to the host, not the model.

    1. 1

      Exactly. The whole memory isn't just a bunch of markdown documents or a directory tree. This doesn't provide an AI client with any information.

      How Kumbukum does it is:

      • database collections
      • tags
      • links between the items
      • dedicated collections per item, i.e., notes are not memories, URLs are not notes, etc.
      • semantic search
      • MCP
      • optimized instruction for the AI to only retrieve what is required on the topic and not do a random wildcard search and hope for the best :)

      That's Kumbukum - https://kumbukum.com
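      To make "retrieve only what is required on the topic" concrete, here is a toy illustration (not Kumbukum's actual code): filter by collection and tag first, then rank only that small candidate set instead of dumping everything.

```python
# Toy illustration of scoped retrieval vs. a wildcard dump. The data,
# collections, and scoring are all made up for the example.
items = [
    {"collection": "notes",    "tags": {"auth"},   "text": "JWT rotation policy"},
    {"collection": "memories", "tags": {"auth"},   "text": "Decided on OAuth2 in March"},
    {"collection": "urls",     "tags": {"deploy"}, "text": "Swarm rollout checklist"},
    {"collection": "notes",    "tags": {"deploy"}, "text": "Caddy config gotchas"},
]

def scoped_search(collection: str, tag: str, query: str, k: int = 2) -> list[str]:
    # Step 1: narrow by dedicated collection and tag (cheap, precise).
    candidates = [i for i in items if i["collection"] == collection and tag in i["tags"]]
    # Step 2: rank only the candidates. Word overlap stands in for the
    # real semantic search here.
    def score(item: dict) -> int:
        return len(set(query.lower().split()) & set(item["text"].lower().split()))
    return [i["text"] for i in sorted(candidates, key=score, reverse=True)[:k]]

# The AI client asks only for notes tagged "auth", instead of everything:
print(scoped_search("notes", "auth", "jwt rotation"))  # ['JWT rotation policy']
```

      The payload stays small because the candidate set is already scoped before any ranking happens.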

  21. 1

    Context management is honestly the biggest bottleneck right now. I find myself constantly repeating the same 'architectural rules' to different AI agents just to keep them on track. Are you using any specific vector DBs or tools like Mem0 to handle this, or is it still a manual copy-paste game for you? I feel like we’re all just waiting for a universal 'context layer' that actually works across the stack.

    1. 2

      I'm copying what I wrote to someone here earlier.

      The whole memory isn't just a bunch of markdown documents or a directory tree.

      This doesn't provide an AI client with any information.

      How Kumbukum does it is:

      • database collections
      • tags
      • links between the items
      • dedicated collections per item, i.e., notes are not memories, URLs are not notes, etc.
      • semantic search
      • MCP
      • optimized instruction for the AI to only retrieve what is required on the topic and not do a random wildcard search and hope for the best :)

      That's Kumbukum - https://kumbukum.com

      Makes sense?

  22. 1

    ran into this when swapping providers last month - rebuilt pretty much everything except the memory layer, which survived clean. I've started thinking of it as the job contract: what the agent needs regardless of what model's underneath.

    1. 1

      Spot on. What Kumbukum does is create a memory infrastructure instead of some tree of documents.

  23. 1

    This resonates a lot. I've been building production Flutter/Firebase apps and the memory problem hits differently when you're a solo dev — every new session with an AI tool means re-explaining your entire architecture, naming conventions, project context.
    I've been handling it by keeping a detailed markdown file with my stack decisions, Firestore structure, and key patterns that I paste in at the start of sessions. Works but feels like a hack.
    The Git sync feature you mentioned is interesting — does it index commit messages and PR descriptions too? That would be genuinely useful for keeping AI context aligned with actual codebase changes.
    Will check out Kumbukum. Self-hosting support is a big plus for anyone working with sensitive business logic.

    1. 1

      Honestly, without claiming "I solved it": I was in exactly the same boat with our project. One loves Claude, another OpenAI, another Gemini. Some love the terminal, some Cursor, some Sublime Text, etc.

      No AI vendor solved it. Drove me completely nuts.

      Hence, I built Kumbukum. It's not a memory tool; it's an infrastructure for AI and people. The main goal is to get data into one place and provide humans the option to feed it.

      To solve this:

      Retrieve (AI):

      • MCP
      • API

      Feed (AI and humans):

      • nice UI
      • Drag and drop any file to your projects (extract it, store it in AI-friendly format)
      • Browser extension to add notes, URLs, and now also emails
      • Git sync: currently just adds all readable text formats

      Make sense (AI):

      • tags
      • links
      • knowledge graph

      All of the above preformat, categorize, and link content in a way that makes sense to AI. Less searching, faster AI, fewer tokens used, etc.

      Simple :) https://kumbukum.com

  24. 1

    Comment:
    What you’re describing is exactly where most setups start breaking — not storage, but retrieval under real usage.

    In practice, a lot of systems capture context fine, but when you actually need it, the recall depends heavily on how it was structured and labeled in the first place.

    I’ve seen cases where better naming + tighter semantic grouping outperforms heavier RAG layers, just because the system can “recognize” what matters faster.

    Curious — are you optimizing more on the storage/retrieval side right now, or starting to think about how information gets shaped at input too?

    1. 1

      Thank you for your comment and feedback. Your questions are spot on. I've been doing Digital Asset Management with Razuna - https://razuna.com - for over 20 years. This has taught me some things about distributed networks and teams (hopefully) :)

      So, with Kumbukum, I took the same approach: I built a nice UI (again, hopefully) and gave users (not just developers) an easy way to get data in and, once it's in, to edit it.

      On input: yes, with our browser extension you can add notes, URLs, and now emails (to be released in a few days). And yes, the input data is formatted and processed. Once in the database (MongoDB), we use Typesense to create an index and embeddings, so everything is already pre-formatted for the AI tools.
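      For anyone curious what an index-plus-embeddings setup like that can look like: recent Typesense versions let a collection schema declare an auto-embedding field, so Typesense computes the vectors itself. A hedged sketch follows; the collection and field names are made up, not Kumbukum's actual schema:

```python
# Hedged sketch of a Typesense collection schema with a built-in
# auto-embedding field (a feature of recent Typesense versions).
# Names here are illustrative only.
schema = {
    "name": "knowledge_items",
    "fields": [
        {"name": "title", "type": "string"},
        {"name": "content", "type": "string"},
        {"name": "tags", "type": "string[]", "facet": True},
        {
            # Typesense fills this vector in from title + content, so
            # semantic search works without a separate embedding pipeline.
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title", "content"],
                "model_config": {"model_name": "ts/all-MiniLM-L12-v2"},
            },
        },
    ],
}
```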

      I've been coding with Codex and the Kumbukum MCP server and it flies!

      Will most likely start making some videos.

      Let me know if this answers your questions. Happy to discuss further.

      1. 1

        That makes sense — you’ve clearly thought through the pipeline side well.

        The interesting gap I still see isn’t in how the system works, but in how it gets recognized.

        Right now “Kumbukum” doesn’t immediately signal anything about memory, context, or retrieval. So even if the system is strong, the first impression doesn’t help users (or devs) map it to the problem you’re solving.

        In something this infra-heavy, that layer matters more than usual — because people are trusting it to remember for them.

        Have you thought about tightening that mapping on the naming side, or are you treating the name as neutral for now?

  25. 1

    Nitai, the "re-feeding context" loop is exactly where AI efficiency breaks down, and building Kumbukum as open-source memory infrastructure is a massive step toward fixing that "starting from scratch" problem. By prioritizing searchability and Git sync alongside an API for tool integration, you're shifting context from a temporary chat session to a persistent team asset, ensuring that collective knowledge actually compounds over time.
    I’m currently running Tokyo Lore, a project that highlights high-utility logic and validation-focused tools like yours. Since you’re building the definitive infrastructure for persistent context and team memory, entering Kumbukum could be the perfect way to turn your own validation journey into a winning case study while your odds are at their absolute peak.

    1. 1

      You are spot on. What is Tokyo Lore about?

      1. 1

        Glad it resonated 🙂

        Tokyo Lore is a small, focused round where we highlight:
        → early-stage tools
        → strong underlying ideas/logic
        → and builders solving real problems

        It’s not a typical “launch platform” — more about getting your idea in front of thoughtful builders and seeing how it actually lands.

        For something like Kumbukum, the value would be:
        → how people react to the “persistent context” idea
        → what use cases stand out
        → where it clicks vs where it needs clarity .
        Tokyolore.com

        1. 1

          Great. Here is what I've been answering:

          The whole memory isn't just a bunch of markdown documents or a directory tree.

          This doesn't provide an AI client with any information.

          How Kumbukum does it is:

          • database collections
          • tags
          • links between the items
          • dedicated collections per item, i.e., notes are not memories, URLs are not notes, etc.
          • semantic search
          • MCP
          • optimized instruction for the AI to only retrieve what is required on the topic and not do a random wildcard search and hope for the best :)

          Honestly, without claiming "I solved it": I was in a dilemma with my projects and team members. One loves Claude, another OpenAI, another Gemini. Some love the terminal, some Cursor, some Sublime Text, etc.

          No AI vendor solved it. Drove me completely nuts.

          Hence, I built Kumbukum. It's not a memory tool; it's an infrastructure for AI and people. The main goal is to get data into one place and provide humans the option to feed it.

          To solve this:

          Retrieve (AI):

          • MCP
          • API

          Feed (AI and humans):

          • nice UI
          • Drag and drop any file to your projects (extract it, store it in AI-friendly format)
          • Browser extension to add notes, URLs, and now also emails
          • Git sync: currently just adds all readable text formats

          Make sense (AI):

          • tags
          • links
          • knowledge graph

          All of the above preformat, categorize, and link content in a way that makes sense to AI. Less searching, faster AI, fewer tokens used, etc.

          Simple :) https://kumbukum.com

          Makes sense?
