As the creator of TreeScale.com, I know firsthand what it means to build an AI product with the LangChain framework while keeping text-based prompt templates directly inside the code. While building an MVP or a simple demo, things are relatively easy and you won't see any problems. But as you get closer to a production version and start maintaining the codebase, the problems begin to bug you!
I started working on Python LangChain-based products about a year ago, when I had the idea of making an SEO article generator tool. The initial version was ready in less than two days, which was awesome because LangChain provides many ready-to-use tools to get up and running.
Eventually, I got the product up and running, but the problems started every time I wanted to change a bit of prompt text. Deploying code every time you want to explore a new prompt idea is not sustainable, especially if you have non-technical people on the team who want to play with prompt variations as well 🤷‍♂️
I kept my Python LangChain codebase but introduced new API endpoints on my FastAPI server to manage prompt templates from our UI-facing Next.js server. This made it possible to share an internal "Admin"-like interface with our non-technical SEO specialists, so they could tweak the SEO article generator's prompts over time and test new ideas.
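The core of that pattern is a template store that the Admin UI writes to and the generation pipeline reads from at request time. Here is a minimal, framework-agnostic sketch of such a store (the class name, method names, and in-memory storage are illustrative; a real setup would persist templates in a database behind FastAPI endpoints):

```python
import string


class PromptStore:
    """In-memory prompt template store. A real deployment would back
    this with a database so edits survive restarts and deploys."""

    def __init__(self) -> None:
        self._templates: dict[str, string.Template] = {}

    def put(self, name: str, text: str) -> None:
        # Called by the "Admin" UI endpoint when a specialist saves an edit.
        self._templates[name] = string.Template(text)

    def render(self, name: str, **variables: str) -> str:
        # Called by the generation pipeline right before invoking the LLM,
        # so the latest saved template is always used -- no redeploy needed.
        return self._templates[name].substitute(**variables)


store = PromptStore()
store.put("seo_article", "Write an SEO article about $topic for a $audience audience.")
print(store.render("seo_article", topic="prompt management", audience="technical"))
# -> Write an SEO article about prompt management for a technical audience.
```

The key design choice is that `render` is called per request rather than at import time, so the code never needs to be redeployed for a prompt change.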
We quickly learned that software developers are not the ones who should be editing or tweaking the AI prompts. That's why having a UI for prompt editing makes more sense when building production AI features or products.
With TreeScale.com, I've simplified this even further! Whenever we need a new prompt for our application, we just create a new TreeScale App Endpoint and play around with the prompt template there, because the endpoint already carries the AI model context along with an LLM prompt-chain execution model.
It is becoming clear that AI prompts are critical business IP for AI products. If I knew your prompt, I could build a similar app within weeks. Keeping prompts inside your codebase makes your product vulnerable; it is almost the same as keeping your AWS secret keys directly inside your code 🤯
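Taking the secret-key analogy literally, prompts can be loaded the same way credentials are: from the environment or an external store instead of a string literal in the code. A minimal sketch (the environment variable name and fallback text here are hypothetical):

```python
import os

# A safe, non-sensitive fallback so the app still works in local development.
DEFAULT_PROMPT = "You are a helpful assistant."


def load_prompt(env_var: str = "SEO_ARTICLE_PROMPT") -> str:
    # Treat the prompt like a credential: read it from the environment
    # (or a secrets manager / external store) rather than hardcoding it,
    # so it never lands in version control.
    return os.environ.get(env_var, DEFAULT_PROMPT)


prompt = load_prompt()
```

This keeps the prompt text out of the repository for the same reason `AWS_SECRET_ACCESS_KEY` stays out of it.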
I have been developing and selling software products for more than seven years, and I have often seen how a clear separation of concerns benefits software products. Current AI-based product development happens incredibly fast, but when things cool down, we product developers will work more on separating the logical parts to keep things in order, especially for products that are growing.
Try not to keep your AI Text Prompts inside your codebase. 🤞
100% agree that prompts belong outside of code. The pain you describe with redeploying just to tweak prompt text is real. But I think there is a deeper problem too: most prompts are just one big text blob with no internal structure. Role instructions, constraints, output format, context... all mixed together. So even when you move prompts to a UI, editing one part risks breaking another because nothing is scoped.
What helped me was thinking about prompts the way we think about code: typed, modular, each piece with a clear purpose. I built https://github.com/Nyrok/flompt to try this out. It is a visual prompt builder that decomposes prompts into 12 semantic blocks (role, constraints, output format, examples, etc.) on a canvas, then compiles them to Claude-optimized XML. Each block is independently editable, so your SEO specialists could tweak the "constraints" block without touching the "role" or "output format" blocks.
Open source, 75+ stars and growing. Would love to hear how your dynamic template approach handles the internal structure problem. Do you version individual sections or is it still one text field per template on TreeScale?