1. The user provides the site's URL to the agent, and the work starts here: it scrapes the URL's content and passes it to the next model. The LLM learns all the details about the business: pricing, audience, solution, problem. It then creates a full spec and passes it on to the next model.
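Step 1 can be sketched as a small scrape-then-distill pipeline. Everything here is a stand-in: `fetch_page` would be a real HTTP client plus HTML stripping, and the spec-building step would be a prompt to an LLM rather than a hard-coded dict.

```python
# Sketch of step 1: scrape the site, then distill a business spec from it.
# fetch_page and build_spec are hypothetical stand-ins, not the author's code.

def fetch_page(url: str) -> str:
    # Stub: a real agent would issue an HTTP GET and strip the HTML.
    return "Acme sells $29/mo invoicing software for freelancers."

def build_spec(url: str) -> dict:
    text = fetch_page(url)
    # Stub "LLM": a real agent would prompt a model to extract these fields.
    return {
        "source_url": url,
        "raw_content": text,
        "fields": ["pricing", "audience", "solution", "problem"],
    }

spec = build_spec("https://example.com")
```

The spec dict is then the hand-off payload for the next model in the chain.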
2. The next model creates an SEO strategy: it does keyword analysis, studies competitors and existing articles and pages, and builds a full knowledge and topic tree. It tests everything for potential and demand via the Google Search Console API, and goes to Google Search to explore further.
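The demand-testing part of step 2 might look like the sketch below. The rows are mocked; in a real build they would come from the Google Search Console Search Analytics API (impressions and clicks per query), and the threshold would be tuned per niche.

```python
# Sketch of step 2: rank candidate topics by search demand.
# gsc_rows is mocked data standing in for a Search Console API response.

def rank_topics(gsc_rows, min_impressions=100):
    # Keep only queries with enough demand, highest impressions first.
    viable = [r for r in gsc_rows if r["impressions"] >= min_impressions]
    return sorted(viable, key=lambda r: r["impressions"], reverse=True)

mock_rows = [
    {"query": "invoice template", "impressions": 5400, "clicks": 310},
    {"query": "freelancer tax guide", "impressions": 80, "clicks": 2},
    {"query": "invoicing software", "impressions": 1200, "clicks": 95},
]
topics = rank_topics(mock_rows)
```

Low-demand branches of the topic tree get pruned before any writing tasks are created.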
3. Once the strategy is ready, a "manager" model creates 100+ tasks. The next level of models (hundreds of them) creates hundreds more tasks for their "employee" models, which execute the tasks and report back. The manager models analyze the results, give feedback, and iterate until they're satisfied.
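The manager/worker loop in step 3 reduces to a fan-out-and-review cycle. In this sketch, `execute` and `review` are trivial stand-ins for worker and manager LLM calls; the review stub arbitrarily accepts on the second attempt to show the iterate-until-happy behavior.

```python
# Sketch of step 3: a manager fans out tasks to workers and re-queues
# anything that fails review, iterating until all outputs pass.

def execute(task):
    # Stub worker: a real worker would be an LLM producing a draft.
    return {"task": task["name"], "attempt": task["attempts"],
            "text": f"draft for {task['name']}"}

def review(result):
    # Stub manager review: accept only from the second attempt onward.
    return result["attempt"] >= 2

def run_until_happy(tasks, max_rounds=5):
    done = []
    for _ in range(max_rounds):
        pending = []
        for task in tasks:
            task["attempts"] += 1
            result = execute(task)
            if review(result):
                done.append(result)
            else:
                pending.append(task)  # manager sends feedback, task retried
        tasks = pending
        if not tasks:
            break
    return done

tasks = [{"name": f"article-{i}", "attempts": 0} for i in range(3)]
results = run_until_happy(tasks)
```

The `max_rounds` cap matters: without it, a task a manager never approves would loop forever.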
4. The results float up the chain, and every higher-level agent can initiate new tasks and ask for improvements or a redo. Anti-hallucination agents enter the game at every step to fact-check the outputs, and they may initiate more tasks too.
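The anti-hallucination step can be pictured as a gate between levels: outputs that can't be verified against the source material get routed back as redo tasks. The substring check here is a deliberately crude stand-in for a real fact-checking model.

```python
# Sketch of step 4: a fact-check gate that splits outputs into approved
# results and redo tasks. The containment check stands in for a real
# verification model.

def fact_check(claim: str, source: str) -> bool:
    # Stub: verify the claimed fact appears in the scraped source.
    return claim in source

def gate(outputs, source):
    approved, redo = [], []
    for out in outputs:
        (approved if fact_check(out, source) else redo).append(out)
    return approved, redo

source = "Acme charges $29/mo and targets freelancers."
approved, redo = gate(["$29/mo", "$9/mo"], source)
```

Anything landing in `redo` becomes a new task for the level below, which is how the fact-checkers "initiate more tasks".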
5. AI by itself isn't that useful without external tools and APIs, so some agents work as bridges to the real world. Creative agents ask the bridge agents for work to be done, e.g. "go to these sites and put together a list of these items..." (just like a human would ask their assistant).
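A bridge agent is essentially a tool dispatcher: creative agents never touch the outside world directly, they hand named requests to the bridge. The registry and the `scrape_list` tool below are hypothetical examples of how such a dispatcher could be wired.

```python
# Sketch of step 5: creative agents send named requests to a bridge agent,
# which maps them onto real-world tools. scrape_list is a stub tool.

def scrape_list(sites):
    # Stub: a real tool would visit each site and extract the items.
    return [f"item from {s}" for s in sites]

BRIDGE_TOOLS = {"collect_items": scrape_list}

def bridge_request(tool_name, payload):
    tool = BRIDGE_TOOLS.get(tool_name)
    if tool is None:
        raise ValueError(f"no bridge tool named {tool_name!r}")
    return tool(payload)

items = bridge_request("collect_items", ["a.com", "b.com"])
```

Keeping tools behind one registry means new real-world capabilities can be added without touching the creative agents at all.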
6. With so many steps unsupervised by a human, even one mistake can break the whole thing, so there are agents that adjust the plan when some tasks can't be done, e.g. when a given website can't be scraped. Everything runs on queues and jobs, built around eventual consistency.
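The queue-plus-replanning idea from step 6 can be sketched as follows: failed jobs aren't dropped, a planner rewrites them and puts them back on the queue, so the system converges eventually instead of failing atomically. The "blocked site falls back to a cache" behavior is an invented example, not the author's actual fallback.

```python
# Sketch of step 6: jobs flow through a queue; when one fails (a site that
# refuses scraping), a planner swaps in a fallback and re-queues it.
from collections import deque

def run_job(job):
    # Stub: scraping "blocked.com" always fails.
    if job["site"] == "blocked.com":
        raise RuntimeError("scrape blocked")
    return f"scraped {job['site']}"

def adjust_plan(job):
    # Stub planner: fall back to a cached copy of the page.
    return {"site": "cache:" + job["site"]}

def process(queue):
    results = []
    while queue:
        job = queue.popleft()
        try:
            results.append(run_job(job))
        except RuntimeError:
            queue.append(adjust_plan(job))  # re-plan and re-queue
    return results

queue = deque([{"site": "a.com"}, {"site": "blocked.com"}])
results = process(queue)
```

A production version would also cap retries per job; here the fallback always succeeds, so the loop terminates.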
Next time, I'll share the architecture of my other AI agent https://t.co/UKFgAlqvII . It works day and night looking for new directories and backlinks on the internet, and it can act in the real world: registering on websites, submitting products, etc.