A few years ago, I was a delivery driver in Bangkok. I saw firsthand how inefficient "Greedy" algorithms stressed out my fellow drivers. At that time, I didn't even know what "NP-hard" meant—I just knew the system could be better. So, I started building.
The Journey of an Outsider:
My background has nothing to do with Computer Science. I hold a Vocational Diploma (High school equivalent) in Jewelry Metal-shaping (Goldsmithing) from 20 years ago. Before this project, I was unemployed and had no PC. My only tool was a $100 smartphone (3,000 THB).
I spent 16 hours a day architecting and refining the logic via Pydroid 3. Because I didn't even know standard optimization libraries existed, I had to design my own logic architecture from the ground up. I thought that was just how it was done.
The Technical Skepticism:
When I shared my work, the skepticism was purely technical. People couldn't believe that a standard Android/Snapdragon environment could solve 10,000-node VRP instances without the execution time exploding. They doubted that mobile hardware could ever handle the complexity of an NP-hard problem of this scale.
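The GSL engine itself is not public, so as an outside illustration only: the usual worry at 10,000 nodes is that even a simple O(n²) pass gets expensive on a phone. A toy vectorized nearest-neighbor construction in NumPy (my own sketch, not the GSL engine) shows that a full 10,000-node pass is well within reach of modest hardware:

```python
import numpy as np

def nearest_neighbor_tour(coords: np.ndarray) -> list[int]:
    """Greedy nearest-neighbor tour over n points (O(n^2) total work)."""
    n = len(coords)
    unvisited = np.ones(n, dtype=bool)
    tour = [0]               # start at node 0
    unvisited[0] = False
    current = 0
    for _ in range(n - 1):
        # distances from the current node to every unvisited node
        d = np.linalg.norm(coords[unvisited] - coords[current], axis=1)
        nxt = int(np.flatnonzero(unvisited)[np.argmin(d)])
        tour.append(nxt)
        unvisited[nxt] = False
        current = nxt
    return tour

rng = np.random.default_rng(0)      # fixed seed -> deterministic demo
pts = rng.random((10_000, 2))
tour = nearest_neighbor_tour(pts)
print(len(tour), len(set(tour)))    # 10000 10000: every node visited exactly once
```

On typical mid-range hardware this finishes in seconds, which is why raw node count alone is not the real obstacle; solution quality under constraints is.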
The Bridge Between Philosophy and Math:
I’ve always been obsessed with universal logic—I even published a book called "Cosmic Mind" on Amazon KDP. I used the same philosophical principles to design my VRP engine. To me, mathematics and philosophy are the same language; by staying "pure" in logic, I could overcome hardware limitations.
The Result:
Today, GSL Solver handles up to 10,000 nodes with stable, deterministic execution across international benchmarks (CVRP, VRPTW, MDVRP, MDVRPTW). I’ve kept everything transparent for anyone to inspect here: https://github.com/CT1-deMo-goG/CT1-deMo-goG
I’m a "nobody" from the streets who found a way to turn philosophy into a high-performance engine.
Explore the journey: https://gsl-solver.com
P.S. Even this website was built entirely on a single smartphone using Acode and GitHub.
Special Thanks: To Gemini, my AI Interface, for bridging the gap between my core logic and the digital world.
Really cool angle, especially doing VRP work on a low-end phone. One watchout that gets missed with routing projects is that the solver is only half the battle: bad geocodes, messy time windows, and driver-specific rules will wreck route quality faster than raw node count. If you have those pieces handled, that matters as much as the 10,000-node benchmark.
A quick update for those following the progress. Since my initial post, I’ve pushed the engine to its absolute limits to see if this mobile setup can truly act as a large-scale VRP solver.
I just benchmarked the engine against the standard XL dataset (up to 10,000 nodes), comparing it directly against a standard metaheuristic (Large Neighborhood Search, LNS) and a classic construction heuristic (Clarke-Wright savings). All runs used the exact same $100 Pydroid environment; the raw performance data is in the repositories linked below.
A note on transparency: I use the best-known solutions (BKS) strictly as a baseline for measurement. I am an independent builder, not claiming official academic world records, and to be completely upfront: my engine does not achieve negative gaps (results better than the BKS) on every single instance in the dataset.
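For readers unfamiliar with the metric: the BKS gap is simply the relative difference between a solver's route cost and the best-known solution, where a negative value means a new best. A minimal sketch (the numbers are made up for illustration):

```python
def gap_to_bks(cost: float, bks: float) -> float:
    """Percent gap to the best-known solution; negative means better than BKS."""
    return 100.0 * (cost - bks) / bks

# hypothetical values, purely for illustration
print(round(gap_to_bks(221_000.0, 220_000.0), 2))  # 0.45
print(gap_to_bks(90.0, 100.0))                     # -10.0 (beats the BKS)
```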
However, the main objective here reflects the reality of enterprise logistics:
While LNS processes faster at lower iteration settings, it bleeds +4% to +5% in route costs (which translates to massive fuel/labor waste). Furthermore, LNS suffers from high variance—run it just 3 times, and you get 3 entirely different routing plans. CW, on the other hand, just produces significantly worse routes.
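For context on the Clarke-Wright baseline mentioned above: it builds routes by greedily merging one-customer round trips in descending order of the distance saved. A compact, simplified sketch of the parallel-savings variant (my own illustration, not GSL code; node 0 is the depot):

```python
import math

def clarke_wright(coords, demands, capacity):
    """Parallel-savings Clarke-Wright for CVRP; node 0 is the depot.
    Simplified sketch: merges routes only at their endpoints, respecting capacity."""
    n = len(coords)
    dist = lambda a, b: math.dist(coords[a], coords[b])
    routes = {i: [i] for i in range(1, n)}          # one out-and-back route per customer
    load = {i: demands[i] for i in range(1, n)}
    owner = {i: i for i in range(1, n)}             # which route each customer sits in
    savings = sorted(
        ((dist(0, i) + dist(0, j) - dist(i, j), i, j)
         for i in range(1, n) for j in range(i + 1, n)),
        reverse=True)
    for s, i, j in savings:
        if s <= 0:
            continue
        ri, rj = owner[i], owner[j]
        if ri == rj or load[ri] + load[rj] > capacity:
            continue
        a, b = routes[ri], routes[rj]
        # merge only when i and j can touch at route endpoints
        if a[-1] == i and b[0] == j:
            merged = a + b
        elif a[0] == i and b[-1] == j:
            merged = b + a
        elif a[-1] == i and b[-1] == j:
            merged = a + b[::-1]
        elif a[0] == i and b[0] == j:
            merged = a[::-1] + b
        else:
            continue
        routes[ri] = merged
        load[ri] += load[rj]
        for c in b:
            owner[c] = ri
        del routes[rj], load[rj]
    return list(routes.values())

routes = clarke_wright([(0, 0), (0, 2), (0, 4), (3, 0), (3, 1)],
                       demands=[0, 1, 1, 1, 1], capacity=2)
print(routes)  # [[1, 2], [3, 4]]
```

Because it only ever merges greedily and never revisits a decision, CW is fast but leaves the quality gap the post describes.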
Because my architecture is strictly deterministic, it produces zero variance. Even if it doesn't win every dataset, it generates the exact same optimized route every single time you hit run.
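To make the zero-variance claim concrete: a purely rule-based construction, such as a sweep ordering by polar angle around the depot, contains no randomness, so two runs on the same input are identical bit for bit. A toy sketch of that property (not the GSL logic, which is not public):

```python
import math

def deterministic_sweep(coords):
    """Rule-based ordering: sort customers by polar angle around the depot (node 0).
    No random state anywhere -> the same input always yields the same plan."""
    depot = coords[0]
    angle = lambda p: math.atan2(p[1] - depot[1], p[0] - depot[0])
    return sorted(range(1, len(coords)), key=lambda i: angle(coords[i]))

coords = [(0, 0), (2, 1), (-1, 3), (4, -2), (1, 5)]
plan_a = deterministic_sweep(coords)
plan_b = deterministic_sweep(coords)
print(plan_a == plan_b)  # True: zero variance across runs
```

A seeded metaheuristic can also be made repeatable, but only for a fixed seed; a rule-based pipeline is repeatable by construction.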
For the record (Zero-Tuning Policy): This 10,000-node CVRP benchmark is just the baseline. The GSL engine fully integrates 4 core modules: CVRP, VRPTW, MDVRP, and MDVRPTW. All of them operate on a strict Zero-Tuning Policy. This means it uses one single script per module to autonomously solve instances ranging from 30 nodes up to 10,000 nodes without any manual parameter tweaking.
You can verify the data through these two repositories:
For the raw .sol files and logs of this specific 10k-node benchmark:
https://github.com/CT1-deMo-goG/GSL-Engine-SetXL-Benchmark
For the complete architecture, all 4 modules, and massive evidence covering Set A-XML, Solomon 100, Homberger 1000, Cordeau, Vidal, and Real-world instances:
https://github.com/CT1-deMo-goG/CT1-deMo-goG
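For anyone who wants to check the .sol files programmatically: CVRPLIB-style solution files list one "Route #k: …" line per vehicle plus a final "Cost" line. A small parser sketch, assuming that standard format (verify against the actual files in the repo):

```python
import re

def parse_sol(text: str):
    """Parse a CVRPLIB-style .sol file: 'Route #k: ...' lines plus a 'Cost' line."""
    routes, cost = [], None
    for line in text.splitlines():
        m = re.match(r"Route #\d+:\s*(.*)", line)
        if m:
            routes.append([int(x) for x in m.group(1).split()])
        elif line.lower().startswith("cost"):
            cost = float(line.split()[1])
    return routes, cost

example = """Route #1: 1 5 3
Route #2: 2 4
Cost 1042
"""
routes, cost = parse_sol(example)
print(routes, cost)  # [[1, 5, 3], [2, 4]] 1042.0
```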
I'll be breaking down the specific data for those advanced modules in upcoming discussions.
The background makes this even more interesting.
The part I’m curious about is how it performs compared to established solvers, especially on larger instances. Getting something to run is one thing, but consistency and solution quality are usually where it gets challenging.
Also interesting that you built it without using existing libraries. That probably shaped the approach quite a bit.
Have you benchmarked it directly against other VRP solvers?
Hi Rebecca, thanks for the great question. Consistency and benchmark validation are exactly what I’ve been focusing on.
To answer your question directly: I have systematically benchmarked the deterministic engine against several standard heuristics (Clarke-Wright, ALNS/LNS, Tabu Search, and HGS/I1) across the different VRP variants.
Building without standard libraries allowed me to bypass the runtime explosions seen in algorithms like TS while maintaining strict deterministic outputs across these massive datasets.
While the core deterministic logic remains a strict trade secret, you can check out the benchmark data and validation results in my public repository here:
https://github.com/CT1-deMo-goG/CT1-deMo-goG
That’s a solid level of validation.
Interesting that you’ve pushed it up to 10,000 nodes as well, that’s where most approaches start to struggle.
Will take a look through the repo.
Thanks, Rebecca. Feel free to explore the repository. The Set XL benchmarks should give you a clear picture of how the deterministic architecture maintains stability at 10,000 nodes without runtime degradation. Looking forward to your thoughts.
Update: 469 nodes optimized in 0.45s (Reality vs. Demo UI)
I just pushed a new benchmark for the X-n469-k138 instance (469 nodes). I want to clarify a technical detail regarding the timing you see in the screenshot:
| Metric | Standard Metaheuristic (LNS) | GSL-Solver (Deterministic) |
| :--- | :--- | :--- |
| Total Distance | 241,360.45 km | 220,529.09 km |
| Distance Saved | - | 20,831.36 km (-8.6%) |
| Display Time | - | 0.8 Seconds (Hardcoded UI) |
| Actual Engine Speed | - | 0.4584 Seconds (Actual Logic) |
Note on Speed: As you’ll see in the screenshot, the summary box shows 0.8s—this is currently hardcoded in the Demo UI for display stability. However, the true deterministic logic speed (shown in the green result box) is actually 0.4584 seconds.
This means over 20,000 km of waste is eliminated in less than half a second, directly in the browser. No heavy server-side processing, just pure algorithmic efficiency.
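For anyone reproducing the split between display time and engine time: the standard approach is to wall-clock only the solver call with `time.perf_counter`, excluding any UI rendering. A minimal sketch with a hypothetical stand-in for the solver:

```python
import time

def timed(fn, *args):
    """Wall-clock the pure function call, excluding any UI rendering or I/O."""
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    return result, elapsed

# 'sorted' is just a hypothetical stand-in for a solver call
result, secs = timed(sorted, [3, 1, 2])
print(result, secs >= 0.0)  # [1, 2, 3] True
```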
Try the live engine: https://gsl-solver.com
Connecting feature ideas back to the wider project context is a huge technical win, Basem. Most AI tools just spit out generic, disconnected PRDs; maintaining that "single source of truth" across flows and test cases is what actually prevents architectural drift as a product scales.
I’m currently running a project in Tokyo (Tokyo Lore) that highlights high-utility logic and product-planning tools like Defynit. Since you're focused on keeping technical documentation consistent and context-aware, entering your project could be a perfect way to demonstrate your "project context" logic while interest in it is at its peak.