After one year I've finally found some time to publish a new, more powerful version of my web automation API. V1 couldn't handle dynamically generated DOM elements (i.e., loaded via JS), but after thorough research and an architectural redesign, it now can.
For this new version I designed my own simple, flexible distributed architecture, so the system can easily scale with the load.
I almost forgot: the new API includes proxies in the plans. They rotate dynamically to provide maximum availability for the end user.
My main goals for this version were:
Wish me luck! :D
And btw, go check it out on RapidAPI Hub: https://rapidapi.com/onipot/api/scrapingmonkey/
Best Valentine's ever... and trust me, I've had girlfriends before! :D
It all started as an underrated idea, but damn... 2 million requests?
I would never have thought it could reach so many people and get used this intensively.
I have almost 100 active users. Thank you all!
The funniest part is that it takes almost zero maintenance or work at the moment, but this has only been possible because I took care of every detail of the development ahead of time.
Best side project I have ever deployed. Of course, a big thanks goes to RapidAPI.
Next thing to do: update the documentation. Sorry devs, I've been busy with studying and boring stuff :/
Yesterday I added new scraping features:
New users are trying the free plan every day, and that's good!
It would be even better to have more paying customers, but I'm not worried about money at this point (I'll focus on that aspect as soon as I finish studying next month).
The 'Advanced scraping feature' is by far the most used call; I have to make it easier to use (somehow).
I've got to improve the documentation, because I want to make ScrapingMonkey super easy to use. As a programmer myself, I know how much difference a good doc makes, and good practical examples even more so.
Let's get back to work! (Or maybe I should study, hm...)
"A user has subscribed to your plan", 5 minutes later... "A user has subscribed to your plan"... you know, I was just shocked!
Other new developers have found ScrapingMonkey API and are trying the free version, I hope more will like it!
To attract more devs, I have just released a /bySelector update. At the moment it's the most powerful scraping call in the API, allowing almost full flexibility. I have documented it in depth with some practical examples! For now I'm satisfied with it, but I can do better.
I just hope the server will keep up; otherwise I'll have to limit requests until I can afford a better server! :/
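To give a feel for what a /bySelector call might look like, here's a minimal sketch in Python. The parameter names (`url`, `selector`) are my assumptions, not the documented interface; check the ScrapingMonkey docs on RapidAPI for the real ones. Only the `X-RapidAPI-Key`/`X-RapidAPI-Host` headers are RapidAPI's standard auth mechanism.

```python
import urllib.parse

# Base endpoint for the /bySelector call on RapidAPI.
BASE = "https://scrapingmonkey.p.rapidapi.com/bySelector"

def build_by_selector_request(target_url: str, css_selector: str, api_key: str):
    """Build the full URL and headers for a /bySelector call (not sent here)."""
    query = urllib.parse.urlencode({
        "url": target_url,         # page to scrape (assumed parameter name)
        "selector": css_selector,  # CSS selector to extract (assumed parameter name)
    })
    headers = {
        "X-RapidAPI-Key": api_key,  # RapidAPI's standard auth headers
        "X-RapidAPI-Host": "scrapingmonkey.p.rapidapi.com",
    }
    return f"{BASE}?{query}", headers

full_url, headers = build_by_selector_request("https://example.com", "div.title", "MY_KEY")
```

Passing the whole request through a single endpoint like this is what makes the call reusable: only the `url` and `selector` values change between scraping jobs.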
New cool features are coming soon, stay tuned!
I was on my bed, studying some boring subject, when all of a sudden I read a sentence about API stuff. You know how that works: ideas just come up. I began searching for API websites and found RapidAPI. A whole new world. I loved it and wanted to add some cool API of my own.
In the past I had already built some automated bots to scrape stuff on the web, but every time I switched platform I had to rewrite everything from scratch; it was so lame and time-consuming.
I quickly understood that I could overcome this problem by turning it into a RESTful API. Just one web call, the same one, on every platform... I was so excited about that!
I started designing the architecture and some facilities to begin with, and... here we go!
As of today, it's active on RapidAPI and I hope someone will get to it.
I know this is not something innovative, but it's different from other solutions and I want to improve it.
I am a full-time student, and it would be great to manage an API as a side project. Economically, it's not a big deal either.
I'm already thinking about new cool features to add, so let's get back to work.. see you soon!
Just a random idea that came up while studying API stuff. It looked nice, so why not shape it into a real tool that others can use?