JSON files for food startups
There are so many projects that use the same data, and collecting that data usually takes a lot of time. I don't want to reinvent the wheel again and again.
I republished my old article about Recipe APIs. I have other articles I could publish, but I need to review the content and get into a cycle of republishing 5-10 articles. I need to fix the English grammar and probably make my articles less technical. I even created a glossary that can help readers navigate my articles and understand the tech terms without being software engineers.
We built this module 3 years ago. Since I want to be able to work with different food datasets, I want to add more features to this parser. So I sat down and explained the reason for building it and why it's cool and useful for our company.
https://hackernoon.com/introducing-a-simple-module-for-parsing-csv-files
I was reading a cool presentation about the need to clean data and open it up for everyone to use.
They included a link that didn't work, so I started googling and found out that the company had been renamed. Now it's frictionlessdata.io, and they're working on https://datahub.io/
Trying to establish new partnerships
Reviewing old code and repositories - I've collected so many things, but they still need to be polished and presented in a way that other people can actually use.
The goal is simple: find small code modules that can be launched quickly. One per month is a pretty good pace, in my opinion.
I will also publish quick articles that explain the reasons for using these modules. Hopefully, that will generate enough buzz to move things forward.
Hired a 15-year-old guy who will work on our Readme.md files and help me speed things up.
Yes, yes. If someone experienced takes a look inside our module, they'll find that our code is not complex and our commits are often pretty simple. Most of these releases are quick patches and bug fixes that address issues we encountered while using this module inside our other repositories.
BUT! I'm still proud of what was done; I think it's a good accomplishment, and it shows how hard we're working. For sure, there's still a long road ahead to make it look the way it should. But I'm happy to code it, and I still feel it can be put to good use by other people in the future.
We have a few people who are interested in using our software. They want to import and use some free food datasets that are available online. But the first step is to parse that data and convert it to JSON. Then we can pass that data to our generator script, so it can create a JSON file that can later be added to their MVPs.
We created the first version of the CSV parser script. It only works for one case right now, but we have complex datasets - so it's time to improve the functionality and make it more universal and independent. Yes, once again I'm moving part of our code out. It's important, because it helps our interns make more of an impact on our code. With this "separation game", interns start to open more pull requests and apply simple changes more quickly.
Again, here's the new link to that repository. Feel free to star it - it means a lot to me.
https://github.com/GroceriStar/food-datasets-csv-parser
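To make the first step of the pipeline concrete, here is a minimal sketch of a CSV-to-JSON step. The function name and logic are illustrative assumptions, not the actual food-datasets-csv-parser API: the first row supplies the keys, and every following row becomes one JSON object.

```javascript
// Hypothetical helper: turn a simple comma-separated text into an array
// of objects, using the header row as the keys. (Real food datasets need
// more care: quoted fields, separators inside values, encodings, etc.)
function csvToJson(csvText) {
  const [headerLine, ...rows] = csvText.trim().split('\n');
  const headers = headerLine.split(',').map((h) => h.trim());
  return rows.map((row) => {
    const values = row.split(',').map((v) => v.trim());
    // Pair each header with the value in the same column.
    return headers.reduce((obj, key, i) => {
      obj[key] = values[i];
      return obj;
    }, {});
  });
}

// Example: a tiny dataset like the free food CSVs mentioned above.
const csv = 'name,category\nApple,Fruit\nCarrot,Vegetable';
console.log(JSON.stringify(csvToJson(csv)));
// → [{"name":"Apple","category":"Fruit"},{"name":"Carrot","category":"Vegetable"}]
```

The resulting array is exactly the shape a generator script can consume to produce the final JSON files for an MVP.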
Yeah, within a few months we realized that the generator was adding a lot of complexity to our static module. Its main aim was to return only JSON files. Now we have a lot of JS scripts that "eat" our files, loop through the arrays and add fields inside... It's not cool. It's time to take another step and separate our modules again.
When the generator is completely moved out and everything starts to work well, we will have a better structure inside our repositories, fewer long names, less confusing tests, etc.
Here is the repository link:
https://github.com/GroceriStar/food-static-files-generator
With this release, we added a generator script inside our module. The goal was pretty simple - sometimes you want to update your JSON structure, for example to add an ID or something else, because the current file doesn't fit your functionality.
Before, we would just read the file, loop through it, add the missing fields and return a new version of the array. And we did it again and again.
That's a wasteful way to do it! A generator can help us create new JSON files/structures for the projects we're working on.
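The idea behind the generator can be sketched in a few lines. The function and field names below are illustrative assumptions, not the real food-static-files-generator API: instead of hand-editing a JSON file every time a project needs a new field, loop over the records once and add what's missing, e.g. a numeric id.

```javascript
// Hypothetical generator step: take parsed records and produce a new
// array where every record gets an id plus any extra fields the target
// project expects.
function addMissingFields(records, extraFields) {
  return records.map((record, index) => ({
    id: index + 1, // generated id, since the source file has none
    ...record,
    ...extraFields, // e.g. fields a specific MVP needs
  }));
}

const vegetables = [{ name: 'Carrot' }, { name: 'Potato' }];
const result = addMissingFields(vegetables, { type: 'vegetable' });
console.log(JSON.stringify(result));
// → [{"id":1,"name":"Carrot","type":"vegetable"},{"id":2,"name":"Potato","type":"vegetable"}]
```

Writing the result to a new JSON file once is the whole trick: the transformation lives in one script instead of being repeated in every project.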
In this article, I'm trying to explain the changes we should implement to make the logic better and easier to scale in the future. Yes, it's not a big, complex module, but as we keep adding new methods, things start to look pretty weird. Our tech debt is growing, and only my experienced teammates can handle it. New students have trouble getting the whole picture. They need to jump between files again and again, create new methods, overlap inside the code, and get stuck pretty quickly.
https://medium.com/groceristar/static-food-data-third-part-structu-615c39dcf328
In my previous article, I shared plans that will change the direction of this module. In this article, I want to focus on one milestone that is important for our team: updating our code to ES6 and making the build process smoother and more in line with the latest tooling. I talk about the pros and cons of this plan. This article is also important for our new interns, because I pack a lot of information into one article, so they don't need to dig through our task list to understand what is actually going on.
https://medium.com/groceristar/static-food-data-plugin-transition-to-es6-559d0d941ec6
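To show what the ES6 transition means in practice, here is a small before/after sketch with illustrative code (not the module's real source). The behavior is identical; only the syntax changes, which is exactly why interns find the newer style easier to read.

```javascript
// ES5 style: var, function expressions, manual string concatenation.
var labelES5 = function (item) {
  return item.name + ' (' + item.category + ')';
};

// ES6 style: const, arrow function, parameter destructuring, template literal.
const labelES6 = ({ name, category }) => `${name} (${category})`;

const apple = { name: 'Apple', category: 'Fruit' };
console.log(labelES5(apple)); // → Apple (Fruit)
console.log(labelES6(apple)); // → Apple (Fruit)
```

On top of syntax, the transition usually also means moving from `require`/`module.exports` to `import`/`export` statements, which typically needs a build step (e.g. Babel) - that build setup is the "pros and cons" part of the plan.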