I am seeking someone to write a script (preferably Bash or Python) to interact with the GitHub Search REST API.
This script should be runnable as a cron job and will perform a series of searches every day using the GitHub REST API. It will then find any new, recently indexed files that appear in the search results and download them. New files should be compared against previously seen files so that only genuinely new content is kept: for example, each file should be hashed and the hash stored in a file; if a hash already appears there, that file does not need to be downloaded again.
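For illustration, here is a minimal Python sketch of that hash-and-compare step. The file name `seen_hashes.txt` and the choice of SHA-256 are my assumptions, not requirements from the brief:

```python
import hashlib
from pathlib import Path

# "seen_hashes.txt" is a placeholder name for the hash ledger; it is an
# assumption for this sketch, not part of the brief.
HASH_FILE = Path("seen_hashes.txt")

def load_seen_hashes() -> set[str]:
    """Read previously recorded content hashes, one per line."""
    if HASH_FILE.exists():
        return set(HASH_FILE.read_text().split())
    return set()

def is_new(content: bytes, seen: set[str]) -> bool:
    """Record and report a hash the first time its content is seen."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in seen:
        return False  # already downloaded on a previous run
    seen.add(digest)
    with HASH_FILE.open("a") as f:
        f.write(digest + "\n")
    return True
```

A script built around this could then be scheduled with an ordinary crontab entry, e.g. `0 6 * * * /usr/bin/python3 /opt/search-bot/run.py` (the path and time are placeholders).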
Here is the basic flow of how the script should work:
1. Perform a search using the GitHub Search REST API ([login to view URL])
2. All files returned in the search results should be hashed, and only the new ones downloaded
3. The files will be written to a directory named with the day’s date.
4. An email should be sent with the URLs of any new files that were found
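To make the flow concrete, here is a hedged Python sketch of steps 1-4, reusing the `is_new()` and `load_seen_hashes()` helpers from the dedup sketch above. The search query, the `GITHUB_TOKEN` environment variable, and the email addresses are placeholders of mine; pagination and rate-limit handling are omitted for brevity:

```python
import datetime
import os
import smtplib
from email.message import EmailMessage
from pathlib import Path

import requests  # the only third-party dependency in this sketch

# Assumptions: a personal access token in GITHUB_TOKEN and the addresses in
# email_report() are placeholders, not part of the brief.
SEARCH_URL = "https://api.github.com/search/code"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def run_search(query: str) -> list[dict]:
    """Run one code search; pagination and rate-limit handling are omitted."""
    resp = requests.get(SEARCH_URL, headers=HEADERS,
                        params={"q": query, "per_page": 100})
    resp.raise_for_status()
    return resp.json()["items"]

def download_new(items: list[dict], seen: set[str]) -> list[str]:
    """Download unseen files into a directory named with today's date."""
    out_dir = Path(datetime.date.today().isoformat())  # e.g. ./2024-01-31/
    out_dir.mkdir(exist_ok=True)
    new_urls = []
    for item in items:
        # Each result's "url" is a contents-API endpoint; requesting the raw
        # media type returns the file body itself.
        raw = requests.get(item["url"],
                           headers={**HEADERS, "Accept": "application/vnd.github.raw"})
        raw.raise_for_status()
        if is_new(raw.content, seen):  # dedup helper from the sketch above
            # Naming by item["name"] alone can collide; a real script might
            # include the repository and path in the filename.
            (out_dir / item["name"]).write_bytes(raw.content)
            new_urls.append(item["html_url"])
    return new_urls

def email_report(urls: list[str]) -> None:
    """Mail the URLs of new files via a local SMTP relay (assumed to exist)."""
    if not urls:
        return
    msg = EmailMessage()
    msg["Subject"] = "New files found in GitHub search"
    msg["From"] = "searchbot@example.com"  # placeholder address
    msg["To"] = "you@example.com"          # placeholder address
    msg.set_content("\n".join(urls))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    seen = load_seen_hashes()                   # from the dedup sketch above
    items = run_search("example-search-term")   # placeholder query
    email_report(download_new(items, seen))
```

As a design note, each code-search item also carries a git blob `sha`, so a production version could dedupe on that field and skip the download entirely; the search endpoints also have their own, stricter rate limits, which matters if the daily run performs many queries.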
Ideally, I would also like a web frontend to manage and view results. The script should not require extensive setup beyond installing basic dependencies.
Please let me know if you have any questions or need me to clarify anything.