RFP Web-scraper/Scheduler

RFP – WEB-SCRAPING/SCHEDULING SOLUTION FOR LIMITED ENVIRONMENT

SUMMARY

The purpose is to refresh the localhost MySQL database with lottery drawing results scraped from web pages as soon as they are available, as well as with jackpot/total prize updates at regular intervals. Ideally the pages would be scraped:

(1) On a set schedule, once per hour, looking for jackpot updates

(2) At set repeating/calendarized times, looking for expected draw updates; these scrapes would repeat every 60 seconds until the data is posted on the website and returned by the scrape (since there is no push notification when results become available, the scraper has to keep polling until results appear; see the sketch below)
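
A minimal sketch of that polling loop, assuming Python 3; the URL, the parse_draw() helper, the draw-date check, and the polling window are placeholders, since the real pages and field layout come from the attached mapping files:

import time
import urllib.request

def fetch_page(url):
    """Download the raw HTML of a results page."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")

def poll_for_draw(url, expected_draw_date, parse_draw, max_minutes=120):
    """Re-scrape every 60 seconds until the draw dated expected_draw_date is posted."""
    deadline = time.time() + max_minutes * 60
    while time.time() < deadline:
        draw = parse_draw(fetch_page(url))   # returns None until results appear on the page
        if draw is not None and draw["draw_date"] == expected_draw_date:
            return draw                      # results are live; hand off to the database layer
        time.sleep(60)                       # no push notification, so keep polling
    return None                              # give up after the configured window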

Data returned by the scrape would be compared to existing data:

(1) New draws would be INSERTed (with logic for creating a UUID)

(2) Existing draws would be compared with the stored data and UPDATEd if the scraped data differs, or ignored otherwise

(3) There is a special case for multi-state lotteries such as the Mega Millions & Powerball drawings; a single result will be INSERTed/UPDATEd into the target table multiple times, once per locale, each with its own UUID (see the sketch below)
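
A minimal sketch of that compare-then-write step, assuming a Python DB-API connection (e.g. PyMySQL/MySQLdb) and an invented draws table with uuid/lottery_id/draw_date/numbers columns; the real schema is in the attached table-schema file:

import uuid

def upsert_draw(conn, lottery_ids, draw):
    """Write one scraped draw; multi-state games pass several lottery_ids for the same result."""
    cur = conn.cursor()
    for lottery_id in lottery_ids:
        cur.execute(
            "SELECT uuid, numbers FROM draws WHERE lottery_id = %s AND draw_date = %s",
            (lottery_id, draw["draw_date"]),
        )
        row = cur.fetchone()
        if row is None:
            # New draw: INSERT with a freshly generated UUID
            cur.execute(
                "INSERT INTO draws (uuid, lottery_id, draw_date, numbers) VALUES (%s, %s, %s, %s)",
                (str(uuid.uuid4()), lottery_id, draw["draw_date"], draw["numbers"]),
            )
        elif row[1] != draw["numbers"]:
            # Existing draw whose stored data differs: UPDATE it; otherwise ignore
            cur.execute(
                "UPDATE draws SET numbers = %s WHERE uuid = %s",
                (draw["numbers"], row[0]),
            )
    conn.commit()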

DETAIL

The environment only supports Perl, Python, and PHP; NO JRE. The MySQL db does NOT support user-defined functions or event scheduling. cron IS available, but with the restriction that jobs cannot be scheduled more often than once every 30 minutes. We can consider violating the 30-minute constraint only if another solution can’t be found.
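
One possible way to live within the 30-minute cron limit is a single crontab entry (*/30 * * * *) that starts a Python dispatcher, which then handles the finer-grained timing (hourly jackpot checks, 60-second draw polling) inside the process. This is only a sketch; the due_jobs/run_job hooks are placeholders:

import time
from datetime import datetime

RUN_WINDOW_SECONDS = 30 * 60          # cover the gap until the next cron invocation

def dispatcher(due_jobs, run_job):
    """Run for ~30 minutes, firing any calendar entries that come due, checking once per minute."""
    started = time.time()
    while time.time() - started < RUN_WINDOW_SECONDS:
        now = datetime.now()
        for job in due_jobs(now):     # e.g. the hourly jackpot check or a scheduled draw scrape
            run_job(job)              # a draw scrape may itself poll for many minutes (see above)
        time.sleep(60)                # re-check the calendar once per minute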

An administrator interface is required for modifying the calendar of scraping as well as the configuration of the scraping (which pages, what data, &c.). The interface can be as simple as editable configuration files and does NOT need to be a GUI; see the illustration below.
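
For illustration only, the "interface" could be a hand-edited INI file read with Python's standard configparser; the section name, keys, and URLs below are invented:

import configparser

SAMPLE_CONFIG = """
[megamillions]
results_url  = https://example.org/megamillions/results
jackpot_url  = https://example.org/megamillions/jackpot
draw_times   = Tue 23:00, Fri 23:00
poll_seconds = 60
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE_CONFIG)     # in production: config.read("scrapes.ini")
for lottery in config.sections():
    print(lottery, config[lottery]["draw_times"])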

The MySQL db will track 300 or more different lotteries. Each lottery will have up to 1,000 legacy draws, but each scrape will only INSERT/UPDATE a single draw, the most recent one. Since many of the lotteries are actually the same lottery for different locales, the calendar of scrapes will only need to track 30–50 different times.

These are proof-of-concept pages to be scraped:

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

The attached file “[url removed, login to view]” details the table schema.

The attached files “[url removed, login to view]” and “[url removed, login to view]” are target sample pages.

The attached files “[url removed, login to view]” and “[url removed, login to view]” contain data mapping and logic for scraping the sample pages and saving the data.

IF INTERESTED

Reply to this posting with the following info:

The phrase “Confidence is high” in the subject or 1st sentence

Summary of experience in similar projects

Time and cost

Manpower to be committed (single dev, dev + PM, &c.)

Details of solution: milestones, deliverables, language, scheduling, security, communication, &c.

Skills: Engineering, MySQL, Perl, PHP, Software Architecture

About the Employer:
(5 reviews) Berkeley, United States

Project ID: #5045635