RFP Web-scraper/Scheduler

This project was successfully completed by virtual7 for $789 USD in 15 days.

Project Budget: $750 - $1500 USD
Completed In: 15 days

Project Description



The purpose is to refresh the localhost MySQL database with lottery drawing results scraped from pages as soon as they are available, as well as with jackpot/total-prize updates at intervals. Ideally the pages would be scraped:

(1) On a set schedule, once per hour, looking for jackpot updates

(2) At set repeating/calendarized times, looking for expected draw updates; these scrapes would repeat every 60 seconds until the data is posted on the website and returned by the scrape (there is no push notification when results become available, so the scraper must keep polling until results appear)
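The polling behavior in (2) can be sketched in Python (one of the supported languages). The `scrape` callable, the attempt cap, and the injectable `sleep` are illustrative assumptions, not part of the spec:

```python
import time

POLL_INTERVAL = 60  # seconds between retries while waiting for results


def poll_until_posted(scrape, max_attempts=120, sleep=time.sleep):
    """Call scrape() repeatedly until it returns draw data (anything
    non-None), sleeping POLL_INTERVAL seconds between attempts.

    scrape() is a placeholder for fetching and parsing the results page;
    it should return None while the draw has not been posted yet.
    """
    for _ in range(max_attempts):
        results = scrape()
        if results is not None:
            return results
        sleep(POLL_INTERVAL)
    return None  # results never appeared within the polling window
```

Passing `sleep` as a parameter keeps the loop testable without waiting on real wall-clock time.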

Data returned by the scrape would be compared to existing data:

(1) New draws would be INSERTed (with logic for creating UUID)

(2) Existing draws would be compared with stored data and UPDATEd if changed, or ignored if unchanged

(3) There is a special case for multi-state lotteries such as the Mega Millions & Powerball drawings; a single result will be INSERTed/UPDATEd multiple times into the target table with multiple UUIDs
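The insert/update/ignore decision, including the multi-state fan-out, can be sketched as follows. The in-memory `store`, the key shape, and the lottery IDs are assumptions standing in for the real MySQL table:

```python
import uuid


def upsert_draw(store, lottery_ids, draw_date, numbers):
    """Insert or update one draw result.

    store maps (lottery_id, draw_date) -> {"uuid": ..., "numbers": ...},
    a stand-in for the target MySQL table. Multi-state games pass several
    lottery_ids (one per locale), so the same result is written once per
    id, each with its own generated UUID.
    """
    actions = []
    for lottery_id in lottery_ids:
        key = (lottery_id, draw_date)
        row = store.get(key)
        if row is None:
            # New draw: INSERT with a freshly generated UUID
            store[key] = {"uuid": str(uuid.uuid4()), "numbers": numbers}
            actions.append("INSERT")
        elif row["numbers"] != numbers:
            # Existing draw with changed data: UPDATE, keep the UUID
            row["numbers"] = numbers
            actions.append("UPDATE")
        else:
            # Existing draw, identical data: ignore
            actions.append("IGNORE")
    return actions
```

In the real implementation each branch would map to an INSERT or UPDATE statement against the schema in the attached file.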


The environment only supports Perl, Python, and PHP; NO JRE. The MySQL db does NOT support user-defined functions or event scheduling. cron IS available, but with the restriction that jobs cannot be scheduled more than once every 30 minutes. We can consider violating the 30-minute constraint only if no other solution can be found.
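One common way to reconcile the 30-minute cron restriction with 60-second polling is a wrapper that cron starts every 30 minutes and that loops internally for the duration of its window before exiting. A minimal sketch, with the injectable `clock`/`sleep` and the `task` callable as assumptions:

```python
import time

CRON_PERIOD = 30 * 60  # cron may only start the job every 30 minutes
TICK = 60              # desired internal granularity: once per minute


def run_window(task, period=CRON_PERIOD, tick=TICK,
               clock=time.monotonic, sleep=time.sleep):
    """Run task() once per `tick` seconds until `period` seconds have
    elapsed, then exit so the next cron invocation takes over.

    task() is a placeholder for checking the scrape calendar and firing
    any due scrapes.
    """
    start = clock()
    runs = 0
    while clock() - start < period:
        task()
        runs += 1
        sleep(tick)
    return runs
```

This keeps each cron job within its 30-minute slot while still achieving per-minute checks, so the stated constraint need not be violated.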

An administrator interface is required for modifying the calendar of scraping as well as the configuration of the scraping (which pages, what data, &c.). The interface can be as simple as editable configuration files; it does NOT need to be a GUI.
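If the config-file route is taken, a plain INI file read with Python's standard `configparser` would suffice. The section names, keys, and example values below are all illustrative assumptions:

```python
import configparser

# Illustrative layout: one section per scrape target. The section names,
# keys, URLs, and values are assumptions, not from the actual spec.
SAMPLE = """
[megamillions]
url = https://example.com/megamillions/results
draw_times = Tue 23:00, Fri 23:00
fields = numbers, megaball, jackpot

[powerball]
url = https://example.com/powerball/results
draw_times = Wed 22:59, Sat 22:59
fields = numbers, powerball, jackpot
"""


def load_scrape_config(text):
    """Parse the admin-editable config file into a dict for the scheduler."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {
        name: {
            "url": section["url"],
            "draw_times": [t.strip() for t in section["draw_times"].split(",")],
            "fields": [f.strip() for f in section["fields"].split(",")],
        }
        for name, section in parser.items()
        if name != "DEFAULT"
    }
```

An administrator then edits the file by hand; the scheduler re-reads it on each cron invocation, so no GUI is needed.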

The MySQL db will track 300 or more different lotteries. Each lottery will have up to 1000 legacy draws but will only INSERT/UPDATE a single draw, the most recent one. Since many of the lotteries are actually the same lottery for different locales, the calendar of scrapes will only need to track 30-50 different times.

These are proof-of-concept pages to be scraped:

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

[url removed, login to view]

The attached file “[url removed, login to view]” details the table schema.

The attached files “[url removed, login to view]” and “[url removed, login to view]” are target sample pages.

The attached files “[url removed, login to view]” and “[url removed, login to view]” contain data mapping and logic for scraping the sample pages and saving the data.


Reply to this posting with the following info:

The phrase “Confidence is high” in the subject or 1st sentence

Summary of experience in similar projects

Time and cost

Manpower to be committed (single dev, dev + PM, &c.)

Details of solution; milestones, deliverables, language, scheduling, security, communication, &c.
