I need a web scraper written for the .xlsx file in the following directory:
[login to view URL]
The latest .xlsx file within that directory will need to be downloaded.
The file name changes daily, so the script must not rely on a fixed name; it should identify the most recent file with a .xlsx extension.
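A minimal sketch of the "find the latest .xlsx" step, assuming the directory URL (redacted above) serves an HTML index whose links are listed oldest-first; a production script might instead parse the listing's modification dates, and would fetch the page with LWP::UserAgent. The helper name and the sample HTML are illustrative, not from the posting. (The final deliverable would `use Modern::Perl;` as required; core `strict`/`warnings` are used here so the sketch runs without CPAN.)

```perl
use strict;
use warnings;

# Given the HTML of a directory listing, return the href of the last
# .xlsx link found. Assumes links appear oldest-first in the listing.
sub latest_xlsx_href {
    my ($html) = @_;
    my @links = $html =~ /href="([^"]+\.xlsx)"/gi;  # collect all .xlsx hrefs
    return @links ? $links[-1] : undef;             # last one = newest (assumed)
}
```

In the real script the returned href would be joined to the directory URL and downloaded before parsing.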
The output should be a pipe (|) delimited file with the following column mappings:
origin_city --> data located in the "Origin City" column (A)
origin_state --> data located in the "Origin State" column (B)
ship_date --> data located in the "Ship Date" column (G), change to the YYYY-MM-DD format
destination_city --> data located in the "Destination City" column (C)
destination_state --> data located in the "Destination State" column (D)
receive_date --> data located in the "Delivery Date" column (H)
trailer_type --> data located in the "Trailer Type" column (E)
load_size --> "Full" if the data in the "Load Size" column (F) contains "Truckload";
"Partial" if it contains "LTL"
weight --> data located in the "Weight" column (M)
length --> data located in the "Length" column (J)
width --> data located in the "Width" column (K)
height --> data located in the "Height" column (L)
trip_miles --> leave blank
pay_rate --> data located in the "Pay Rate" column (I)
contact_phone --> data located in the "Phone Number" column (N)
contact_name --> leave blank
tarp_required --> leave blank
comment --> data located in the "Comments" column (O)
load_number --> leave blank
commodity --> leave blank
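The two mappings above that transform values (rather than copy them) can be sketched as small helpers. The MM/DD/YYYY input format for "Ship Date" is an assumption, since the posting does not show the spreadsheet's actual date format; the function names are illustrative. (Core modules only here; the deliverable itself would `use Modern::Perl;`.)

```perl
use strict;
use warnings;

# ship_date: reformat an assumed M/D/YYYY cell value to YYYY-MM-DD.
sub format_ship_date {
    my ($date) = @_;
    my ($m, $d, $y) = $date =~ m{^(\d{1,2})/(\d{1,2})/(\d{4})$};
    return defined $y ? sprintf('%04d-%02d-%02d', $y, $m, $d) : '';
}

# load_size: "Truckload" -> "Full", "LTL" -> "Partial", else blank.
sub map_load_size {
    my ($size) = @_;
    return 'Full'    if $size =~ /Truckload/i;
    return 'Partial' if $size =~ /LTL/i;
    return '';    # leave blank; never emit "null" or "blank"
}
```

Both helpers return an empty string when no rule applies, which keeps unmapped cells blank in the pipe-delimited output.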
The first line of the output should contain all of the column headers.
Any field that contains no data should be left blank; do not write placeholder
words such as "null" or "blank" in empty columns.
Below is a sample output of the first 5 columns using sample data:
The deliverable will be a Perl .pl file that must run on
Ubuntu Linux and must use Modern::Perl. The Perl .pl file
should be called '[login to view URL]' and the output file should be
called '[login to view URL]'
It will be scheduled in cron to run unattended every 15 minutes.
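A crontab entry for the 15-minute schedule might look like the following; the script path, output directory, and log path are placeholders, since the real file names are redacted in the posting.

```
# Run the scraper every 15 minutes, appending output and errors to a log.
*/15 * * * * /usr/bin/perl /home/user/scraper.pl >> /home/user/scraper.log 2>&1
```

Redirecting stderr to the log (`2>&1`) matters for an unattended job, as cron otherwise discards or mails any error output.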
Please specify which language, OS, and modules you plan to use.
Also, please include the word "raccoon" in your bid so I know that
you read this description.
17 freelancers are bidding on average $181 for this job