Hi, I need a desktop scraper/parser app (for Windows 7) for the site [login to view URL]. It should support continual updating of the database, so it's not just a fixed number of pages. I want to scrape all four sports. The data should be saved as XML files (one file per game): [login to view URL] I need this data: Sport: Soccer, Source: Hintwise, Country
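A minimal Python sketch of the "one XML file per game" output described above. The field names (`id`, `sport`, `source`, `country`) are assumptions for illustration, not the actual schema from the posting:

```python
import tempfile
import xml.etree.ElementTree as ET

def save_game_xml(game, out_dir):
    """Write one game record as its own XML file (field names are hypothetical)."""
    root = ET.Element("game")
    for key, value in game.items():
        ET.SubElement(root, key).text = str(value)
    path = f"{out_dir}/game_{game['id']}.xml"
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
    return path

out = tempfile.mkdtemp()
path = save_game_xml(
    {"id": 101, "sport": "Soccer", "source": "Hintwise", "country": "England"}, out
)
```

Each scraped game dict becomes a self-describing XML document, which keeps the "singular file per game" requirement trivial to verify on disk.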
I am looking f...after setup, the import/export will be done by cron continually. I see there are some modules for this task: Views Data Export, Feeds, Feeds XPath Parser, views_rss, drupal_sync, feed import, feeds extensible parser. Both sites are in long-term development; if you can get this small job done quickly and simply, there are many more to follow.
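For the "import/export done by cron continually" part, a crontab entry like the following would work, since Feeds imports run on Drupal cron. The path and interval here are placeholders, not values from the posting:

```shell
# Trigger Drupal cron every 15 minutes; Feeds schedules its imports on cron runs.
# /var/www/site is a placeholder for the actual Drupal root.
*/15 * * * * cd /var/www/site && drush cron > /dev/null 2>&1
```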
Hi, I am looking for someone who can write a PHP script to scrape data from several wiki pages ([login to view URL]). I need the script to: get the name, image path, category, and sub-category for every item listed on each of the pages.
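The posting asks for PHP, but the extraction logic translates directly. A sketch using Python's standard-library `html.parser`, assuming a hypothetical table layout where each row holds name, image, category, and sub-category; real wiki pages would need their own selectors:

```python
from html.parser import HTMLParser

class ItemParser(HTMLParser):
    """Collect name, image path, category and sub-category from table rows.
    The markup structure assumed here is hypothetical."""
    def __init__(self):
        super().__init__()
        self.items, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "tr":
            self._row = {}                      # start a new item row
        elif tag == "img" and self._row is not None:
            self._row["image"] = attrs.get("src", "")
        elif tag == "td":
            self._in_cell = True

    def handle_data(self, data):
        if self._in_cell and data.strip() and self._row is not None:
            self._row.setdefault("fields", []).append(data.strip())

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row and "fields" in self._row:
            # Pad so rows with missing cells still unpack cleanly.
            name, category, subcategory = (self._row["fields"] + ["", "", ""])[:3]
            self.items.append({"name": name,
                               "image": self._row.get("image", ""),
                               "category": category,
                               "subcategory": subcategory})
            self._row = None

html = """<table>
<tr><td>Sword</td><td><img src="/img/sword.png"></td><td>Weapon</td><td>Melee</td></tr>
</table>"""
parser = ItemParser()
parser.feed(html)
items = parser.items
```

Running this over each page URL in turn and accumulating `items` gives the full listing the posting asks for.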
I'm looking for someone with a proven, successful track record of contributing pages to Wikipedia to create, and have successfully approved, a Wikipedia page about me. I am the author of over 60 children's books. My books have been picked up by an Italian publisher who is in the process of translating and publishing them in Italy. I started writing in 2012
The data is in an audio stream driven by a ShoutCast or Icecast server. For example: [login to view URL] you can read the artist + song title from the metadata in the URL. So I want to read the metadata from multiple streams and store it in a MySQL database with a timestamp and the radio station name.
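In the ICY protocol used by ShoutCast/Icecast, a client that sends an `Icy-MetaData: 1` request header gets a metadata block injected every `icy-metaint` audio bytes, containing a `StreamTitle='...'` field. A small Python sketch of the parsing step (the sample bytes are invented for illustration; a real client would read them off the socket):

```python
import re

def parse_icy_metadata(block: bytes) -> str:
    """Extract StreamTitle (usually 'Artist - Song') from an ICY metadata block.
    The block is null-padded to a multiple of 16 bytes in the stream."""
    text = block.rstrip(b"\x00").decode("utf-8", errors="replace")
    m = re.search(r"StreamTitle='([^']*)';", text)
    return m.group(1) if m else ""

# Invented sample block, as it would appear between audio chunks.
raw = b"StreamTitle='Daft Punk - One More Time';StreamUrl='';" + b"\x00" * 10
title = parse_icy_metadata(raw)
```

The extracted title, together with `datetime.now()` and the station name, would then be the row inserted into the MySQL table.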
Hello! We need a Django developer who will help us set up and extend this framework: [login to view URL] Our goal is to create a specific social-network-like service which includes user profiles, a forum, a wiki, etc...
I created an extremely powerful HTML parser framework for Java, the univocity HTML parser: [login to view URL] As this is a library, we prepared documentation and tutorials to demonstrate how it works. The problem is that making people READ a lot of material is not exactly appealing, and it's hard to demonstrate the value
...Process the HTML page through the parser, which inserts into the database, and remove it from the failed list. Every day, start from the last successful download CI number + 1: get the cookie, add it to the session, do the GET request; if HTTP 200, process the HTML page through the parser
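The daily loop described above can be sketched as follows. The `fetch`/`process` callables and the batch size of 5 are assumptions; in a real run, `fetch` would be a cookie-authenticated `requests.Session().get()` and `process` the HTML-parser-plus-database step:

```python
def daily_run(last_ok_id, fetch, process, failed):
    """One daily pass: retry previously failed CI numbers, then continue
    from the last successful ID + 1 (names and batch size are hypothetical).
    `fetch(ci)` returns (status, html); `process(html)` stores the record."""
    new_failed, current = [], last_ok_id
    for ci in list(failed) + [last_ok_id + 1 + i for i in range(5)]:
        status, html = fetch(ci)
        if status == 200:
            process(html)                 # parse and insert into the database
            current = max(current, ci)    # advance the high-water mark
        else:
            new_failed.append(ci)         # keep for tomorrow's retry pass
    return current, new_failed

# Stub fetch for illustration: even IDs succeed, odd IDs fail.
def fake_fetch(ci):
    return (200, f"<html>{ci}</html>") if ci % 2 == 0 else (404, "")

seen = []
last, failed = daily_run(10, fake_fetch, seen.append, failed=[7, 8])
```

Persisting `last` and `failed` between runs (a table or a small state file) is what lets each day resume from "last successful download CI number + 1".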