We have a central web site (HQ) and a number of linked web sites belonging to the same business, though the domains are all separate.
The HQ site holds a number of information pages, e.g. terms and conditions, privacy policy, etc.
Rather than re-creating this information on every web site, we thought about getting some sort of screen scraper developed that will simply duplicate the content on each local site. Then if we update the HQ site with, say, a policy wording change, the change will automatically be replicated on the other sites.
We had a play with using an iframe, and although this met our needs to an extent, everyone seems to regard it as a big no-no!
Looking for an alternative solution. We have set up a couple of test pages on which this can be trialled. The content in the cl_body div of this page: [url removed, login to view]
needs to be scraped into the box on this page: [url removed, login to view]
1) This test is being undertaken on pages within the same domain, but the solution you provide must work across domains; payment will only be made once this has been proven.
2) You can prove your work by initially scraping the content onto a page of your own. If we are happy, you will then provide us with the code/instructions needed to set it up on the page above.
3) We must be able to use the solution to set up further screen scrapes. We would expect to do this by simply supplying a URL and, ideally, an identifier as some form of attribute or parameter, and the solution will scrape whatever is in that div at that URL onto the calling page (see the sketch below for the kind of thing we have in mind).
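To illustrate requirement 3 only, here is a minimal sketch of the sort of interface we are picturing, assuming a server-side approach in Python using the third-party requests and beautifulsoup4 packages. The function name scrape_div and the example URL are placeholders of ours, not part of the spec, and you are free to implement this differently (e.g. client-side, or in another language).

```python
# Minimal sketch only: fetch a page and return the inner HTML of one div,
# identified by its id, so it can be embedded in the calling page.
# Requires the third-party packages: requests, beautifulsoup4.
import requests
from bs4 import BeautifulSoup

def scrape_div(url: str, div_id: str) -> str:
    """Return the inner HTML of the div with the given id at the given URL."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    div = soup.find("div", id=div_id)
    if div is None:
        raise ValueError(f"No div with id '{div_id}' found at {url}")
    # decode_contents() returns the markup inside the div,
    # without the wrapping <div> tag itself.
    return div.decode_contents()

# Example call matching requirement 3: supply a URL and an identifier.
# The URL is a placeholder; 'cl_body' is the id used on our test page above.
if __name__ == "__main__":
    print(scrape_div("https://hq.example.com/terms", "cl_body"))
```

The key point is the calling convention: a URL plus a div identifier in, the content of that div out, so we can set up further scrapes ourselves without any per-page development work.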