The website crawler should traverse the entire website and download every available resource in document-style formats such as PDF, Word, and Excel files. Image and video files are not to be included in the resource dump, and the crawler must only follow web pages under the same root domain. All other similar and relevant file formats, including Macintosh- and Linux-compatible ones (e.g. Pages or OpenDocument files), should be included. The crawler should segregate the downloaded files by type, i.e. PDFs in one folder, DOC files in another, and so on. The final project should be delivered as a standalone application that needs nothing beyond an internet connection to crawl the website and download its resources; a sketch of the core crawling logic follows below.
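As a rough illustration of the behaviour described above, here is a minimal Python sketch of the crawl-and-download loop. It assumes Python 3 with the third-party requests and beautifulsoup4 packages installed; START_URL, DOWNLOAD_DIR, and the extension lists are placeholders chosen for illustration, not part of the brief.

    # Minimal same-domain resource crawler sketch (assumptions noted above).
    import os
    from collections import deque
    from urllib.parse import urljoin, urlparse

    import requests
    from bs4 import BeautifulSoup

    START_URL = "https://example.com/"   # placeholder target site
    DOWNLOAD_DIR = "resources"           # files land in resources/<type>/

    # Document-style formats to save; odt/ods/odp cover Linux-friendly
    # suites, pages/numbers the Macintosh ones. Extend as needed.
    WANTED = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx",
              ".odt", ".ods", ".odp", ".rtf", ".txt", ".csv",
              ".pages", ".numbers"}
    # Images and video are explicitly out of scope per the brief.
    SKIPPED = {".jpg", ".jpeg", ".png", ".gif", ".svg",
               ".mp4", ".avi", ".mov", ".webm"}

    def extension(url):
        return os.path.splitext(urlparse(url).path)[1].lower()

    def save(url, ext):
        # Segregate downloads into one folder per type, e.g. resources/pdf/.
        folder = os.path.join(DOWNLOAD_DIR, ext.lstrip("."))
        os.makedirs(folder, exist_ok=True)
        name = os.path.basename(urlparse(url).path) or "index" + ext
        try:
            resp = requests.get(url, timeout=30)
            with open(os.path.join(folder, name), "wb") as fh:
                fh.write(resp.content)
        except requests.RequestException:
            pass  # skip unreachable resources, keep crawling

    def crawl(start_url):
        root = urlparse(start_url).netloc
        queue, seen = deque([start_url]), {start_url}
        while queue:
            url = queue.popleft()
            ext = extension(url)
            if ext in SKIPPED:
                continue
            if ext in WANTED:
                save(url, ext)
                continue
            try:
                resp = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            if "text/html" not in resp.headers.get("Content-Type", ""):
                continue
            for tag in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
                link = urljoin(url, tag["href"]).split("#")[0]
                # Enforce the same-root-domain rule and avoid revisits.
                if urlparse(link).netloc == root and link not in seen:
                    seen.add(link)
                    queue.append(link)

    if __name__ == "__main__":
        crawl(START_URL)

To meet the standalone-application requirement, one option would be to bundle a script like this into a single executable with a packaging tool such as PyInstaller, so end users need only an internet connection to run it.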