This is a small Java project to implement a protocol-independent crawler. I have a project that requires synchronizing different Maven repositories through a simple GUI application. My client exposes the main repository through a web-based interface (HTTP protocol). My current approach is to index the remote repository by crawling it with a simple web crawler program. However, due to differing business requirements, a web interface will not always be available, and I also want the program to be able to do the same over other common protocols (HTTP, HTTPS, FTP, SSH2, FILE). I personally don't have much time to deal with this, so if you have extensive Java experience, you may be a good fit for this problem. I will provide you with guidelines on the technology to use (you may want to look at the Apache Wagon implementation as a reference). Thank you.
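To illustrate the kind of abstraction I have in mind, here is a minimal sketch of a connector selected by URI scheme. The names (RepositoryConnector, FileConnector, forScheme) are my own illustration, not Apache Wagon's actual API; only the FILE protocol is implemented, and the HTTP/FTP/SSH2 connectors would plug into the same interface.

```java
import java.io.IOException;
import java.net.URI;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a protocol-independent repository connector.
public class CrawlerSketch {

    /** Lists the entries of one repository directory, whatever the transport. */
    interface RepositoryConnector {
        List<String> list(URI dir) throws IOException;
    }

    /** FILE-protocol connector backed by java.nio; other protocols would be analogous. */
    static class FileConnector implements RepositoryConnector {
        public List<String> list(URI dir) throws IOException {
            List<String> names = new ArrayList<>();
            try (DirectoryStream<Path> ds = Files.newDirectoryStream(Path.of(dir))) {
                for (Path p : ds) {
                    names.add(p.getFileName().toString());
                }
            }
            return names;
        }
    }

    /** Chooses a connector by URI scheme; http/https/ftp/sftp would be registered here too. */
    static RepositoryConnector forScheme(String scheme) {
        if ("file".equals(scheme)) {
            return new FileConnector();
        }
        throw new UnsupportedOperationException("no connector for scheme: " + scheme);
    }

    public static void main(String[] args) throws IOException {
        // Demo: crawl a temporary local "repository" via the FILE connector.
        Path repo = Files.createTempDirectory("repo");
        Files.createFile(repo.resolve("artifact-1.0.jar"));
        List<String> entries = forScheme("file").list(repo.toUri());
        System.out.println(entries);
    }
}
```

The GUI and the indexing logic would then depend only on the interface, so adding a new protocol means adding one connector class, which is essentially how Apache Wagon organizes its per-protocol providers.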
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):
a) For web sites or other server-side deliverables intended to exist only in one place in the Buyer's environment: deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.
b) For all others, including desktop software or software the Buyer intends to distribute: a software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.
3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).