I have a crawling system built with [url removed, login to view] (PyQt under the covers). It logs into websites and downloads data files (usually PDF). What I need is a function that uses the [url removed, login to view] library to download the file at a given URL. You can assume the HTTP header "Content-Disposition" is set to "attachment". I cannot guarantee that the backend web server won't serve the file with chunked transfer encoding, so the function must handle that case.
The function should take three parameters: Ghost_Webpage_Instance, File_URL, and Destination_Path.
If this is simply not possible with [url removed, login to view], I am OK with you slightly modifying the [url removed, login to view] library (on GitHub) and adding the function at that level. That way you can interact directly with the PyQt instance (even though that should also be possible outside of ghost).
Please note that these files are only served to visitors with an active session, which is why they must be downloaded through the headless browser instance rather than with a separate HTTP library (which would not carry the session cookies).
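To make the requirement concrete, here is a rough sketch of the shape I have in mind. This is not a working implementation: the `ghost_page.open(url)` call returning a page object with a raw `content` attribute is an assumption about the library's API (the real ghost version in use may differ), and the `filename_from_disposition` helper is only illustrative for the "Content-Disposition: attachment" case. The chunking concern should be handled for you by the browser's network layer, which reassembles chunked responses before exposing the body.

```python
def filename_from_disposition(header_value, fallback="download.bin"):
    """Illustrative helper: pull the filename out of a Content-Disposition
    header, e.g. 'attachment; filename="report.pdf"' -> 'report.pdf'."""
    for part in header_value.split(";"):
        part = part.strip()
        if part.lower().startswith("filename="):
            return part.split("=", 1)[1].strip('"') or fallback
    return fallback


def download_file(ghost_page, file_url, destination_path):
    """Sketch only: fetch file_url through the already-authenticated
    headless browser instance and write the bytes to destination_path.

    Assumes ghost_page.open() returns (page, resources) and that
    page.content holds the full response body -- both are assumptions
    to be checked against the actual library version.
    """
    page, resources = ghost_page.open(file_url)
    # Chunked transfer encoding is transparent here: by the time the
    # body is exposed, the network stack has reassembled the chunks.
    with open(destination_path, "wb") as fh:
        fh.write(page.content)
    return destination_path
```

Feel free to restructure this however the library actually requires; the only hard requirements are the three parameters and that the request goes through the logged-in browser session.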
Let me know if you have any questions. Thanks!