Selenium WebDriver jobs
I need a standalone desktop program that lets me analyse horse races by pulling fresh horse-performance data directly from www.racenet.com.au. The application’s ...• One-click scrape pulls the latest horse performance data without captchas or manual intervention. • All key fields visible on racenet for each horse populate correctly in the local database. • Basic analytical views refresh in under two seconds on a typical laptop. • No paid API keys required—everything comes from the public site. I’m flexible on the tech stack: Python (BeautifulSoup/Selenium), C# (.NET), or even Electron if it stays lightweight. What matters most is reliable scraping, clean code, and a UI I can rely on race morning. Let me know your preferred approach and an...
...I need (both the visible text and the images), and also save any PDF that the page links to. The PDFs aren’t generated on the fly—each one is simply a normal hyperlink sitting in the HTML—so the job comes down to fetching the page, parsing for the data, spotting the *.pdf anchors, and downloading those files to disk. You are free to approach this with HtmlAgilityPack, AngleSharp, HttpClient, Selenium, or any other .NET-friendly library you’re comfortable with, as long as the final code is clean, asynchronous where it makes sense, and easy to extend. I will pass in the root URL (or a list of starting URLs) plus an output folder path; the tool should handle the rest: navigate, extract, download, and save. Deliverables • Visual Studio solution (targe...
...the results to CSV or Google Sheets. I mainly care about item title, price, description, photos (image URLs are fine), posting date, item location and the seller’s profile link so I can trace each record back to its source. If you can collect additional fields that Facebook exposes, even better—just keep everything neatly labelled. No hard requirement on the stack: Python with BeautifulSoup / Selenium, Node with Puppeteer, Playwright, or a headless browser solution all work for me as long as it runs on Windows or a small Linux VPS and doesn’t violate Facebook’s ToS. Please build in reasonable throttling, login handling (cookie-based or mobile API, whichever is more stable) and a simple config file where I can tune delay settings or add new accounts. ...
... • Build a clean and user-friendly dashboard to: • Manage monitoring settings • Control alerts and configurations • Implement structured and scalable automation logic. • Ensure the solution is maintainable and adaptable to future website updates. • Provide clear documentation for setup and usage. Technical Requirements • Strong experience with Python • Web automation tools such as: • Selenium / Playwright • Requests / BeautifulSoup • Backend development experience • Familiarity with notification systems (Email, Telegram, Webhooks, etc.) • Clean, well-documented, and modular code Additional Notes • This is a long-term project. • Ongoing collaboration may be required for future updates, optim...
...need a fully-automated workflow that gathers and enriches data from well over 500 LinkedIn profiles. The automation should locate the profiles that match criteria I will provide, pull the key public details, then append reliable off-platform contact information so I can reach those professionals directly. Please design the script or low-code sequence with any reliable stack you prefer—Python, Selenium, PhantomBuster, Sales Navigator API, or comparable tools are fine as long as the method is repeatable and respects rate limits. Deliverables • CSV/Excel file containing one row per person with: – Current job title – Company name – Verified email (and phone, when available) • Source code or workflow file with brief run instructions ...
...coordinates directly from Google Maps. The second will crawl a set of websites I will supply and pull out product information, on-page contact details, and any user-generated content that appears alongside those products. Please structure every field into one tidy CSV per source so I can plug the results straight into my BI dashboards. I am comfortable if you lean on Python, Scrapy, BeautifulSoup, Selenium, or similar tools, provided the script is well-commented and can run headless behind rotating proxies without tripping rate limits. Deliverables: • 4 working scripts (Maps + websites) with clear setup instructions • Sample output files proving all requested fields are captured correctly • Output data must have City Name > (Excel file with list of d...
Hello, I'm excited about the opportunity to help test and improve your AI-powered platform. Example Test Plan Approach: For an AI-powered platform, I typically structure my test plans as foll...attention to detail has consistently helped identify critical issues before production Clear Communication: I provide comprehensive bug reports and testing summaries that development teams can action immediately Flexible & Scalable: Comfortable with 10-30 hour/week commitment and can scale as needed Technical Toolkit: Proficient with Python, API testing tools (Postman, Swagger), automation frameworks (Selenium, Playwright, Cypress), and bug tracking systems (JIRA, Azure DevOps) Proactive Problem Solver: I don't just report bugs—I analyze patterns and suggest improvements...
...• Work must be legal and supportable. We are not seeking unauthorized circumvention. • We want to work with the manufacturer’s systems/entitlements, but need technical clarity to compel support. Required skills • Strong network debugging (Chrome DevTools, WebSocket protocols, XHR analysis) • Experience with embedded device web UIs / OTA update flows • Python/automation skills (Playwright/selenium, traffic capture) • Ability to reason about auth flows (cookies/tokens/401/403) • Bonus: familiarity with ROSBridge-style WebSocket messaging, nginx/autobahn stacks. What we will provide to the freelancer • Device access on LAN ( web UI + WS endpoint) • Existing sniffer scripts + JSONL logs • Screenshots and prior PDF tec...
I need clean, structured product details pulled from the ... I already have a clear idea of the attributes I want captured (title, price, SKU, description, availability, image URL). Once we agree on the target sites, you can build a scraper, run it, and hand back the CSV along with the script or notebook so I can reproduce the results later if needed. Please let me know: • Which language or framework you plan to use (Python, Scrapy, BeautifulSoup, Selenium, Playwright, etc.). • How you’ll handle pagination, anti-bot measures, and site structure changes. • An estimated turnaround and any milestones you suggest. Accuracy, deduplication, and clarity in the final CSV will be the acceptance criteria. If this sounds like your bread-and-butter, I’m ready...
...encoded in UTF-8, with consistent headers and no duplicates. The file must be usable for a mail merge or for printing address labels. Acceptance The file must open without errors and pull all relevant permits. Please include a brief note on your chosen approach, the approximate turnaround time, and, if you automate, the language or toolset you’ll use (Python + BeautifulSoup, Selenium, etc.). Samples of the data files are attached...
...hand histories, video explanations) Access to test accounts on the target platform Clear feedback loop — we'll review hands and flag decision errors for you to iterate on Ongoing collaboration throughout the project Skills Required Python (Advanced) Computer Vision / OpenCV Machine Learning / AI LLM API Integration (Claude API / OpenAI API) Prompt Engineering Browser Automation (Playwright / Selenium / Puppeteer) Game Theory / Poker Knowledge (Strong Plus) OCR / Image Processing...
...time set number of parallel tabs “If my current approach is incorrect, I would like to understand the correct method to handle this system. If you have the expertise, please guide me or help me build a fully functional bot script.” Skills Required Playwright / Puppeteer / Selenium (CDP mode) Strong understanding of HTTP/2, redirects, cookies, CSRF Experience with anti-bot bypasses Experience with Wicket / Java server-side frameworks --- Only Experienced Developers This is not a simple Selenium clicker job. Only message if you have real experience with strict session-based booking systems. Provide: past work samples your approach to solving redirect-based session recreation estimated timeline & cost...
I need a reliable automation script that can interact with a browser-based application I use every day. The solution must run smoothly on Windows machines and handle the full workflow without manual clicks or keystrokes. While the main focus is the web interface, I’m open to whichever stack you feel is most stable—Python + Selenium, PowerShell with headless Chrome, or a comparable approach—as long as it can be launched by a non-technical user from a desktop shortcut. What success looks like for me: • The script logs in, completes the required on-screen actions, and exits cleanly. • It reports basic status (success, failure, or any error message) in a simple log file. • Setup is straightforward: a short read-me or a one-time installer that cove...
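A minimal Python + Selenium sketch of the flow this posting describes; the URL, form-field names, and selectors below are placeholders, since the real application isn't named, and the launch would be wrapped in a desktop-shortcut .bat or installer as requested.

```python
import logging

# Simple log file with success/failure/error lines, per the spec.
logging.basicConfig(filename="automation.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def status_line(success, detail=""):
    """One-line status written to the log file."""
    return "SUCCESS" if success else "FAILURE: {}".format(detail)

def run_workflow(url, username, password):
    # Selenium is imported here so the pure helper above works without it.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(url)  # url is a placeholder for the real web app
        driver.find_element(By.NAME, "username").send_keys(username)   # placeholder selector
        driver.find_element(By.NAME, "password").send_keys(password)   # placeholder selector
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        # ...the daily on-screen actions would go here...
        logging.info(status_line(True))
        return True
    except Exception as exc:
        logging.error(status_line(False, exc))
        return False
    finally:
        driver.quit()
```

A non-technical user then only ever double-clicks a shortcut that invokes `run_workflow` and reads `automation.log` afterwards.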
...confirm that navigation, accessibility and touch interactions feel natural on small screens. Coverage requirements All tests must be executed on the latest stable versions of Chrome, Safari and Firefox, both mobile and desktop, so that I can release with confidence across the major browsers my audience uses. Deliverables 1. A concise test plan outlining scenarios, devices and toolsets (e.g., Selenium, BrowserStack, Lighthouse, WebPageTest). 2. A bug and issue log with severity, reproduction steps and screenshots/video where applicable. 3. Performance reports highlighting any slow points and clear, actionable recommendations to hit target metrics. 4. A deployment checklist, then hands-on assistance pushing the app to the production server (GitHub Actions + Netlify), fo...
..."Quick Scan." User Dashboard: Where clients can see their scan history and download PDF reports. Payment Integration: WooCommerce or Paid Member Subscriptions (Stripe/PayPal) for one-time reports or monthly monitoring plans. API Integration: A custom function to send the URL to the Python backend and retrieve the scan results. 2. Python Backend (The "Crawler"): Web Scraper: Use Playwright or Selenium to visit a target URL, identify active cookies, and detect tracking scripts (Google Analytics, FB Pixel, etc.). Policy Extractor: Automatically find and extract text from Privacy Policy and Terms & Conditions pages. AI Analysis: Send the extracted data to OpenAI API (using a custom expert-level prompt) to identify legal gaps or outdated clauses. PDF Generator...
I need a developer to collect data from multiple public websites a...(script or small app) that I can run on demand Basic documentation: how to run it, how to adjust settings, where outputs go Quality requirements Reliable scraping with error handling and retries Respectful request rate / throttling to avoid overloading sites Clear logging (success/fail, pages processed) Ability to adapt if page structure changes Experience with Python (Scrapy/BeautifulSoup/Selenium/Playwright) or Node.js Proxy / rotating user-agents experience (only if needed) Scheduling/automation (cron, Docker, or cloud run) Deliverables Working scraper + instructions Sample output file(s) Final dataset from agreed sources (initial run) To apply, please include Examples of similar scraping work you...
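The error-handling, retry, throttling, and logging requirements in postings like this one usually reduce to one small wrapper shared by every fetch call; a stdlib-only sketch (the name `with_retries` and its parameters are my own):

```python
import time
import logging

logging.basicConfig(level=logging.INFO)

def with_retries(fetch, attempts=3, base_delay=1.0, throttle=0.5):
    """Wrap fetch() with exponential-backoff retries and a polite pause.

    On failure, waits base_delay * 2**n before retrying; after each
    success, sleeps `throttle` seconds to keep the request rate
    respectful of the target site.
    """
    def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                result = fetch(*args, **kwargs)
                logging.info("ok: %s", args)
                time.sleep(throttle)
                return result
            except Exception as exc:
                logging.warning("attempt %d failed: %s", attempt + 1, exc)
                if attempt == attempts - 1:
                    raise  # out of retries: surface the error to the log
                time.sleep(base_delay * 2 ** attempt)
    return wrapper
```

With requests installed, `get = with_retries(requests.get)` then gives every page fetch the same retry and throttling behaviour.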
...a lightweight C# console application that logs in with a simple username-and-password flow, stores the session cookies, fetches the target pages, parses those HTML tables, and exports the results to a clean Excel file—all without launching or driving a browser. Key points • Strictly backend: please rely on HttpClient (or similar) plus a parser such as HtmlAgilityPack or AngleSharp. No Selenium, WebDriver, or hidden browser instances. • After authentication, the program should keep the cookie jar alive for subsequent requests so I can point it at multiple table URLs in one run. • Output should be an .xlsx file; ClosedXML or EPPlus are both fine. Use separate worksheets when several tables are returned. • Error handling matters: graceful lo...
I have an urgent need for a clean, well-structured dataset containing the listing agent’s first name, last name, mailing address, and phone number for well over 500 active Zillow listings. Speed is critical, but accuracy matters just as much; the final file should be ready for immediate import into my CRM. You are free to use whichever stack you prefer—Python with BeautifulSoup or Scrapy, Selenium, residential proxies, even the unofficial Zillow API—so long as rate-limits are respected and the data is complete. I don’t need property details or price history; the focus is strictly on the agent contact fields. Deliverables • CSV or XLSX with a separate column for each required field • A short read-me explaining the script or method so I can reru...
I need a small, reliable script that pings the Late Show with Stephen Colbert page on 1iota every 30 seconds and fires off an SMS the moment May 21 tickets appear. The job is straightforward but time-sensitive: • Scrape or query the specific event listing without triggering 1iota’s bot protections (Python with requests/BeautifulSoup, Playwright, or Selenium are all fine—use what keeps the check time low). • Parse the response and confirm that the date equals 21 May before treating it as a positive match. • Send a single, immediate SMS alert to my phone via Twilio (or another SMS gateway you’re comfortable with). The script must run unattended on a Mac or Linux box—so include setup instructions, any required environment variables, and ...
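A sketch of how those three steps could fit together, assuming Python with requests for the check and Twilio for the alert; the event URL and the exact listing text are placeholders, credentials come from environment variables, and the real script would still need whatever care the posting asks for around 1iota's bot protections.

```python
import time

TARGET_DAY = "May 21"

def offer_matches(page_text, day=TARGET_DAY):
    """Positive match only when the listing text mentions the target date."""
    return day.lower() in page_text.lower()

def send_sms(body):
    # Twilio imported lazily; TWILIO_* values are assumed env vars.
    import os
    from twilio.rest import Client
    client = Client(os.environ["TWILIO_SID"], os.environ["TWILIO_TOKEN"])
    client.messages.create(body=body,
                           from_=os.environ["TWILIO_FROM"],
                           to=os.environ["TWILIO_TO"])

def watch(url, interval=30):
    import requests  # lazy import: the matcher above needs no network
    while True:
        page = requests.get(url, timeout=10)
        if offer_matches(page.text):
            send_sms("Tickets for {} just appeared: {}".format(TARGET_DAY, url))
            break  # a single, immediate SMS, as specified
        time.sleep(interval)
```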
...table. Manual testing is the day-to-day backbone of the assignment, and I also need someone confident enough with automation to build and maintain suites that keep regression runs tight and reliable. You’ll collaborate closely with developers and product owners in our office, owning the full test cycle—from writing clear test plans to validating production fixes. If you already script with Selenium or a comparable framework, you’ll ramp up fast; if you have wider performance-testing exposure, that’s a welcome bonus but not essential. Key deliverables during the contract: • Comprehensive test plans and cases for each sprint • Well-structured automation scripts integrated into our CI pipeline • Detailed, reproducible defect reports w...
...through whatever loophole is needed to reveal the hidden contact and unit details, then replies with a single, structured template that looks something like: Property: <Title> Unit No.: <unit_number> Client: <client_phone> Owner: <owner_phone> Source: <URL> Key points • No reliance on the Bayut or Propertyfinder APIs—pure scraping with your preferred stack (Python, Node, Playwright, Selenium, BeautifulSoup, etc.). • Handle anti-scraping tactics gracefully (rotating headers, proxies, captchas if they appear). • Keep response time reasonable so a conversation still feels instant. • Deliver clean, well-commented code plus a quick guide for deploying the bot on a VPS or Docker image. Acceptance will be a short...
I need a small utility that automatically votes for the Pearl River nominee in the “Athlete of the Week” poll on (link above). The script must run on my Windows laptop, locate the correct checkbox, tick it, submit the vote, pause about 20 seconds, refresh, and repeat until I stop it. No other option should ever be selected. Any approach—Python + Selenium, AutoHotkey, Puppeteer, or similar—is fine as long as setup is light and the executable/script launches with a double-click. Deliverables • Stand-alone .exe or single-file script, plus readable source • Default 20 s delay (user-editable) • Robust handling for page timeouts, refreshes, or blocked votes • Simple start/stop (console prompt or hotkey) • Quick demo evidence th...
...historical and new financial report that appears on the “Filings & Disclosure” section of otcmarkets.com. At the moment I only care about the PDFs of annual, quarterly and interim filings, but the solution should be flexible enough that I can later extend it to press releases or historical data if required. Here’s what I expect: • A script (preferably in Python 3 using requests / BeautifulSoup or Selenium if necessary) that accepts a plain text list of symbols, checks each page once per day and downloads any financial report that is not already saved. • Folder or filename logic that organises the PDFs by ticker and date so nothing is overwritten. • A simple log or CSV that records the timestamp, ticker and URL of each file fetched, plus ...
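The folder/filename and skip-if-already-saved logic is worth pinning down first; a stdlib sketch of one way to do it (the `<root>/<TICKER>/<date>_<name>` layout is my own suggestion, not something otcmarkets.com prescribes):

```python
import csv
from datetime import date
from pathlib import Path

def target_path(root, ticker, filename, day=""):
    """<root>/<TICKER>/<YYYY-MM-DD>_<name>: per-ticker folders with
    dated names, so earlier downloads are never overwritten."""
    day = day or date.today().isoformat()
    return Path(root) / ticker.upper() / "{}_{}".format(day, filename)

def log_fetch(log_path, when, ticker, url):
    """Append one row (timestamp, ticker, URL) to the run-log CSV."""
    with open(log_path, "a", newline="") as fh:
        csv.writer(fh).writerow([when, ticker, url])

def fetch_if_new(url, dest):
    """Download url to dest unless already saved; True if fetched."""
    if dest.exists():
        return False  # already have this filing, skip it
    import requests  # lazy import: the helpers above need no network
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(requests.get(url, timeout=30).content)
    return True
```

The daily run is then a loop over the ticker list that calls `fetch_if_new` per discovered PDF and `log_fetch` for each successful download.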
...HTML, JSON, CSV, and PDF files * Clean and normalize messy real-world data * Write clear, maintainable utility scripts * Deliver working code (not just prototypes) --- ### Required Skills * Strong Python fundamentals * Real experience with web scraping * Data parsing and data cleaning * Comfortable working independently and async --- ### Nice to Have * BeautifulSoup, Scrapy, Playwright, or Selenium * pandas / numpy * Experience scraping government or legacy websites * Experience handling PDFs (text extraction, OCR) --- ### How We Evaluate * This role includes a **paid trial task (1–3 days)** * We care about **output and correctness**, not resumes * Clean, working code matters more than clever abstractions --- ### Important * Please include **2–3 sentences*...
...expire—according to spec. • Capture reproducible steps, screenshots or short screen-captures, and clear pass/fail notes for every test case. • Retest resolved issues once fixes are deployed so we can close the loop quickly. Environment & tools I can provide access to the staging URL, credentials for dummy accounts, and the current test case spreadsheet. You’re welcome to bring your own toolset—Selenium, Cypress, Postman, or simply a detailed manual checklist—so long as the final report is clear and exportable. Timing This is time-sensitive. I need the first complete round of results within 24–48 hours of hire, with any follow-up retests wrapped up shortly after. When you respond, highlight similar functional testing work you&...
I maintain hundreds of browser-isolated accounts inside Adspower and now need a repeatable RPA flow that performs automated actions without detection. Acceptance criteria 1. A fully working Adspower RPA script or flow file that I can import directly. 2. Source code and clear setup instructions (environment, dependencies, execution steps). 3. A short video or screenshot guide proving the script updates at least three accounts end-to-end. If you have already worked with Adspower, Selenium, or similar anti-detect browsers, let me know—the smoother the integratio...
...Extract the bio text and the profile picture, storing the image locally or saving its direct link next to the bio in a CSV/JSON file • Respect robots.txt, employ modest request throttling, and handle the site’s usual edge cases—lazy-loaded images, occasional 4xx/5xx responses, and any login or cookie notices that appear for anonymous visitors I’m comfortable with Python (requests, BeautifulSoup, Selenium) or Node (Puppeteer) solutions, provided the code is clean, modular, and comes with a concise README so I can run it on macOS or a Linux VPS without guesswork. Deliverables: 1. Full source code with clear setup instructions 2. A one-line command or small runner script that launches the crawl 3. A sample output file covering at least 20 profiles to prove every...
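For the CSV side of that requirement, one pattern is to keep the local image path (or the direct link) in the column next to the bio; a small stdlib sketch of the output step (the field names are my own):

```python
import csv
import io

FIELDS = ["profile_url", "bio", "image"]

def rows_to_csv(rows):
    """Serialize scraped profiles to CSV text, one row per profile.

    Each row dict carries the bio plus either a local image path or
    the image's direct URL in the 'image' column, as the brief asks.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for row in rows:
        writer.writerow({k: row.get(k, "") for k in FIELDS})
    return buf.getvalue()
```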
...Indeed and HelloWork. • Captures, at minimum, the job title, full description, company name and location. • Stores everything in a structured database I can easily query or export. • Retrieves complete CVs from LinkedIn and, when possible, other social platforms, then links each profile to the same database schema. Feel free to choose the most stable stack you trust—Python with Scrapy or Selenium, Node with Puppeteer, direct GraphQL or REST endpoints, etc.—as long as it runs unattended, copes gracefully with rate limits / captchas, and offers a simple way for me to schedule or trigger updates. Acceptance will be based on: 1. A repeatable script or service I can host (Docker image or cloud function are fine). 2. A concise setup guide plus sam...
...estimates with historical auction results found online Evaluate whether the artwork appears undervalued or attractive Generate a daily report (email, PDF, or dashboard) summarizing: Selected artworks AI-based opinion (“interesting / neutral / not interesting”) Short justification for each recommendation Technical Expectations Preferred stack: Python, APIs, web automation/scraping (Playwright, Selenium, etc.) Integration with OpenAI / ChatGPT API Clean, well-documented, and maintainable code Respect of reasonable scraping limits and platform constraints Secure handling of credentials Deliverables Fully functional automated system Setup documentation Ability to easily update keywords and filters Optional: hosting / scheduled execution (cloud or local) I...
...well-structured lead list and I already know exactly what it should contain. The task is to extract contact information—email addresses, phone numbers and full mailing addresses—from three sources: company and organisation websites, their public social-media profiles, and well-known online directories. I expect the data to be gathered with a solid scraping workflow (Python, Scrapy, BeautifulSoup, Selenium or an equivalent stack is fine) and then verified so that bounced emails and dead numbers are kept to an absolute minimum. Deliverables • One CSV or Excel file with separate columns for name, company, job title, email, phone, street address, city, state, ZIP/postcode, country, source URL and date collected. • No duplicates; every entry must be uni...
...Columns: name, first line of address, state, city, postcode • Format: every column saved as plain text (no numeric or date formatting) Delivery schedule • First 5,000 fully cleaned rows required within the first 6 hours • Remainder on a rolling basis until the full 15,000 are complete I will supply a surname list to guide the searches. A straightforward Python (requests / BeautifulSoup or Selenium) or Scrapy workflow is fine as long as the final output arrives in a single Excel file (.xlsx) that opens error-free in Microsoft Excel. Accuracy matters more than speed—random spot checks will be run. Any duplicates, blanks, or malformed addresses will be sent back for correction. Once the first 5,000 pass review, I’ll green-light the rest of the sc...
Description: - We are looking for an experienced Data Scraping / Web Scraping expert. - We will share the industry name, and the freelance...suitable websites/sources to scrape - Suggest countries/regions that can be covered - Share estimated data volume & approach - After approval, the freelancer will scrape and deliver clean, structured data. Data Required (example): - Company name - Location - Contact details (email/phone/website – if available) Requirements: - Proven experience in data scraping - Knowledge of Python, Scrapy, Selenium, APIs, etc. - Ability to scrape multi-country data (based on feasibility) Deliverables: - Data in Excel / CSV / Google Sheets - Basic info of sources used To Apply, share: - Similar scraping work - Tools you use - Your approach after ...
...including form completion, conditional steps, document access, downloads, and integrations with external systems. We already have a clear vision for the workflow and architecture and are looking for someone who can design it correctly and build it cleanly. Scope of Work You will design and implement a browser-based automation agent that: Controls a real browser (Playwright strongly preferred; Selenium or RPA acceptable with justification) Navigates authenticated websites Detects conditional flows (e.g. document gates, agreements, confirmations) Completes multi-step forms deterministically Downloads documents (PDFs and similar) Prevents duplicate work across runs Stores processing state so the system is resumable and auditable Integrates with Google Drive or a watched u...
...System activity Booking attempts Successful reservations Portal availability status Configuration & Extensibility User-friendly configuration files Support for multiple applicant profiles and visa types Extensible architecture for: Additional notification channels External CRM integrations (optional) Custom triggers and automation rules Preferred Technology Stack Python (required) Playwright or Selenium (Playwright preferred) Docker Linux-based deployment Third-Party Services No mandatory third-party paid services If any external services are required, they must be clearly stated and justified Important Notes for Applicants Please include in your application: Examples of similar automation, booking, or scraping projects Technologies you used in those projects Your propos...
I need all publicly available customer-facing email addresses extracted from a list of e-commerce websites that I will supply once the project begins. Please crawl only the domains I provide, respect robots.txt where possible, and avoid triggering any rate limits or security blocks—rotating proxies or headless browsing with tools such as Python, Scrapy, BeautifulSoup, Selenium, or similar is fine as long as the result is reliable. Deliverable • One clean, de-duplicated CSV file containing the harvested email addresses, ready for direct import into my CRM. Acceptance criteria • Every email must originate from the target e-commerce domains. • No duplicates, placeholders, or obviously invalid addresses. • File is encoded as UTF-8 and opens without warnings in Exc...
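The harvesting step behind acceptance criteria like these usually comes down to a regex pass plus aggressive de-duplication and placeholder filtering; a stdlib sketch (the placeholder list is my own and would grow during spot checks):

```python
import re

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
# Obvious non-customer-facing patterns to drop; extend as needed.
PLACEHOLDERS = ("example.com", "no-reply", "noreply")

def harvest_emails(html):
    """Unique, lowercased addresses with obvious placeholders dropped,
    in first-seen order, ready for the de-duplicated CSV."""
    seen = []
    for match in EMAIL_RE.findall(html):
        email = match.lower().strip(".")
        if any(p in email for p in PLACEHOLDERS):
            continue
        if email not in seen:
            seen.append(email)
    return seen
```

Restricting the crawl to the supplied domains and writing the result as a UTF-8 CSV are then separate, small steps around this core.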
I need a small, fast bot that monitors Amazon Flex exclusively for Whole Foods Market “Instant Offers,” ignores every other kind of block, and snaps the offer up for me the moment it appear...solver, etc.) I’ll need to keep running Acceptance criteria • Captures at least 90 % of available Whole Foods instant offers in a live half-day test • Sends push alerts to my device in under one second after acceptance • Runs for eight straight hours without crashing, logging me out, or flagging my account If you already have experience automating Amazon Flex with Python, Node.js, Selenium, Playwright, or a similar stack, you’ll probably find this straightforward. Let me know what platform you prefer, how you’ll handle push notifications,...
...converts and calculates the raw values exactly as we define before pushing them straight into WooCommerce. My customers must only ever see the WooCommerce front end, so the sync has to feel native and instant. The portal changes frequently, so please code the extractor so that selectors and credentials can be updated without touching the core logic. I am open to Python (Scrapy, BeautifulSoup, Selenium), PHP or Node as long as the finished solution talks cleanly to the WooCommerce REST API and leaves no manual steps. Deliverables • Scraper that logs in and captures product details, stock, prices and images in real time or on a schedule we agree on • Conversion layer that performs the unit/price calculations before data enters WooCommerce • Image handler that d...
...generation Strong communication and negotiation skills Ability to work independently and remotely Experience with platforms like LinkedIn, Upwork, Freelancer, or direct outreach is a plus Compensation: 10% commission per successfully closed project Commission will be paid after client payment is received No fixed salary or upfront payment Project Scope: Manual Testing Automation Testing (Selenium, Playwright, etc.) Mobile App Testing (iOS & Android) Web Application Testing This is a long-term opportunity for the right candidate who can consistently bring projects. If interested, please apply with: Your experience in bringing IT projects Platforms or methods you use to find clients Expected commission percentage Looking forward to working with motivated sales...
...credentials • Launching Chrome and reaching the QlikSense portal • Navigating to the bookmarked report, applying the required custom filters/selections, and triggering the export • Saving each report as .xlsx to a specified path (same folder each run, overwriting yesterday’s files) • Closing the session cleanly I’m flexible on the toolkit—UiPath, Automation Anywhere, Power Automate, or a Python/Selenium script with Windows Task Scheduler are all acceptable as long as they run headless and survive occasional VPN hiccups. Please include lightweight error handling (e.g., retry on VPN drop, email or Slack alert if a step fails) and clear configuration notes so I can adjust file paths or filter values later without touching the code. Del...
...the data to a single-sheet .xls workbook. Required columns • Ministry Name • Date • Member name • Session No • Question No • Answer Link (the full URL for the answer PDF or page) Please have the code automatically iterate through all available result pages so nothing is missed, then save everything in row order to one sheet named “Questions”. Pandas, BeautifulSoup, requests, xlwt, or Selenium are all fine as long as the final file is a standard .xls that opens in Excel without warnings. Deliverables 1. The .py file, clearly commented so I can modify the URL later if needed. 2. A sample .xls generated by the script, showing that every question currently on the site has been captured. 3. A quick README (just a few lines...
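The "iterate through all available result pages" requirement separates cleanly from the .xls writing; a sketch, assuming xlwt for the legacy .xls format the posting names (the page-fetching callable is left abstract, since the site URL is elided above):

```python
COLUMNS = ["Ministry Name", "Date", "Member name", "Session No",
           "Question No", "Answer Link"]

def collect_rows(fetch_page):
    """Call fetch_page(1), fetch_page(2), ... until a page yields no
    rows, so no result page is missed; rows keep site order."""
    rows, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            return rows
        rows.extend(batch)
        page += 1

def save_xls(rows, path="questions.xls"):
    import xlwt  # lazy import: row collection above is plain Python
    book = xlwt.Workbook()
    sheet = book.add_sheet("Questions")  # single sheet, as specified
    for col, name in enumerate(COLUMNS):
        sheet.write(0, col, name)
    for r, row in enumerate(rows, start=1):
        for c, value in enumerate(row):
            sheet.write(r, c, value)
    book.save(path)
```

`fetch_page` would wrap requests + BeautifulSoup (or Selenium if the pages are dynamic) and return one tuple per question in column order.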
...searching and reserving a stay, to a host managing listings, messaging, and issuing refunds. Once the scenarios are approved you’ll execute them across the latest versions of Chrome, Safari, Firefox and Edge (desktop and mobile views), logging each defect with clear steps to reproduce, screenshots or short videos, and severity notes. I’m happy for you to use the toolset you’re most comfortable with—Selenium or Playwright for functional regression, Lighthouse or WebPageTest for performance metrics, plus any bug-tracking platform you prefer. The end result should give me a crisp picture of outstanding issues and the confidence to hit “go live.” Deliverables • Test plan and scenario matrix • Executed test case suite (pass/fail resu...
... (Wholesale parts - Login required) 3. (Auction history - Public) 4. (Retail catalog - Public - Complex navigation) ═══════════════════════════════════════════════════════ TECHNICAL REQUIREMENTS (Non-Negotiable) ═══════════════════════════════════════════════════════ 1. ASYNC/AWAIT ARCHITECTURE - Must use Python asyncio + Playwright - NO Selenium allowed - Clean, maintainable async code 2. CONCURRENCY - Handle 10-30 concurrent browser contexts efficiently - Proper resource management (no memory leaks) - Configurable concurrency limits 3. BANDWIDTH OPTIMIZATION (CRITICAL) - Block images, fonts, CSS, videos, media files using () - Target: Under 300KB per page load (vs 2-5MB unoptimized) - This
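The bandwidth requirement maps onto Playwright's request-routing API (the posting's "using ()" elides the exact call, but `page.route()` with `route.abort()` is the standard mechanism); a hedged async sketch, with the caller assumed to own browser/context creation:

```python
# Resource types Playwright reports that we refuse to download,
# keeping each page load far below the raw 2-5MB.
BLOCKED = {"image", "stylesheet", "font", "media"}

def should_block(resource_type):
    """Pure decision helper: True for heavy, non-essential resources."""
    return resource_type in BLOCKED

async def fetch(url, context):
    # context is a Playwright BrowserContext created by the caller;
    # one route handler per page enforces the bandwidth budget.
    page = await context.new_page()

    async def gate(route):
        if should_block(route.request.resource_type):
            await route.abort()
        else:
            await route.continue_()

    await page.route("**/*", gate)
    try:
        await page.goto(url, wait_until="domcontentloaded")
        return await page.content()
    finally:
        await page.close()
```

The 10-30 concurrent contexts would then be an `asyncio.gather` over `fetch(...)` calls gated by an `asyncio.Semaphore`, which also gives the configurable concurrency limit the posting demands.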
As discussed, I need a clean, up-to-date scrape of product information from the agreed-upon ...single CSV or Excel file containing one row per product with dedicated columns for name, description, price, current stock/lead-time, and every specification field you can capture. • All images and downloadable documents saved locally, placed in clearly labelled folders, and referenced in the dataset by filename or URL. • The scraping script itself (Python preferred—Requests/BeautifulSoup or Selenium, whichever you find more reliable for this site) along with a brief README so I can rerun it later if the catalogue changes. Accuracy is critical; I’ll spot-check several products before sign-off. Let me know the estimated turnaround once you’ve had a quick loo...
I need a discreet partner who can take over my daily hunt for Backend Developer positions and keep everything moving while I focus on interview prep. For the next month you will be my silen...not volume for its own sake: quality-matched roles, thoughtful referral outreach, and zero random blasts. Deliverables • Daily submission log with links, job titles, and status • Concise weekly summary (applications sent, referrals requested, replies) • All scripts or tools you build or tweak for this process The assignment runs one month and is locked at ₹2,000, so efficient tooling—Selenium, Playwright, LinkedIn APIs, or your own macros—is essential. If you’ve already automated LinkedIn and Naukri workflows and know how to tailor messages that get ...
I need an automated scraping solution that reliably collects product data from targeted websites and delivers it in a clean, structured file I can plug straight into my workflow. You’re free to use Python (BeautifulSoup, Scrapy, Selenium, Playwright) or a simple cloud instance, and the output lands in CSV or JSON.
I need the brochure catalogue of a JavaScript-heavy e-commerce site captured and delivered as a clean CSV. My focus is on accurate prices and every available variant, pulled from each category the site offers. Python is the language of choice and I’m flexible on tooling—Scrapy + Playwright, straight Playwright, Selenium, or another robust approach—provided the code is modular, well-documented, and easy for me to rerun when the store layout shifts. If you already have proxy rotation or rate-limit handling baked into your pipeline, that will be an advantage. What has to happen • Crawl through every category filter so no product slips through the cracks. • Render dynamic content fully to capture price and variant data, along with URL, SKU, net price an...
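The "no product slips through the cracks" requirement above usually means crawling overlapping category filters, so the same item surfaces more than once; a small dedupe pass (the `sku` field name is an assumption) keeps the first occurrence of each product:

```python
def dedupe_by_sku(rows):
    """Keep the first occurrence of each SKU across category crawls."""
    seen = set()
    unique = []
    for row in rows:
        if row["sku"] not in seen:
            seen.add(row["sku"])
            unique.append(row)
    return unique
```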
...PROFILE PICTURE MATCHES WHO THEY ACTUALLY ARE. A VIDEO CALL WILL BE NEEDED TO DISCUSS DETAILS PRIOR TO HIRING. **THIS IS A LONG-TERM PROJECT THAT WILL BE HEAVILY FRONT-LOADED, HOPEFULLY MAKING YOUR LIFE EASIER OVER THE DURATION OF THE PROJECT. WITH THIS I PLAN ON PROVIDING A MONTHLY STIPEND THAT WILL EVEN OUT PAY OVER THAT TERM. Your toolkit is up to you, but Python with Scrapy or Selenium, a tidy Pandas workflow, and solid MySQL / PostgreSQL skills fit naturally with what we already run on AWS. Clean code, deduping, error logging, and clear documentation are non-negotiable; everything has to slip straight into the pipelines that feed our mobile app. ** I HAVE EXISTING SCRIPTS FROM THE PREVIOUS FREELANCER THAT WILL HELP YOU EXPEDITE THE WORK YOU DO. Deliverables ...
...each horse’s career starts, total earnings and current trainer. What I’m missing is the data itself. Two different public websites publish this information; I’m happy for you to pull from whichever source (or a mix of both) gives the most complete and accurate results. Your task is to automate the extraction—ideally with a clean, well-commented Python script that uses requests/BeautifulSoup, Selenium, Scrapy or any other library you prefer—and then populate my spreadsheet with the fetched records. When the job is done I need: • the updated Excel file, fully filled out and spot-checked for accuracy • the script and quick usage notes so I can rerun it later if the sites update That’s it. If you can turn this around quickly and ...
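Once records are fetched, populating the spreadsheet reduces to a merge keyed on the horse's name; a sketch with illustrative field names, filling only cells that are still blank so hand-entered values survive a rerun:

```python
def fill_missing(sheet_rows, fetched,
                 fields=("starts", "earnings", "trainer")):
    """Fill blank cells in sheet_rows from fetched records, keyed by name."""
    by_name = {r["horse"]: r for r in fetched}
    for row in sheet_rows:
        extra = by_name.get(row["horse"], {})
        for field in fields:
            if not row.get(field):  # only touch empty cells
                row[field] = extra.get(field, "")
    return sheet_rows
```

Reading and writing the actual Excel file would sit on either side of this (openpyxl or pandas are the usual choices), but the merge itself stays this simple.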
I need a single, clean pull of all product information from an e-commerce site. The scope is limited to product names, full descriptions, and the corresponding images; no price or contact data is required. A fresh scrape is needed only once, so scheduling or cron work is unnecessary. You may use Python with BeautifulSoup, Scrapy, Selenium, or any comparable stack—what matters is that the final dataset is complete and easy for me to consume. Deliverables • CSV or XLSX listing every product with its name and full description • A folder (or ZIP) containing every product image, with filenames mapped to the rows in the data file • A brief README outlining the scrape process and any setup steps I would need to reproduce it locally Acceptance criteria ...
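For the "filenames mapped to the rows" deliverable above, a stable naming scheme does the mapping; one hypothetical convention, deriving a filesystem-safe name from a product identifier plus the image URL's basename:

```python
import re
from pathlib import PurePosixPath
from urllib.parse import urlparse

def image_filename(product_id, image_url):
    """Build a filesystem-safe name linking an image back to its row."""
    # Take the last path segment of the URL; fall back if the path is bare.
    base = PurePosixPath(urlparse(image_url).path).name or "image"
    # Replace anything outside a conservative character set.
    safe = re.sub(r"[^A-Za-z0-9._-]", "_", base)
    return f"{product_id}_{safe}"
```

The same `product_id` then goes in the data file, so every image in the folder traces back to exactly one row.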