Data scraping jobs
...high-performance Email Scraping & Lead Generation Tool. The goal is to extract business emails from companies in the Technology/Computer sector located in Germany, Switzerland, and the Netherlands. Key Features Required: Multi-Source Scraping: The tool must be able to pull data from: Google Search & Google Maps (using keywords like "IT Services Berlin," "Software House Amsterdam," etc.). Yellow Pages / Gelbe Seiten (Germany), (Switzerland), and (Netherlands). Company Websites: The ability to crawl a company's website found in search results to find "Contact" or "About Us" pages and extract valid email addresses. Targeted Filtering: Industry: Computers, IT, Technology, and Hardware. Geography: Germany (DE)...
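The "crawl the site, find the Contact/About page, extract valid emails" step the posting describes can be sketched roughly as below. This is a minimal stdlib-only sketch, not a production crawler: the link-hint keywords, the regex, and the false-positive filter are assumptions a bidder would tune per site (German sites often use "Kontakt"/"Impressum").

```python
import re

# Basic email pattern; production code would also handle obfuscated
# addresses ("info [at] firma.de") and validate MX records.
EMAIL_RE = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

# Assumed link hints for DE/CH/NL sites; extend as needed.
CONTACT_HINTS = ("contact", "kontakt", "about", "impressum")

def find_contact_links(html: str) -> list[str]:
    """Return hrefs that look like contact/about pages."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    return [h for h in hrefs if any(hint in h.lower() for hint in CONTACT_HINTS)]

def extract_emails(html: str) -> set[str]:
    """Pull de-duplicated addresses, skipping asset names like logo@2x.png."""
    found = set(EMAIL_RE.findall(html))
    return {e for e in found if not e.lower().endswith((".png", ".jpg", ".svg"))}
```

A crawler would fetch the homepage, follow the hrefs `find_contact_links` returns, and run `extract_emails` on each fetched page.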
I need a robust scraping solution that continuously pulls price, full product descriptions, and customer reviews for electronics across roughly ten different e-commerce sites. Each time the script runs, the results should append to a master Excel workbook so that every entry is stored chronologically—allowing me to compare today’s prices with yesterday’s and build long-term trend charts without any manual work. Key expectations • The scraper must visit all target URLs, handle pagination or lazy-loaded content where it exists, and respect each site’s structure. • Collected fields: date/time stamp, site name, product name, current price, full description text, average rating, review count, and a link to the product page. • Excel output: on...
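The "append every run to a master workbook chronologically" requirement boils down to: stamp each row, write the header only on first creation, and always open in append mode. A minimal sketch of that pattern follows, using the stdlib `csv` module as a stand-in for the Excel workbook (a real deliverable would use `openpyxl` for `.xlsx`, but the append logic is identical); the field list mirrors the posting's collected fields.

```python
import csv
import datetime
import pathlib

FIELDS = ["timestamp", "site", "product", "price", "description",
          "avg_rating", "review_count", "url"]

def append_rows(path: str, rows: list[dict]) -> None:
    """Append scraped rows with a shared run timestamp; header written once."""
    file = pathlib.Path(path)
    new_file = not file.exists()
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        for row in rows:
            # Missing fields are filled with "" by DictWriter's restval default.
            writer.writerow({"timestamp": stamp, **row})
```

Because rows are only ever appended, yesterday's prices stay in place and trend charts can be built directly off the accumulated file.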
Read carefully. — Execution-Only App Build, Start to Finish. Most applicants are NOT a fit. This is NOT an AI role. NOT Python, automation, scraping, ML, bots, or architecture. If your background is primarily Python/AI/automation, do not apply. We are hiring an execution-only developer to build, and to make small, precise fixes to, a web/mobile app under strict instructions. What you will do: Build web-based and mobile apps. What you must NOT do: No refactors, optimizations, or redesigns. No “better solutions” unless explicitly requested. No scope expansion, creativity, or initiative outside instructions. We value discipline, stability, and reliability over creati...
I’m looking for a curious, data-driven analyst who can carry out rigorous market research with one clear purpose: pinpoint emerging trends and untapped opportunities. This project isn’t about compiling a generic industry overview; I want actionable insight that can guide product and go-to-market decisions. Here’s how I picture the workflow: • You’ll help me define the exact market segment we should interrogate, drawing on preliminary signals and any early hypotheses I share. • Using publicly available datasets, reputable industry reports, social-listening tools, and news/API scraping where appropriate, you’ll build a fact-base that shows where demand is growing, which customer needs are still underserved, and what macro forces are dr...
I already have a curated list of LinkedIn profile URLs and need the key networking details moved into a single Google...per row in the Google Sheet and create separate columns for: • Profile URL • Name (as it appears) • Interests (comma-separated) • Type 1 through Type 5 (verbatim wording) • Category tag (Industry experts / Potential clients / Collaborators) Accuracy of the text you pull is more important than speed, but I do expect the work to comply with LinkedIn’s terms and avoid triggering any scraping limits. If you prefer to work manually, that’s fine; if you script with Python, Selenium, or similar, just make sure the final output lands cleanly in my shared Google Sheet. I’ll review a small sample first, then green-light...
I need a turnkey solution that watches a dedicated folder on our local server, reads every stock-dispatch PDF dropped there, and pulls out the roll-level details we receive from multiple vendors—Micron, Size, Roll Number, Gross Weight, Net Weight, Invoice Number, Vendor Name and Dispatch Date. Once captured, that data must flow straight into a master inventory database, stay synchronised, and drive all downstream reports. Key functions I expect: • Continuous PDF scraping from the local path I specify, with no manual trigger required. • Central master inventory that grows with each import; I’m open to whichever SQL back-end you feel best suits the job as long as it is industry-standard and easy for us to maintain. • Live stock dashboards that ...
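The "watch a folder with no manual trigger" part of this posting is a standard polling loop (or a filesystem-events library such as `watchdog`). Below is a minimal stdlib polling sketch; the actual PDF field extraction (Micron, Size, Roll Number, etc.) would plug in as the `handle` callback, likely using a PDF library, and is not shown here.

```python
import pathlib
import time

def scan_new_pdfs(folder: str, seen: set[str]) -> list[pathlib.Path]:
    """Return PDFs in `folder` that have not been handed off yet."""
    fresh = [p for p in pathlib.Path(folder).glob("*.pdf") if p.name not in seen]
    seen.update(p.name for p in fresh)
    return sorted(fresh)

def watch(folder: str, handle, poll_seconds: int = 30) -> None:
    """Continuously poll the dispatch folder and pass each new PDF to `handle`."""
    seen: set[str] = set()
    while True:
        for pdf in scan_new_pdfs(folder, seen):
            handle(pdf)  # e.g. parse roll details, insert into the master DB
        time.sleep(poll_seconds)
```

In production the `seen` set would be persisted (or replaced by a "processed" table in the inventory database) so restarts don't re-import old files.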
...outbound marketing systems and client operations. In this role, you will assist with outbound tools such as , , Heyreach, Apollo, and scraping tools, while also helping maintain trackers, organizing data, and supporting clients with operational and tool-related questions. You must have excellent written English and strong hands-on experience with Instantly.ai. This is a long-term position with consistent work. ⸻ Required Tools Experience (Mandatory) You MUST have hands-on experience with: • (very important) • • • • LinkedIn outbound workflows • Apify scrapers or similar scraping tools Applications without experience will not be considered. ⸻ Core Responsibilities 1. Campaign Operations • Monitor
I will perform a clean, accurate, one-time web scraping and structured data extraction of the public website table and deliver the complete dataset into a Google Spreadsheet with exact column order, row integrity, and zero data loss. Using a reliable scraping stack (Python, Requests, BeautifulSoup, or JavaScript if required), I will extract all visible table rows, normalize and validate the data, and transfer it into Google Sheets with proper formatting, structured schema alignment, and clean tabular organization. Key deliverables include: • Full table scraping from public website source • Accurate data extraction with 100% row and column preservation • Clean Google Sheets integration with correct structure and formatting &b...
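The core of the "extract all visible table rows with row and column preservation" deliverable is an HTML table parser. A minimal sketch using only the stdlib `html.parser` is below (a real job would likely use BeautifulSoup as the posting says; the approach is the same): collect each `<tr>` as a list of cell strings, preserving order.

```python
from html.parser import HTMLParser

class TableParser(HTMLParser):
    """Collect every <tr> as a list of cell strings, in source order."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell and self._row:
            self._row[-1] += data.strip()

def parse_table(html: str) -> list[list[str]]:
    p = TableParser()
    p.feed(html)
    return p.rows
```

The resulting list-of-lists maps one-to-one onto spreadsheet rows, which makes the "same row count as online" check trivial: compare `len(rows)` against the sheet.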
Web scraping of job portal (including detail pages) Main website: [ Fachangestellte/r (MFA)&job_location&radius=30&employer_type=['praxen','mvz']/]((MFA)&job_location&radius=30&employer_type=%5B%27praxen%27,%27mvz%27%5D/) Detail page (first result) Fields that I need in the Excel: - Title of each entry (job title) - Name of the company - Date - Address - Detail page: I need the full job description in one field (can be HTML) This is about 1k results. What is the price and what would be the delivery time?
...Python Developer for US Data Pipeline and iOS Verification System (Phase 1) Project Description Suggestion: Overview: > We are looking for a senior Python developer to build an automated data scraping and iOS verification pipeline based in the US. The goal for Phase 1 is to acquire over 10,000 verified leads per day. Core Tasks: 1. Data Scraping: Extract data (name, phone number, age, gender, carrier) from US people search websites. 2. Anti-detection: Must integrate the API and set render=true and super=true. 3. Data Filtering: Implement automatic filtering by wireless/phone number and age range (50-90 years old). 4. Data Verification: Integrate the LoopLookup API to verify iMessage activation status. 5. Data Exp...
Hey! I’m looking to hire an experienced developer to build a universal product-detail scraping pipeline that takes a product URL (any website) and returns a complete structured product record. This is not a “simple HTML parse.” Many target sites are React/Next/Vue, load content via XHR/GraphQL, hide details behind tabs/accordions/modals, and lazy-load images/PDFs. The solution needs to reliably extract everything a human can see on the page, plus the underlying data used to render it. What the scraper must do (high level) Given a product URL, the pipeline should: Load the page like a real user (handle cookies/overlays). Capture all content from multiple sources (DOM + network + interactions). Use GPT API strategically to increase accuracy (field mappin...
...workflow. This is about extracting, comparing, and interpreting data. Excel and PowerPoint remain the source of truth. What we need: -Compare PowerPoint vs Excel and flag mismatches - Explain underwriting models and trace outputs - Compare legal/term sheets vs financial assumptions - Track document versions and changes - Summarize deal folders Automation goals: -Draft IC and board materials from templates -Standardize presentations and memos -Replace recurring analyst work -Produce management summaries -Highlight anomalies and trends - Market intelligence: - Build comp sets - Pull pricing and availability - Map assets and demand drivers - Provide macro context Mandatory data inputs: Internal databases and Excel Dropbox Web scraping Open to Claude, OpenAI, or...
...financial records from a set of business websites and turn them into a clean, structured dataset that my team can work with immediately. The job calls for a blend of precise web scraping and careful data entry so every figure—revenue, expenses, balance-sheet items, year-on-year comparisons—lands in the correct column and remains faithful to the source. Here’s what the work looks like from my side: • You’ll navigate each designated site, locate the target financial tables or statements, and pull every required number. • For transparency, I also want the source URL and the date you captured each record logged beside the data. • Consistency matters: please apply uniform naming conventions (e.g., “FY2023 Gross Profit&...
I need a clean, reliable web-scraping script built either in Node.js or Python. The goal is simple: pull fresh data every day and make it immediately available for display on my website. If you have questions about target sites, anti-bot measures, or preferred hosting, let me know and we’ll refine before you start.
Task: Extract emails from a list of 65,158 websites. The client will provide a list of URLs; this list has 65,158 URLs. The freelancer needs to write a script that fetches emails from those URLs using automated web-scraping techniques. Delivery time: 5 days. Delivery format: data in Excel.
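At 65k URLs, the script above is really a concurrency-and-error-handling problem: one dead site must never stall the batch. A minimal sketch of that pattern follows; the `fetch` callable is injected (in practice it would be `requests.get` with a timeout), which is an assumption made here so the batching logic stands alone.

```python
import concurrent.futures
import re

EMAIL_RE = re.compile(r"[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}")

def emails_from(html: str) -> set[str]:
    return set(EMAIL_RE.findall(html))

def run_batch(urls, fetch, workers: int = 20) -> dict[str, set[str]]:
    """Fetch pages in parallel; a failed URL yields an empty set, not a crash."""
    results: dict[str, set[str]] = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, u): u for u in urls}
        for fut in concurrent.futures.as_completed(futures):
            url = futures[fut]
            try:
                results[url] = emails_from(fut.result())
            except Exception:
                results[url] = set()  # log the failure and move on
    return results
```

Writing the resulting dict out to Excel (one row per URL/email pair) is then a straightforward export step.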
I need a reliable solution that can pull data from LinkedIn and insert it straight into a database I specify. The core requirement is the automated transfer—once the tool finishes scraping, every captured field should already be sitting in the database ready for queries and reporting, no manual copy-paste. You’ll advise me on the best approach to authenticate, respect rate limits, and minimise the risk of blocks while still collecting the typical profile-level details (name, headline, company, location, experience, education, skills and anything else you can legally obtain). I will confirm the final field list before you begin. Key objectives • Build or configure a scraper / API wrapper that logs in, navigates to each target profile and captures the agreed-...
...an experienced AutoHotkey (AHK) developer to build a clean, reliable script that automates the repetitive navigation and clicking I perform every day inside my web application. Here’s the core scenario: the macro will launch a browser tab, step through a predictable series of pages, click specific buttons or links, wait for elements to load, and continue until the end of the workflow—no data scraping or form filling is required, just fast, accurate page-to-page movement and element selection. I’ll provide: • A screen-recording that shows the exact click path and timing cues • XPaths, CSS selectors, or unique element IDs where available • Any login credentials needed for testing (in a secure manner) You’ll deliver: &bu...
I have a list of real-estate agencies operating in Melbourne, Victoria and need every staff member’s direct phone number captured—agents, managers, administrative staff, everyone on the roster. You’ll work through th...number • One row per staff member; separate rows even if people share the same office line • A brief note/log for any agency where no direct numbers could be sourced despite reasonable effort Acceptance criteria • At least 90 % of listed staff have a direct number • Random sample of 20 entries must ring through to the named individual or their personal voicemail If you’re used to data-scraping tools but comfortable jumping on the phone to fill gaps, this should be quick work. Let me know your timefram...
I need a clean one-off scrape of tabular data that sits openly on a public website and have that entire dataset placed into a Google Spreadsheet. Because it is only a single extraction, I am not looking for a recurring script or scheduler—just an accurate pull of everything that appears in the table on the page today. Feel free to use your preferred stack—Python with BeautifulSoup/Requests, Apps Script, or any reliable web-scraping tool—as long as the final result lands neatly in the sheet, keeping the same column order and row count that appears online. Before we wrap up, I’ll quickly check row totals and a handful of random cells against the site to confirm accuracy; once those spot checks pass, the job is done.
I have a single public website that lists companies and I need their basic contact details pulled immediately. As soon as we agree, I’ll send you the URL; from there I expect yo...for each result—company name, address, website, phone number, and email—nothing more. The final deliverable is a clean, well-structured Excel file ready for me to review. Speed is the priority here: please be able to start right away and turn the file around as fast as possible while still double-checking that every row is accurate and complete. If this timeline works for you and you have solid scraping experience with tools like Python, BeautifulSoup, or Scrapy, let’s move forward now. The budget is small since this is a simple task, so the lowest-budget bidder gets first priority. But start now. Simple task. Star...
I have two source spreadsheets that I need merged and enriched through automated scraping: • “File 1” – 170 k Spanish local businesses with emails • “File 2” – 65 k additional businesses with websites only Phase 1 – Email extraction Using a Python script and well-known libraries (requests, BeautifulSoup, Scrapy or similar), scan every site listed in File 2, capture all working email addresses you can locate, then append them to the corresponding rows so I can produce a unified “File 3”. Phase 2 – Offer harvesting Next, visit each live site in File 3. Where an offer, deal or promotion is publicly displayed, record the details in a fresh Excel sheet with these exact columns: Business ID | Business Name ...
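The Phase 1 merge ("append scraped emails to the corresponding File 2 rows to produce File 3") is a key-based join on the website column. A minimal sketch, assuming each row is a dict and the website string is the join key exactly as it appears in both sources (in practice you'd normalize `www.`/`https://` variants first):

```python
def merge_emails(file2_rows: list[dict], scraped: dict[str, list[str]]) -> list[dict]:
    """Append scraped emails to each business row, keyed by website.

    `scraped` maps website -> list of emails found on that site; duplicates
    are collapsed and the result joined into one spreadsheet-friendly cell.
    """
    merged = []
    for row in file2_rows:
        emails = scraped.get(row["website"], [])
        merged.append({**row, "emails": "; ".join(sorted(set(emails)))})
    return merged
```

Rows whose sites yielded nothing keep an empty `emails` cell, so no File 2 business is silently dropped from File 3.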
...processes such as: Automatically scraping a new client’s website and relevant public social profiles upon signup Structuring and exporting that data into organized files (Google Drive/Docs/Sheets) Creating standardized client folder structures in Google Drive Connecting onboarding forms to project management tools Automating internal task creation for our team Integrating AI tools (e.g. GPT workflows) into onboarding and research processes This is just the starting point, we want someone who can think strategically about workflow architecture, not just execute isolated zaps. Ideal Candidate Strong experience with tools like n8n, Zapier, Make (Integromat), or similar Comfortable working with APIs where needed Experience with web scraping tools and ...
I need a data scraping expert to help generate leads from a list of websites. Requirements: - Scrape contact information, product listings, or user reviews (to be specified). - Work from a provided list of URLs. Ideal Skills: - Experience with data scraping tools and techniques. - Ability to handle multiple URLs and extract data accurately. - Attention to detail and reliability. Please share your portfolio and relevant experience.
...me how to run the script and change the target URL or output path if needed. Code quality matters to me: no hard-coded absolute paths, clear variable names, and graceful error handling so the run doesn’t stop if a single page fails. The entire job should fit comfortably within one to two days of focused work; total compensation is a fixed $40. If everything runs smoothly, I’ll have similar scraping mini-projects to pass along in the near future....
Job: Extract US Multi-Location Restaurant Brands from OpenTable I need a data researcher to extract restaurant brands from OpenTable () that meet the following criteria: Requirements US-based restaurant brands only Brands must operate between 15 and 50 total locations (company-wide, not just OpenTable listings) At least one location must be listed on OpenTable Must be an actual restaurant operator (no tech companies, media, associations, suppliers, or consultants) Deliverable A clean CSV file with the following columns: Brand Name Official Website URL Total Number of Locations (verified from website) Source URL confirming location count OpenTable URL (at least one listing link) Notes (if clarification needed) No duplicates. No single-site independents. No chains over
I have a backlog of paper-based invoices and receipts that must be keyed into an existing Excel template. Every figure needs to be captured exactly as it appears, with correct dates, vendor names, GST fields and reference numbers, so manual data-entry accuracy is critical. Because these records ultimately feed our Tally ledger, you should understand basic accounting concepts—debits, credits, tax codes—and be comfortable cross-checking your work against Tally reports to be sure totals match. No automated scraping is possible here; it is straight keyboard entry followed by a brief reconciliation step. Deliverable • Completed Excel workbook, fully populated and auto-sum balances matching the physical documents and my Tally control totals. Acceptance cri...
...related in the last month and note the information I specify below. Here is exactly what I want verified on every profile: • Recent posts activity – record the date of the most recent post so I can see at a glance who is active and who is dormant. • Availability of contact information – confirm whether an email, phone number, or “email” button is visible in the bio or contact section. No bots or scraping tools, please; I want a manual check for accuracy. Deliverables • The original Excel file returned with new columns for Last Post Date, Active (Yes/No), and Phone Number Still the Same (Yes/No); if it’s not the same, create a column next to the old number and write the new number. • Highlight the ones not active anymore, meani...
Industrial Automation Product Data Extraction, Deduplication & Structured Image Collection Project Overview We are an industrial automation parts distributor building a structured product database to support inbound enquiries and SEO growth. We require an experienced data extraction specialist to: Extract structured product data from major industrial / electronic component distributor websites Identify duplicate manufacturer part numbers across multiple sources Merge all unique information into a single consolidated dataset Extract and organise all available product images per part number Deliver a clean, deduplicated, production-ready dataset This project includes: Data extraction Normalization Deduplication Intelligent merging Structured image...
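The "identify duplicate manufacturer part numbers across sources and intelligently merge" requirement can be sketched as a single pass keyed on the normalized MPN. This is a minimal illustration under simple assumptions (first non-empty value wins per field, image URLs are unioned); a real deliverable would add fuzzier MPN normalization and per-field conflict rules.

```python
def merge_by_part_number(records: list[dict]) -> dict[str, dict]:
    """Merge records sharing a manufacturer part number.

    Key is the case/whitespace-normalized MPN; first non-empty value wins
    per field, and image URLs from all sources are unioned.
    """
    merged: dict[str, dict] = {}
    for rec in records:
        key = rec["mpn"].strip().upper()
        slot = merged.setdefault(key, {"mpn": key, "images": set()})
        slot["images"].update(rec.get("images", []))
        for field, value in rec.items():
            if field in ("mpn", "images"):
                continue
            if value and not slot.get(field):
                slot[field] = value
    return merged
```

Running this over the combined extracts from all distributor sites yields one consolidated, deduplicated record per part number, with the image set ready for structured download.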
...Maintain real-time marketplace monitoring and ensure stable 24/7 operation with reliable VPN rotation. Finding 2–3 profitable cards per week is considered success. Current Status (95% Complete): Deployed on DigitalOcean with live Vinted & Wallapop scraping, MySQL database, Bootstrap dashboard, email alerts, CSV/Excel export, automated deployment scripts, and full documentation. Remaining Work: - Improve VPN stability & Surfshark rotation (fix SSH drops, add failover) - Increase scraping success rate to 70–85% - Enhance HTML parsing & structured data extraction (~90% accuracy target) - Strengthen arbitrage detection via eBay price comparison - Final end-to-end production testing for 24/7 reliability Goal: Deliver a fully hardened, pro...
...and want solid data to guide each step. Specifically, I need a combined customer and market analysis that tells me who is most likely to buy, how big the demand really is, which segments offer the highest growth potential, and what positioning will make the products stand out. Because I don’t yet have any usable customer data, the first task is to source it—whether through publicly available datasets, targeted surveys, social-listening tools, or other methods you prefer. From there, I’ll need you to interpret the findings, spot patterns, and translate them into clear, actionable recommendations I can use for pricing, messaging, and channel selection. Deliverables • A concise research plan showing where and how you’ll gather relevant data...
I’m looking for a designer–developer who can turn my data-scraping and logistics support business into a single, modern and sleek landing page that makes visitors hit the “contact” button. The page must look polished on desktop and mobile alike, load fast, and present our value proposition in seconds. Key sections I need built in: • Hero strip with a concise headline, background image or graphic that hints at data automation / freight flow • Services Offered – short, punchy blurbs outlining data scraping, data enrichment, route optimisation and 24/7 logistic support • About Us – a brief narrative plus one photo or icon set to add personality • Contact Form – name, email, message; ...
I’m looking for an experienced developer or small team to build a reliable real-time data scraping system for Evolution Gaming live game shows, using an authenticated LeoVegas session, and integrate it into my existing website SpinLytics.com. This project is strictly focused on data reliability. Evolution Gaming does not provide public APIs, so the system must extract live round results directly from the authenticated client session. A working and stable data scraper is mandatory and must be guaranteed by the developer. The system must collect live round data for Evolution game shows such as Crazy Time, Monopoly Live, Dream Catcher, Funky Time and similar. Each round result must be captured immediately when officially published, without missing ro...
...has ever been shared inside my private Facebook group and hand it back to me as easy-to-open PDF files. The scope covers all posts, their comments, plus every photo and video that appears in the timeline. No member data is required—just the conversations and media themselves. Organisation matters: the final PDFs must follow pure chronological order from the very first post to the most recent, so that I can scroll through the archive like a living timeline. I am fine with you using Facebook’s native export, CrowdTangle, custom scripts with the Graph API, or another ethical scraping workflow as long as it stays within Facebook’s terms and my admin permissions. Please deliver: • One zipped folder containing PDFs of every post, its comment thread, a...
I want a comprehensive, ready-to-use database of transport-related businesses that operate anywhere in New South Wales. The goal is roughly 15,000 unique, verified email contacts pulled from Google Maps, company websites or any other reliable public source you normally trust for web-scraping. The scope covers every organisation in these six categories: • Freight & Transport / Haulage / Trucking Companies • Courier Services & Delivery Providers • Bus & Coach Operators, Charters, Hire and School Bus Services • Logistics Services / 3PL operators • Taxi, Hire-Car, Rideshare Fleet Operators plus Car Rental & Hire Firms • Removalists and Furniture Movers For each listing I need the following fields, all separated into...
...vendor for a large list of domains. I am seeking an experienced web scraping specialist to improve our Python script to analyze a large list of school district websites (approximately 4000+ URLs) and identify the ones who show a specific link on any page found in their sitemap. The primary method of identification must be to scan the website's for specific, known vendor links. Deliverables Required 1. A Production-Ready Python Script (.py file): The script must be commented, easily configurable, and capable of reading the provided CSV list, performing the scan, and generating the output CSV. It should handle timeouts and basic error handling gracefully. 2. The Final Results (CSV/Excel File): A clean data file containing the results for all URLs provided. The...
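The two building blocks of this scan — enumerate pages from each district's sitemap, then check each page for known vendor links — can be sketched as below. The vendor-portal domain here is a placeholder assumption (the real list would come from the client's CSV), and fetching/timeout handling is left to the surrounding script.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical vendor link patterns; the real list comes from the client.
VENDOR_PATTERNS = [re.compile(p) for p in (r"vendor-portal\.example\.com",)]

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text: str) -> list[str]:
    """Extract <loc> URLs from a standard sitemap.xml document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def page_has_vendor_link(html: str) -> bool:
    """True if any href on the page matches a known vendor pattern."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    return any(p.search(h) for p in VENDOR_PATTERNS for h in hrefs)
```

Note that large districts often publish a sitemap *index* pointing at child sitemaps, so production code should recurse one level before scanning pages.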
I need the entire history of a specific Facebook Group captured—every post along with all...would inside the browser, with working links to the images and videos placed in clearly named folders. I don't want folders or links. Just one huge continuous page that has everything. This is for a court case and I have to give this to the other side. I want them to have to scroll through however many hundred pages there are. Just as if they were actually on FB. Please outline: • your scraping approach (Python + Selenium, Go, node-puppeteer, etc.), • how you’ll handle media downloads and folder structure, • estimated turnaround time. I’ll review a short sample export before we proceed with the full run to confirm the layout meets my &ldquo...
I need assistance merging my current football dataset with a new one. This new dataset will be sourced from online scraping of weather and expected goals (xG) data. Requirements: - Scrape data from official weather and football statistics websites. - Integrate the following weather data: temperature, humidity, and precipitation. - Work with datasets in Excel format. - Correlate this new data with historical football match data in my existing dataset. Ideal Skills and Experience: - Proficiency in data scraping and data manipulation. - Experience with Excel and handling large datasets. - Familiarity with weather and football data. - Strong analytical skills to ensure accurate correlation of datasets. Looking forward to ...
.../ outbound automation specialist to build and deploy a scalable, largely autopilot outbound system. This is not an AI development project. We are specifically looking for a practitioner experienced in building high-volume cold email engines using tools like Instantly, Smartlead, Clay, Apollo, Make, etc. The system should automatically: • Source and enrich qualified SMB prospects (no manual scraping or static purchased lists) • Personalize outreach at scale using AI-assisted dynamic fields • Manage multi-domain, multi-inbox infrastructure with proper warm-up and deliverability • Run automated sequences with intelligent follow-ups • Parse replies, qualify interest, and route prospects directly to our calendar • Provide clear reporting on sends, repl...
...three skill sets in one workflow: • Web scraping to pull fresh data from public sources and listing services (e.g. tender portals) • Market research to verify decision-makers’ names, roles, emails and direct numbers • Light sales-data analysis to spot buying signals and rank prospects by potential value The end product I’m expecting is a clean, de-duplicated spreadsheet (Excel or Google Sheets) that lists qualified real-estate companies, key contacts and essential firmographics, along with a short insights tab that explains notable trends you uncovered while analysing the data. Because these leads feed our real-estate outreach campaigns, it’s critical that you understand industry terminology. When you reply, please highl...
...publicly available data from competitor product pages. - Price, rating, review count, order count (if available), “new” status. - Historical tracking of changes over time. - Sales and price comparison with competitors by selected period. 4. Analytical dashboards - KPI cards for key metrics. - Sales trend charts over time. - Weekly sales analysis. - Category-level distribution of sales and revenue. - SKU-level analysis with the ability to switch between units and monetary values. 5. Reporting and exports - Excel export generated directly from the system. - Ability to modify and customize Excel report templates. - Consistent column structure and formatting. Technical Requirements - Python-based backend. - Web scraping and automation for Uzum and Yandex Market....
...This is not a trading bot, not a dashboard, and not a scraping project. It is a backend-only system that: Automatically ingests official NSE/BSE filings Downloads and stores PDFs Computes SHA256 hashes Validates and parses documents Enforces BUY/HOLD rules in Python (not prompts) Generates research briefs with evidence Runs with zero manual data handling PoC Scope (2 Weeks) The PoC will cover 3–5 Indian defence stocks only. You must deliver: Automated PDF ingestion (no manual uploads, no scraping) Hashing + timestamping + duplicate prevention Structured parsing with schema validation Simple Python rule engine (example: BUY blocked if <2 filings) One research brief per stock System health report (jobs run, failures, missing data) Tec...
...2. System Features 2.1. Historical Data Collection and Update The system must automatically download complete historical results (drawn numbers, draw dates, prize breakdowns by category, accumulated jackpots) from the first draw of each lottery, directly from or reliable associated sources. Specific sources: Euromillones: (since Feb 13, 2004) La Primitiva: (since Oct 17, 1985 – modern version) El Gordo de la Primitiva: (since Oct 31, 1993) Automatic updates at exactly 00:02 on the day after each draw, using ethical scraping (BeautifulSoup/Scrapy) with proper user-agent headers to mimic human behavior. Store data in PostgreSQL (structured) or MongoDB
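The "exactly 00:02 the day after each draw" scheduling rule is easy to get wrong around midnight boundaries. A minimal sketch of the slot calculation (what a scheduler like cron or APScheduler would do internally; timezone handling is assumed to be Spain-local and is out of scope here):

```python
import datetime

def next_update(now: datetime.datetime, hour: int = 0, minute: int = 2) -> datetime.datetime:
    """Return the next 00:02 strictly after `now` (the post-draw update slot)."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += datetime.timedelta(days=1)
    return candidate
```

In practice the simpler deployment is a cron entry (`2 0 * * *`) invoking the scraper, with this function useful only if the scheduler lives inside the long-running process itself.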
I’m looking for a data engineer who can take full ownership of a daily web-scraping workflow aimed at ongoing market research. The job centers on extracting selected data points from public web pages, transforming them into a clean, structured format, and making them available for analysis every 24 hours. Here’s what I need you to handle from end to end: • Source acquisition – fetch HTML from the URLs I provide, even when content is hidden behind JavaScript (a headless browser such as Playwright or Selenium is fine). • Parsing & cleansing – pull the specific fields I’ll list (product name, price, SKU, availability, and a time-stamp), remove duplicates, and standardize values. • Storage & delivery – load t...
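The parsing-and-cleansing stage described above (listed fields, duplicate removal, standardized values) can be sketched as a small post-processing pass over the raw scraped records. This assumes each record is a dict carrying the posting's fields plus a sortable timestamp; "latest record per SKU wins" is one reasonable dedup rule, not the only one.

```python
def dedupe_records(records: list[dict]) -> list[dict]:
    """Keep the latest record per SKU, trimming whitespace on text fields."""
    by_sku: dict[str, dict] = {}
    for rec in sorted(records, key=lambda r: r["timestamp"]):
        cleaned = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        by_sku[cleaned["sku"]] = cleaned  # later timestamp overwrites earlier
    return list(by_sku.values())
```

The headless-browser fetch (Playwright/Selenium) feeds raw records in at the top; the cleaned list is what gets loaded into storage for the daily delivery.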
...(and any related tags you know work well), vet each profile for genuine, recent activity—at least one post within the past 30 days—and capture four data points: • Name (or the public display name) • City • State • Direct link to the Instagram profile Please skip anyone who looks inactive, spammy, or clearly headquartered outside the United States. A quick scroll through their feed should confirm they are taking clients and posting new work regularly. Drop everything into a clean Google Sheet or Excel file so I can filter and import straight into my CRM. If you already use tools like IG search operators, Creator Studio, or simple scraping extensions, feel free, but manual verification is essential; quality over volume matters here....
I have a working Python script that talks to the Kalshi prediction-market API, pulls live data, and fires off trades automatically through simple web-request helpers. Functionally it looks solid from my end, but I’m not a developer and would like an expert eye on it before I trust it with larger positions. The review should cover every critical angle—accuracy of the trading logic, efficiency of each call or loop, and robust error-handling so a bad response or network hiccup never leaves an order hanging. Because the script relies heavily on APIs and a small amount of web-scraping, please verify that authentication, rate-limit handling, and data parsing follow best practices and won’t put the account at risk. Deliverables • A line-by-line code...
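One concrete pattern a reviewer would look for in this script is retry-with-backoff around every API call, so a transient bad response or network hiccup never leaves an order hanging. A minimal generic sketch (not the Kalshi API itself; `sleep` is injectable purely so the behaviour is testable):

```python
import time

def with_retries(call, attempts: int = 4, base_delay: float = 0.5, sleep=time.sleep):
    """Retry a flaky call with exponential backoff.

    Re-raises after the final attempt so failures surface loudly instead of
    leaving the trading state unknown.
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

For a live trading script the review should also check that retried calls are idempotent (e.g. order placement uses a client-generated ID), otherwise a retry can double-fill.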
...week. The role involves a mix of marketing and admin-related support tasks. The ideal candidate should be skilled in creating pitch decks and PowerPoint presentations, branding and design using Figma, and video editing. Additionally, the role includes web scraping, bookkeeping specific to Australia, and tasks requiring excellent written English. Key Requirements: - Proficiency in Figma for branding and design - Experience in creating engaging pitch decks and PowerPoint presentations - Video editing skills - Ability to perform web scraping tasks efficiently - Knowledge of Australian bookkeeping practices - Strong written English for various tasks Ideal Skills and Experience: - Previous experience as a virtual assistant in a marketing or administrative role - Strong organi...