Your guide to getting data entry done for your business
Data entry is an important task, but choosing the wrong solution can seriously harm your company's productivity.
Data Extraction is the process of extracting data from a variety of sources for further analysis. A Data Extractor is someone who helps businesses and organizations gain insight from their data and create descriptive and predictive models. They specialize in finding patterns and relationships that guide decisions and uncover meaningful information. Through carefully crafted queries and processes, our Data Extractors can transform raw data into a useful format that can be used for reporting, analytics, machine learning and more.
Here are some projects that our expert Data Extractors made real:
When you partner with an experienced team of Freelancer's Data Extractors you can access valuable insights from your data that can guide decisions, uncover opportunities and create predictive models with new data sources. Our experts can help you unlock deeper insights with advanced filtering methods and complex coding. Explore the full range of possibilities with our talented community of professionals, capable of delivering comprehensive solutions tailored to your needs.
Ready to launch your very own project on Freelancer.com? We invite you to try us out and hire our experienced Data Extractors to make your design goals a reality. Let their creativity, skill, and proficiency bring something special to your project!
From 129,474 reviews, clients rate our Data Extractors 4.9 out of 5 stars.
I need to extract SMS messages from an iPhone. Requirements: - Experience with data extraction from iPhones. - Ability to securely handle and transfer sensitive information. Ideal Skills: - Familiarity with iOS and iPhone data extraction tools. - Attention to detail and reliability.
I need to scrape videos from a website. If this is something you are experienced and comfortable doing, please respond to this project. I run multiple projects needing similar help, so I am looking for a long-term relationship so I don't have to keep posting the same ad. Let me know if you have any questions.
Building a simplified AI-powered CV generator web app for a university pilot program. Students will be able to: • Log in using their school email (magic link, restricted domain) • Upload their CV (PDF) • Automatically extract and structure their profile (experience, education, skills, languages) • Edit and complete their profile • Paste a job offer (text or link) • Generate an adapted CV based on the job requirements • Generate a tailored cover letter • Preview the result • Export a clean professional PDF This is a closed Proof of Concept for partner schools. No payments, no subscriptions, no mobile app at this stage. Preferred stack: + simple backend (Node/Supabase) + OpenAI + HTML-to-PDF generation. Clean, documented, fully transferable ...
Seeking a skilled professional to assist in refreshing our Capital IQ spreadsheets. The successful candidate must have access to Capital IQ. You just need to open the spreadsheet, let it refresh, then copy/paste values. The file is about 20MB in size.
I need a robust Python function that can log in to a password-protected site, navigate to a given page, locate the primary table, and convert it into a clean Pandas DataFrame before writing the result to CSV. The same function must work on each URL I provide, and ideally on any future page built on the same template, so please keep the approach modular and scalable. Because the pages sit behind authentication, the username and password can be hard-coded directly in the script; no interactive prompts or external files are necessary this time. Anti-blocking tactics (session persistence, realistic headers, controlled request pacing, etc.) are mandatory—I want to be able to run the notebook repeatedly without getting shut out. Deliverables • A Jupyter notebook (.ipynb) containin...
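A brief like the one above can be sketched roughly as follows. This is a minimal sketch, assuming a session-based (cookie) login; the login URL, form field names, and page URLs are placeholders, not the client's real site:

```python
import pandas as pd
import requests
from io import StringIO

def parse_primary_table(html: str) -> pd.DataFrame:
    """Return the first <table> on the page as a DataFrame.
    pd.read_html parses every table it finds; index 0 is the primary one
    on pages built from the same template."""
    return pd.read_html(StringIO(html))[0]

def scrape_table(session: requests.Session, url: str, out_csv: str) -> pd.DataFrame:
    """Fetch an authenticated page and write its primary table to CSV."""
    resp = session.get(url, timeout=30)
    resp.raise_for_status()
    df = parse_primary_table(resp.text)
    df.to_csv(out_csv, index=False)
    return df

# Usage sketch (hypothetical login endpoint and form field names):
#   s = requests.Session()                            # session persistence
#   s.headers.update({"User-Agent": "Mozilla/5.0"})   # realistic headers
#   s.post("https://example.com/login", data={"username": "U", "password": "P"})
#   scrape_table(s, "https://example.com/report", "report.csv")
```

Keeping the parse step separate from the fetch step is what makes the approach reusable across every URL built on the same template; request pacing (e.g. `time.sleep` between calls) would be added in the loop that iterates over URLs.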
Extract customer data (customer first name, last name, email address, phone number, city) from a Facebook page.
I have a stack of handwritten notes that I need accurately keyed into an Excel spreadsheet. The task is strictly data entry: you’ll be reading the physical pages I scan and transferring every detail—spelling, numbers, punctuation—exactly as they appear. No formatting tricks or content cleanup is required; fidelity to the original writing is my priority. I will supply high-resolution scans in PDF format. You return one .xlsx file with each note on its own row and the fields I specify (date, title, body text, and any numerical values) in separate columns. A quick turnaround and solid attention to detail are more important to me than fancy software; basic Excel proficiency and sharp eyes will do the job. Before we start, I’ll send a short sample of five notes. Enter ...
I have about 200 PDF pages filled with mixed text-and-numeric information that must be transferred into a clean, well-structured Excel workbook. I need the data laid out in a standard table format, letting the content itself dictate the column headers so the spreadsheet mirrors the source logically. Accuracy is critical—every figure, word, and cell must match the original without typos or transposition errors. I will be spot-checking against the PDFs, so please take the time to proof your own work before delivery. The full file set and a naming convention guide will be supplied as soon as we start, and I will be available for quick clarification if anything in the documents is unclear. Deliverable (due in five days): • One Excel file containing all 200 pages’ data, error...
Our Pain Point: We currently receive industry update emails (DIARY'S LATEST NEWS - DIARY directory) that include important information for our business outreach strategy, and we wish to organise and utilise this information for our client outreach. Our current process is incredibly manual: an employee sets aside time to review the emails we have received, clicks through to the linked articles, pulls the relevant information from each article, then tries to find the person mentioned or referenced in the article on LinkedIn and connect with them on the platform. However, reading and scanning all emails on a regular basis is very time-consuming and not possible for our team to perform weekly. Hence, our goal would be...
I have to pull many thousands of PDF files from a publicly available but poorly structured online database. The pages are slow, there are no clear download links, and navigation relies on clunky JavaScript forms, so a straightforward “save as” approach will take far too long. You will receive a text file that contains the exact filenames for every document I need. Those filenames appear in the HTML once the record is loaded, so they can be used as reliable anchors for the scrape. The order in which the files arrive does not matter; accuracy and completeness do. I expect an automated approach—Python with Selenium, Playwright, Scrapy, or any comparable tool is fine—as long as it can work around the site’s fragile structure and occasional timeouts. If headles...
PROJECT TITLE: Extract Historical Hourly Forecast Temperature Data (Open-Meteo) PROJECT DESCRIPTION: I require extraction of historical hourly forecast temperature data from the Open-Meteo Previous Runs API. The goal is to retrieve archived hourly forecast temperatures for a fixed list of US locations from: START DATE: 2024-01-01 END DATE: [INSERT TODAY’S DATE] No UI, no dashboard, no hosting required. This is a data extraction and structuring task only. ------------------------------------------------------------ DATA SOURCE: Open-Meteo Previous Runs API Base endpoint: ------------------------------------------------------------ REQUIRED VARIABLES (HOURLY): Retrieve the following hourly fields: - temperature_2m - temperature_2m_previous_day1 - temperature_2m_pr...
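A request like the Open-Meteo one above typically reduces to a parameterised GET per location. The sketch below is an assumption-heavy outline: the base endpoint is left as it appears in Open-Meteo's general API conventions and the parameter names follow their documented style, but both should be verified against the Previous Runs API docs before use (the brief itself elides the exact endpoint):

```python
import requests

# ASSUMPTION: endpoint and parameter names mirror Open-Meteo's usual scheme;
# confirm against the official Previous Runs API documentation.
BASE_URL = "https://previous-runs-api.open-meteo.com/v1/forecast"

HOURLY_FIELDS = [
    "temperature_2m",
    "temperature_2m_previous_day1",
    # ...remaining previous_dayN fields as listed in the brief
]

def build_params(lat: float, lon: float, start: str, end: str) -> dict:
    """Assemble the query parameters for one location and date range."""
    return {
        "latitude": lat,
        "longitude": lon,
        "start_date": start,
        "end_date": end,
        "hourly": ",".join(HOURLY_FIELDS),
        "timezone": "UTC",
    }

def fetch_hourly(lat: float, lon: float, start: str, end: str) -> dict:
    """Fetch the archived hourly forecast series for one location."""
    r = requests.get(BASE_URL, params=build_params(lat, lon, start, end), timeout=60)
    r.raise_for_status()
    return r.json()["hourly"]
```

Looping this over the fixed list of US locations and writing each response to CSV or Parquet covers the "extraction and structuring only" scope.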
I'm seeking a professional to recover lost photos and videos from a computer. The recovery is for personal use, and there's no rush on the timeline. Ideal Skills and Experience: - Expertise in data recovery, specifically for photos and videos - Experience with recovery tools and software - Ability to handle various file formats and systems - Attention to detail and reliability Please share your recovery process and any relevant experience. Disk 1 Unknown not initialized
Project Description: We are looking for an experienced developer to build an automated solution that will: Receive customer orders sent by email (PDF format). Automatically extract relevant data from the PDF (customer name, products, quantities, prices, references, etc.). Automatically integrate this data into our management software, kiubi.com. Ensure a reliable, secure, and stable system (error handling, logging, notifications in case of issues). Technical Requirements: Automatic connection to a dedicated email inbox. Automatic detection and processing of PDF attachments. Intelligent data extraction (OCR if required). Integration via API (if available) or any compatible method with Qubee.com. Testing and validation system. Clear documentation of the solution. Possibility for future maint...
I have a batch of PDF files—about 10 – 20 pages in total—holding roughly 800 – 1000 purely numeric entries that must be transferred into a well-formatted Excel sheet. The whole task has to be wrapped up within 5 – 7 days, and I cannot tolerate transcription errors. Solid Excel skills are therefore non-negotiable; you’ll need to set up proper number formats, run quick validations, and leave the file ready for immediate analysis. My budget for this job is ₹1,500 – ₹2,000. To help me pick the right person fast, please mention similar PDF-to-Excel work you’ve completed and attach a brief sample that demonstrates your accuracy on numeric data. Deliverables: • One error-free Excel workbook containing all 800–1000 numeric records from t...
I have ID information in an Excel spreadsheet. I need to gather evidence for those IDs from a portal and save the evidence (screenshots) in a shared folder.
I need 1-10 CSF files converted to vCard 2.1 format. The CSF files contain contact information. Ideal Skills and Experience: - Familiarity with CSF and vCard formats - Experience in file conversion tasks - Attention to detail to ensure accuracy in contact information
Hello, I have an Excel file with 5000 rows containing the names and websites of 5000 companies. What I need are the personal email addresses for the import/export departments. You can use Apollo, RocketReach, Hunter, etc. Can you help me? Thank you.
I need a robust, repeatable scraper that gathers every English-language review for roughly one million hotel listings and stores them in a clean JSON database. The first milestone covers the initial 10 000 reviews: once those entries are parsed accurately I will release payment and we can scale the same solution across the remaining content. Database requirements • JSON output only, one record per review • Mandatory fields: hotel_id, reviewer_name, review_date, rating_score, full_review_text Source sites The exact platform will be shared privately after award, so structure the code to accept multiple endpoints with minimal change. Deliverables for each milestone 1. Fully documented scraping/parsing script (Python, Node, or another well-supported language) 2. Corres...
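For a job like this, the record schema is the contract. One way to enforce it, sketched under the assumption that JSON Lines (one JSON object per line) satisfies the "one record per review" requirement, which streams well at million-review scale:

```python
import json

# The five mandatory fields named in the brief
REQUIRED = ["hotel_id", "reviewer_name", "review_date",
            "rating_score", "full_review_text"]

def validate_record(rec: dict) -> dict:
    """Reject any review record that is missing a mandatory field."""
    missing = [f for f in REQUIRED if f not in rec or rec[f] in (None, "")]
    if missing:
        raise ValueError(f"review record missing fields: {missing}")
    return rec

def append_review(path: str, rec: dict) -> None:
    """Append one validated review as a single JSON line."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(validate_record(rec), ensure_ascii=False) + "\n")
```

Validating at write time means the 10,000-review milestone can be checked mechanically (count lines, confirm every record parses and carries all five fields) before scaling up.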
I already have a Zapier automation that hands a submitted Tally-form URL to a ChatGPT step, but right now GPT is not able to see everything on the website. I need the workflow upgraded so the ChatGPT step can “see” absolutely everything on that URL—headers, body copy, layout structure, CTAs, images, and any other visual element—before it starts writing the internal report that follows. Here is what I’m after: once a new Tally response triggers the Zap, an additional step (or steps) should grab the complete HTML of the provided link, chunk it so it fits within GPT’s token limits, and then pass the full context into my existing ChatGPT action. The end result is richer, more accurate analysis that I’ll feed straight into our internal reports. Feel...
I have a collection of websites that hold the textual information I need consolidated into a single, well-structured dataset. Rather than copying the material manually, I want the process handled through reliable web-scraping tools so the capture is fast, consistent, and repeatable. Your task is straightforward: • Build (or adapt) a scraper that targets the pages I specify, pulls only the relevant text, and skips ads, navigation links, and other noise. • Deliver the harvested content in a clean CSV or Excel file with clear column headings; if you prefer a database export, let me know and we can adjust. • Include the finished script or notebook so I can rerun the extraction later. Accuracy and formatting matter more to me than sheer speed, so please allow time for basic...
I need investment performance data scraped from a research website and organized into an Excel file. It needs to be set up for scraping on a regular basis. Requirements: - Experience with web scraping tools and techniques - Ability to extract and format data accurately - Proficiency in Excel - Attention to detail and reliability Ideal Skills: - Familiarity with Python, BeautifulSoup, or similar scraping libraries - Prior experience with financial data - Strong data handling and manipulation skills Please provide a sample of your previous work related to web scraping.
I need around 4,000–5,000 mixed (text + numerical) records extracted from several PDFs and transferred into a single, cleanly formatted Excel workbook. I will send you the exact column headers before you begin, so the file must follow that structure precisely—no deviations, no extra or missing fields. Quality is everything here. Every value has to match the source, spelling must stay intact, and numerical figures must keep their original precision. Please allow for date, currency, and percentage formatting where it makes sense; consistency across the sheet will be checked during verification. When you reply, include: • a short sample of a similar PDF-to-Excel job you have completed, • how long you realistically need within the 5–7 day window, and • a brie...
Scrape Google For Emails For Cheap
I have a collection of single-column scanned text documents that I need converted into a clean, fully searchable Excel file. The pages are almost entirely typed, but a handful contain small handwritten notes that must be captured as well. Your task is to run accurate OCR, transfer every line of typed text into logically structured spreadsheet columns, and add the handwritten snippets in a separate field or clearly mark them so I can review later. No fancy layout reproduction is necessary—clarity and searchability are what matter. Deliverables • One Excel workbook containing all extracted text, ready for filtering and formulas • Handwritten content entered or flagged in its own column • A quick quality check to ensure the spreadsheet matches the source scans Plea...
I need help extracting text data from CSV files and inputting it into a specified format or system. Requirements: - Experience with data extraction and manipulation - Proficiency in handling CSV files - Attention to detail to ensure accuracy - Ability to work within the specified budget Ideal Skills: - Data entry - Familiarity with spreadsheet software (e.g., Excel) - Basic understanding of data organization principles Looking for freelancers who can complete the task efficiently and accurately. This is the full requirement for the project: Your Operational Requirements (Summary) 1) Quote & Job Intake Create quotes (Q-files) and job packs (J-files) as you do today Save them into SharePoint folders System automatically reads them and creates: Quote records Assets Tasks per as...
I have a collection of PDFs and scanned images that contain a mix of straight text and embedded tables. I will share a clear formatting guide so you know exactly how each column and cell should look in Excel or Google Sheets. The job is to transfer roughly 500–1,000 records into a single, well-structured spreadsheet, preserving every character, number, and line break exactly as they appear in the originals. A good typing speed will certainly help, but what matters most is accuracy: no spelling slips, no misplaced decimals, and consistent formatting from the first row to the last. I’ll review the file against the source documents, so please double-check your work before handing it in. Deliverables • One Excel workbook (.xlsx) or Google Sheet, fully formatted to the guid...
I want to build an in-house AI system that lets my team drag-and-drop both KYC and Income documents and, in one click, receive a neatly formatted PDF report. The PDF must always include: • Eligibility check (based on the rules I will supply) • Salary details in a clear monthly breakdown table • Current obligations in a similar table • Pending documents list • Pending form details • Probable queries for the credit team All eligibility logic, standard document lists and a library of past queries will be provided so the model can be fine-tuned to our exact policies. Accuracy and consistency matter more than fancy UI; I simply need a reliable back-end that ingests scans or PDFs, extracts the data, applies the rules and returns a single consolidated...
About the Project (Only Mumbai-based Freelancer, no Agency ) We are building an end-to-end Purchase Order (PO) to Invoice Automation System. Customer POs are received via email in PDF format and currently processed manually into Excel and ERP. We are implementing two synchronized Robotic Process Automation (RPA) bots to: Extract data from POs Validate and structure data Enter data into ERP Generate invoices Email invoices to customers Maintain audit logs The system must operate 24/7 with minimal human intervention. Role Overview We are seeking an experienced RPA Developer to design, develop, test, and deploy automation bots that streamline the order processing lifecycle. The ideal candidate has hands-on experience in PDF data extraction, ERP automation, exception handling, and produ...
I have a Gmail account where many incoming messages show only my own address in the “To” field, yet I know the sender included additional recipients. I want a reliable way to uncover every hidden address those messages actually reached—across my whole mailbox, not just one or two samples—and export them for later use. Here’s exactly what I need: • A repeat-able solution that scans all selected emails in my Gmail inbox and programmatically pulls every address found in the full header (including any Bcc or undisclosed recipients Google preserves). • Output as a clean, deduplicated list—CSV or plain text is fine. • The process must work on Windows-based machines; I use Windows 10. If you prefer a small cloud utility or a local Python/P...
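The header-mining part of the Gmail request above can be done entirely with the Python standard library once the raw messages are downloaded (e.g. via IMAP or a Google Takeout export). A minimal sketch; note that Gmail generally only preserves whatever recipient headers the sender's server actually transmitted, so truly undisclosed Bcc recipients may not be recoverable:

```python
import email
from email.utils import getaddresses

def addresses_in_message(raw_bytes: bytes) -> set:
    """Collect every address found in the recipient-related headers
    of one raw RFC 822 message, lowercased and deduplicated."""
    msg = email.message_from_bytes(raw_bytes)
    fields = []
    for header in ("To", "Cc", "Bcc", "Delivered-To", "X-Original-To"):
        fields.extend(msg.get_all(header, []))
    return {addr.lower() for _, addr in getaddresses(fields) if addr}
```

Wrapping this in an `imaplib` loop that fetches each selected message (`RFC822` or `BODY.PEEK[HEADER]`), unions the sets, and writes the sorted result to CSV gives the repeatable, Windows-friendly workflow described.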
I’m ready to retire our old GoldMine setup and have every piece of customer information living smoothly inside Zoho CRM. The move must include all contacts along with their names, phone numbers, email addresses, the notes and history attached to each record, plus every custom field we have built over the years. I’ll rely on you to: • Extract the selected data from GoldMine, map it accurately to Zoho’s structure (creating the same custom fields where needed), and import it without duplicates or data loss. • Preserve the relationships so that notes and historical interactions stay linked to the right contact. • Provide a brief mapping sheet and run a post-migration check with me to confirm everything landed in the right place. If you have a proven...
Hey! I’m looking to hire an experienced developer to build a universal product-detail scraping pipeline that takes a product URL (any website) and returns a complete structured product record. This is not a “simple HTML parse.” Many target sites are React/Next/Vue, load content via XHR/GraphQL, hide details behind tabs/accordions/modals, and lazy-load images/PDFs. The solution needs to reliably extract everything a human can see on the page, plus the underlying data used to render it. What the scraper must do (high level) Given a product URL, the pipeline should: Load the page like a real user (handle cookies/overlays). Capture all content from multiple sources (DOM + network + interactions). Use GPT API strategically to increase accuracy (field mapping, variant ext...
I need help collecting a clean, well-structured list of Twitter accounts that consistently post about AI, ideally tagged by AI category (open source, ML, general AI). Instead of handing you a fixed list, I’ll define the selection rules (for example: minimum follower count, specific AI-related keywords, recent activity, etc.) - minimum follower count 5,000, and the account must have at least multiple posts with 100+ likes/retweets. Once those criteria are agreed on, you’ll locate the matching profiles and extract two data points per account: • the public profile bio • the direct profile link (around 1M+ profiles) Please return everything in a single CSV file, one row per influencer. Feel free to use Python, Tweepy, Twitter API v2, ScraperAPI, or another reliable method—as long...
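Whatever collection method is used, the selection rules above are easiest to agree on when written as a plain filter. A sketch, with field names (`followers_count`, `like_count`, `retweet_count`) chosen as placeholders for whatever the Twitter/X API or scraper actually returns:

```python
def matches_criteria(profile: dict, posts: list,
                     min_followers: int = 5000,
                     min_engagement: int = 100,
                     min_hits: int = 2) -> bool:
    """Apply the brief's rules: 5,000+ followers and multiple
    posts with 100+ combined likes/retweets."""
    if profile.get("followers_count", 0) < min_followers:
        return False
    hits = sum(
        1 for p in posts
        if p.get("like_count", 0) + p.get("retweet_count", 0) >= min_engagement
    )
    return hits >= min_hits
```

Keeping the thresholds as parameters lets the client tighten or relax the rules without touching the scraping code.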
I have a single Instagram Reel that was publicly available for roughly a year before being removed or placed in archive. I saved every trace I could—direct links, full-length screen recordings, and the search-engine cache hits that still reference the post. What I now need is a technical reconstruction of its viewership data. Your objective is to extract and corroborate: • Number of views over time (ideally plotted or tabled) • Any available demographic clues about who watched it • Engagement rates the Reel achieved while live Because the original URL now returns a 404, I expect most of the intel will come from open-source techniques: exploring Web archives (Wayback Machine snapshots, Google cache), digging into any residual JSON, and cross-referencing with Ins...
I have two source spreadsheets that I need merged and enriched through automated scraping: • “File 1” – 170 k Spanish local businesses with emails • “File 2” – 65 k additional businesses with websites only Phase 1 – Email extraction Using a Python script and well-known libraries (requests, BeautifulSoup, Scrapy or similar), scan every site listed in File 2, capture all working email addresses you can locate, then append them to the corresponding rows so I can produce a unified “File 3”. Phase 2 – Offer harvesting Next, visit each live site in File 3. Where an offer, deal or promotion is publicly displayed, record the details in a fresh Excel sheet with these exact columns: Business ID | Business Name | Offer...
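Phase 1 of the brief above hinges on one small piece: pulling addresses out of raw HTML. A minimal sketch; the simple pattern below deliberately ignores obfuscated forms ("info [at] dominio.es"), which a production pass would need to handle:

```python
import re

# Plain-text email pattern; obfuscated addresses need extra handling
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html: str) -> set:
    """Return every plain-text email address found in a page, lowercased
    and deduplicated (mailto: links are matched too, since the address
    itself still appears in the href)."""
    return {m.group(0).lower() for m in EMAIL_RE.finditer(html)}

# Usage sketch for Phase 1: for each website in File 2, fetch the homepage
# (and a likely contact page) with requests, run extract_emails, and append
# the results to the matching row before writing the unified "File 3".
```

For 65k sites, pairing this with polite pacing and per-site error handling matters more than the regex itself, since many of the listed domains will be dead or slow.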
I have a stack of paper documents that contain both text and numerical information. I need every line carefully transcribed and organised into a clean, well-structured Excel spreadsheet. Please keep the original order of the records, label the columns clearly, and double-check totals or calculations so the numbers match exactly what appears on the page. Accuracy and legibility are more important to me than speed, but I would like regular updates so I can spot-check your work as we go. Once complete, I expect a single .xlsx file that is ready for analysis with no stray characters or formatting issues.
I need assistance entering debtor financial records into my Online Bankruptcy portal. This includes information from both paperwork and a thumbdrive. Requirements: - Accurate data entry - Organizing and categorizing financial records - Experience with online portals and financial documents Ideal Skills: - Attention to detail - Experience with financial data and legal documents - Proficiency in handling multiple file formats Please provide a clear and organized upload of all required information.
I need a data scraping expert to help generate leads from a list of websites. Requirements: - Scrape contact information, product listings, or user reviews (to be specified). - Work from a provided list of URLs. Ideal Skills: - Experience with data scraping tools and techniques. - Ability to handle multiple URLs and extract data accurately. - Attention to detail and reliability. Please share your portfolio and relevant experience.
Industrial Automation Product Data Extraction, Deduplication & Structured Image Collection Project Overview We are an industrial automation parts distributor building a structured product database to support inbound enquiries and SEO growth. We require an experienced data extraction specialist to: Extract structured product data from major industrial / electronic component distributor websites Identify duplicate manufacturer part numbers across multiple sources Merge all unique information into a single consolidated dataset Extract and organise all available product images per part number Deliver a clean, deduplicated, production-ready dataset This project includes: Data extraction Normalization Deduplication Intelligent merging Structured image collection and organisation...
Every event that I have, I take an average of 30 event orders, pull them apart and chronologically construct a production plan for moving in and setting up an event and moving it out at the end of the event. I would like to drop a text-based Event Order PDF into a dedicated local folder. The moment that file lands, I need an agent to wake up, read the document, and do three things automatically: 1. Pull out the essentials—event dates and times, the title of each order, and the Event Order Number. 2. Duplicate each order so it appears "chronologically" twice in the output: once for the install date and once again for the strike / move-out date. 3. Feed all rows into a single, combined Excel sheet that becomes my Production Plan (this is already formatted and I can send...
Project Description: Find school districts and charter schools who use a specific vendor for a large list of domains. I am seeking an experienced web scraping specialist to improve our Python script to analyze a large list of school district websites (approximately 4000+ URLs) and identify those that show a specific link on any page found in their sitemap. The primary method of identification must be to scan the websites for specific, known vendor links. Deliverables Required 1. A Production-Ready Python Script (.py file): The script must be commented, easily configurable, and capable of reading the provided CSV list, performing the scan, and generating the output CSV. It should handle timeouts and basic error handling gracefully. 2. The Final Results (CSV/Excel File): A c...
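The core check in the vendor-link project is a per-page test: does any anchor on the page point at a known vendor domain? A standard-library sketch (the vendor domains themselves are whatever the client supplies):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Gather every href from <a> tags on one page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def has_vendor_link(html: str, vendor_domains: set) -> bool:
    """True if any link on the page points at one of the vendor domains."""
    parser = LinkCollector()
    parser.feed(html)
    return any(
        urlparse(h).netloc.lower().removeprefix("www.") in vendor_domains
        for h in parser.hrefs
    )
```

The surrounding script would fetch each district's sitemap, run this check over every listed page, and stop early on the first hit to keep the 4000-site scan fast.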
I need to confirm whether my wife's iPhone placed a voice call immediately before a video I shot in Hawaii just after midnight on September 7, 2025. The most likely destination is, but I want definitive proof—exact dialed number, start-to-end timestamps, and total duration. You may work with the raw data in whichever way is easiest: I can grant temporary, read-only access to my T-Mobile account, or pull the usage log myself and send you the file. Because I’m unsure of the precise moment the video begins, I’ll also need help extracting or confirming the video’s internal timestamp so we can align it with the carrier records. (A forensic review of the video itself was done by Md Nazmul H. in October 2025; that report is available if it helps.) Deliverables: • ...
I need a small utility that takes the Daily Racing Form in PDF format for a chosen date, scans every race at every track, and pulls four pieces of information for every horse: • horse name • number of starters in that race • second-call position • final time With those values the script must run a straightforward (A + B) / C calculation that I will define exactly once the job begins. When the data for all horses is processed, the program should compile everything into a clean, easy-to-read PDF report. Please build the solution so I can drop in new Racing Form PDFs and generate fresh reports without extra tweaking. A Python workflow that uses libraries such as pdfplumber, Camelot or similar is fine, provided it copes with the occasional formatting quirk that R...
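The Racing Form utility above splits naturally into extraction and calculation. A sketch under stated assumptions: `pdfplumber` handles the text extraction, and the `(A + B) / C` formula is kept as a placeholder since the client will define the exact fields only once the job begins:

```python
def score(a: float, b: float, c: float) -> float:
    """Placeholder for the client's (A + B) / C calculation;
    the real definitions of A, B and C arrive at job start."""
    if c == 0:
        raise ValueError("C must be non-zero")
    return (a + b) / c

def extract_lines(pdf_path: str):
    """Yield text lines from every page of the Racing Form PDF.
    Mapping lines to horse name / starters / second-call position /
    final time depends on the Form's column layout and will need
    tuning for its occasional formatting quirks."""
    import pdfplumber  # third-party; Camelot is an alternative for tables
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            yield from (page.extract_text() or "").splitlines()
```

Generating the output report with a library such as `reportlab` would complete the drop-in-a-PDF, get-a-report workflow the brief asks for.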
I need an .xlsm workbook whose VBA macro fetches product data from both and lowes.com. When I type a valid item or model number into a row, the code should automatically pull back: product name, full description, regular price, sale price (if available), brand, product type/category, and the main image (inserted into the sheet or stored in an Image column). I work comfortably with VBA, so a concise, well-commented routine is all I need—no step-by-step user guide. The workbook must stay self-contained, relying only on standard references such as Microsoft XML, HTML, or WinHTTP libraries; please avoid external add-ins or Python bridges. Deliverables: • Finished macro-enabled Excel file (.xlsm) ready to test with my own SKU list • Clearly commented VBA code so I can...
We are building a full internal marketplace analytics web system, not just a reporting script. The system is designed to combine competitive intelligence with internal sales and stock analytics in a single interface. Functional Requirements The system must provide the following capabilities: 1. Product and SKU structure - Each product must be split into individual SKUs based on flavor and volume. - All analytics and reports are built at the SKU level. 2. Our product analytics (primary focus) - Current stock levels (total and per SKU). - Sales volume for selected periods (daily / weekly / monthly). - Reorder recommendations based on stock thresholds and sales dynamics. - Revenue calculations per product and per SKU with period filtering. 3. Competitive analytics - Automated collection o...
Project Background We are building a financial data aggregation and risk control verification system. We need to retrieve account balances, UPI IDs, and transaction history from major Indian UPI wallet accounts for multi-account fund monitoring and transaction status verification. ⸻ Core Requirements All data retrieval must be done using API-level HTTP requests: • For wallets with a Web interface: analyze and call their Web API • For App-only wallets: capture and replicate API calls via traffic analysis API-level implementations only. ⸻ Responsibilities 1. Implement mobile number + OTP login flow (manual input allowed) with session persistence (to reduce repeated logins) 2. Analyze the API call flow of the target wallet (Web or App) 3. Extract account balance, UPI ID, and ...
I have an SD card full of irreplaceable shots that suddenly refuse to open—no thumbnails, no previews, nothing. A professional shop already ran their standard recovery tools but came back empty-handed, so I’m looking for deeper-level expertise. The card mounts and can be cloned; the issue is strictly file-level corruption, not physical damage. I can supply a raw image (dd) or, if you prefer, ship the card itself. What I need from you is straightforward: use whatever combination of low-level hex work, PhotoRec/TestDisk, R-Studio, or your preferred forensics workflow to extract as many intact, viewable photos as possible and return them to me in their original resolution and format. Deliverables • Recovered image files, clearly organised • A short report ou...
I'm trying to run the attached Jupyter notebook (.ipynb) script to get info from a website, but I can't understand why it doesn't work. I need this script fixed, plus pagination added so it fetches around 2,400 records from YellowPages. I only use Jupyter.
I need to automate queries to the following SRI page: When a RUC or national ID number is entered, I must retrieve and save these exact fields to a JSON file: • Taxpayer status (Estado de Contribuyente) • Company name (Razón social) • “Ghost taxpayer” (Contribuyente fantasma) indicator • Primary economic activity Technical requirements: – The script must run on demand, i.e., I can execute it manually whenever I need it with a list of RUCs as input. – The output must be one JSON file per queried ID. – Include a brief README with installation and usage instructions. Acceptance criteria: 2. The average time per query must not exceed a reasonable limit, to avoid bl...