Web Scraping Jobs

91 jobs were found based on your criteria

  • Hourly – More than 6 months – Less than 10 hrs/week – Posted
    I would like to find someone for a long-term job helping me as needed on a part-time basis. For your first assignment: it's a spreadsheet of 418 addresses; you need to look each one up on a map to see if it has fiber internet service. Please contact me on Skype (flooflangeroo) and send me your Google account and your PayPal address. I'd like to first do a quick 30 minutes of work for $2 to see ...
  • Hourly – 1 to 3 months – 30+ hrs/week – Posted
    I need to scrape data from PDFs into a MySQL database. I currently have a Perl script that does the job; I would like to create a new version, a PHP script, to do this. * I have a PDF file and a SQL file with the correct data; the script must save data into a new database, and it should match my SQL file EXACTLY. * The text in the PDF is selectable. This job should not take more than ...
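The verification step this posting describes (extracted data must match a reference SQL file exactly) can be sketched as below. The table layout and the rows are hypothetical, sqlite3 stands in for the MySQL target, and real PDF extraction would use a PDF library since the posting notes the text is selectable:

```python
import sqlite3

# Hypothetical rows "extracted" from the PDF.
extracted_rows = [
    (1, "Alpha", "10.00"),
    (2, "Beta", "12.50"),
]

# Hypothetical rows loaded from the client's reference SQL file.
expected_rows = [
    (1, "Alpha", "10.00"),
    (2, "Beta", "12.50"),
]

def load_and_verify(rows, expected):
    """Save rows into a new database, then check they match the reference EXACTLY."""
    conn = sqlite3.connect(":memory:")  # stand-in for the MySQL target
    conn.execute("CREATE TABLE items (id INTEGER, name TEXT, price TEXT)")
    conn.executemany("INSERT INTO items VALUES (?, ?, ?)", rows)
    stored = conn.execute("SELECT id, name, price FROM items ORDER BY id").fetchall()
    conn.close()
    return stored == expected

print(load_and_verify(extracted_rows, expected_rows))  # → True
```

Comparing the stored rows against the reference set, rather than only counting them, is what catches the column-level mismatches this client cares about.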
  • Hourly – More than 6 months – 30+ hrs/week – Posted
    I have a large amount of data that I need searching on the internet for alternative numbers for. We utilise social media, google, bing and directory sites to check for new or mobile telephone numbers for the sites we cannot contact. Applicants must be very good on the internet as we require the numbers to be found and also have the ability to check the validity of the number.
  • Hourly – 3 to 6 months – 10-30 hrs/week – Posted
    Data Scraper – Websites, Feeds, APIs. We are looking for someone who has experience scraping structured data from the web. The data will have multiple sources and, depending on the source, could be accessible through an API, XML feed, JSON, or only raw HTML from the frontend. We will likely require multiple scrapers over time. Each scraper will need to be integrated into our web-based platform, with any options/settings controllable from our operator dashboard. The data gathered in each ...
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    I'm interested in creating a scraping script for a website. I'd prefer to use Ruby and Nokogiri, but am open to other options. Ideally, you would log in to my Mac to help set up the script. I'm looking for someone with good English skills and the patience to help me get more familiar with scraping.
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    We are trying to scrape pricing information from our partner's website, since they don't have an API. Once you log in to the website, you can search for products and their prices. We would like to scrape 3 pieces of data for each product: 1. Item No 2. Our Price 3. MRP. We want to do this just once every day and get the results in an Excel sheet. Please reply back with your thoughts on it and see if ...
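The extract-three-fields-to-a-spreadsheet task above can be sketched with the standard library alone. The HTML snippet, class names, and `data-item` attribute are all hypothetical (the real markup would dictate the pattern), the login step is omitted, and CSV is used here because Excel opens it directly:

```python
import csv
import io
import re

# Hypothetical product-page snippet; a real daily run would fetch the page
# after logging in (e.g. with a session cookie).
SAMPLE_HTML = """
<div class="product" data-item="A100">
  <span class="our-price">99.00</span><span class="mrp">120.00</span>
</div>
<div class="product" data-item="B200">
  <span class="our-price">45.50</span><span class="mrp">60.00</span>
</div>
"""

PRODUCT_RE = re.compile(
    r'data-item="([^"]+)".*?our-price">([^<]+)<.*?mrp">([^<]+)<', re.S
)

def extract_rows(html):
    """Pull (Item No, Our Price, MRP) triples out of the page."""
    return PRODUCT_RE.findall(html)

def to_csv(rows):
    """Write the daily results as CSV text; Excel opens this directly."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Item No", "Our Price", "MRP"])
    writer.writerows(rows)
    return buf.getvalue()
```

Scheduling the script once per day (cron, Task Scheduler) covers the "just once every day" requirement without keeping anything running.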
  • Hourly – More than 6 months – 10-30 hrs/week – Posted
    Using the Yellow Pages as a guide to business categories, we want to scrape websites for local business emails. This would be ongoing if the emails received are well targeted for marketing, not general in nature.
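The email-harvesting core of a job like this is usually a regex pass over page text. A minimal sketch, with a deliberately loose pattern that catches obvious contact addresses rather than fully validating RFC 5322, and a sample string invented for illustration:

```python
import re

# Loose email pattern: good enough for harvesting visible contact
# addresses from page text, not a full RFC 5322 validator.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(page_text):
    """Return unique email addresses in first-seen order."""
    seen, out = set(), []
    for match in EMAIL_RE.findall(page_text):
        if match.lower() not in seen:
            seen.add(match.lower())
            out.append(match)
    return out

sample = 'Contact <a href="mailto:info@example.com">info@example.com</a> or sales@example.com'
print(extract_emails(sample))  # → ['info@example.com', 'sales@example.com']
```

Deduplicating case-insensitively matters for the "target marketed" requirement: the same address often appears in both a `mailto:` link and the visible text.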
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    We need someone to compile lists of contact data to fit our specific requests. This can be done either by scraping directories or by finding the contacts manually elsewhere on the Web. We currently have and use a ZoomInfo Pro account, so we do not need someone who uses ZoomInfo, as this would be useless to us. We also currently use LinkedIn and SalesLoft, and someone who has access to one or more LinkedIn accounts with considerable UK contacts could be ...
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    I am looking for a script in any language (preferably Python) that will: (1) log in to Facebook (I need the ability to log in using different accounts); (2) prompt me for a Facebook URL (or Facebook ID); (3) generate a CSV of all of that user's friends in the following columns: (Origin) ID, (Origin) Username/Vanity, (Origin) Name, (Friend) ID, (Friend) Username/Vanity, (Friend) Name. For example (hypothetical): I enter the Facebook URL "www.facebook.com/john.doe" with userID ...
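The output side of this posting — the exact six-column CSV — can be sketched independently of the Facebook login, which this fragment deliberately omits. The record dictionaries and their keys are hypothetical stand-ins for whatever the login/collection step would return:

```python
import csv
import io

# The column layout the posting specifies.
COLUMNS = [
    "(Origin) ID", "(Origin) Username/Vanity", "(Origin) Name",
    "(Friend) ID", "(Friend) Username/Vanity", "(Friend) Name",
]

def friends_to_csv(origin, friends):
    """origin: dict with id/vanity/name keys; friends: a list of such dicts.

    Each output row repeats the origin user's details alongside one friend,
    as the posting's column list requires.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(COLUMNS)
    for f in friends:
        writer.writerow([
            origin["id"], origin["vanity"], origin["name"],
            f["id"], f["vanity"], f["name"],
        ])
    return buf.getvalue()
```

Separating the CSV writer from the collection step also makes the "different accounts" requirement easier: whichever session gathers the friend list, the same writer formats the result.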