Web Crawler Jobs

33 jobs were found based on your criteria

  • Hourly – 3 to 6 months – Less than 10 hrs/week – Posted
    We are looking for someone to source products for our Amazon business by comparing "buy" prices from online retailers to "selling" prices on Amazon's website. The leads you find will be plugged into an Excel spreadsheet and passed along to us. Please review the attachment for an example of our Excel spreadsheet. If you feel you qualify, please include an Excel spreadsheet showing some of the work you have done. We are looking for people with specific experience sourcing ...
  • Hourly – 1 to 3 months – Less than 10 hrs/week – Posted
    1. Design a web scraping tool to go to a website and obtain all of its store locations. 2. Once we have the store locations, this information will be used to go to a county's website to obtain the name of the ownership entity. 3. Once we have the ownership entity, we need to go to a secretary of state's website to get the address of the ownership entity. (A rough Python sketch of this three-stage pipeline follows the listings below.)
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    Hi, I am currently working on a background service website; however, it uses an external service. To speed up the website and cut the costs of retrieving the data, I would love to have a crawler of some kind that can fill my database with people in the USA plus background info. Please give me advice on this and tell me how you would retrieve the data. Thanks, Johan
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    Looking for a content scraper for the Etsy platform. I currently have a web app using https://github.com/paquettg/php-html-parser, so you must use this as your method for crawling and scraping. I don't need you to perform the scrape, I only need the script.
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    The purpose is to get personal contact information (leads) from recruiters, especially from companies currently hiring. A. Instructions ----------------- I. The company should fulfill the following: 1. The company does not use an online applicant management tool (a hint could be that there is no link to create a profile for the candidate). 2. The company should be German-speaking, i.e. have a location in Germany, Austria or Switzerland. II. Personal contact information shall include at least: 1. Source of contact information, e.g. stepstone.com ...
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    Looking for someone with strong skills in website scraping, and preferably also some experience using ElasticSearch, to create a cloud-based web scraping solution that regularly scrapes specific sites and puts the normalised data into a cloud-hosted ElasticSearch instance. Initially thinking of using Scrapy (Python) hosted on ScrapingHub, but would consider other cloud-hosted scraping alternatives. The initial requirement is for 10 sites, but there are plans for a much larger number. (A sketch of a Scrapy spider feeding ElasticSearch appears after the listings below.)
  • Hourly – More than 6 months – 10-30 hrs/week – Posted
    We are a real estate company looking for a tech-savvy virtual assistant who can support us while we are expanding. You should be fluent in spoken English. Since we're based in the US and have clientele from all over the States, we require an American accent. Besides speaking, the following are the key qualities we are looking for in a virtual assistant. - Quick learner and punctual (we value quality time spent; irregular logins will not be acceptable) - Excellent writing skills - Be learned with ...
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    I have a list of 187 URLs that go to a search results list. I need each of the rows that match specific criteria clicked on, and the following data captured: name, company name, address, etc. Those who can program a scraper can probably get this done very quickly, though those who can transcribe are OK too. Please give your best pricing. (A short Python sketch of this workflow follows the listings below.)
  • Hourly – Less than 1 week – Less than 10 hrs/week – Posted
    I need an expert to create a reliable Scrapy-based crawler. This crawler should crawl and analyse ALL pages of a given project. For each page found within the given project, different elements should be extracted and collected, e.g. all links (<a>) on the page and all data for each link, such as href, rel, CSS class, title and anchor text, plus all headings H1-H6 and the text within <h1>...</h1>. The crawler should be very reliable when crawling a huge number of different websites. (A sketch of such a spider appears after the listings below.)
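
For the store-ownership posting, a minimal Python sketch of the three-stage lookup it describes might look like the following. The retailer URL, the county and secretary-of-state search endpoints, and every CSS selector are assumptions, since the posting names none of the sites involved; requests and BeautifulSoup stand in for whatever scraping stack is actually chosen.

import requests
from bs4 import BeautifulSoup


def get_store_locations(retailer_url):
    # Stage 1: pull every store address from the retailer's store-locator page.
    soup = BeautifulSoup(requests.get(retailer_url, timeout=30).text, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(".store-address")]  # placeholder selector


def get_ownership_entity(county_search_url, address):
    # Stage 2: look the address up on the county property-records site.
    resp = requests.get(county_search_url, params={"q": address}, timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    owner = soup.select_one(".owner-name")  # placeholder selector
    return owner.get_text(strip=True) if owner else None


def get_entity_address(sos_search_url, entity_name):
    # Stage 3: look the owning entity up on the secretary of state's business search.
    resp = requests.get(sos_search_url, params={"name": entity_name}, timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    addr = soup.select_one(".registered-address")  # placeholder selector
    return addr.get_text(strip=True) if addr else None


def run_pipeline(retailer_url, county_search_url, sos_search_url):
    results = []
    for address in get_store_locations(retailer_url):
        owner = get_ownership_entity(county_search_url, address)
        entity_address = get_entity_address(sos_search_url, owner) if owner else None
        results.append({"store_address": address, "owner": owner, "owner_address": entity_address})
    return results

In practice each county and secretary-of-state site has its own search form, so stages 2 and 3 usually need per-site handling rather than a single generic function.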
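
For the Scrapy/ElasticSearch posting, a minimal sketch of one spider plus an item pipeline that indexes normalised records could look like this. The target site, selectors, index name and Elasticsearch endpoint are placeholders, and the pipeline assumes the official elasticsearch Python client.

import scrapy
from elasticsearch import Elasticsearch


class ListingSpider(scrapy.Spider):
    # One spider per target site; the URL and selectors below are placeholders.
    name = "listing_spider"
    start_urls = ["https://example.com/listings"]

    def parse(self, response):
        for row in response.css("div.listing"):
            yield {
                "title": row.css("h2::text").get(),
                "price": row.css(".price::text").get(),
                "url": response.urljoin(row.css("a::attr(href)").get() or ""),
            }


class ElasticsearchPipeline:
    """Item pipeline: lightly normalise each item, then index it."""

    def open_spider(self, spider):
        # Endpoint is an assumption; in a real project it would come from settings.
        self.es = Elasticsearch("http://localhost:9200")

    def process_item(self, item, spider):
        doc = {k: v.strip() if isinstance(v, str) else v for k, v in item.items()}
        # `document=` is the elasticsearch 8.x client keyword; 7.x uses `body=`.
        self.es.index(index="scraped-listings", document=doc)
        return item

The pipeline would be enabled in the project's settings.py, e.g. ITEM_PIPELINES = {"myproject.pipelines.ElasticsearchPipeline": 300}, and the same project can then be deployed to Scrapy Cloud (ScrapingHub) for scheduled runs.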
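
For the 187-URL posting, the job is close to a short script: fetch each URL, keep the result rows that match the client's criteria, and write the captured fields to a CSV. The table selector, the match test and the column layout below are assumptions.

import csv

import requests
from bs4 import BeautifulSoup


def matches_criteria(row):
    # Placeholder for whatever filter the client actually specifies.
    return "Inc." in row.get_text()


def scrape(urls, out_path="results.csv"):
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "company", "address", "source_url"])
        for url in urls:
            soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
            for row in soup.select("table.results tr"):  # placeholder selector
                if not matches_criteria(row):
                    continue
                cells = [td.get_text(strip=True) for td in row.select("td")]
                if len(cells) >= 3:
                    writer.writerow(cells[:3] + [url])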
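
For the last posting, a Scrapy CrawlSpider along these lines covers the stated requirements: follow every internal link of the project's domain and, for each page, emit all <a> elements with their href, rel, class, title and anchor text plus every H1-H6 heading. The domain and start URL are placeholders.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ProjectSpider(CrawlSpider):
    name = "project_audit"
    allowed_domains = ["example.com"]        # placeholder project domain
    start_urls = ["https://example.com/"]
    rules = (Rule(LinkExtractor(), callback="parse_page", follow=True),)

    def parse_start_url(self, response):
        # CrawlSpider does not run the rule callback on the start page by default.
        return self.parse_page(response)

    def parse_page(self, response):
        links = []
        for a in response.css("a"):
            links.append({
                "href": a.attrib.get("href"),
                "rel": a.attrib.get("rel"),
                "css_class": a.attrib.get("class"),
                "title": a.attrib.get("title"),
                "anchor_text": " ".join(a.css("::text").getall()).strip(),
            })
        headings = [
            {"tag": "h%d" % level, "text": " ".join(h.css("::text").getall()).strip()}
            for level in range(1, 7)
            for h in response.css("h%d" % level)
        ]
        yield {"url": response.url, "links": links, "headings": headings}

Running it with scrapy crawl project_audit -o pages.json writes one JSON record per crawled page; reliability across a large number of sites then mostly comes down to Scrapy settings such as DOWNLOAD_DELAY, retry behaviour and AutoThrottle.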