Scrapy Jobs

3 jobs were found based on your criteria

  • Hourly – Less than 1 month – Less than 10 hrs/week – Posted
    This should be an easy job for someone who knows what they're doing. ====== PURPOSE AND DESCRIPTION ====== We are building a database of social media users and need a tool that will LOCATE ADDITIONAL SOCIAL PROFILES for a person, BUT ON DIFFERENT SOCIAL NETWORKS. For example, starting with Instagram "@KimKardashian", the tool would find Twitter: KimKardashian, Google Plus: KimKardashian, Vine: KimKardashian, etc. This tool should use ANY POSSIBLE MEANS to find related/co-owned accounts on separate social networks, while ensuring ...
  • Fixed-Price – Est. Budget: $85.00 Posted
    I need a Scrapy crawler that crawls all domains contained in a CSV file. The CSV file is attached to this job. The crawler should crawl every URL on these domains and try to find a specific div class. The div class is: abstractcomponent openinghours (a minimal crawler sketch along these lines appears after the listings). Here are some example URLs with this div class: http://www.mercedes-benz-berlin.de/content/deutschland/retail-plz1/berlin/de/desktop/passenger-cars/about-us/locations/location.6150.html http://www.europa.mercedes-benz.be/content/belgium/retail-m-r/mercedes-europa/fr/desktop ...
  • Fixed-Price – Est. Budget: $250.00 Posted
    I am looking for someone to write a script that will grab all the data from the website "DIY Soylent" (https://diy.soylent.me) and put it into data files (text files, CSV, etc.) that can be read by common statistical programs (Stata, SAS, etc.). I can work with you to construct the data structure (variable fields, unique rows, and tables) to be produced, or I can provide it explicitly. It's not a complicated website, however. (A scraper sketch along these lines also appears after the listings.)
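A minimal sketch of the crawler described in the second posting, not the required deliverable: it assumes the attached CSV (called `domains.csv` here) lists one bare domain per row in its first column, and that "abstractcomponent openinghours" are two CSS classes on the same div. Both assumptions would need to be checked against the attached file and the example pages.

```python
import csv

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


def load_domains(path="domains.csv"):
    # Assumed layout: one bare domain per row, first column of the attached CSV.
    with open(path, newline="") as f:
        return [row[0].strip() for row in csv.reader(f) if row and row[0].strip()]


class OpeningHoursSpider(CrawlSpider):
    name = "openinghours"
    allowed_domains = load_domains()
    start_urls = ["http://%s/" % d for d in allowed_domains]

    # Follow every link that stays on the listed domains and check each page.
    rules = (Rule(LinkExtractor(), callback="parse_page", follow=True),)

    def parse_page(self, response):
        # Report pages containing <div class="abstractcomponent openinghours">.
        hits = response.css("div.abstractcomponent.openinghours")
        if hits:
            yield {"url": response.url, "matches": len(hits)}
```

Run with, for example, `scrapy runspider openinghours_spider.py -o hits.csv` to get a CSV of every URL where the div was found.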
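For the third posting, a sketch along the following lines could be a starting point. The listing URL, link selector, and item fields are placeholders, since the real page structure of diy.soylent.me and the agreed variable fields would determine the actual selectors; Scrapy's feed export then writes the yielded items to CSV.

```python
import scrapy


class SoylentRecipeSpider(scrapy.Spider):
    name = "soylent_recipes"
    # Assumed entry point; the real listing URL would need to be confirmed.
    start_urls = ["https://diy.soylent.me/recipes"]

    def parse(self, response):
        # Follow each recipe link on the listing page (selector is a placeholder).
        for href in response.css("a.recipe::attr(href)").getall():
            yield response.follow(href, callback=self.parse_recipe)

    def parse_recipe(self, response):
        # One item per recipe; these fields stand in for whatever variable
        # fields are agreed with the client. Feed export writes them to CSV.
        yield {
            "url": response.url,
            "title": response.css("h1::text").get(default="").strip(),
        }
```

Running it with `scrapy runspider soylent_recipes.py -o recipes.csv` would produce a flat CSV that Stata or SAS can read directly.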