Data Mining Jobs

21 jobs were found based on your criteria

  • Hourly – 1 to 3 months – 30+ hrs/week – Posted
    Experienced data scraper needed to extract all the businesses (per category) from www.whitepages.com.au and deliver them in a .csv with the following fields: Business Name, Business Address, Suburb, State, Postcode, Phone Number, Industry Listed In. Please add me on Skype and Gmail chat to discuss further. Regards, David
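The posting above asks for business records delivered as a .csv with fixed columns. As a minimal sketch of just the export step, assuming the scraping itself is done elsewhere (the column names come from the posting; the sample record is invented for illustration):

```python
import csv
import io

# Column headers as requested in the posting.
FIELDS = ["Business Name", "Business Address", "Suburb", "State",
          "Postcode", "Phone Number", "Industry Listed In"]

def write_csv(records):
    """Write a list of record dicts to CSV text with the requested columns."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Invented sample record, for illustration only.
sample = [{
    "Business Name": "Acme Plumbing",
    "Business Address": "1 Example St",
    "Suburb": "Parramatta",
    "State": "NSW",
    "Postcode": "2150",
    "Phone Number": "(02) 5550 0000",
    "Industry Listed In": "Plumbers",
}]
```

`csv.DictWriter` keeps the column order fixed regardless of dict ordering, which matters for a deliverable with an agreed header row.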
  • Hourly – Less than 1 month – 10-30 hrs/week – Posted
    Please go to http://www.dinobuy.com/ and collect the following details into an Excel file, along with images in a zip file. Submit it to me with your delivery time and a flat rate for the complete job. Sample details (example attached): product_no, category (all categories and subcategories), product_name, product_desc, market price, wholesale price, options (size & colours), details, image url (jpg or gif), image_name (jpg or gif), brand, manufacturer (if any), shipping weight, shipping weight unit of measure, condition (new, refurbished, etc.) ...
  • Hourly – 1 to 3 months – 30+ hrs/week – Posted
    This project requires: * Use of credit cards that will work on a recurring payment and can be considered disposable (since the account may be banned, each account should have a separate payment method). If you do not meet this requirement, please do not bid. * Distribution of searches between a number of accounts (each with a static IP that is considered valid in the United States). * Randomization of paging. * Browser automation. * Virtual PC scaling (Amazon, etc.). This constitutes ...
  • Fixed-Price – Est. Budget: $120.00 Posted
    Hi, I need some help scraping content from a website that uses HTTPS and requires a user login. The job is: Part 1: - Log into the website using the username/password that I provide. - Collect the content, after applying some filter forms in the website's user interface, every 5 minutes (cron job). - Save/import the content into a MySQL database, or export it to RSS/JSON format with all of the information collected. Part 2: - Create a simple interface for this data struct ...
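The parse-and-export half of the posting above can be sketched without touching any real site. Everything below is an invented stand-in: the HTML snippet, the field names, and the regex are placeholders for whatever the logged-in pages actually contain. A real implementation would use an authenticated HTTP session and a proper HTML parser, with a cron entry handling the 5-minute schedule.

```python
import json
import re

# Invented sample of what one filtered results page might contain.
SAMPLE_HTML = (
    '<div class="row"><span class="name">Widget</span>'
    '<span class="price">9.99</span></div>'
    '<div class="row"><span class="name">Gadget</span>'
    '<span class="price">4.50</span></div>'
)

# Placeholder pattern; a real scraper would use an HTML parser instead.
ROW_RE = re.compile(
    r'<span class="name">(?P<name>[^<]+)</span>'
    r'<span class="price">(?P<price>[^<]+)</span>'
)

def parse_rows(html):
    """Extract one dict per result row from the page markup."""
    return [m.groupdict() for m in ROW_RE.finditer(html)]

def export_json(rows):
    """Serialize scraped rows into the JSON payload the client asked for."""
    return json.dumps({"count": len(rows), "items": rows})
```

The same `rows` list could just as easily feed `INSERT` statements against the MySQL database mentioned in the posting; JSON export and database import share the parsing step.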
  • Hourly – 1 to 3 months – 10-30 hrs/week – Posted
    NOTE: this is for legal scraping only! Public records and public info! I need a scraping expert! Someone who is very organized and has the capability to scrape, spider, and download complete websites and datasets. 1) Please tell me the largest and most complex datasets you have scraped and compiled. 2) Please tell me the tools you use. 3) PLEASE SEND YOUR EMAIL SO I CAN FOLLOW UP WITH SPECIFIC TECHNICAL QUESTIONS
  • Hourly – 1 to 3 months – 10-30 hrs/week – Posted
    We need a freelancer who can take product data from various sources and enter it into Excel or directly into a Bigcommerce admin panel. We are looking to add thousands of products over the next few months and would be happy to work with someone who has experience with Bigcommerce. Some data sources will be databases, others websites. We need someone who can handle both, with a decent understanding of English, to work efficiently and with ...
  • Hourly – Less than 1 month – Less than 10 hrs/week – Posted
    We need an experienced professional with the Lucene search engine library to design a web search engine for various forms of web content, with the following characteristics: - Optimized search across a large dataset of web content (mostly blogs, social media, etc.) - A custom relevance ranking mechanism using the vector space model and keyword weighting (both positive and negative feedback). We need an indexing design that will support our web application. Scalability and speed of search are essential. If you have worked on ...
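The ranking mechanism the posting above describes (vector space model with positively and negatively weighted keywords) would normally be built on Lucene's own similarity classes; as a rough illustration of the idea only, a toy TF-IDF scorer with signed query weights can be written in a few lines of Python. The corpus, terms, and weights below are invented.

```python
import math
from collections import Counter

def score(query_weights, doc_tokens, corpus):
    """Toy weighted vector-space score: sum of weight * tf * idf over the
    query terms, normalized by document length. A negative weight pushes
    documents containing that term down the ranking (negative feedback)."""
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    s = 0.0
    for term, weight in query_weights.items():
        df = sum(1 for d in corpus if term in d)  # document frequency
        if df == 0:
            continue
        s += weight * tf[term] * math.log(1 + n_docs / df)
    return s / math.sqrt(len(doc_tokens)) if doc_tokens else 0.0

# Invented toy corpus of pre-tokenized documents.
corpus = [
    ["lucene", "search", "blog"],
    ["social", "media", "spam"],
    ["lucene", "index", "scaling"],
]
# Positive feedback on "lucene", negative feedback on "spam".
weights = {"lucene": 1.0, "spam": -2.0}
ranked = sorted(corpus, key=lambda d: score(weights, d, corpus), reverse=True)
```

In Lucene itself this role is played by the similarity and boosting machinery; the sketch only shows why a signed per-keyword weight changes the ordering.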
  • Fixed-Price – Est. Budget: $125.00 Posted
    I need someone to scrape the price list from a PHP- and AJAX-based ecommerce store. I need the scraper set up so that I can scrape the site once a month. Only people with EXTENSIVE PHP and AJAX knowledge, please. You must speak fluent English and be able to start immediately. To ensure you read and understood this, please begin your reply with the words: theprintscrape
  • Hourly – Less than 1 month – 10-30 hrs/week – Posted
    I would like to pull data available on the web into a hosted database. The data is mainly related to job trajectories, something like the following:

      Most Commonly Held Jobs Prior   Current Job   Most Commonly Held Jobs After
      -----------------------------   -----------   -----------------------------
      Job p1 (%p1)                    Job X         Job a1 (%a1)
      Job p2 (%p2)                                  Job a2 (%a2)
      Job p3 (%p3)                                  Job a3 (%a3)

    I would like to pull that data into a database that keeps track of all the connections and weights, schematically like this ...
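The connections-and-weights structure the posting above sketches is essentially a weighted transition graph around each current job. As a minimal in-memory sketch (class and method names are invented; a real version would persist to the hosted database):

```python
from collections import Counter, defaultdict

class JobTransitions:
    """Tracks prior-job and next-job connections per current job, with
    counts as weights, matching the table layout in the posting."""

    def __init__(self):
        self.before = defaultdict(Counter)  # current job -> prior-job counts
        self.after = defaultdict(Counter)   # current job -> next-job counts

    def add_prior(self, current, prior):
        self.before[current][prior] += 1

    def add_next(self, current, nxt):
        self.after[current][nxt] += 1

    def top_after(self, current, n=3):
        """Most commonly held jobs after `current`, as (job, pct) pairs."""
        counts = self.after[current]
        total = sum(counts.values())
        return [(job, 100.0 * c / total) for job, c in counts.most_common(n)]

# Invented example: two people moved from Job X to Job a1, one to Job a2.
t = JobTransitions()
t.add_next("Job X", "Job a1")
t.add_next("Job X", "Job a1")
t.add_next("Job X", "Job a2")
```

`top_after("Job X")` then yields the "(job, %)" pairs the table shows; in a database the same shape maps naturally to an edge table `(current, next, count)`.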