Hi Sir/Madam, I completed my B.E. (Computer Science Engineering) in 2008. I have 2.8 years of work experience, including the last 6 months working on Hadoop big data (Hive and Pig) at SUMTWO SOFTWARE PVT, Chennai, and I am now looking for a Hadoop big data job. Please reply, sir.
Project details: Loading text-format data from web server log files into SQL Server through ETL tools (Informatica or BODS). From SQL, I obtain structured data via Sqoop, with incremental loads scheduled daily or weekly, which makes it easy to find ip_address, time-in, time-out, and user_id. Data is then loaded from the Oracle database into the Hadoop cluster (which has massive parallel processing capability) into Hive (a database on top of Hadoop); after aggregating the Hadoop key-value data, the results are loaded into SAP BO.
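The Sqoop/Hive portion of the pipeline described above could be sketched roughly as follows. This is a hedged illustration only: the connection string, table name, and column names (WEB_ACCESS_LOG, IP_ADDRESS, etc.) are hypothetical placeholders, not details from the original project.

```shell
# Hypothetical sketch of the Sqoop/Hive steps described above.
# Host, credentials, and table/column names are placeholders.

# Incremental (daily/weekly) import of structured log records from the
# relational database into a Hive table via Sqoop:
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username etl_user -P \
  --table WEB_ACCESS_LOG \
  --columns "IP_ADDRESS,TIME_IN,TIME_OUT,USER_ID" \
  --incremental append \
  --check-column TIME_IN \
  --hive-import --hive-table web_access_log

# Example aggregation in Hive before handing the results off to SAP BO:
hive -e "SELECT user_id, COUNT(*) AS visits
         FROM web_access_log
         GROUP BY user_id;"
```

In practice the `--incremental append` / `--check-column` pair is what gives the daily or weekly delta loads mentioned above, so only new log rows are pulled on each run.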
HBase Job Cost Overview
Typical total cost of oDesk HBase projects based on completed and fixed-price jobs.
oDesk HBase Jobs Completed Quarterly
On average, 1 HBase project is completed every quarter on oDesk.
Time to Complete oDesk HBase Jobs
Time needed to complete an HBase project on oDesk.
Average HBase Freelancer Feedback Score
HBase oDesk freelancers typically receive a client rating of 4.45.
I have been developing software using Java and Scala for the last 3 years. I am experienced in high-load systems (game servers, service-oriented architecture) and Big Data (writing jobs for Hadoop and Spark, using machine learning).
Skill list:
• Scala, Java SE for server-side apps - excellent
• OOP principles and design patterns - good
• Database management systems (MySQL, PostgreSQL, Microsoft SQL Server) - good
• NoSQL storages (HBase - good, ElasticSearch - excellent)
• Maven, sbt - good
• Hadoop, Spark - good
• Version control (SVN, Mercurial, Git) - good
• IDEs (Eclipse, IntelliJ IDEA - excellent; NetBeans IDE, SoapUI - good)
• Machine learning (Xelopes - satisfactory, Spark MLlib - good)
• Java EE 6, Spring - satisfactory
• Javadoc - good
• JUnit testing - good
• C/C++ - satisfactory
Software Programming. I am an associate professor at Omsk State University, department of mathematics. I have more than 15 years of experience in Windows programming (Windows API, MFC) and in C/C++, Java, .NET, Basic, and Fortran programming, and I have good communication skills. I am ready for remote work for 20-30 hours per week.
I have a total of 5 years of programming experience, including 3.5 years of hands-on experience with Hadoop & Big Data technology:
+ Proficient use of Linux OS; deploying open-source and distributed systems on Linux, including setting up and deploying Hadoop clusters, HBase, Hive, Pig, and Spark, and running MapReduce jobs
+ Using Nutch and Apache Droids for crawling, and message queues such as Kafka
+ Using Maven and Ant to build Java projects.
I am a software developer specialized in developing web applications. Since 2011, I have developed both the server and the client side of web applications using the Java EE platform. I have administered and configured Glassfish and Tomcat servers, and have good experience with EJBs. For the client side I have used the JSF technology, more specifically Primefaces. I also have good experience designing and implementing RESTful web services. In some of the projects I have worked on, I have used Apache Lucene as an indexing and searching engine. Additionally, I have experience with the Hadoop ecosystem, more specifically HBase, and fairly good experience with natural language processing. I am very flexible and willing to learn new technologies if required.
Certified Big Data Consultant with good experience in Big Data, Hadoop, Splunk, database management systems, data warehousing, and Business Intelligence solutions. My core competency lies in consulting on Big Data implementations in an organization, from identifying use cases and PoCs to setting up Hadoop clusters, training, and project implementation. I have implemented Hadoop/Big Data projects/PoCs for Nissan Motors, Qantas Airlines, Aeris Communications, AT&T, and Bell Canada. I am seeking opportunities to provide my services in Big Data and Hadoop implementations and initiatives in your organization. Below is a brief summary of my work experience and skills.
• Strong experience with Hadoop, HBase, Hive, Pig, Flume, Sqoop, and MapReduce.
• Strong experience in setting up 18 large Hadoop clusters, on-premise as well as on the AWS cloud using EC2 and EMR.
• Good experience in Big Data consulting, customer engagement, presentations, and leadership.
• Experience implementing multiple Big Data projects, from customer engagement, presentations, and resource development through PoC development to production.
• Administration, capacity planning, and performance tuning of multi-node Hadoop clusters using different Hadoop distributions, i.e., Cloudera, MapR, Apache, and Hortonworks.
• Strong experience with relational and parallel databases (Oracle, MySQL, PL/SQL, Teradata), HTTP/HTML, and UNIX system administration.
Hadoop development
• Analyzed category-specific data across geography and time series to derive meaningful insight.
• Designed social media listening frameworks for social media websites to gauge customer behavior.
• Implemented the data-integration capability of the Hadoop ecosystem.
• Got my hands dirty in other Big Data areas such as predictive analysis/sentiment analysis, taming text, R statistical computing, and machine learning (Mahout, OpenNLP).
• Administration of Hadoop clusters.