Posted 3 weeks ago
  • Hourly
  • Intermediate
  • Est. time: 3 to 6 months, 30+ hrs/week

SEEKING EXPERIENCED DATA ENGINEER

We're a data consulting & analytics business that offers clients services across the entire data ecosystem. We're located in Southern California and have a small team of great engineers who do excellent work and take a partnership approach with the clients we serve. We have an immediate project requirement for an experienced Data Platform Engineer; please contact us if interested.

PROJECT:
  • Immediate requirement for a current project; we need to get started ASAP
  • 40 hrs/week for a 3-month period, likely extending to 40 hrs/week ongoing
  • Open to contract or contract-to-hire

WHAT WE WILL BE DOING:
  • Data Infrastructure and Data Modeling - help design and implement Data Lakes, assist backend teams with modeling their upstream transactional data, and perform physical data modeling for modern data warehouses
  • Data Pipelines - build end-to-end ETL/ELT pipelines to populate the Data Warehouse and empower downstream analytical data consumers

WHAT WE ARE LOOKING FOR:
  • BS degree in Computer Science or a related technical field, or equivalent practical experience
  • 5+ years of proven work experience as a Data Engineer, working with at least one programming language (Python, Scala, Go) plus SQL expertise
  • 5+ years of experience with ETL/ELT design and implementation, including development best practices in testing, logging, and monitoring
  • Extensive experience with infrastructure as code (Terraform) and implementing data platforms from the ground up on cloud ecosystems
  • Background working with distributed big data technologies such as Spark, Presto, and Hive, as well as data warehouse technologies including Snowflake, BigQuery, or Redshift
  • Extensive data modeling knowledge spanning NoSQL, transactional DBs, and columnar and row-based distributed analytical data environments, plus prior experience designing and implementing best-practice Data Lakes
  • Knowledge of agile software development and continuous integration / deployment principles
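For illustration only (not part of the posting): the end-to-end ETL/ELT pipeline work described above can be sketched in a few lines of Python. Here sqlite3 stands in for the warehouse, and all table and column names are hypothetical.

```python
import sqlite3

def run_elt(raw_rows):
    """Minimal ELT sketch: load raw transactional rows into staging,
    then transform in SQL inside the warehouse (sqlite3 as a stand-in)."""
    con = sqlite3.connect(":memory:")
    # Load: land raw data untouched in a staging table.
    con.execute("CREATE TABLE stg_orders (order_id INTEGER, amount REAL, status TEXT)")
    con.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", raw_rows)
    # Transform: build an analytics-ready table for downstream consumers.
    con.execute("""
        CREATE TABLE fct_revenue AS
        SELECT status, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM stg_orders
        GROUP BY status
    """)
    return con.execute(
        "SELECT status, orders, revenue FROM fct_revenue ORDER BY status"
    ).fetchall()
```

In a real engagement the same load-then-transform-in-SQL shape would target Snowflake, BigQuery, or Redshift, with the DDL managed via Terraform and dbt-style tested SQL.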

Scala · Apache Spark · Snowflake · BigQuery · Python · NoSQL Database · Terraform · Hive · Amazon Redshift · Data Lake · Data Warehousing
Posted 3 weeks ago
  • Hourly
  • Intermediate
  • Est. time: 3 to 6 months, 30+ hrs/week

SEEKING EXPERIENCED DATA ARCHITECT

We're a data consulting & analytics business that offers clients services across the entire data ecosystem. We're located in Southern California and have a small team of great engineers who do excellent work and take a partnership approach with the clients we serve. We have an immediate project requirement for a qualified Data Architect; please contact us if interested.

PROJECT:
  • Immediate requirement for a current project; we need to get started ASAP
  • 40 hrs/week for a 3-month period, likely extending to 40 hrs/week ongoing
  • Open to contract or contract-to-hire

WHAT WE WILL BE DOING:
  • Data Architecture and Infrastructure - help design and implement end-to-end data platforms, including the design and execution of microservices for data ingestion, data transformation, alerting, monitoring, and other key components of a data platform ecosystem
  • Data Modeling - physical data modeling for distributed, columnar warehouses as well as data modeling for transactional back-end services
  • Data Pipelines - build end-to-end ETL/ELT pipelines to populate Data Warehouses and empower downstream analytical data consumers

WHAT WE ARE LOOKING FOR:
  • BS degree in Computer Science or a related technical field, or equivalent practical experience
  • 2+ years of experience as a Data Architect and 5+ years of proven work experience as a Data Engineer, working with at least one programming language (Python, Scala, Go) plus SQL expertise
  • 5+ years of experience developing data platform infrastructure and ETL/ELT design and implementation, including development best practices in testing, logging, and monitoring
  • Extensive experience implementing CI/CD practices and infrastructure as code (Terraform) as a foundation for developing data platforms from the ground up on cloud ecosystems
  • Background working with distributed big data technologies such as Spark, Presto, and Hive, as well as data warehouse technologies including Snowflake, BigQuery, or Redshift
  • Experience implementing microservice architectures, event-based processing, and streaming pipelines, including expertise with Kafka and Spark Streaming
  • Extensive data modeling knowledge spanning NoSQL, transactional DBs, and columnar and row-based distributed analytical data environments, plus prior experience designing and implementing best-practice Data Lakes
  • Knowledge of agile software development and continuous integration / deployment principles
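For illustration only (not part of the posting): the event-based processing and streaming-pipeline work described above boils down to windowed aggregation over an unbounded event stream. A toy stand-in in plain Python (in practice this would run on Kafka topics with Spark Structured Streaming):

```python
from collections import defaultdict

def window_counts(events, window_s=60):
    """Toy tumbling-window aggregation: count events per (window, key).
    Each event is a (timestamp_seconds, key) pair. A real pipeline would
    consume from Kafka and do this with Spark Structured Streaming."""
    counts = defaultdict(int)
    for ts, key in events:
        # Bucket the timestamp into its tumbling window's start time.
        window_start = ts - (ts % window_s)
        counts[(window_start, key)] += 1
    return dict(counts)
```

For example, `window_counts([(5, "click"), (61, "click"), (62, "view")])` returns `{(0, "click"): 1, (60, "click"): 1, (60, "view"): 1}`.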

Data Engineering · Apache Spark · Apache Kafka · Data Transformation · Data Integration · Database Architecture · Snowflake · BigQuery · Python · SQL · Data Modeling · Amazon Redshift · NoSQL Database · Data Lake · Data Warehousing
  • Hourly: $40.00 - $80.00
  • Intermediate
  • Est. time: 1 to 3 months, Less than 30 hrs/week

Help needed with commercializing an IoT product: remote monitoring and predictive maintenance of mechanical equipment. We have developed the MVP and started deploying it internally. We are using InfluxDB to store time-series data and Grafana to visualize it.

Main objectives:
1. Help the team with developing and optimizing Flux/SQL queries.
2. Develop Grafana plugins.
3. Help with the overall architecture of the project.
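For illustration only (not part of the posting): the Flux query work in objective 1 typically centers on downsampling raw sensor readings before they reach Grafana. A sketch using the official `influxdb-client` Python package; the bucket and measurement names ("sensors", "vibration") and connection details are hypothetical.

```python
# Hypothetical Flux query: downsample the last 24h of a "vibration"
# measurement to 5-minute means, so Grafana panels stay responsive.
FLUX_DOWNSAMPLE = '''
from(bucket: "sensors")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "vibration")
  |> aggregateWindow(every: 5m, fn: mean, createEmpty: false)
'''

def run_query(url, token, org):
    # Requires the influxdb-client package and a reachable InfluxDB 2.x instance.
    from influxdb_client import InfluxDBClient
    with InfluxDBClient(url=url, token=token, org=org) as client:
        return client.query_api().query(FLUX_DOWNSAMPLE)
```

`aggregateWindow` with `createEmpty: false` is the usual first optimization: it pushes the aggregation into InfluxDB instead of shipping raw points to the dashboard.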

JavaScript · API · Web Development · Grafana · NoSQL Database
Posted last month
  • Hourly: $70.00 - $90.00
  • Expert
  • Est. time: 1 to 3 months, Less than 30 hrs/week

Position Summary

Join Elementium, a trailblazer in Integrated Risk Management (IRM) SaaS, as our Neo4j Database Architect. We are pioneering data-driven decision-making through quantitative risk analyses in risk management. Your expertise in database architecture will be instrumental in guiding Elementium to achieve Technology Readiness Level (TRL) 3, focusing on the scalability and efficacy of our data models within Neo4j. Collaborate closely with our Principal Investigator and development team, enriching your skills in a scientifically demanding and innovative project environment.

Key Responsibilities
  • Collaborate with the Principal Investigator to refine and optimize the Neo4j data model for risk management, ensuring scalability and alignment with best practices.
  • Design and implement database structures that support efficient data storage, retrieval, and manipulation for large-scale SaaS applications.
  • Provide expertise on Neo4j to guide the development of knowledge graphs, ensuring the data model effectively represents complex interdependencies of risk factors.
  • Work with the development team to integrate the Neo4j database seamlessly with other system components, ensuring high performance and reliability.
  • Lead the evaluation and incorporation of Neo4j advancements to continuously improve database architecture.

Required Qualifications
  • Bachelor’s Degree in Computer Science, Information Systems, or a related field.
  • 5+ years of experience in database architecture, with a strong focus on Neo4j.
  • Demonstrated expertise in designing and implementing scalable and high-performance databases in a cloud environment.
  • Proficiency in data modeling techniques and understanding complex data relationships.
  • Experience integrating database technologies with Python applications.

Preferred Qualifications
  • Master’s Degree in Computer Science, Information Systems, or a related field.
  • Knowledge of Python, particularly in the context of database interactions.
  • Experience with AWS and other cloud computing platforms.
  • Familiarity with IRM concepts and quantitative risk analysis methodologies.
  • Strong analytical and problem-solving skills, with a collaborative team spirit.
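For illustration only (not part of the posting): representing interdependencies of risk factors in Neo4j usually comes down to a small set of `MERGE` patterns executed through the official `neo4j` Python driver. The node label (`RiskFactor`), relationship type (`INFLUENCES`), and connection details below are hypothetical, not Elementium's actual model.

```python
# Hypothetical Cypher: upsert two risk factors and a weighted
# dependency edge between them (idempotent thanks to MERGE).
LINK_RISKS = """
MERGE (a:RiskFactor {name: $src})
MERGE (b:RiskFactor {name: $dst})
MERGE (a)-[r:INFLUENCES]->(b)
SET r.weight = $weight
"""

def link_risks(uri, user, password, src, dst, weight):
    # Requires the `neo4j` Python driver and a reachable Neo4j instance.
    from neo4j import GraphDatabase
    with GraphDatabase.driver(uri, auth=(user, password)) as driver:
        with driver.session() as session:
            session.run(LINK_RISKS, src=src, dst=dst, weight=weight)
```

Putting the weight on the relationship rather than the nodes is what lets downstream quantitative analyses (e.g. weighted path queries) traverse the dependency graph directly.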

Neo4j · NoSQL Database · Database Design · Database Architecture · ETL · Python