Posted 3 weeks ago
  • Hourly
  • Intermediate
  • Est. time: 3 to 6 months, 30+ hrs/week

SEEKING EXPERIENCED DATA ENGINEER: We're a data consulting & analytics business offering clients services across the entire data ecosystem. We're located in Southern California with a small team of great engineers who do excellent work and take a partnership approach with the clients we serve. We have an immediate project requirement for an experienced Data Platform Engineer; please contact us if interested.
PROJECT:
  • Immediate requirement for a current project; we need to get started ASAP
  • 40 hrs/week for a 3-month period, likely extending to 40 hrs/week ongoing
  • Open to contract or contract-to-hire
WHAT WE WILL BE DOING:
  • Data Infrastructure and Data Modeling - help design and implement Data Lakes, assist backend teams with modeling their upstream transactional data, and do physical data modeling for modern data warehouses
  • Data Pipelines - build end-to-end ETL/ELT pipelines to populate the Data Warehouse and empower downstream analytical data consumers
WHAT WE ARE LOOKING FOR:
  • BS degree in Computer Science or a related technical field, or equivalent practical experience
  • 5+ years of proven work experience as a Data Engineer, working with at least one programming language (Python, Scala, Go) plus SQL expertise
  • 5+ years of experience with ETL/ELT design and implementation, including development best practices in testing, logging, and monitoring
  • Extensive experience with infrastructure as code (Terraform) and implementing data platforms from the ground up on cloud ecosystems
  • Background with distributed big data technologies such as Spark, Presto, and Hive, as well as data warehouse technologies including Snowflake, BigQuery, or Redshift
  • Extensive data modeling knowledge spanning NoSQL, transactional DBs, and columnar and row-based distributed analytical data environments, plus prior experience designing and implementing best-practice Data Lakes
  • Knowledge of agile software development and continuous integration/deployment principles

Scala, Apache Spark, Snowflake, BigQuery, Python, NoSQL Database, Terraform, Hive, Amazon Redshift, Data Lake, Data Warehousing
Posted 3 weeks ago
  • Hourly
  • Intermediate
  • Est. time: 3 to 6 months, 30+ hrs/week

SEEKING EXPERIENCED DATA ARCHITECT: We're a data consulting & analytics business offering clients services across the entire data ecosystem. We're located in Southern California with a small team of great engineers who do excellent work and take a partnership approach with the clients we serve. We have an immediate project requirement for a qualified Data Architect; please contact us if interested.
PROJECT:
  • Immediate requirement for a current project; we need to get started ASAP
  • 40 hrs/week for a 3-month period, likely extending to 40 hrs/week ongoing
  • Open to contract or contract-to-hire
WHAT WE WILL BE DOING:
  • Data Architecture and Infrastructure - help design and implement end-to-end data platforms, including the design and execution of microservices for data ingestion, data transformation, alerting, monitoring, and other key components of a data platform ecosystem
  • Data Modeling - physical data modeling for distributed, columnar warehouses as well as data modeling for transactional back-end services
  • Data Pipelines - build end-to-end ETL/ELT pipelines to populate Data Warehouses and empower downstream analytical data consumers
WHAT WE ARE LOOKING FOR:
  • BS degree in Computer Science or a related technical field, or equivalent practical experience
  • 2+ years of experience as a Data Architect and 5+ years of proven work experience as a Data Engineer, working with at least one programming language (Python, Scala, Go) plus SQL expertise
  • 5+ years of experience developing data platform infrastructure and ETL/ELT design and implementation, including development best practices in testing, logging, and monitoring
  • Extensive experience implementing CI/CD practices and infrastructure as code (Terraform) as a foundation for developing data platforms from the ground up on cloud ecosystems
  • Background with distributed big data technologies such as Spark, Presto, and Hive, as well as data warehouse technologies including Snowflake, BigQuery, or Redshift
  • Experience implementing microservice architectures, event-based processing, and streaming pipelines, including expertise with Kafka and Spark Streaming
  • Extensive data modeling knowledge spanning NoSQL, transactional DBs, and columnar and row-based distributed analytical data environments, plus prior experience designing and implementing best-practice Data Lakes
  • Knowledge of agile software development and continuous integration/deployment principles

Data Engineering, Apache Spark, Apache Kafka, Data Transformation, Data Integration, Database Architecture, Snowflake, BigQuery, Python, SQL, Data Modeling, Amazon Redshift, NoSQL Database, Data Lake, Data Warehousing
  • Hourly: $40.00 - $80.00
  • Intermediate
  • Est. time: 1 to 3 months, Less than 30 hrs/week

Help needed with commercializing an IoT product for remote monitoring and predictive maintenance of mechanical equipment. We have developed the MVP and started deploying it internally. We use InfluxDB to store time-series data and Grafana to visualize it.
Main objectives:
1. Help the team develop and optimize Flux/SQL queries.
2. Develop Grafana plugins.
3. Help with the overall architecture of the project.
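To give a flavor of the Flux query work in the first objective, here is a minimal Python sketch that assembles a downsampling Flux query of the kind commonly run against InfluxDB time-series data. The bucket and measurement names ("machines", "vibration") are hypothetical placeholders, not taken from this posting.

```python
def downsample_flux(bucket: str, measurement: str,
                    start: str = "-1h", every: str = "5m",
                    agg: str = "mean") -> str:
    """Build a Flux query that downsamples a series with aggregateWindow().

    All names (bucket, measurement) are placeholders; substitute the ones
    used in your own InfluxDB deployment.
    """
    return (
        f'from(bucket: "{bucket}")\n'
        f'  |> range(start: {start})\n'
        f'  |> filter(fn: (r) => r._measurement == "{measurement}")\n'
        f'  |> aggregateWindow(every: {every}, fn: {agg}, createEmpty: false)'
    )

# Example: 5-minute mean of a hypothetical "vibration" measurement
query = downsample_flux("machines", "vibration")
print(query)
```

Downsampling with `aggregateWindow` is a typical optimization for Grafana dashboards, since it bounds the number of points returned regardless of the raw sampling rate.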

JavaScript, API, Web Development, Grafana, NoSQL Database