Sr. Data Engineer

Glendale, CA 91206

Posted: 09/16/2021 Employment Type: Direct Hire Skill Set: Engineering Job Number: 1737

Job Description

100% remote
Salary up to $200K; can also be worked as a contract at $150.00/hr. Candidates must be a green card holder or a U.S. citizen.

  • Know how to maintain and operate the BI ETL pipelines
  • Proactively respond whenever a failure occurs
  • Re-run ETL pipelines in case of failure
  • Help migrate the BI ETL pipelines to the new Data Platform
  • Help implement all new requirements on the new Data Platform (no new feature additions to the BI ETL)
  • Help separate the BI ETL pipelines into buckets:

- What needs to be migrated to the new data platform
- What needs to be handled by other systems (e.g. NetSuite) or teams (e.g. Finance)
- What needs to be deprecated / discarded
  • Help answer why we are using given Tableau dashboards / key metrics / pipelines, suggest refactoring options, and implement the best solution

Technical Skills:
  • Bachelor's degree in computer science or applied mathematics
  • 7+ years of software development in startups or companies working on big data technologies
  • Working experience with Redshift and knowledge of best practices for tuning Redshift's performance
  • Advanced knowledge of Bash scripts and experience in refactoring them
  • Advanced knowledge of SQL and experience writing, restructuring, and documenting complex queries
  • Advanced knowledge of Python and experience writing Python scripts for data pipelines
  • Advanced knowledge of AWS Big Data services, such as EMR, Glue, Athena
  • Working experience orchestrating data pipelines with tools like Airflow
  • Experience with stream-processing systems such as Apache Spark Streaming, Amazon Kinesis, or Apache Kafka
  • Working experience with RDS, Elasticsearch, Lambda functions, EC2, S3
  • Working knowledge of messaging and data pipeline tools like SNS, SQS
  • Working experience with logging and monitoring tools like Elasticsearch Service, Cloudwatch
  • Working experience integrating data pipelines with tools like Slack, Google Chat, or PagerDuty
  • Knowledge of infrastructure as code and CloudFormation
  • Working experience in automating the deployment and operation of data pipelines
  • Amazon Web Services certification(s)
  • Experience working with Tableau

Nice to have:
  • Experience working with NoSQL databases like Apache Solr, MongoDB
  • Knowledge of HDFS, Flume, Hive, and MapReduce
  • Experience with a data warehouse tool such as Google BigQuery or Snowflake

Industry Experience:
  • You have been an integral part of a team working with structured, semi-structured, and unstructured large data sets from real-time and batch streaming data feeds.
