Staff Data Engineer (Remote Eligible)
Description
This is an opportunity to join our fast-growing Engineering Data Science team to develop cutting-edge data pipelines and platforms that augment our product offerings in security, authentication, applications, and customer experience. We are looking for senior data engineers to architect and own the platform for deploying and optimizing the data lake and data streams used by our machine learning models to protect user authentication and security. You will also own the pipeline that processes hundreds of millions of events per day and feeds results back to the authentication system for real-time risk evaluation during user authentication. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security, and to expand that core competency across the rest of engineering.
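To make the shape of that pipeline concrete, here is a minimal sketch of a streaming risk-scoring loop. It assumes the kafka-python client; the topic names, event schema, and `score_event` placeholder are illustrative assumptions, not Okta's actual implementation.

```python
# Minimal sketch of a real-time risk-scoring loop, assuming the
# kafka-python client. Topic names, event schema, and the model
# are hypothetical placeholders, not Okta's actual pipeline.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "auth-events",                       # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

def score_event(event: dict) -> float:
    """Placeholder for a trained risk model; returns a score in [0, 1]."""
    return 0.9 if event.get("failed_attempts", 0) > 3 else 0.1

# Consume indefinitely, scoring each event and publishing the result
# so the authentication system can act on it in real time.
for message in consumer:
    event = message.value
    producer.send("risk-scores", {       # hypothetical output topic
        "session_id": event.get("session_id"),
        "risk_score": score_event(event),
    })
```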
We hope you will share our passion and great pride in the work we do and will join an engineering team that strongly believes in automated testing and an iterative process to build high-quality next-generation cloud platforms.
Our elite team is fast, innovative, and flexible. We expect great things from our engineers and reward them with stimulating new projects and emerging technologies.
Job Duties And Responsibilities
- Overall ownership of the data collection pipeline and data lake used for developing, deploying, and maintaining machine learning models in production
- Work with Data Scientists to help improve their productivity and implement their ideas
- Design and maintain data processing pipelines that support new decision and scoring models
- Analyze performance metrics and logs to identify inefficiencies and opportunities to improve scalability and performance
- SQL query tuning: complex query plan analysis and optimization, plus schema (re)design (see the query-plan sketch after this list)
- Research production issues using tools such as Splunk, Wavefront, CloudWatch, etc.
- Maintain and enhance our performance monitoring and analysis telemetry, frameworks, and tools
- Test-driven development, design reviews, and code reviews
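As a concrete example of the query-tuning work above, the following minimal sketch pulls an execution plan from a PostgreSQL-style database via psycopg2; the DSN, table, and query are hypothetical placeholders.

```python
# Minimal sketch of inspecting a query plan, assuming PostgreSQL and
# psycopg2. The DSN and the query itself are illustrative placeholders.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE executes the query and reports actual timings,
    # exposing sequential scans, misestimated rows, and missing indexes.
    cur.execute("""
        EXPLAIN ANALYZE
        SELECT user_id, count(*)
        FROM auth_events
        WHERE created_at > now() - interval '1 day'
        GROUP BY user_id
    """)
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```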
Minimum Required Knowledge, Skills, And Abilities
- 6+ years of experience building enterprise-grade, highly reliable, mission-critical software or big data systems
- 3+ years of experience in production SaaS deployment
- 3+ years of experience with streaming systems such as MQ, Kafka, Storm, or Spark (see the Structured Streaming sketch after this list)
- Expert-level understanding of relational databases (columnar and row-based) and NoSQL stores such as MongoDB, Cassandra, or similar
- Experience with AWS data toolchains: EMR, Kinesis, Redshift, and Glue
- Advanced Python programming
- Java or Scala development
- Experience deploying data streams for use by ML models in production environments with low-latency requirements
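To illustrate the streaming experience we're after, here is a minimal PySpark Structured Streaming sketch that computes per-minute event counts from a Kafka topic. The topic and broker names are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath; a real pipeline would parse the binary `value` column and feed features to a scoring model.

```python
# Minimal PySpark Structured Streaming sketch: windowed counts over a
# Kafka topic. Topic/broker names are placeholders, and the
# spark-sql-kafka-0-10 connector must be supplied (e.g. via --packages).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("auth-event-counts").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "auth-events")   # hypothetical topic
    .load()                               # columns include value, timestamp
)

# Count events per one-minute window keyed on Kafka's message timestamp.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```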
Preferred Skills
- Experience with Flink/KDA, Snowflake, Redis, and/or Elasticsearch
- Working knowledge of AWS SageMaker, Lambda, and API Gateway, including production deployment (see the Lambda handler sketch after this list)
- Experience with Docker, Terraform, Chef, Jenkins, or similar build, deployment, and infrastructure tools
- Jupyter Notebook Kernel maintenance
- IPython, TensorFlow, PyTorch
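As a sketch of what SageMaker-plus-Lambda production deployment can look like, the handler below forwards an API Gateway request to a SageMaker endpoint via boto3. The endpoint name and payload shape are hypothetical assumptions for illustration.

```python
# Minimal sketch of an AWS Lambda handler that fronts a SageMaker
# endpoint (e.g. behind API Gateway). The endpoint name and payload
# shape are hypothetical; boto3 is available in the Lambda runtime.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a string.
    payload = event.get("body") or "{}"
    response = runtime.invoke_endpoint(
        EndpointName="risk-model-prod",   # hypothetical endpoint name
        ContentType="application/json",
        Body=payload,
    )
    score = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(score)}
```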
Hard Skills
- Coding and Programming (Python, C#, Java, PHP, etc.)
- Data Analytics
- Operating Systems
- Software Development
Soft Skills
- Communication
- Leadership
- Adaptability
- Strategic thinking
- Collaboration