The Pindrop research team drives and enables ML usage across several domains, in heterogeneous language environments, and at every stage of a project's life cycle: ad-hoc exploration, training data preparation, model development, and robust production deployment. The team invests in continual innovation of the ML infrastructure, carefully orchestrating an ongoing cycle of learning, inference, and observation while maintaining high system availability and reliability. We continually seek new ways to meet the ever-growing need for ML and experimentation.
who you are
- 4+ years of relevant software engineering experience
- Familiarity with developing and deploying Spark and ML pipelines
- Expertise in multiple programming languages such as Python, Go, or C++
- Knowledge of Docker and container orchestration frameworks such as Kubernetes
- Ideally, experience with AWS managed services such as S3, ElasticSearch, and DynamoDB
- Hands-on experience with big data technologies such as Airflow and Jupyter
- Experience building microservices and RESTful APIs
what you'll do
- Design and develop infrastructure for the full machine learning cycle, including workflow orchestration and management interfaces, data discovery tools, data quality checks, and feature libraries
- Design and develop backend microservices for large-scale distributed systems using gRPC or REST
- Guide multi-faceted projects with engineers from diverse backgrounds, with heterogeneous skills, and across teams
- Develop applications in Golang and Python on top of a modern cloud-focused platform
- Work with large-scale distributed data processing systems, cloud infrastructure such as AWS, and container systems such as Docker and Kubernetes
- Drive and maintain a culture of quality, innovation, and experimentation
- Deliver production-ready code from start to finish
- Work in an Agile environment alongside software engineers, test engineers, research scientists, and product managers