Sr. Big Data Engineer

Posted 10 Oct 2018

Grid Dynamics

San Jose, CA, United States


bash java hadoop

We are looking for experienced Big Data engineers to join our team in Milpitas, CA. In this position, you will work with one of the top 10 US retailers on a high-priority effort to build continuous integration, delivery, and quality-control processes.


Responsibilities:



  • Participate in the design and development of Big Data analytical applications

  • Design, support, and continuously enhance the project code base, continuous integration pipeline, etc.

  • Write complex ETL processes and frameworks for analytics and data management

  • Implement large-scale real-time streaming data processing pipelines (see the sketch after this list)

  • Work inside a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at a massive scale
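
For a flavor of the streaming work described above, here is a minimal, illustrative sketch of a Spark Structured Streaming job that counts Kafka events per minute. The broker address, topic name, and windowing choices are placeholders rather than project specifics, and the job assumes the spark-sql-kafka connector is on the classpath.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, window}

    object ClickstreamCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("clickstream-counts")
          .getOrCreate()

        // Subscribe to a Kafka topic; broker address and topic name are placeholders.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")
          .option("subscribe", "clickstream")
          .load()

        // Use the Kafka record timestamp as event time and count events per
        // one-minute window, tolerating up to five minutes of late data.
        val counts = events
          .select(col("timestamp").as("ts"))
          .withWatermark("ts", "5 minutes")
          .groupBy(window(col("ts"), "1 minute"))
          .count()

        // Emit updated counts to the console; a production job would write
        // to a durable sink such as Kafka or a data warehouse instead.
        val query = counts.writeStream
          .outputMode("update")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }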


Requirements:



  • Hadoop v2, MapReduce, HDFS on Google Cloud

  • Building stream-processing systems, using solutions such as Storm or Spark Streaming

  • Big Data querying tools, such as Pig, Hive, and Impala

  • Integration of data from multiple data sources (see the sketch after this list)

  • Spark, Scala, Java, Python, Bash, BigQuery, Azkaban, Airflow, and Dataflow

  • NoSQL databases, such as HBase, Cassandra, and MongoDB

  • Messaging systems, such as Kafka and RabbitMQ
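
As an illustrative sketch of integrating multiple data sources, the following batch ETL job joins raw order data on HDFS with a customer dimension queried from Hive via Spark SQL, then writes a partitioned, analytics-ready Parquet output. All paths, table names, and column names here are hypothetical.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    object DailyOrdersEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-orders-etl")
          .enableHiveSupport() // lets spark.sql() read Hive-managed tables
          .getOrCreate()

        // Source 1: raw order events landed on HDFS (path is a placeholder).
        val orders = spark.read.parquet("hdfs:///data/raw/orders")

        // Source 2: a customer dimension maintained as a Hive table
        // (database and table names are placeholders).
        val customers = spark.sql("SELECT customer_id, segment FROM dim.customers")

        // Join the two sources and write a partitioned curated dataset.
        orders
          .join(customers, Seq("customer_id"))
          .filter(col("status") === "COMPLETED")
          .write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("hdfs:///data/curated/orders_enriched")

        spark.stop()
      }
    }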


What will be a plus:



  • Knowledge of Unix-based operating systems (bash/ssh/ps/grep, etc.)

  • Experience with GitHub-based development processes

  • Experience with JVM build systems (SBT, Maven, Gradle; see the sketch after this list)
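
For reference, a minimal SBT build for Spark jobs like the sketches above might look as follows. Artifact versions and the project name are illustrative only, not requirements of the role.

    // build.sbt: a minimal SBT build for a Spark job.
    // Versions are illustrative; pin them to whatever the project actually uses.
    name := "bigdata-etl"
    version := "0.1.0"
    scalaVersion := "2.12.18"

    libraryDependencies ++= Seq(
      // "provided" because the Spark runtime supplies this on the cluster
      "org.apache.spark" %% "spark-sql" % "3.5.1" % "provided",
      // Kafka connector for Structured Streaming jobs
      "org.apache.spark" %% "spark-sql-kafka-0-10" % "3.5.1"
    )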


What we offer:



  • Work in the Bay Area with terrific customers on large, innovative projects.

  • A high-energy atmosphere in a rapidly and successfully growing company.

  • An attractive compensation package with generous benefits (medical, dental, vision, and life).

  • 401(k) and Section 125 pre-tax offerings (POP and FSA plans).

Job Source: Stack Overflow
