Data Scientist Analyst

Optimal, Detroit, MI (United States)

Category: Programming

Posted on: 20 Nov 2020

Apache Hadoop SQL

Job Description

Position Description:

Responsible for building the data monitoring solution for enterprise data sources/products that are fully or partially transformed, decoded and standardized to support data operational activities. Must work with all data types across the enterprise, including Mobility & Vehicle data. Continuously increase data monitoring coverage by working closely with engineers, data scientists and other subject matter experts to understand and evaluate their data/KPI requirements and deliver the features they need. Perform data analysis to derive KPI insights using tools such as Hive, Apache Pig, Hadoop, HDFS, MapReduce, Shark, PySpark, SQL and Alteryx. Build a scalable and robust end-to-end data monitoring solution, from data in tables to insights on dashboards.
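
As an illustration of the end-to-end monitoring work described above, the sketch below shows a minimal PySpark job that computes a few data-quality KPIs from a Hive table and persists them for a dashboard. The database, table and column names (enterprise.vehicle_telemetry, vin, event_ts, monitoring.kpi_snapshots) are hypothetical placeholders, not part of this posting.

```python
# Illustrative sketch only: a minimal PySpark data-monitoring job of the kind
# described above. All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("data-monitoring-kpis")
    .enableHiveSupport()  # read source tables registered in the Hive metastore
    .getOrCreate()
)

# Hypothetical source table of decoded, standardized vehicle data.
df = spark.table("enterprise.vehicle_telemetry")

# Basic data-quality KPIs: row count, null rate on a key column, and freshness.
kpis = df.agg(
    F.count("*").alias("row_count"),
    (F.sum(F.when(F.col("vin").isNull(), 1).otherwise(0)) / F.count("*"))
        .alias("vin_null_rate"),
    F.max("event_ts").alias("latest_event_ts"),
)

# Persist a timestamped KPI snapshot so a dashboard (e.g. QlikView) can read it.
(
    kpis.withColumn("run_ts", F.current_timestamp())
    .write.mode("append")
    .saveAsTable("monitoring.kpi_snapshots")
)
```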

Skills Required:

Candidates should have experience with using data analysis tools such as:

    • Hive, PySpark, Apache Pig, Hadoop, SQL, QlikView, ETL tools
    • Post-graduate degree in Engineering / Computer Science or academic equivalent
    • Experience working within a complex business environment, including at least 5 years in a single function, with a deep understanding of the information constructs of that business
    • Demonstrated experience and expertise in conceptual thinking about how to apply information solutions to a business challenge
    • Experience applying problem-solving capabilities
    • Proven analytics capability to robustly examine large data sets and highlight patterns, anomalies, relationships and trends
    • Self-starter, demonstrating high levels of data integrity
    • Excellent data and statistical analysis skills in Excel
    • Ability to manage deliverables according to a robust project plan

Experience Required:

    • Minimum of 5 years of experience in a Data Engineering role creating data products, writing code/queries/scripts and building data visualizations
    • Minimum of 3 years of experience in data design, data architecture and data modeling (both transactional and analytic)
    • Minimum of 2 years of experience with Hadoop Big Data technology (HDFS, Hive, PySpark, Oozie, QlikView, etc.), especially PySpark, Hive, QlikView and Oozie experience transforming, visualizing and scheduling workflows

Education Required:

    • Bachelor's Degree in Computer Science or a related field from an accredited college or university
    • Master's Degree in Computer Science or a related field from an accredited college or university preferred



Job Source: Ziprecruiter (Will expire by: 2021-01-04 00:00:00)
