Big Data Developer (Hadoop, Kafka, Spark And/or Scala)
GTN Technical Staffing
Phoenix, AZ (United States)
Category: Programming
Posted on: 23 Feb 2021
Cloud DevOps Hadoop Python RabbitMQ SaaS SQL
Job Description
Staff Engineer - Data Engineering - Big Data
HIGHLIGHTS
Location: Scottsdale, AZ 85260 (REMOTE DURING COVID)
Position Type: 6-9 month contract with potential to convert to FTE
Hourly / Salary: Excellent
Residency Status: US Citizen, Green Card Holder OR GC-EAD ONLY (no third parties)
Overall Purpose
This position is a key role in the development, testing, and deployment of complex solutions.
Essential Functions
- Build data strategy for broad or complex requirements with insightful and forward-looking approaches that go beyond the direct team and solve large open-ended problems.
- Participate in the strategic development of methods, techniques, and evaluation criteria for projects and programs.
- Drive all aspects of technical and data architecture, design, prototyping, and implementation in support of both product needs and the overall technology data strategy.
- Provide leadership and technical expertise in building a technical plan and backlog of stories, then follow through on execution of the design and build process through to production delivery.
- Guide a broad functional area and lead efforts through the functional team members, along with the team’s overall planning.
- Represent engineering in cross-functional team sessions and present sound, thoughtful arguments to persuade others; adapt to the situation and draw from a range of strategies to influence people in a way that results in agreement or behavior change.
- Collaborate and partner with product managers, designers, and other engineering groups to conceptualize and build new features and create product descriptions.
- Actively own features or systems and define their long-term health, while also improving the health of surrounding systems.
- Assist Support and Operations teams in identifying and quickly resolving production issues.
- Develop and implement tests to ensure the quality, performance, and scalability of our application.
- Actively seek out ways to improve engineering and data standards, tooling, and processes.
- Support the company’s commitment to risk management and to protecting the integrity and confidentiality of systems and data.
Minimum Qualifications
- Education and/or experience typically obtained through a Bachelor’s degree in computer science or a related technical field.
- Ten or more years of relevant experience.
- Seven or more years of experience developing complex data platforms, distributed systems, SaaS, cloud solutions, and microservices.
- Six or more years of experience developing data warehouses and Big Data platforms (structured and unstructured), real-time and batch processing, and data standards.
- Four or more years of experience developing business intelligence solutions.
- Two or more years of experience developing and operationalizing artificial intelligence / machine learning models across the model development life cycle (implementing feature engineering, data pipelines, model operationalization, and model monitoring).
- Demonstrated experience delivering business-critical systems to market.
- Ability to influence and work in a collaborative team environment.
- Experience designing and developing scalable systems.
- Extensive experience implementing data warehouses (star/snowflake schemas) using SQL Server or equivalent; Big Data (HDFS, Elasticsearch); ETL process development using IBM InfoSphere or equivalent; and reusable frameworks.
- Experience implementing data science solutions using Python, Spark, PySpark, R, and DataRobot.
- Experience with event-driven architecture and messaging frameworks (Pub/Sub, Kafka, RabbitMQ, etc.).
- Working experience with cloud infrastructure (Google Cloud Platform, AWS, Azure, etc.).
- Knowledge of mature engineering practices (CI/CD, testing, secure coding, etc.).
- Knowledge of Software Development Lifecycle (SDLC) best practices, software development methodologies (Agile, Scrum, Lean, etc.), and DevOps practices.
- Must pass a background check and drug screen.
Preferred Qualifications
- MS or PhD.
- Experience using AI/ML model frameworks such as TensorFlow, SageMaker, Scikit, and PyCharm.
- Big Data platforms (Cloudera, S3).
- Database platforms (Oracle, SQL Server).
- Computer language experience (Python, PySpark, and R).
- Knowledge of Aerospike, Scality S3, and Elasticsearch.
- Monitoring and alerting systems experience (AppDynamics).
- Knowledge of ACH/EFT.
- Knowledge of real-time payment networks (RTP, FedNow).
- Experience developing and operationalizing artificial intelligence / machine learning models across the model development life cycle (implementing feature engineering, data pipelines, model operationalization, and model monitoring).
- FinTech experience.
- Kubernetes experience.
- Recent Big Data skills in Hadoop, Kafka, Spark, and/or Scala.
Company Description
GTN provides Scalable Technical Staffing solutions encompassing SOW, staff augmentation, and direct hire placement for Fortune 2000 companies, with niche service offerings in Cyber Security, Digital, Payroll Management, and Professional Services.
GTN Technical Staffing
Why Work Here?
Solid staffing company with over 21 years in the business!
Address
Scottsdale, AZ 85260, USA
Job Source: ZipRecruiter
(Expires: 2021-04-09)