Are you a talented and knowledgeable data engineer who wants your work to have a positive impact on the world?
At InnovaSea, we are making sustainable use of our ocean and freshwater ecosystems possible, and our VEMCO electronic fish tagging technology is used by researchers worldwide to help them understand the lives of fish, other aquatic animals, and their ecosystems. Check us out at www.vemco.com or on Twitter @vemcoteam, where you can see tweets from customers using our products in the field.
The volume of data collected by researchers using our technology is growing rapidly, and we have committed to helping them learn more from it, faster and more easily. We are looking for a talented data engineer who shares our passion for this work, can hit the ground running, and wants to get in on the ground floor of a major new development and shape its future.
- Develop a deep understanding of the existing and future data needs of our business and customers
- Architect, develop, test, and maintain software systems that enable the acquisition, management, querying, and analysis of data, to meet current and future needs
- Choose the most effective mix of technologies, tools, and services for developing a given system, based on broad, current knowledge of the evolving data engineering landscape
- Support software developers, data analysts, data scientists, and other users of our data systems to ensure their success
- Minimum of a Bachelor’s degree in Computer Science or another scientific field, or equivalent work experience
- 5+ years of work experience in a data engineering role
- Experience with a variety of commercial and open source database types (e.g. relational, NoSQL, column-store)
- The ability to develop and debug portable, high-performance C, C++, and Python code on Linux, Windows, and Mac
- The ability to write software that interfaces with data-collection instruments
- Experience with the UNIX/Linux environment, its utilities, and shell programming
- Familiarity with Data Science using Python and/or R
- The ability to read and write binary data to and from files and network connections
- A good understanding of the machine representation of different data types
- Experience working with large scientific data and metadata
- Experience with AWS cloud services (e.g. EC2, S3, Redshift)
- Experience with Big Data tools such as Hadoop, Spark, and Kafka