Asante Financial Services Group is a high-impact digital financial services company primarily focused on Micro, Small and Medium-Sized Enterprises (MSMEs) and consumers in Sub-Saharan Africa. Incorporated in March 2016 in Mauritius, initially as a subsidiary of Atlas Mara LLC, Asante Financial Services Group has grown rapidly and now operates in Kenya, Rwanda, Uganda and Nigeria.
Asante FS aspires to become Africa’s leading digital financial services provider by facilitating financial independence, tapping into the best talent in the market, and harnessing the power of technology to deliver world-class lending services to our valued customers.
Job Position: Senior Data Engineer
Job Location: Lagos
Job Description:
- As a Senior Data Engineer, you will make data science possible. In this role you will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams.
- The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
- The Senior Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
- The Senior Data Engineer will lead and guide a team of junior- to mid-level Data Engineers in the day-to-day as well as strategic data engineering operations of the company, and must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
- The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities:
- Manage a team of direct reports consisting of junior- to mid-level Data Engineers.
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Drive the cloud infrastructure strategy for the Data Science team; this requires proficiency in AWS Cloud.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Airflow, SQL and other ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and regions.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Liaise across multiple regions, teams and clients to clarify the requirements for each task.
- Conceptualize and build infrastructure that allows big data to be accessed and analyzed.
- Refactor existing frameworks to optimize their performance.
- Test such structures to ensure that they are fit for use.
- Stay up to date with industry standards and technological advancements that will improve the quality of your output.
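As a rough illustration of the extraction, transformation and loading work described above, the sketch below chains three plain Python functions over an in-memory SQLite database. All names and sample records here are hypothetical; in production, an orchestrator such as Airflow would typically schedule each step as a separate task.

```python
import sqlite3

# Hypothetical minimal extract-transform-load pipeline.
# In production each function would usually run as an orchestrated
# task (e.g. an Airflow operator); here they are plain functions.

def extract():
    """Simulate pulling raw loan records from a source system."""
    return [
        {"customer": "A001", "amount": "1500", "currency": "KES"},
        {"customer": "B002", "amount": "300", "currency": "UGX"},
    ]

def transform(rows):
    """Normalize types and drop invalid records."""
    cleaned = []
    for row in rows:
        amount = float(row["amount"])
        if amount > 0:
            cleaned.append((row["customer"], amount, row["currency"]))
    return cleaned

def load(rows, conn):
    """Write cleaned records into a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS loans (customer TEXT, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO loans VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count = conn.execute("SELECT COUNT(*) FROM loans").fetchone()[0]
print(count)  # → 2
```

The same extract → transform → load shape scales from this toy example to the multi-source pipelines the role covers; the orchestration layer changes, the structure does not.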
Educational Qualifications:
- Bachelor’s or Master’s Degree in Data Engineering, Big Data Analytics, Computer Engineering or a related field.
Minimum Work Experience:
- Demonstrated proficiency (3-5 years’ experience) in several of the following:
  - Building and optimizing ‘big data’ pipelines, architectures and data sets.
  - Performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  - Data pipeline design and implementation (e.g. Airflow).
  - Data ETL processing.
  - Building processes supporting data transformation, data structures, metadata, dependency and workload management.
  - Manipulating, processing and extracting value from large, disconnected datasets.
  - Warehousing (data warehouse, data lake) design and implementation.
  - Cloud computing (AWS, Azure).
  - Databases (relational, NoSQL, graph databases, etc.).
  - Natural Language Processing and text analytics.
  - Graph/network analytics.
  - Big data (Hadoop, MapReduce, Spark, etc.).
  - Data visualization (Tableau, QlikView, Grafana, etc.).
- 3+ years’ experience in data engineering.
- Programming experience in one or more languages such as R, Python, C++ or SQL.
- Familiarity with Windows, Linux and macOS command-line environments, version control software (Git), and general software development.
- Experience in programming or scripting to enable ETL development.
- Advanced SQL knowledge, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
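To illustrate the kind of query authoring listed above, here is a small, hypothetical example: computing monthly customer-acquisition counts over a sample signups table, using Python’s built-in sqlite3 module. The table and data are invented for illustration only.

```python
import sqlite3

# Hypothetical signups table for a customer-acquisition metric.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (customer TEXT, signup_date TEXT)")
conn.executemany(
    "INSERT INTO signups VALUES (?, ?)",
    [("A001", "2023-01-15"), ("A002", "2023-01-20"), ("B001", "2023-02-03")],
)

# Group signups by calendar month to produce new-customer counts.
rows = conn.execute(
    """
    SELECT strftime('%Y-%m', signup_date) AS month,
           COUNT(*) AS new_customers
    FROM signups
    GROUP BY month
    ORDER BY month
    """
).fetchall()
print(rows)  # → [('2023-01', 2), ('2023-02', 1)]
```

The same aggregation pattern, written against a production warehouse rather than SQLite, underlies the customer-acquisition and operational-efficiency metrics mentioned in the responsibilities.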
How to Apply
Interested and qualified candidates should send their CV to: firstname.lastname@example.org using the Job Title as the subject of the mail.