Job Description
Key Responsibilities:
- Provide the data engineering capabilities needed to ingest and sustain new data sets
- Build data processes supporting data transformation, data structures, metadata, and dependency management across modern data platforms
- Work with developer tools for planning, CI/CD, and Cloud Foundry
- Collect and transform unstructured data into actionable insights (ETL, machine learning, statistical tools, mathematics); see the Python sketch after this list
- Provide deep expertise on data sources, required transformations, quality, consistency, velocity, and access
- Establish relationships with key business partners/points of contact to accelerate issue resolution
- Develop and maintain self-service capabilities for business KPI drill-down needs (MOLAP, ROLAP)
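
To ground the unstructured-data bullet above, a minimal Python sketch of that kind of ETL step follows. The log format, the latency_ms field, and the per-service summary are illustrative assumptions rather than details of this role; a real pipeline would read from actual sources and land results in a warehouse.

```python
# Minimal sketch: parse unstructured log lines into structured records,
# then aggregate them into a simple per-service summary. The log format
# and the latency KPI are hypothetical.
import re
import statistics
from collections import defaultdict

LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2}) (?P<service>\w+) latency_ms=(?P<latency>\d+)"
)

def parse_lines(lines):
    """Extract (service, latency) records from free-form lines; skip non-matches."""
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            yield m.group("service"), int(m.group("latency"))

def latency_summary(records):
    """Aggregate per-service latencies into mean, max, and record count."""
    by_service = defaultdict(list)
    for service, latency in records:
        by_service[service].append(latency)
    return {
        svc: {"mean": statistics.mean(vals), "max": max(vals), "n": len(vals)}
        for svc, vals in by_service.items()
    }

if __name__ == "__main__":
    raw = [
        "2024-05-01 checkout latency_ms=120",
        "2024-05-01 checkout latency_ms=95",
        "malformed line that is skipped",
        "2024-05-01 search latency_ms=40",
    ]
    print(latency_summary(parse_lines(raw)))
```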
Essential Requirements:
- Education (Bachelor’s degree or higher) in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
- Strong hands-on experience in Hadoop development.
- Strong hands-on experience in Hive and HiveQL (Hive Query Language); a query sketch follows this list.
- Hands-on experience writing SQL queries.
- Good understanding of HDFS.
- Good understanding of the Hadoop ecosystem.
- 6+ years’ experience in data or BI engineering, dealing with large, complex data scenarios.
- Database development (MS SQL, SSAS, SSIS, Hadoop) is a must.
- Nice to have: knowledge of Python scripting to manipulate and ingest data.
- Proven ability to work with varied data infrastructures, including relational databases, Hadoop, and file-based storage solutions.
- Expertise in optimizing queries and ETL tasks involving large amounts of data.
- Ability to work in a global team with remote leaders and peers.
- Self-reliant and capable of working both independently and as a member of a team.
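
To illustrate the Hive/HiveQL, SQL, and Python items above, here is a minimal sketch using PyHive against HiveServer2. The host, user, and the sales.orders table (assumed to be partitioned by order_date) are hypothetical. Filtering on the partition column is the standard first step in optimizing Hive queries over large data, since it lets the engine prune partitions instead of scanning the whole table.

```python
# Minimal sketch of running an aggregation on Hive via PyHive.
# Host, credentials, and the sales.orders table are illustrative assumptions.
from pyhive import hive

# Filtering on the partition column (order_date) allows partition pruning,
# so Hive reads only the partitions in range rather than the full table.
QUERY = """
SELECT order_date,
       SUM(amount) AS total_amount,
       COUNT(*)    AS order_count
FROM sales.orders
WHERE order_date BETWEEN '2024-01-01' AND '2024-01-31'
GROUP BY order_date
"""

def daily_totals(host="hive.example.com", port=10000, user="etl_user"):
    """Run the aggregation on HiveServer2 and return rows as plain tuples."""
    conn = hive.Connection(host=host, port=port, username=user)
    try:
        cursor = conn.cursor()
        cursor.execute(QUERY)
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for order_date, total_amount, order_count in daily_totals():
        print(order_date, total_amount, order_count)
```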