Job Description
Position: Lead Big Data Engineer
Location: Jersey City, NJ (Hybrid)
Experience: 10+ Years
Work type: Full-time (W2) only
PETADATA is currently looking to hire for the position of Lead Big Data Engineer for one of their clients.
Roles & Responsibilities:
- The ideal candidate should be able to define the overall strategy and roadmap for data-driven decision-making and drive the implementation of Big Data technology solutions.
- Provide technical leadership and guidance towards creating cutting-edge technology solutions for the Bank.
- Must be able to research, evaluate, and adopt new technologies, tools, and frameworks for high-volume data processing.
- Establish standard processes for data mining, data modeling, and data production.
- Lead and guide teams in developing optimized, high-quality code deliverables, maintaining continual knowledge management, and adhering to organizational guidelines and processes.
- Participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews, and unit test plan reviews.
- Understand requirements, create and review designs, validate the architecture, and ensure a high level of service to clients in the technology domain.
- Define guidelines and best practices to drive consistency across client implementations.
- Work with the architecture engineering team to ensure quality solutions are implemented, and engineering best practices are adhered to.
- Work closely with cross-functional leaders (development, product management, business stakeholders, sales, etc.) to ensure a clear understanding of the solution to be built.
Required Skills:
- The ideal candidate should have 10+ years of experience with data warehouses, SQL, and a cloud platform (Azure) for application development.
- Must have in-depth technical knowledge of Hive, Spark, Kafka, streaming, SQL, the Hadoop platform, and Python.
- Experience required in data architecture, data quality, metadata management, ETL, analytics, reporting, and database administration.
- Should be an expert in ETL/pipeline development using tools such as Azure Databricks and Azure Data Factory, with expertise in both batch and real-time data integration.
- Hands-on experience with major components of the Hadoop ecosystem, such as Spark, HDFS, Hive, Sqoop, Kafka, and streaming, is a must.
- Experience working with structured and unstructured data and with orchestration tools.
- Should be an expert in relational data processing technologies such as MS SQL, Delta Lake, Spark SQL, and SQL Server, as well as version control with GitHub.
- The ideal candidate should be capable of handling more than one project in parallel.
- Must have strong data analysis skills: the ability to identify, analyze, and integrate large, complex data sources (internal and external data providers) into a readily consumable data product.
- Should have extensive knowledge of cloud platforms such as Azure and AWS.
- Good oral and written communication skills are required.
Educational Qualification:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
We offer a professional work environment and every opportunity to grow in the information technology world.
Note:
Candidates are required to attend phone, video, or in-person interviews. Upon selection, candidates must complete background checks covering education and experience.
Please email your resume to: swaroopb@petadata.co
After a careful review of your experience and skills, one of our HR team members will contact you regarding next steps.