We are looking to hire a skilled Hadoop developer to help build our big data infrastructure and storage software. Your primary responsibility will be to design, build, and maintain Hadoop infrastructure. You may also be required to evaluate existing data solutions, write scalable ETLs, develop documentation, and train staff.
Job Title: Hadoop/Big Data Developer
Location: Jersey City, NJ (100% Day-1 onsite)
Position Type: Long-term
Responsibilities:
Meeting with the development team to assess the company’s big data infrastructure.
Designing and coding Hadoop applications to analyze data collections.
Creating data processing frameworks.
Extracting data and isolating data clusters.
Testing scripts and analyzing results.
Troubleshooting application bugs.
Maintaining the security of company data.
Creating data tracking programs.
Producing Hadoop development documentation.
Training staff on application use.
Requirements:
Bachelor’s degree in software engineering or computer science.
Previous experience as a Hadoop developer or big data engineer.
Advanced knowledge of the Hadoop ecosystem and its components.
In-depth knowledge of Hive, HBase, Pig, HDFS, and data pipelines.
Familiarity with MapReduce and Pig Latin scripts.
Knowledge of back-end programming languages and technologies, including JavaScript, Node.js, OOAD, Kafka, Spark, and Scala.
Familiarity with data loading tools including Sqoop and Flume.
High-level analytical and problem-solving skills.
Good project management and communication skills.
Required Skills: HDFS, Hadoop, Data Pipeline, Kafka, Spark, Scala, Pig, Hive, HBase