Job Location: Hyderabad
Education: Not Mentioned
Salary: Not Disclosed
Industry: Aviation / Airline
Functional Area: General / Other Software
Employment Type: Full-time
About Uber

We're changing the way people think about transportation. Not that long ago we were just an app to request premium black cars in a few metropolitan areas. Now we're part of the logistical fabric of more than 600 cities around the world. Whether it's a ride, a sandwich, or a package, we use technology to give people what they want, when they want it. For the people who drive with Uber, our app represents a flexible new way to earn money. For cities, we help strengthen local economies, improve access to transportation, and make streets safer. And that's just what we're doing today. We're thinking about the future, too. With teams working on autonomous trucking and self-driving cars, we're in for the long haul. We're reimagining how people and things move from one place to the next.

The Risk Platform team is seeking a data engineer with experience in large-scale system implementation, with a focus on complex data pipelines. The candidate must be able to design and drive large projects from inception to production. The right person will work with analysts to gather requirements and translate them into a data engineering roadmap. Must be a great communicator, a team player, and a technical powerhouse.

What You'll Need

- Proficiency with databases; SQL expertise is required
- Proficiency in Spark/MapReduce development and expertise with data processing (ETL) technologies is required
- Experience in developing large-scale data warehousing and data modeling, mining, or analytic systems
- Experience in high-level programming languages such as Java, Scala, or Python
- Knowledge of Hadoop-related technologies such as HDFS, Azkaban, Oozie, Impala, Hive, and Pig
- Aptitude to independently learn new technologies
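The role's emphasis on data processing (ETL) pipelines can be illustrated with a minimal sketch in plain Python. This is a hypothetical toy example, not Uber's actual pipeline: the role itself calls for Spark/MapReduce at scale, and all function names, field names, and sample data below are invented for illustration.

```python
def extract(rows):
    """Extract: parse raw CSV-style records into dicts (hypothetical schema)."""
    return [dict(zip(("trip_id", "city", "fare"), r.split(","))) for r in rows]

def transform(records):
    """Transform: cast fares to float and drop malformed rows."""
    out = []
    for rec in records:
        try:
            rec["fare"] = float(rec["fare"])
        except ValueError:
            continue  # skip records with an unparseable fare
        out.append(rec)
    return out

def load(records, table):
    """Load: append cleaned records to an in-memory 'table'."""
    table.extend(records)
    return table

# Invented sample data: one row is malformed and gets filtered out.
raw = ["t1,Hyderabad,120.5", "t2,Pune,notanumber", "t3,Mumbai,80"]
table = []
load(transform(extract(raw)), table)
print(len(table))  # prints 2: two valid rows survive
```

In a production setting the same extract/transform/load stages would typically be expressed as Spark jobs over distributed storage (HDFS) and orchestrated by a scheduler such as Oozie or Azkaban, as the requirements above mention.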
Keyskills: Python, SQL, Hadoop, Informatica, Risk, Java, Scala, Oozie, Data Engineering, Commercial Models, Data Warehousing, Programming Languages, Data Modeling, Large Projects, System Implementation, Data Processing