Prerequisites: Working with Oozie requires basic knowledge of the Hadoop ecosystem and experience running MapReduce jobs
Taught by a team which includes 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of practical experience working with large-scale data processing jobs.
Oozie is like the formidable, yet super-efficient admin assistant who can get things done for you, if you know how to ask
Let's parse that
formidable, yet super-efficient: Oozie is formidable because it is written entirely in XML, which is hard to debug when things go wrong. However, once you've figured out how to work with it, it's like magic. Managing complex dependencies, a multitude of jobs on different schedules, and entire data pipelines all become easy with Oozie
get things done for you: Oozie allows you to manage Hadoop jobs as well as Java programs, scripts and any other executable with the same basic setup. It manages your dependencies cleanly and logically.
if you know how to ask: Knowing the right configuration parameters that get the job done is the key to mastering Oozie (see the sketch below)
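For instance, most Oozie actions accept a <configuration> block of plain name/value properties. A minimal sketch, assuming your cluster has a scheduler queue named "default" (the property shown is a standard Hadoop one, not something specific to this course):

    <configuration>
        <!-- Route the launched Hadoop job to a specific scheduler queue -->
        <property>
            <name>mapred.job.queue.name</name>
            <value>default</value>
        </property>
    </configuration>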
What's Covered:
- Workflow Management: Workflow specifications, Action nodes, Control nodes, Global configuration, real examples with MapReduce and Shell actions which you can run and tweak
- Time-based and data-based triggers for Workflows: Coordinator specification, mimicking simple cron jobs, specifying time and data availability triggers for Workflows, dealing with backlog, running time-triggered and data-triggered Coordinator actions
- Data Pipelines using Bundles: Bundle specification, the kick-off time for bundles, running a bundle on Oozie
- Using discussion forums
- Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(
- We're super small and self-funded with only 2 people developing technical video content. Our mission is to make high-quality courses available at super low prices.
- The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.
- We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.
It is a hard trade-off.
Thank you for your patience and understanding!
Who is the target audience?
- Yep! Engineers, analysts and sysadmins who are interested in big data processing on Hadoop
- Nope! Beginners who have no knowledge of the Hadoop ecosystem
- Students should have basic knowledge of the Hadoop ecosystem and should be able to run MapReduce jobs on Hadoop
What will you learn?
- Install and set up Oozie
- Configure Workflows to run jobs on Hadoop
- Configure time-triggered and data-triggered Workflows
Basic Oozie component overview, and where Oozie fits in the Hadoop ecosystem.
Time to install Oozie and run some workflows. Do use the attached text file which has detailed instructions and all the commands you'll need.
Run a simple MapReduce job from the command line. If you're comfortable running MR jobs, you can simply skip this!
The attached zip file has a lot of MR examples; we just run the simplest one.
Workflows are the basic Oozie building blocks. A brief introduction to how Workflows work.
It's real when you can run stuff! Running our very first MapReduce Workflow on Oozie.
The actual code (well, it's XML, but that is code as far as Oozie is concerned)
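If you'd like a preview, here's a minimal sketch of a MapReduce Workflow definition. The app name, mapper/reducer classes and directory parameters below are placeholders, not necessarily what's used in the video:

    <workflow-app name="first-mr-workflow" xmlns="uri:oozie:workflow:0.5">
        <start to="mr-node"/>
        <action name="mr-node">
            <map-reduce>
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <configuration>
                    <!-- Old-API MapReduce properties: mapper, reducer, input and output -->
                    <property>
                        <name>mapred.mapper.class</name>
                        <value>org.myorg.WordCount$Map</value>
                    </property>
                    <property>
                        <name>mapred.reducer.class</name>
                        <value>org.myorg.WordCount$Reduce</value>
                    </property>
                    <property>
                        <name>mapred.input.dir</name>
                        <value>${inputDir}</value>
                    </property>
                    <property>
                        <name>mapred.output.dir</name>
                        <value>${outputDir}</value>
                    </property>
                </configuration>
            </map-reduce>
            <!-- Control nodes: where to go on success and on failure -->
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <kill name="fail">
            <message>MR job failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
        </kill>
        <end name="end"/>
    </workflow-app>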
Workflows have advanced control structures to determine which action to execute, and ways to specify global configuration that applies to all actions.
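A sketch combining both ideas, a <global> section plus a decision node; the action names, scripts and the 'day' property are invented for illustration:

    <workflow-app name="decision-demo" xmlns="uri:oozie:workflow:0.5">
        <!-- Settings in <global> are inherited by every action in the Workflow -->
        <global>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
        </global>
        <start to="day-check"/>
        <!-- A decision node is a switch/case: the first case whose EL predicate
             evaluates to true wins, otherwise control flows to <default> -->
        <decision name="day-check">
            <switch>
                <case to="weekend-job">${wf:conf('day') eq 'SAT' or wf:conf('day') eq 'SUN'}</case>
                <default to="weekday-job"/>
            </switch>
        </decision>
        <action name="weekend-job">
            <shell xmlns="uri:oozie:shell-action:0.3">
                <exec>weekend.sh</exec>
                <file>weekend.sh</file>
            </shell>
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <action name="weekday-job">
            <shell xmlns="uri:oozie:shell-action:0.3">
                <exec>weekday.sh</exec>
                <file>weekday.sh</file>
            </shell>
            <ok to="end"/>
            <error to="fail"/>
        </action>
        <kill name="fail">
            <message>${wf:errorMessage(wf:lastErrorNode())}</message>
        </kill>
        <end name="end"/>
    </workflow-app>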
Coordinators manage Workflows and run them at a specified time and frequency, provided the input data is available.
A time-triggered Coordinator is very similar to a Unix cron job
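For instance, a Coordinator that materializes a Workflow once an hour looks roughly like this; the dates and the app path are placeholders:

    <coordinator-app name="hourly-coord" frequency="${coord:hours(1)}"
                     start="2017-01-01T00:00Z" end="2017-12-31T00:00Z"
                     timezone="UTC" xmlns="uri:oozie:coordinator:0.4">
        <action>
            <workflow>
                <!-- The HDFS directory holding the workflow.xml to run -->
                <app-path>${nameNode}/user/oozie/apps/my-workflow</app-path>
            </workflow>
        </action>
    </coordinator-app>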
Oozie allows pretty fine-grained control over the running of Workflows: you can specify timeouts, throttling, concurrency and the execution order of Workflows materialized by the same Coordinator.
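These knobs all live in a <controls> block near the top of the <coordinator-app>; the values below are purely illustrative:

    <controls>
        <timeout>30</timeout>        <!-- minutes a materialized action may wait for its inputs before being discarded -->
        <concurrency>2</concurrency> <!-- how many actions may run at the same time -->
        <execution>FIFO</execution>  <!-- order for queued actions: FIFO, LIFO or LAST_ONLY -->
        <throttle>5</throttle>       <!-- how many actions may sit in WAITING at once -->
    </controls>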
Workflow actions might depend on input data. Coordinators can be configured so that Workflows are not launched until the right data is available for them. Such triggers are called data availability triggers.
A running example of a Coordinator which launches multiple Workflows, some of which have input data available and others which do not.
Configuring data input triggers is slightly tricky: we have to make sure we specify exactly the data instances that the Workflow is interested in (see the sketch below).
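Here's a sketch of a data-triggered Coordinator; the dataset name, paths and dates are placeholders. Each day's Workflow waits until that day's _SUCCESS flag appears:

    <coordinator-app name="daily-logs-coord" frequency="${coord:days(1)}"
                     start="2017-01-01T00:00Z" end="2017-02-01T00:00Z"
                     timezone="UTC" xmlns="uri:oozie:coordinator:0.4">
        <datasets>
            <!-- One dataset instance is expected per day -->
            <dataset name="logs" frequency="${coord:days(1)}"
                     initial-instance="2017-01-01T00:00Z" timezone="UTC">
                <uri-template>${nameNode}/data/logs/${YEAR}/${MONTH}/${DAY}</uri-template>
                <done-flag>_SUCCESS</done-flag>
            </dataset>
        </datasets>
        <input-events>
            <!-- current(0) is the instance matching this action's own nominal time -->
            <data-in name="todays-logs" dataset="logs">
                <instance>${coord:current(0)}</instance>
            </data-in>
        </input-events>
        <action>
            <workflow>
                <app-path>${nameNode}/user/oozie/apps/log-workflow</app-path>
                <configuration>
                    <property>
                        <name>inputDir</name>
                        <value>${coord:dataIn('todays-logs')}</value>
                    </property>
                </configuration>
            </workflow>
        </action>
    </coordinator-app>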
The Bundle kick-off time determines when the Bundle's Coordinators start running on Oozie.
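A minimal Bundle sketch with two Coordinators (the names, paths and date are placeholders); neither Coordinator starts before the kick-off time:

    <bundle-app name="pipeline-bundle" xmlns="uri:oozie:bundle:0.2">
        <controls>
            <kick-off-time>2017-01-01T00:00Z</kick-off-time>
        </controls>
        <coordinator name="ingest-coord">
            <app-path>${nameNode}/user/oozie/apps/ingest-coordinator</app-path>
        </coordinator>
        <coordinator name="report-coord">
            <app-path>${nameNode}/user/oozie/apps/report-coordinator</app-path>
        </coordinator>
    </bundle-app>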
Hadoop has 3 different install modes - Standalone, Pseudo-distributed and Fully Distributed. Get an overview of when to use each.
How to set up Hadoop in the standalone mode. Windows users need to install a Virtual Linux instance before this video.
Set up Hadoop in the Pseudo-Distributed mode. All Hadoop services will be up and running!
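The switch to pseudo-distributed mode boils down to a couple of XML edits in Hadoop's configuration directory. A minimal sketch; port 9000 and a replication factor of 1 are the usual single-node choices:

    <!-- core-site.xml: point the filesystem at a local HDFS daemon -->
    <configuration>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>

    <!-- hdfs-site.xml: a single node can only hold one replica of each block -->
    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>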
If you are unfamiliar with software that requires working with a shell/command-line environment, this video will be helpful for you. It explains how to update the PATH environment variable, which is needed to set up most Linux/Mac shell-based software.
Hadoop is built for Linux/Unix systems. If you are on Windows, you can set up a Linux virtual machine on your computer and use that for the install.