This is an animated, visual and spatial way to learn data structures and algorithms.

- Our brains process different types of information differently - evolution has wired us to absorb information best when it is visual and spatial, i.e. when we can close our eyes and see it
- More than most other concepts, data structures and algorithms are best learnt visually. They are incredibly easy to learn when you can see them, and very hard to understand most other ways
- This course has been put together by a team with tons of everyday experience in thinking about these concepts and using them at work at Google, Microsoft and Flipkart

What's Covered:

- Big-O notation and complexity
- Stacks
- Queues
- Trees
- Heaps
- Graphs and Graph Algorithms
- Linked lists
- Sorting
- Searching

Using discussion forums

Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(

We're super small and self-funded with only 2 people developing technical video content. Our mission is to make high-quality courses available at super low prices.

The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.

We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.

It is a hard trade-off.

Thank you for your patience and understanding!

Who is the target audience?

- Yep! Computer Science and Engineering grads who are looking to really visualise data structures, and internalise how they work
- Yep! Experienced software engineers who are looking to refresh important fundamental concepts

- Basic knowledge of programming is assumed, preferably in Java

- Visualise - really vividly imagine - the common data structures, and the algorithms applied to them
- Pick the correct tool for the job - correctly identify which data structure or algorithm makes sense in a particular situation
- Calculate the time and space complexity of code - really understand the nuances of the performance aspects of code

A short intro to this course and what you can expect at the end of the course.

Data structures and Algorithms have a symbiotic relationship. The choice of data structure significantly influences the algorithms' performance and complexity and vice versa. Also learn about abstract data types and how they relate to data structures.

What is the performance of your code? How do you measure this? What is complexity and what is its relationship with performance?

The Big O notation is used to express complexity based on the size of the input specified for any algorithm. We see how Big O is expressed and how it is calculated, with many examples to drive the concepts home!

The Big O notation becomes much clearer when you practice finding the complexity of some sample pieces of code. Let's see how many of these you get right!
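As a warm-up, here are two small snippets with their complexities worked out. These are my own illustrative examples, not the course's exercises:

```java
// Two illustrative snippets with their Big-O complexity annotated.
class ComplexityExamples {
    // O(n): the loop body runs once per element.
    static int sum(int[] a) {
        int total = 0;
        for (int x : a) total += x;
        return total;
    }

    // O(n^2): the inner loop body runs n(n-1)/2 times in total,
    // once for every unordered pair (i, j).
    static int countPairs(int n) {
        int count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                count++;
        return count;
    }
}
```

Dropping the constant factor in n(n-1)/2 and keeping the dominant term is exactly the simplification Big O makes.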

Linked lists are just one way to implement lists. Linked lists are less interesting in Java than in other programming languages such as C and C++, which require the developer to manage memory.

However lists are useful and a very core data structure so it makes sense to start off this class by understanding how we can set up a linked list in Java.

A few basic problems working with lists should give you a good idea of how to traverse a linked list, add elements to a list and count the number of elements in a list.
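A minimal sketch of those basic operations (the class and method names here are mine, not necessarily the course's):

```java
// A minimal singly linked list: add at the head, then traverse to count.
class LinkedListDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Adding at the head is O(1): the new node simply points at the old head.
    static Node addAtHead(Node head, int data) {
        Node node = new Node(data);
        node.next = head;
        return node;
    }

    // Counting requires a full traversal: O(n).
    static int count(Node head) {
        int n = 0;
        for (Node cur = head; cur != null; cur = cur.next) n++;
        return n;
    }
}
```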

The source code attached to this lecture has solutions for even more linked list based problems which are not covered in this lecture.

Linked lists and arrays solve the same kind of problems, holding a list or a collection. When would you use one over the other? Learn how you can make an informed choice.

The stack is a very simple and easy to understand data structure. However it lies underneath many complicated real world problems and is incredibly useful.

Let's build a stack for real using Java. It'll have all the operations we're interested in - push, pop, peek, size etc. It can hold any data type, it's a generic class.
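A sketch of such a generic stack, backed by a linked list (a simplified version of what the lecture builds; details and names may differ from the course's code):

```java
import java.util.NoSuchElementException;

// A generic stack: push, pop, peek and size, all O(1).
class SimpleStack<T> {
    private static class Node<T> {
        T data;
        Node<T> next;
        Node(T data, Node<T> next) { this.data = data; this.next = next; }
    }

    private Node<T> top;
    private int size;

    public void push(T item) {
        top = new Node<>(item, top);
        size++;
    }

    public T pop() {
        if (top == null) throw new NoSuchElementException("Stack is empty");
        T data = top.data;
        top = top.next;
        size--;
        return data;
    }

    public T peek() {
        if (top == null) throw new NoSuchElementException("Stack is empty");
        return top.data;
    }

    public boolean isEmpty() { return top == null; }
    public int size() { return size; }
}
```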

Problems which use stacks as a part of their solutions are very common in programming interviews. Matching parentheses to check for well-formed expressions is a classic interview question - let's solve it using the stack we've already implemented.
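The idea: push every opener, and on every closer check that it matches the most recent unmatched opener. A sketch using the JDK's ArrayDeque as the stack (the lecture uses its own stack class):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Check whether the brackets in a string are well formed.
class Brackets {
    static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '(': case '[': case '{':
                    stack.push(c);   // remember the opener
                    break;
                case ')':
                    if (stack.isEmpty() || stack.pop() != '(') return false;
                    break;
                case ']':
                    if (stack.isEmpty() || stack.pop() != '[') return false;
                    break;
                case '}':
                    if (stack.isEmpty() || stack.pop() != '{') return false;
                    break;
            }
        }
        return stack.isEmpty();  // any leftover opener means unbalanced
    }
}
```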

Another interview question implemented. You have space available but your processing needs to be very fast indeed. How would you keep track of the minimum element of a stack as it changes?
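One standard way to do this - not necessarily the exact solution in the lecture - is to trade extra space for speed by keeping a second stack of "minimum so far" values:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A stack that reports its minimum element in O(1) per operation.
class MinStack {
    private final Deque<Integer> stack = new ArrayDeque<>();
    private final Deque<Integer> mins = new ArrayDeque<>();

    void push(int x) {
        stack.push(x);
        // The new minimum is the smaller of x and the previous minimum.
        mins.push(mins.isEmpty() ? x : Math.min(x, mins.peek()));
    }

    int pop() {
        mins.pop();   // keep both stacks in lock step
        return stack.pop();
    }

    int min() { return mins.peek(); }
}
```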

The queue belongs to the same linear data structure family as the stack but its behavior is very different. Queues are much more intuitive as there are plenty of real world examples where a queue is the fair and correct way of processing.

A common, fast but slightly tricky implementation of the queue is the array where the last element wraps around to the first. An interview favorite, let's see how to implement the circular queue.
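The wrap-around is just modular arithmetic on the array indices. A sketch of a fixed-capacity circular queue (the course's implementation may track its indices differently):

```java
import java.util.NoSuchElementException;

// A fixed-capacity circular queue: indices wrap around the backing array.
class CircularQueue {
    private final int[] items;
    private int head = 0;   // index of the oldest element
    private int size = 0;

    CircularQueue(int capacity) { items = new int[capacity]; }

    void enqueue(int x) {
        if (size == items.length) throw new IllegalStateException("Queue is full");
        items[(head + size) % items.length] = x;  // wraps past the end
        size++;
    }

    int dequeue() {
        if (size == 0) throw new NoSuchElementException("Queue is empty");
        int x = items[head];
        head = (head + 1) % items.length;
        size--;
        return x;
    }

    boolean isEmpty() { return size == 0; }
}
```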

We know the stack, and we know the queue. This problem brings them together. It's possible to mimic the behavior of a queue using 2 stacks in the underlying implementation. Let's write the most efficient code possible to make this work.
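The efficient version moves elements between the stacks lazily: each element crosses over at most once, so enqueue and dequeue are amortized O(1). A sketch (using JDK stacks rather than the course's own class):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A queue built from two stacks: pushes go to "in", pops come from "out".
class QueueOfStacks {
    private final Deque<Integer> in = new ArrayDeque<>();
    private final Deque<Integer> out = new ArrayDeque<>();

    void enqueue(int x) { in.push(x); }

    int dequeue() {
        if (out.isEmpty()) {
            // Reversing "in" onto "out" restores first-in-first-out order.
            while (!in.isEmpty()) out.push(in.pop());
        }
        return out.pop();
    }
}
```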

A sorting algorithm is not just defined by its complexity, there are a whole bunch of other characteristics which can be used to determine which sorting algorithm is the right one for a system. Let's understand what these characteristics are and what are the trade offs we might make.

Closely allied with selection sort is bubble sort. It's an adaptive sort with the same time complexity as selection sort.

Insertion sort is an improvement over both bubble sort and selection sort. Let's see how exactly it works and why it's preferred in many cases.
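The core idea in a few lines (a textbook sketch, not the course's exact code): grow a sorted prefix one element at a time, shifting larger elements right to make room.

```java
// Insertion sort: O(n^2) worst case, but O(n) on already-sorted input,
// which is why it's preferred for small or nearly-sorted arrays.
class InsertionSort {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Shift elements of the sorted prefix that are bigger than key.
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;   // drop key into its slot
        }
    }
}
```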

Shell sort builds on top of insertion sort; it improves the complexity of its running time by partitioning the list in a clever way.

Merge sort belongs to a class of algorithms which use divide and conquer to break the problem set into smaller pieces. It also makes a time-space trade-off to get a faster running time.

Quick sort is the sort of choice for developers of programming libraries. Let's see what makes it so attractive.

Binary search is a pretty nifty way to search through a sorted list in O(log N) time. It's also an interview favorite so make sure you understand it well!
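A sketch of the standard iterative version: every comparison halves the remaining search range, which is where the O(log N) comes from.

```java
// Binary search over a sorted array: returns the index of target, or -1.
class BinarySearch {
    static int search(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // avoids overflow of lo + hi
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;  // search the right half
            else hi = mid - 1;                       // search the left half
        }
        return -1;
    }
}
```

The `lo + (hi - lo) / 2` midpoint is a classic interview detail: the naive `(lo + hi) / 2` can overflow for very large arrays.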

The binary tree is an incredibly useful hierarchical data structure. Many other, more complex data structures, use the binary tree as the foundation. Let's see what a binary tree looks like and learn some simple terminology associated with the tree.

Traversing a binary tree can be done in a variety of ways. The breadth first traversal visits and processes nodes at every level before moving on to the next. Let's visualize breadth first traversal and see how it's implemented.

Depth first traversal can be of 3 types based on the order in which the node is processed relative to its left and right sub-trees. Pre-order traversal processes the node before processing the left and then the right sub-trees.

Depth first traversal can be of 3 types based on the order in which the node is processed relative to its left and right sub-trees.

In-order traversal processes the left subtree, then the node itself and then its right subtree. Post-order traversal processes the node *after* its left and right subtrees.

The algorithms are all remarkably similar and very easy once you use recursion.
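Side by side, the three recursive traversals differ only in where the node itself is visited (a sketch with a minimal node type of my own; the course's classes may be structured differently):

```java
import java.util.List;

// Pre-, in- and post-order traversal: same recursion, different visit point.
class TreeTraversal {
    static class Node {
        int data;
        Node left, right;
        Node(int data, Node left, Node right) {
            this.data = data; this.left = left; this.right = right;
        }
    }

    static void preOrder(Node n, List<Integer> out) {
        if (n == null) return;
        out.add(n.data);            // node first...
        preOrder(n.left, out);
        preOrder(n.right, out);
    }

    static void inOrder(Node n, List<Integer> out) {
        if (n == null) return;
        inOrder(n.left, out);
        out.add(n.data);            // ...node in the middle...
        inOrder(n.right, out);
    }

    static void postOrder(Node n, List<Integer> out) {
        if (n == null) return;
        postOrder(n.left, out);
        postOrder(n.right, out);
        out.add(n.data);            // ...node last.
    }
}
```

For a binary *search* tree, the in-order traversal visits the values in sorted order - a fact several later problems rely on.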

A Binary Search Tree is a binary tree with specific constraints which make it very useful in certain operations. Learn what a BST is and how we can use it.

Insertion and Lookup are operations which are very fast in a Binary Search Tree. See how they work and understand their performance and complexity.

Find the minimum value in a binary search tree, find the maximum depth of a binary tree and mirror a binary tree. Learn to solve these problems recursively and see implementation details.

Count the number of structurally unique binary trees that can be built with N nodes, print the nodes within a certain range in a binary search tree and check whether a certain binary tree is a binary *search* tree. Learn to solve these problems and understand the implementation details.

Priority Queues allow us to make decisions about which task or job has the highest priority and has to be processed first. Common operations on a Priority Queue are **insertion, accessing the highest priority element and removing the highest priority element.**

The Binary Heap is the classic, most efficient implementation of the Priority Queue.

The Binary Heap is logically a Binary Tree with specific constraints. Constraints exist on the value of a node with respect to its children and on the shape of the tree. The **heap property** and the **shape property** determine whether a Binary Tree is really a Heap.

The Binary Heap may logically be a tree, however the most efficient way to implement it is using an array. Real pointers from parent to child and from child to parent become implicit relationships on the indices of the array.
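Concretely, with 0-based array indices the implicit relationships are just arithmetic (the course may instead use 1-based indexing, which shifts the formulas slightly):

```java
// Parent/child "pointers" of an array-backed heap, as index arithmetic.
class HeapIndex {
    static int leftChild(int i)  { return 2 * i + 1; }
    static int rightChild(int i) { return 2 * i + 2; }
    static int parent(int i)     { return (i - 1) / 2; }
}
```

Because the shape property guarantees a complete tree, the array has no holes: node i's children, if they exist, are always at these indices.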

How do we ensure that, when we add an element to or remove an element from an existing heap, the heap property and shape property are maintained? This operation is called Heapify.

Once we understand heapify, adding and removing elements from a heap become very simple.
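The downward half of heapify, sketched for a min heap (the course's version may be structured differently): if a node is larger than one of its children, swap it with the smaller child and repeat until the heap property holds.

```java
// Sift-down heapify for a min heap stored in an array.
class MinHeapify {
    static void siftDown(int[] heap, int size, int i) {
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, smallest = i;
            if (left < size && heap[left] < heap[smallest]) smallest = left;
            if (right < size && heap[right] < heap[smallest]) smallest = right;
            if (smallest == i) return;   // heap property restored
            int tmp = heap[i];
            heap[i] = heap[smallest];
            heap[smallest] = tmp;
            i = smallest;                // continue from the swapped child
        }
    }
}
```

Each iteration moves one level down the tree, so heapify is O(log N) - which is what makes adding and removing heap elements fast.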

Back to sorting. The Heap Sort uses a heap to transform an unsorted array into a sorted array. Phase I is converting the unsorted array into a heap.

Phase II actually outputs the final sorted array. It involves removing the elements from the heap and placing them in a sorted array. The cool thing is that all of this can be done **in-place**.

Let's practice heap problems! Use the heap property to find the largest element in a minimum heap and the K largest elements in a stream.

The graph is a data structure that is used to model a very large number of **real world problems**. It's also a **programming interview favorite**. The study of graphs and algorithms associated with graphs forms an entire field of study called graph theory.

Edges in a graph can be directed or undirected. A graph with directed edges forms a **Directed Graph** and one with undirected edges forms an **Undirected Graph**. These edges can be likened to one-way and two-way streets.

Different relationships can be modeled using either Directed or Undirected graphs. When a graph has no cycles it's called an **acyclic graph**. A connected, undirected graph with no cycles is basically a tree.

There are a number of different ways in which graphs can be implemented. However they all follow the same basic **graph interface**. The graph interface allows building up a graph by adding edges and traversing a graph by giving access to all adjacent vertices of any vertex.

An adjacency matrix is one way in which a graph can be represented. The **graph vertices are rows and columns** of the matrix and the **cell value shows the relationship** between the vertices of a graph.

The adjacency list and the adjacency set are alternate ways to represent a graph. Here the connection between the vertices is represented using either a **linked list** or a **set**.

Compare the adjacency matrix, adjacency list and the adjacency set in terms of the **space** and **time complexity** of common operations.

Common traversal methods of trees apply to graphs as well. There is an additional wrinkle with graphs, **dealing with cycles and with unconnected graphs**. Otherwise the algorithms are exactly the same as those we use to traverse trees.

Topological sort is an ordering of vertices in a graph where a vertex comes before every other vertex to which it has outgoing edges. A mouthful? This lecture will make things easy to follow. Topological sort is widely used in real world problems.
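One standard way to compute it - Kahn's algorithm, which may or may not be the formulation this lecture uses - repeatedly outputs a vertex with no remaining incoming edges. A sketch with an adjacency list over vertices 0..n-1:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Topological sort via Kahn's algorithm; returns null if the graph has a cycle.
class TopoSort {
    static List<Integer> sort(int n, List<List<Integer>> adj) {
        int[] inDegree = new int[n];
        for (List<Integer> edges : adj)
            for (int v : edges) inDegree[v]++;

        Deque<Integer> ready = new ArrayDeque<>();
        for (int v = 0; v < n; v++)
            if (inDegree[v] == 0) ready.add(v);   // no prerequisites left

        List<Integer> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            int u = ready.poll();
            order.add(u);
            for (int v : adj.get(u))              // "remove" u's outgoing edges
                if (--inDegree[v] == 0) ready.add(v);
        }
        // If a cycle exists, some vertices never reach in-degree 0.
        return order.size() == n ? order : null;
    }
}
```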

Graphs with simple edges (directed or undirected) are unweighted graphs. The distance table is an important data structure used to find the shortest path between any two vertices on a graph. This is used in almost every shortest path algorithm.

Visualize the shortest path algorithm using the distance table, step by step.

So far we have only dealt with unweighted graphs. Graphs whose edges have weights associated with them are widely used to model real world problems (traffic, length of path etc).

A greedy algorithm is one which tries to find the local optimum by looking at what is the next best step at every iteration. It does not look at the overall picture. It's best used for optimization problems where the solution is very hard and we want an approximate answer.

The standard algorithm for finding the shortest path in a weighted graph is a greedy algorithm.

Dijkstra's algorithm is a greedy algorithm to find the shortest path in a weighted graph.
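A compact sketch of Dijkstra's algorithm over an adjacency matrix (0 meaning "no edge"), using the JDK's PriorityQueue; the course's implementation and its distance table will look different, but the greedy choice - always settle the closest unvisited vertex next - is the same. Note it assumes non-negative edge weights, as Dijkstra requires.

```java
import java.util.Arrays;
import java.util.PriorityQueue;

// Dijkstra's shortest-path distances from a source vertex.
class Dijkstra {
    static int[] shortestDistances(int[][] weights, int source) {
        int n = weights.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        // Entries are {vertex, distance}, ordered by distance (greedy choice).
        PriorityQueue<int[]> pq =
                new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
        pq.add(new int[]{source, 0});

        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[0];
            if (top[1] > dist[u]) continue;          // stale entry, skip
            for (int v = 0; v < n; v++) {
                if (weights[u][v] == 0) continue;    // no edge u -> v
                int candidate = dist[u] + weights[u][v];
                if (candidate < dist[v]) {           // found a shorter path
                    dist[v] = candidate;
                    pq.add(new int[]{v, candidate});
                }
            }
        }
        return dist;
    }
}
```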

A weighted graph can have edge weights which are negative. Dealing with negative weights has some quirks, which are handled by the Bellman-Ford algorithm.

Visualize how the Bellman-Ford algorithm works to find the shortest path in a graph with negative weighted edges.

If a graph has a negative cycle then it's impossible to find a shortest path as every round of the cycle makes the path shorter!

A minimal spanning tree is a tree which covers all the vertices of the graph and has the lowest cost. Prim's algorithm is very similar to Dijkstra's shortest path algorithm with a few differences.

The minimal spanning tree is used when we want to connect all vertices at the lowest cost; it's not the shortest path from source to destination.

Let's see how we implement Prim's algorithm in Java.

Kruskal's algorithm is another greedy algorithm to find a minimal spanning tree.

Given a course list and pre-reqs for every course, design a course schedule so pre-reqs are done before the courses.

Find the shortest path in a weighted graph where the number of edges also determines which path is shorter.