As discussed in the previous article, the essence of this series is to gain an in-depth understanding of each data structure, so that we can make the right choice when solving an algorithm problem. I have therefore picked an easier one to start with.
In this episode, we will be learning about the ARRAY, one that you are probably more familiar with. We use arrays to store a list of items of the same type sequentially, e.g., strings, numbers, objects, or any other data type. The most important terms to understand in the concept of arrays are element and index. A clear explanation of these can be seen in the visualization below.
Array elements are stored sequentially in memory and addressed by an index; for instance, if the first item is stored at index 0, the second item will be stored at index 1, and so on. Storing data in this manner makes it easy for the computer to figure out exactly where in memory it should go when performing a lookup. For this very reason, looking up an element in an array by its index is super fast. The time complexity is O(1); it does not involve any loop or complex logic; the computer simply computes the address and retrieves the element.
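As a minimal sketch, using a Python list to stand in for an array, an index lookup is a single step regardless of how many elements we store:

```python
# A small array of numbers (a Python list serves as our array here).
numbers = [10, 20, 30, 40, 50]

# Index lookup is O(1): the computer jumps straight to that position
# in memory, with no loop involved.
first = numbers[0]  # 10
third = numbers[2]  # 30
```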
Removing an item, however, depends on where you are removing it from. If we want to remove the last element, that is pretty straightforward: we look it up by index and clear that memory, and no other activity is required. That gives us O(1), a constant time complexity, which is our best-case scenario. But to remove the first item at the beginning of the array, we will have to shift every remaining item one position to the left to fill in the gap. The more items we have, the more shifting we have to do. This is the worst-case scenario, and since we describe an algorithm's complexity by its worst case, deletion is an O(n) operation.
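A small sketch of both cases, again with a Python list standing in for the array (note that `pop(0)` shifts every remaining element internally, even though it is a single call):

```python
items = ["a", "b", "c", "d"]

# Best case: removing the last element is O(1); nothing else moves.
items.pop()    # removes "d"

# Worst case: removing the first element is O(n); every remaining
# element shifts one position to the left to fill the gap.
items.pop(0)   # removes "a"; "b" and "c" shift left
```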
Insertion is also O(n) in cases where we have to copy the entire list to a new location to make room (when the array's size is already fixed). A new element can be added at the beginning, at the end, or at any given index of the array. Just like deleting from the beginning, adding at the beginning or at a given index involves shifting every subsequent item one position to the right. Therefore, in situations where we don't know in advance how many items we want to store, or when we need to add or remove a lot of items, arrays don't perform well. In those cases, we consider other data structures.
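A quick sketch of front insertion (Python lists handle the resizing and copying for us, but the shifting cost is still there):

```python
scores = [20, 30, 40]

# Inserting at index 0 is O(n): 20, 30, and 40 each shift one
# position to the right before 10 is written at the front.
scores.insert(0, 10)
# scores is now [10, 20, 30, 40]
```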
Updating an array takes constant time because no loop is required. We only need to perform a lookup with the index and change the element at that index.
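In code, an update is just a lookup followed by an overwrite, both O(1):

```python
letters = ["a", "b", "c"]

# O(1) update: jump to index 1 and overwrite the element in place.
letters[1] = "B"
```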
A search operation, however, can take different forms. You can either search in a linear fashion, which involves looping through the entire array and results in O(n) time complexity, or you can follow a divide-and-conquer approach (binary search), which takes O(log n) time. The latter, however, requires that your elements are sorted.
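The two approaches can be sketched side by side; this is a minimal illustration, and the helper names are mine rather than from any library:

```python
def linear_search(arr, target):
    """O(n): check every element until we find the target."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # not found

def binary_search(arr, target):
    """O(log n): repeatedly halve the search range.
    Requires arr to be sorted."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target lies in the right half
        else:
            hi = mid - 1   # target lies in the left half
    return -1  # not found

sorted_numbers = [3, 8, 15, 23, 42, 57]
```

On a sorted list of a million elements, binary search needs at most about 20 comparisons, while linear search may need a million.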
In a nutshell, arrays come in handy when we have to perform lookups. When other operations dominate and we need the best possible time complexity for them, we might need to consider a better data structure.
Now that we have discussed the implications of the major operations on an array (insertion, updating, deletion, and searching), it is important to mention that arrays can come in different forms: they can be one-dimensional or multi-dimensional.
Simply put, a one-dimensional array is the default array with just a single row, while multi-dimensional arrays can have multiple rows and columns. Multi-dimensional arrays are also good data structures for solving algorithm problems. But, just so this episode is not too long, we will call it an episode here and leave that discussion for the subsequent article.