Multidimensional arrays play a significant role in problem-solving, especially in computer programming and data manipulation tasks. They provide a way to organize and store data in a structured manner with multiple dimensions. Each dimension represents a different level of indexing, allowing you to access and manipulate data using various combinations of indices.

A previous article gave a basic introduction to arrays and multidimensional arrays in general. Because multidimensional arrays are more complex than one-dimensional ones, a more hands-on article seemed worthwhile. To help us better understand this concept, we will dissect a problem involving the search for a value in a 2D matrix, with an interesting twist.

Brace yourselves, for we're about to unravel the journey of developing an efficient algorithm to search through a multidimensional array, a.k.a. a matrix.

Our challenge will begin with a 2D matrix, an array of arrays if you will. But this matrix comes with a set of intriguing rules: each row is arranged in ascending order, and the first element of each row is greater than the last element of the previous row. This peculiar arrangement has a sort of orderly elegance to it, like climbing up a ladder of numbers.

```python
# Example 2D matrix
matrix = [
    [1, 3, 5],
    [7, 9, 11],
    [13, 15, 17]
]
```

The task at hand is to design an algorithm that can efficiently locate a specific value within this matrix. While the matrix's row-by-row sorting provides a hint, the fact that it's split into different rows adds an extra layer of complexity to the problem.

As seasoned problem solvers might recognize, a sorted array often calls for the employment of the binary search technique.

If you are not familiar with binary search, let's talk about this super useful trick. Imagine you have a long list of numbers in order, and you want to find a particular number. Instead of checking each number one by one, you can split the list in half and decide whether the number you're looking for is in the left half or the right half. You keep cutting the possibilities in half until you find your target. You do the continuous cutting using a while loop. This trick is like a treasure hunt that gets quicker and quicker as you go.

Binary search:

```python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if arr[mid] == target:
            return True
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return False
```

`left` and `right` represent the starting point and the ending point of the array respectively. The loop runs as long as `left` is not greater than `right`. We begin our search from the middle (`mid`) because we want to disregard the half of the array our target cannot be in. We do this by comparing our target with the middle item: if our target is greater than the middle item, we know it cannot be found in the first half, and vice versa, so we move our starting or ending point as the case may be.

We can then use the binary search helper above for each row of our matrix:

```python
def search_matrix(matrix, target):
    rows, cols = len(matrix), len(matrix[0])
    for row in range(rows):
        if binary_search(matrix[row], target):
            return True
    return False
```

This approach conducts a binary search on each row. By treating each row as an independent, sorted array, we explore every row in turn. If we find the target in any row, we can celebrate our success. But if we search through all the rows and don't find our number, we can be sure it's not in the grid. This strategy results in an O(M log N) solution, where M represents the number of rows and N stands for the number of columns.
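To sanity-check the row-by-row approach, here is a small self-contained sketch using Python's standard-library `bisect` module, whose `bisect_left` performs the same binary search as our helper; `row_by_row_search` is just an illustrative name, not part of the code above:

```python
from bisect import bisect_left

def row_by_row_search(matrix, target):
    # Binary-search each sorted row independently: O(M log N) overall.
    for row in matrix:
        i = bisect_left(row, target)  # index where target would be inserted
        if i < len(row) and row[i] == target:
            return True
    return False

matrix = [[1, 3, 5], [7, 9, 11], [13, 15, 17]]
print(row_by_row_search(matrix, 9))   # True
print(row_by_row_search(matrix, 4))   # False
```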

Visualizing each row as an isolated entity undergoing its binary search might appear straightforward. Yet, the observant eye discerns an opportunity for further optimization. After all, isn't there a way to exploit the matrix's inherent structure?

In the quest for optimization, a promising prospect is to imagine that the matrix has been flattened into a single array. Instead of looking at each row separately, we can picture all the rows joined together in one long line. Then we can use our binary search trick just as we would on a single row of numbers, yielding a logarithmic solution.

With this new way of thinking, the puzzle starts to make sense. But we need to figure out where the numbers from each row would sit in our long line. We can do this by calculating a midpoint and then using **division and the modulo operation** to work out which row and column the midpoint corresponds to.

Flattened array analogy:

```python
def search_matrix(matrix, target):
    rows, cols = len(matrix), len(matrix[0])
    left, right = 0, rows * cols - 1
    while left <= right:
        # calculate the midpoint between left and right
        mid = left + (right - left) // 2
        # division and modulo convert the flat index into row and column
        mid_value = matrix[mid // cols][mid % cols]
        if mid_value == target:
            return True
        elif mid_value < target:
            left = mid + 1
        else:
            right = mid - 1
    return False
```

To treat the matrix as a single-line array, we need a start point (`left`) and an end point (`right`). The imaginary flat array contains every cell of the matrix, so its length is the number of rows multiplied by the number of columns. Since indexing starts at zero, the end point is `rows * cols - 1`.

Continuing with our analogy, the flattened array in our case is [1, 3, 5, 7, 9, 11, 13, 15, 17]. Suppose at some step the midpoint (`mid`) works out to 5. There is no index 5 in our matrix itself; because of its partitioned nature, we can only reach a value through a combination of row and column indices. For instance, to print out 11, which sits in the second row and third column, we use **matrix[1][2]** and not just index 5 — even though in the flattened array, 11 would be at index 5.

That is why we have to convert 5 into row and column indices using the division and the modulo operation. We divide by the number of columns because the matrix is stored row by row (row-major); for a column-major layout we would divide by the number of rows instead. Here, **5 // 3 = 1** while **5 % 3 = 2**. The quotient gives our row and the remainder our column, so the midpoint corresponds to **matrix[1][2]**. Then we can do our comparison and repeat the loop until the target is found.
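The conversion can be sketched on its own with Python's built-in `divmod`, which returns the quotient and remainder in a single call:

```python
# Map a flattened index back to (row, col) for a matrix with 3 columns.
cols = 3
for flat_index in range(9):
    row, col = divmod(flat_index, cols)  # same as (flat_index // cols, flat_index % cols)
    print(flat_index, "->", (row, col))

# e.g. flat index 5 maps to row 1, column 2 — matrix[1][2] == 11 in our example.
```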

What we end up with is a snappy algorithm that zooms through the matrix with style. It's like a dance between binary search and some math magic, resulting in a solution that's both efficient and, dare we say, pretty darn cool.

Even though nothing much has changed and we are still using our binary search, this innovative perspective transforms the problem into an O(log(M × N)) solution.

And there you have it! We've tackled the challenge of searching for a number in a special grid. We learned about binary search, a powerful tool for finding things quickly in sorted lists. We explored how to use it on each row and even figured out a way to make our search even faster by treating all the rows as if they were in a single line. The process of dissecting, analyzing, and iterating unveils the hidden beauty in the realm of code. In the end, the satisfaction of unraveling such enigmas lingers as a testament to the boundless potential of human creativity and computational prowess. Now you can carry this solution idea into many matrix/multidimensional array problems.

Remember, learning about these cool techniques is like adding tools to your problem-solving toolbox. The world of arrays and matrices might seem complex, but with a little bit of creativity and some clever tricks, you can conquer any challenge that comes your way. So keep exploring, keep practicing, and keep having fun solving those amazing coding problems!

---

Just like we can do things differently to achieve the same goal, different programs can be written to solve the same problem. But as a software engineer, you need to implement or choose the best solution — hence the need for a way to measure how efficient a solution to a problem is.

In software programming, we consider good solutions in terms of the resources they use, and in general, we are only concerned about two types of resources: The time they take to run and the amount of space they take in memory. These are referred to as Time complexity and Space complexity respectively. The best practice is to build the solution that uses the least amount of time to execute and the minimal amount of space in memory.

Now you might wonder, if we are only concerned about the time and space they take, why not use a time counter or stopwatch to measure the time, and also find out space consumed in bytes?

Differences in computer hardware and environment affect the execution time of a program. We can expect a modern laptop to run programs much faster than computers from the 80s. Even if we run the same program multiple times on the same computer, there will still be some variation in how long it takes to finish. This is partly the result of the background services a computer runs continuously, which can affect the execution of a program, making it really hard to pin down a consistent, exact amount of time a program takes to run.

We definitely don't want our conclusion on execution time to be biased and subject to the computer used. Hence a need for a much more defined representation of the efficiency of a program.

Big O Notation is a way to mathematically represent the time and space complexity of a program. We use Big O as a concept to judge which solution is better than the other with respect to the resources they use without subjecting them to any external determinant or concrete units.

This means that, with big O, we won't be measuring execution time in terms of milliseconds or seconds and space in terms of bytes, kilobytes, or megabytes. Instead, we use big O to analyze time and space complexity in terms of how it scales with the size of the input. That is; How well a program executes as its input grows.

It is also worth noting that with Big O, we only care about the worst-case scenario, i.e., the worst-case performance of the program. The worst-case time complexity indicates the longest running time performed by an algorithm given any input of a given size. Also note that 'input size' is denoted by 'n', which represents the number of elements in the input data.

The different mathematical notations of time and space complexity, referred to as Big O notations, are as follows:

- O(1)
- O(n)
- O(log n)
- O(n log n)
- O(n^2)
- O(2^n)
- O(n!)

For a brief explanation:

O(1), pronounced 'O of 1', denotes '**Constant Time**' complexity. This literally means that a program runs in constant (the same) time regardless of any increase in the input size. For example, the time a computer takes to run the program on an input of size 2 is the same time it takes on an input of size 1000.

O(n), 'O of n', is '**Linear Time**' complexity. This means that the time it takes for a program to run grows in proportion to the input size, i.e., the algorithm performs on the order of 'n' operations.
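As a minimal illustration of constant versus linear time (the helper names are hypothetical):

```python
def first_item(items):
    # O(1): a single index lookup, regardless of how long the list is.
    return items[0]

def total(items):
    # O(n): touches every element once, so work grows linearly with len(items).
    result = 0
    for x in items:
        result += x
    return result

print(first_item([4, 8, 15, 16]))  # 4
print(total([4, 8, 15, 16]))       # 43
```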

O(log n) is '**Logarithmic Time**' complexity. This should remind you of your log table, right? O(log n) basically means that time goes up linearly as the input size goes up exponentially. So if it takes 1 second to process 10 elements, it will take 2 seconds for 100 elements, 3 seconds for 1000 elements, and so on — just like log 10 = 1 and log 100 = 2. This usually happens when we have to repeatedly divide our input data in half.
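A quick sketch of that halving pattern — the loop below runs roughly log₂(n) times:

```python
def halving_steps(n):
    # Counts how many times n can be halved before reaching 1 — about log2(n).
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(16))    # 4
print(halving_steps(1024))  # 10 — the input grew 64x, the work only 2.5x
```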

O(n log n), the '**Linear-logarithmic Time**' complexity, is the combination of linear and logarithmic time complexity. Algorithms that repeatedly divide a set of data in half, and then process those halves independently with a sub-algorithm that has a time complexity of O(n), will have an overall time complexity of O(n log n).

O(n^2), pronounced 'O of n squared', is '**Quadratic Time**' complexity. This means the running time grows in proportion to the square of the input size — doubling 'n' roughly quadruples the work, a pattern typical of nested loops over the same data.
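For instance, a nested loop that builds every pair of elements does n × n units of work — a typical quadratic pattern (illustrative sketch):

```python
def all_pairs(items):
    # O(n^2): the nested loops pair every element with every element.
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs

print(len(all_pairs([1, 2, 3])))        # 9  (3 * 3)
print(len(all_pairs([1, 2, 3, 4, 5, 6])))  # 36 — doubling n quadrupled the work
```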

O(2^n), '**Exponential Time**' complexity, denotes an algorithm whose running time doubles with each addition to the input size, i.e., for every increase of one in the input size (n), the time the program takes to run doubles.
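The classic example is the naive recursive Fibonacci, where each call spawns two more calls — a sketch, not an implementation you would use in practice:

```python
def fib(n):
    # Naive recursion: each call branches into two more, so the number of
    # calls roughly doubles with every increase in n — O(2^n) time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```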

O(n!), the '**Factorial Time**' complexity, is what I like to call the 'Oh No!' complexity. It is the worst running time an algorithm can take here. Recall that a factorial is the product of all positive integers up to n; for example, the factorial of 5, or 5!, is 5 × 4 × 3 × 2 × 1 = 120. This means that an algorithm running in factorial time grows by multiplying the work by an ever-increasing factor as 'n' grows.
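Listing every ordering of a collection is the textbook factorial-time task; `itertools.permutations` from the standard library makes the growth easy to see:

```python
from itertools import permutations

def all_orderings(items):
    # There are n! ways to order n items, so merely listing them is O(n!).
    return list(permutations(items))

print(len(all_orderings([1, 2, 3])))      # 6  (3! = 6)
print(len(all_orderings([1, 2, 3, 4])))   # 24 (4! = 24)
```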

Depending on which of the above categories your program's (algorithm's) time complexity falls into, you can immediately tell how fast or slow it is going to execute. Hopefully, we will get to look at some algorithms and analyze their time and space complexity in subsequent articles.

With that, I hope you enjoyed this article and learned a few things about Big O. If you did, kindly like, subscribe, and share. If you want to look at some examples, I suggest the developerinsider article on Big O with examples.

Happy Learning!

---

In this episode, we will be learning about the ARRAY, one that you are probably more familiar with. We use arrays to store a list of items of the same type sequentially, e.g., strings, numbers, objects, or any other data type.
The most important terms to understand in the concept of arrays are **element** and **index**. A clear explanation of these can be seen in the visualization below.

Array elements are stored sequentially in memory and addressed by index; for instance, if the first item is stored at index 0, the second item will be stored at index 1, and so on. Storing data in this manner makes it easy for the computer to figure out exactly where in memory it should look when performing a lookup. For this very reason, looking up an element in an array by its index is super fast. The time complexity is O(1): it does not involve any loop or complex logic; the computer just goes to the address and retrieves the value.

Removing an item, however, depends on where you remove it from. If we want to remove the last element, that is pretty straightforward: we can quickly look it up by index and clear the memory at the last position, with no other work required. That gives us O(1), constant time — our best-case scenario. But to remove the first item, at the beginning of the array, we have to shift every remaining item one step to the left to fill the gap. The more items we have, the more shifting we must do. This is the worst-case scenario, and since we calculate algorithm complexity from the worst case, deletion is an O(n) operation.
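Python lists behave exactly this way: removing from the end is cheap, while removing from the front forces a shift. A small sketch with made-up values:

```python
numbers = [10, 20, 30, 40, 50]

numbers.pop()     # remove the last item: O(1), nothing has to shift
print(numbers)    # [10, 20, 30, 40]

numbers.pop(0)    # remove the first item: O(n), every element shifts left
print(numbers)    # [20, 30, 40]
```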

Insertion is also O(n) in cases where we have to copy the entire list to a new location to do the inserting (when the size is already fixed). A new element can be added at the beginning, at the end, or at any given index of the array. Just like deleting from the beginning, adding at the beginning or at a given index involves shifting the existing elements to make room. Therefore, in situations where we don't know in advance how many items we want to store, or where we need to add or remove many items from the list, arrays don't perform well. In those cases, we consider other data structures.

Updating an array takes constant time because no loop is required: we only need to perform a lookup with the index and change the element at that index.

A search operation, however, can take different forms. You can search linearly, looping through the entire array, which results in O(n) time complexity. Or you can follow a divide-and-conquer approach (binary search), which takes O(log n) — that, however, requires that your elements are sorted.
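A minimal sketch of the linear approach:

```python
def linear_search(arr, target):
    # O(n): in the worst case we examine every element before answering.
    for value in arr:
        if value == target:
            return True
    return False

print(linear_search([27, 31, 42, 26, 10], 42))  # True
print(linear_search([27, 31, 42, 26, 10], 99))  # False
```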

In a nutshell, arrays will come in handy when we have to perform a look-up. We might need to consider a better data structure in cases where we need the best time complexity when it comes to other operations.

Now that we have discussed the implications of the major operations on an array — insertion, updating, deletion, and searching — it is important to mention that arrays come in different forms: they can be one-dimensional or multi-dimensional.

Simply put, a one-dimensional array is the default array with just one row, while multi-dimensional arrays can have multiple rows and columns. Multi-dimensional arrays are also a good data structure for solving algorithm problems. But, just so this episode is not too long, we will call it an episode here and leave that discussion for a subsequent article.

---

One of the most important decisions we have to make in formulating a computer solution to a problem is the choice of appropriate data structures. An inappropriate choice of data structure can lead to a clumsy, inefficient, and difficult implementation, while an appropriate choice usually leads to a simpler, more transparent, and more efficient one. This means the key to effectively solving many problems boils down to making appropriate choices about the associated data structure.

A small change in data organization can have a significant influence on the algorithm required to solve a problem. Hence the need to re-analyze and get to know our data structures better. That being said, let's take a look at some major factors that determine when to use each data structure.

To have an idea of when you should use any of these data structures, you need to understand how the major traversal operations apply to them. This information is useful when deciding on the appropriate data structure for your algorithm.

For instance, in linear data structures, each element is connected to one or two other elements (the next and the previous), and traversal is linear. This means that insertion, deletion, and search work in O(n). Arrays, linked lists, stacks, and queues are all examples of linear data structures. In non-linear data structures — the exact opposite — each element can be connected to several other elements, and traversal is not linear; hence, search, insertion, and deletion can work in O(log n) or even O(1) time. Trees, graphs, and hash tables are all non-linear data structures.

When selecting a data structure to solve a problem, you should follow these steps.

Analyze your problem to determine the basic operations that must be supported. What are you going to do often? Examples of basic operations include inserting a data item into the data structure, deleting a data item from the data structure, and finding a specified data item.

Quantify the resource constraints for each operation. i.e., consider the amount of data you'd have to store or if it is predictable.

Next, consider what is more important. For instance, if search speed matters more than insertion speed, you'd likely use an ordered array, and vice versa.

Select the data structure that best meets these requirements.

Now, let's consider an example using the same list of numbers from our previous article: 27, 31, 42, 26, 10, 44, 35, 19, 33, 14. Let's try to find the maximum difference between the numbers.

To choose a data structure, while following the steps above; the basic operation would be to find the maximum and minimum numbers to know their difference.

Now, let's say we know that the amount of data cannot be more than 100, then our data is predictable. We can use a data structure like an array or linked list.

Considering what is more important further narrows our choice of data structure. In this case, since we are looking for the minimum and maximum values, searching matters most. So an ordered array is the right data structure to use.
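Under those assumptions, the whole task reduces to a few lines — a sketch using the article's numbers:

```python
numbers = [27, 31, 42, 26, 10, 44, 35, 19, 33, 14]

# With the data kept sorted, the minimum and maximum sit at the two ends,
# so finding the difference is a pair of O(1) lookups.
ordered = sorted(numbers)
max_difference = ordered[-1] - ordered[0]
print(max_difference)  # 34 (44 - 10)
```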

When next you need to use a data structure, remember: only when a data structure is well matched to its algorithm can we expect high performance.

I don't mean for this particular episode to be too long. In our subsequent article, we shall, in no particular order, have a look at some traversal operations on one of our data structures as well as their pros and cons.

---

First, let's start with what an algorithm is.

What is an Algorithm?

Algorithms are step-by-step instructions for solving a problem. An algorithm identifies what is to be done (the instructions) and the order in which it should be done. It can be represented using pseudocode or a flowchart.

For example:

The algorithm for making a cup of tea might look something like this:

1. Fill the electric kettle with water.
2. Bring to a boil.
3. Pour the water into a cup.
4. Put the teabag in the cup.
5. Steep for about 3 minutes.
6. Remove the teabag.

This can eventually be translated into computer instructions using a programming language.

Consider a more definite example, like finding the maximum value in a list of numbers, say: 27, 31, 42, 26, 10, 44, 35, 19, 33, 14.

Merely scanning through this set of numbers, you can immediately spot the largest value, but a computer cannot scan-search the way humans do. Even humans cannot come up with the answer at a glance when there is a lot of data.

A computer can only compare two things at a time, i.e., the algorithm must be expressed in terms of binary comparisons.

So then, the linear approach a computer will take to look for the largest value might look something like this:

1. Read the first item and store its value as the max.
2. Look at each other item in the list; if it is greater, its value becomes the new max.
3. After going through the entire list, the current max is the largest value in the list.
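The steps above translate almost line for line into Python — a sketch, with comments mapping back to each step:

```python
def find_max(numbers):
    max_value = numbers[0]        # read the first item, store it as max
    for value in numbers[1:]:     # look at each other item in the list
        if value > max_value:     # if it is greater, it becomes the new max
            max_value = value
    return max_value              # after the whole list, max is the answer

print(find_max([27, 31, 42, 26, 10, 44, 35, 19, 33, 14]))  # 44
```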

For a better understanding of how data structures come into play, let's look at another example: determining whether a list contains a given value, say 33 in the previous list of numbers.

Let's come up with an algorithm to solve this, which might look something like this:

1. Keep the given value as the target.
2. Look at each value in the list.
3. If one is equal to the target, then we have found the value and we can stop looking.
4. If we go through the entire list and have not found the target, then it is not in the list.
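The steps above can be sketched directly in Python:

```python
def contains(numbers, target):
    for value in numbers:      # look at each value in the list
        if value == target:    # found it — we can stop looking
            return True
    return False               # went through the whole list without finding it

print(contains([27, 31, 42, 26, 10, 44, 35, 19, 33, 14], 33))  # True
print(contains([27, 31, 42, 26, 10, 44, 35, 19, 33, 14], 30))  # False
```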

This seems effective, right? But if the list is very long, it can take the computer a very long time to look through it all (this is called execution time). How much execution time an algorithm takes is what is referred to as algorithm complexity.

Complexity is a way of expressing the number of steps or operations in an algorithm. It gives us an idea of how long it will take for an algorithm to execute.

We naturally expect an algorithm to take longer as input increases, but how much longer?

Complexity is therefore expressed as a function of the number of elements in the input data.

So, when we analyze algorithms:

- We consider the number of operations that need to be performed.
- We also consider complexity in the worst case, so we can see how the number of operations changes as the input size increases.

For example:

We could stop when we find the target in the example above, but what happens when we have to look through every item in the list? If the number of items grows, we have to do more comparisons across the entire list in cases where the target is not there — say, searching for 30. This is the worst case for this algorithm.

Well, what can we do better? What if the items in the list were ordered?

Consider our previous example: 27, 31, 42, 26, 10, 44, 35, 19, 33, 14. Ordered, it becomes: 10, 14, 19, 26, 27, 31, 33, 35, 42, 44.

In the ordered version, searching for 33 becomes faster.

With this structure of data, searching for an item that is not in the list, say 30, becomes even easier. The computer does not have to search through the entire list; we can stop as soon as the next value compared is greater than the target we are looking for. It can stop looking the moment it gets to 31, because 31 is greater than our target 30.
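The early-exit idea can be sketched like this, assuming the list is already in ascending order:

```python
def contains_sorted(numbers, target):
    # numbers must be in ascending order for the early exit to be valid.
    for value in numbers:
        if value == target:
            return True
        if value > target:   # everything after this is even larger — stop early
            return False
    return False

ordered = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
print(contains_sorted(ordered, 30))  # False — stops as soon as it sees 31
```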

This leads us to the Data structure.

A data structure is a data organization, management, and storage format that enables efficient access and modification. It provides a means to manage large amounts of data efficiently — a way of organizing data in memory so that it is easy to access.

There are many ways to store data in software engineering. Some are significantly better than others depending on the requirements — say, less memory, faster access, or ease of modification.

The following are some of the available data structures:

- Array
- List
- LinkedList
- ArrayList
- HashTables
- Dictionary
- Generic Collections
- Stack
- Queue
- Tree
- Graphs

This should now give you a clearer view of algorithms and data structures, even if this is the first time you've heard of them.

In our subsequent article, we will look at some of these data structures, and also a better algorithm to look for our given value in our earlier example.

We searched through the list using a linear approach. A linear algorithm is one in which the number of operations increases linearly with the input size. We shall also look at a better and faster approach called the Binary Search Algorithm.
