# Data Structures: Fundamentals of Algorithms

An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a specific order to obtain the desired result. Algorithms are usually created independently of the underlying programming languages, so the same algorithm can be implemented in more than one language.

From the point of view of data structures, the following categories of algorithms are important:

- **Search** — the algorithm for finding an item in a data structure.
- **Sort** — the algorithm for arranging items in a specific order.
- **Insert** — the algorithm for inserting an element into a data structure.
- **Update** — the algorithm for changing an existing element in a data structure.
- **Delete** — the algorithm for removing an item from a data structure.

### Algorithm characteristics

Not all procedures can be called an algorithm. The algorithm must have the following characteristics:

- **Unambiguous.** The algorithm must be clear and unambiguous. Each of its steps, and each of its inputs and outputs, must be clear and lead to only one meaning.
- **Input.** The algorithm must have well-defined inputs; there may also be zero inputs.
- **Output.** The algorithm must have well-defined outputs that match the desired result.
- **Finiteness.** The algorithm must terminate after a finite number of steps.
- **Feasibility.** The algorithm must be feasible with the available resources.
- **Independence.** The algorithm must consist of step-by-step instructions that are independent of any program code.

### How to write an algorithm?

There are no clearly defined standards for writing algorithms; how you write one depends on the task and the available resources. Algorithms are never written to support a particular program code.

As you know, all programming languages have common basic code constructs, for example, loops (*do*, *for*, *while*), flow control (*if-else*), etc. These common constructs can be used to write an algorithm.
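To illustrate, the constructs mentioned above (a loop and a flow-control test) are enough to express a simple find-an-item algorithm from the first category. This sketch and its function name are our own, not from the original text:

```python
def find_max(items):
    """Return the largest element of a non-empty list."""
    largest = items[0]
    for item in items[1:]:   # loop construct
        if item > largest:   # flow-control (if) construct
            largest = item
    return largest

print(find_max([3, 7, 2, 5]))  # prints 7
```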

Algorithms are usually written step by step, but not always. Writing an algorithm is a process that is performed after a clear definition of the problem area. That is, you need to know the problem area for which the solution is being developed.

### Example

Let's try to learn how to write algorithms by example.

**Task:** develop an algorithm for adding two numbers and displaying the result.

- **Step 1** − START ADD
- **Step 2** − declare three integers **a**, **b** and **c**
- **Step 3** − define the values of **a** and **b**
- **Step 4** − add the values of **a** and **b**
- **Step 5** − save the result of step 4 in **c**
- **Step 6** − print **c**
- **Step 7** − END ADD

Algorithms tell programmers how to write program code. The algorithm can also be written in this form:

- **Step 1** − START ADD
- **Step 2** − get the values of **a** and **b**
- **Step 3** − c ← a + b
- **Step 4** − display c
- **Step 5** − END ADD
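The steps above translate directly into code. Here is a minimal Python sketch of the ADD algorithm (the function name and sample values are illustrative):

```python
def add(a, b):
    c = a + b   # Step 3: c <- a + b
    return c    # Step 4: the caller displays c

result = add(2, 3)
print(result)  # prints 5
```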

When developing and analyzing algorithms, the second form is usually used to describe them. It makes analysis easier because all unneeded definitions are ignored: the analyst can see which operations are used and how the process flows.

It is not necessary to write **step numbers**.

An algorithm is developed to obtain a solution to a problem, and a problem can have more than one solution. Therefore, several candidate algorithms can be developed for the same problem. The next step is to analyze the proposed algorithms and implement the most suitable one.

### Analysis of algorithms

The efficiency of an algorithm can be analyzed at two different stages: before implementation and after it. The two stages are:

- **A priori analysis** is a theoretical analysis of an algorithm. Efficiency is measured on the assumption that all other factors, such as processor speed, are constant and do not affect the implementation.
- **A posteriori analysis** is an empirical analysis of an algorithm. The chosen algorithm is implemented in a programming language and then executed on the target computer. This analysis collects actual statistics, such as execution time and required space.
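An *a posteriori* analysis can be as simple as timing a function on the target machine with the standard library. This is an illustrative sketch (the helper name `measure` is our own; the measured time will vary by machine):

```python
import time

def measure(func, *args):
    """Run func(*args) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, seconds = measure(sum, range(1_000_000))
print(result, seconds)
```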

Here we focus on *a priori* analysis. Algorithm analysis considers the running time, or duration, of the various operations involved. The running time of an operation can be defined as the number of computer instructions executed per operation.

### Algorithm complexity

Let **X** be an algorithm and **n** the size of its input data. The time and space used by algorithm **X** are the two main factors that determine its efficiency:

- **The time factor.** Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
- **The space factor.** Space is measured by the maximum amount of memory the algorithm requires.
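Counting key operations can be done explicitly. The sketch below uses bubble sort as an illustrative example (the algorithm choice and names are ours) and counts each comparison as one key operation:

```python
def bubble_sort_with_count(items):
    """Sort a list and return (sorted copy, number of comparisons)."""
    data = list(items)
    comparisons = 0
    n = len(data)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1                 # one key operation
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data, comparisons

sorted_data, count = bubble_sort_with_count([5, 1, 4, 2, 8])
print(sorted_data, count)  # prints [1, 2, 4, 5, 8] 10
```

For a list of 5 elements the inner loop performs 4 + 3 + 2 + 1 = 10 comparisons, which is the kind of count an a priori analysis would express as a function of **n**.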

The complexity function **f(n)** of an algorithm gives the running time and/or the memory space required by the algorithm, where **n** is the size of the input data.

### Space complexity

The space complexity of an algorithm is the amount of memory it requires over its life cycle. This space consists of two parts:

- A fixed part, that is, the space required to store certain data and variables that are independent of the size of the task. For example, the simple variables and constants used, the size of the program, and so on.
- The variable part, that is, the space required by variables that depend on the size of the task. For example, dynamic memory allocation, recursion stack space, and so on.

The space complexity S(P) of any algorithm P is defined as S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part, which depends on the characteristics of instance I. Here is a simple example explaining this concept:

**Algorithm: SUM(A, B)**

- **Step 1** − START
- **Step 2** − C ← A + B + 10
- **Step 3** − END

There are three variables, A, B, and C, and one constant. Therefore, the space complexity is S(P) = 1 + 3. The actual space depends on the data types of the variables and the constant, and grows accordingly.
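The SUM algorithm above can be written in Python as follows (the function name is illustrative); the space it uses is fixed, since it never depends on the magnitude of the inputs:

```python
def sum_ab(a, b):
    # Three variables (a, b, c) plus one constant (10):
    # space complexity S(P) = 1 + 3, independent of the input values.
    c = a + b + 10
    return c

print(sum_ab(2, 3))  # prints 15
```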

### Time complexity

The time complexity of an algorithm is the amount of time it takes to run to completion. It can be defined as a numerical function T(n), where T(n) is measured by the number of steps, provided that each step takes constant time.

For example, adding two n-bit integers takes **n** steps, so the total computation time is T(n) = c ∗ n, where c is the time it takes to add two bits. Here T(n) grows linearly as the size of the input increases.
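The n-bit addition example can be sketched as ripple-carry addition, counting one constant-time step per bit to show T(n) = c ∗ n. The representation (lists of bits, least significant first) and names are our own:

```python
def add_bits(a_bits, b_bits):
    """Add two equal-length bit lists (least significant bit first).

    Returns (result bits, number of per-bit steps taken)."""
    assert len(a_bits) == len(b_bits)
    result = []
    carry = 0
    steps = 0
    for a, b in zip(a_bits, b_bits):
        steps += 1                 # one constant-time step per bit
        total = a + b + carry
        result.append(total % 2)
        carry = total // 2
    result.append(carry)           # final carry bit
    return result, steps

# 3 (binary 011) + 1 (binary 001), bits least significant first
bits, steps = add_bits([1, 1, 0], [1, 0, 0])
print(bits, steps)  # prints [0, 0, 1, 0] 3
```

For n-bit inputs the loop always runs exactly n times, which is the linear growth T(n) = c ∗ n described above.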