2.1 - Elements of computational thinking
2.1.1–2.1.5 The five aspects of computational thinking
Thinking abstractly: removing unnecessary detail from a problem. Why needed: complex real-world problems are too detailed to solve directly, so abstraction makes them manageable.
Examples: a map is an abstraction of the real world; a class is an abstraction of a real-world object.
Thinking ahead (caching): storing the results of expensive operations for reuse. Benefit: faster repeat access. Drawbacks: uses memory; cached data can become stale.
Reusable components: write code once, use many times. Reduces duplication, easier to maintain.
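The caching idea above can be sketched as memoisation; `slow_square` here is a hypothetical stand-in for any expensive operation:

```python
# Minimal memoisation sketch: store results of an expensive call for reuse.
# slow_square is a hypothetical stand-in for a costly operation
# (e.g. a database query or a heavy calculation).

def slow_square(n):
    return n * n

cache = {}

def cached_square(n):
    if n not in cache:              # cache miss: compute and store
        cache[n] = slow_square(n)
    return cache[n]                 # cache hit: reuse the stored result

cached_square(4)    # computed
cached_square(4)    # served from the cache
```

This also shows the drawback from the notes: the dictionary occupies memory, and if the underlying value ever changed, the cached copy would be stale.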
Thinking procedurally: closely related to decomposition; identifying which components are needed and in what order they run.
Thinking logically: identifying the decision points in a problem. Examples: if/else conditions, loops with conditions, case statements.
Thinking concurrently: performing parts of a task at the same time. Benefits: faster execution for tasks that can be parallelised.
Trade-offs: not all problems can be parallelised (some parts are inherently sequential); synchronisation overhead; risk of race conditions.
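A minimal sketch of a parallelisable task, using Python's standard `ThreadPoolExecutor` (squaring numbers is an illustrative stand-in for independent units of work):

```python
# Each call to square() is independent, so the work can be shared
# across worker threads; the executor handles the synchronisation.
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(8)))  # result order is preserved
```

A task with sequential dependencies (where each step needs the previous result) could not be split up this way, which is the trade-off noted above.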
Exam questions on computational thinking often ask you to apply one of these five aspects to a given scenario, e.g. "explain how abstraction was used in designing this system."
2.2 - Problem solving and programming
2.2.1 Programming techniques
Programming constructs: sequence (instructions in order), iteration (loops: for, while, do-while), branching (if/else, case/switch).
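All three constructs can be shown in one short sketch (summing the even numbers from 1 to 5, chosen purely as an illustration):

```python
total = 0                   # sequence: statements execute in order
for n in range(1, 6):       # iteration: a count-controlled loop
    if n % 2 == 0:          # branching: choose a path based on a condition
        total += n
# total is now 2 + 4 = 6
```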
Recursion: a function that calls itself. Needs a base case (termination condition) to prevent infinite recursion. Call stack grows with each recursive call.
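The classic illustration of recursion with a base case is factorial:

```python
def factorial(n):
    if n == 0:                       # base case: stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive call: one stack frame per call
```

Without the `n == 0` check, the calls would never terminate and the call stack would eventually overflow.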
Variables: local variables exist only inside their subroutine; global variables are accessible anywhere in the program but make code harder to debug and reuse.
Modularity: breaking a program into separate functions/procedures. Benefits: easier to test, debug, maintain and reuse. Enables parallel development by a team.
2.2.2 Computational methods
2.3 - Algorithms
2.3.1a Algorithm complexity: Big O notation
Big O notation describes how execution time or space grows as input size n increases. We care about the worst case and the dominant term.
| Notation | Name | Example | Notes |
|---|---|---|---|
| O(1) | Constant | Array access by index, hash table lookup | Always same time regardless of n |
| O(log n) | Logarithmic | Binary search, BST operations (balanced) | Very efficient: doubling the input adds only one extra step |
| O(n) | Linear | Linear search, traversing a linked list | Proportional to input size |
| O(n log n) | Linearithmic | Merge sort, quick sort (average) | Efficient for sorting |
| O(n²) | Polynomial/Quadratic | Bubble sort, insertion sort, selection sort | Nested loops; poor for large n |
| O(2ⁿ) | Exponential | Brute force subset problems, recursive Fibonacci | Impractical for large n |
To determine Big O: identify the dominant loop structure. One loop over n = O(n). Nested loop = O(n²). Halving the problem each step = O(log n). Drop constants and lower-order terms.
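The dominant-loop rule above, shown as three toy functions (the function names are illustrative):

```python
def linear(items):          # one loop over n items -> O(n)
    count = 0
    for _ in items:
        count += 1
    return count

def quadratic(items):       # nested loops over n items -> O(n^2)
    pairs = 0
    for _ in items:
        for _ in items:
            pairs += 1
    return pairs

def logarithmic(n):         # halving the problem each step -> O(log n)
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```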
2.3.1b Sorting algorithms
Bubble sort: average/worst case O(n²). Best case: O(n) if already sorted (with a no-swaps-flag optimisation). Simple but inefficient for large datasets.
Insertion sort: average/worst case O(n²). Efficient for nearly-sorted data. Good for small datasets. Stable.
Merge sort: O(n log n) in all cases. Requires O(n) extra space. Stable sort.
Quick sort: average case O(n log n); worst case O(n²) if the pivot is consistently the worst choice (e.g. a sorted array with the first element as pivot). In-place. Not stable. Very fast in practice.
Merge sort = guaranteed O(n log n) but needs extra memory. Quick sort = faster in practice but O(n²) worst case. Bubble/insertion sort fine for small or nearly-sorted data only.
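A minimal merge sort sketch showing both points from the summary, the guaranteed halving and the extra memory used during the merge:

```python
def merge_sort(data):
    if len(data) <= 1:                  # base case: nothing to sort
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])       # sort each half recursively
    right = merge_sort(data[mid:])
    merged, i, j = [], 0, 0             # extra O(n) space for the merge
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```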
2.3.1c Search algorithms
Binary search: requires sorted data. Highly efficient (O(log n)), as it eliminates half the remaining data at each step.
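The halving step can be sketched as:

```python
def binary_search(data, target):
    # data must already be sorted
    low, high = 0, len(data) - 1
    while low <= high:
        mid = (low + high) // 2
        if data[mid] == target:
            return mid          # found: return the index
        elif data[mid] < target:
            low = mid + 1       # discard the left half
        else:
            high = mid - 1      # discard the right half
    return -1                   # target not present
```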
2.3.1d Graph/tree traversal and pathfinding
Dijkstra's vs A*: Dijkstra's algorithm finds the shortest path from the start node to ALL nodes. A* finds the shortest path to ONE target node faster by using a heuristic to prioritise promising paths.
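A priority-queue sketch of Dijkstra's algorithm (the example graph is illustrative, not from the notes; A* would differ only in adding a heuristic estimate to each queue priority):

```python
# Dijkstra sketch: shortest distances from a start node to every node.
# The graph is an adjacency dict: node -> list of (neighbour, edge weight).
import heapq

def dijkstra(graph, start):
    dist = {start: 0}
    queue = [(0, start)]                # priority queue of (distance, node)
    while queue:
        d, node = heapq.heappop(queue)  # closest unsettled node first
        if d > dist.get(node, float("inf")):
            continue                    # stale entry: a shorter path was found
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# Illustrative graph: the shortest A->D route goes A -> B -> C -> D (cost 4).
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 5)],
    "C": [("D", 1)],
    "D": [],
}
```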