OCR A-Level CS: Component 02 - Algorithms & Programming

H446/02  ·  2 hours 30 minutes  ·  40% of A-Level  ·  Topics 2.1 – 2.3

2.1 - Elements of computational thinking

2.1.1–2.1.5  The five aspects of computational thinking

Thinking abstractly (2.1.1)
Removing unnecessary detail to focus on what matters for the problem. An abstraction is a simplified model of reality.
Why needed: complex real-world problems are too detailed to solve directly, so abstraction makes them manageable.
Examples: a map is an abstraction of the real world; a class is an abstraction of a real-world object.
Thinking ahead (2.1.2)
Identifying the inputs and outputs a solution will need before building it, and defining preconditions (what must be true before a subroutine runs correctly).
Caching: storing the results of expensive operations so they can be reused. Benefits: faster repeat access. Drawbacks: uses memory; cached data can become stale.
Reusable components: write code once, use many times. Reduces duplication, easier to maintain.
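Caching can be sketched with memoisation, where a function stores previously computed results. The Fibonacci function here is just an illustrative example, not part of the specification:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache results of previous calls for reuse
def fib(n):
    """Naive recursive Fibonacci; with caching, each value is computed once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155 - instant with caching, minutes without
```

Without the cache this runs in exponential time; with it, each distinct call is computed once (the classic caching trade-off: speed for memory).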
Thinking procedurally (2.1.3)
Breaking a problem into a series of ordered steps. Identifying sub-procedures. Determining the sequence of operations needed.
Closely related to decomposition: identifying which components are needed and in what order.
Thinking logically (2.1.4)
Identifying decision points in a solution. Determining the conditions that affect outcomes. Tracing how decisions affect program flow.
Examples: if/else conditions, loops with conditions, case statements.
Thinking concurrently (2.1.5)
Identifying which parts of a problem can be tackled at the same time.
Benefits: faster execution for tasks that can be parallelised.
Trade-offs: not all problems can be parallelised (some parts are sequential); synchronisation overhead; race conditions.

Exam questions on computational thinking often ask you to apply one of these five aspects to a given scenario, e.g. "explain how abstraction was used in designing this system."

2.2 - Problem solving and programming

2.2.1  Programming techniques

Programming constructs: sequence (instructions in order), iteration (loops: for, while, do-while), branching (if/else, case/switch).


Recursion: a function that calls itself. Needs a base case (termination condition) to prevent infinite recursion. Call stack grows with each recursive call.

Recursion advantages
Elegant, concise code for naturally recursive problems (trees, fractals, factorial). Easier to read for some problems.
Recursion disadvantages
Higher memory use (stack frames). Stack overflow risk. Often slower than iterative equivalent due to function call overhead.
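A minimal recursive example showing the base case and the recursive call (factorial, one of the spec's standard examples):

```python
def factorial(n):
    """Recursive factorial: each call pushes a new frame onto the call stack."""
    if n <= 1:                        # base case: stops the recursion
        return 1
    return n * factorial(n - 1)       # recursive call on a smaller problem

print(factorial(5))  # 120
```

Each call waits for the one below it, so the call stack is five frames deep at the point factorial(1) returns.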

Variables:

Global variables
Accessible throughout entire program. Convenient but can cause side effects, as any function can change them unexpectedly.
Local variables
Only accessible within the function/block they're declared in. Better encapsulation. Destroyed when function returns.

Modularity: breaking a program into separate functions/procedures. Benefits: easier to test, debug, maintain and reuse. Enables parallel development by a team.

Pass by value
A copy of the variable is passed. Changes inside the function don't affect the original. Safer.
Pass by reference
The memory address is passed. Changes inside the function DO affect the original. More efficient for large data.
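Python does not offer a true choice between the two (it passes object references by value), but rebinding versus mutating gives a rough analogue of the two behaviours. This is a sketch for intuition only:

```python
def rebind(x):
    x = x + 1         # rebinds the local name only: caller unaffected (value-like)

def mutate(items):
    items.append(99)  # mutates the shared object: caller sees the change (reference-like)

n = 5
rebind(n)
nums = [1, 2]
mutate(nums)
print(n, nums)  # 5 [1, 2, 99]
```

In languages with explicit pass-by-reference (e.g. C++ `&` parameters or VB.NET `ByRef`), the distinction is chosen per parameter rather than depending on mutability.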

2.2.2  Computational methods

Divide and conquer
Break problem into smaller sub-problems of the same type. Solve recursively. Combine results. Examples: merge sort, binary search, quick sort.
Backtracking
Incrementally build candidate solutions. When a partial solution cannot be extended (a dead end), undo the last step and try another path. Examples: maze solving, Sudoku, N-queens problem.
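The N-queens problem mentioned above can be sketched with backtracking: place one queen per row, and undo the last placement whenever a row has no safe column:

```python
def solve_n_queens(n):
    """Count N-queens placements by backtracking."""
    solutions = []

    def safe(cols, col):
        row = len(cols)   # row we are trying to fill
        return all(c != col and abs(c - col) != row - r   # column and diagonal checks
                   for r, c in enumerate(cols))

    def place(cols):
        if len(cols) == n:                # all rows filled: a complete solution
            solutions.append(list(cols))
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)          # try this column...
                place(cols)
                cols.pop()                # ...then backtrack and try the next

    place([])
    return solutions

print(len(solve_n_queens(4)))  # 2 (the 4x4 board has exactly 2 solutions)
```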
Data mining
Discovering patterns in large datasets. Used for: fraud detection, market analysis, recommendation systems.
Heuristics
Rules of thumb that find good-enough solutions quickly when an optimal solution is impractical. Used in A* pathfinding, NP-hard problems.
Performance modelling
Simulating system behaviour to predict performance before building. Used for capacity planning, bottleneck identification.
Pipelining
Breaking a task into stages that can be processed concurrently on different data. Used in CPUs, compilers, data processing.
Visualisation
Representing data graphically to aid understanding. Helps identify patterns, trends and outliers in complex data.

2.3 - Algorithms

2.3.1a  Algorithm complexity: Big O notation

Big O notation describes how execution time or space grows as input size n increases. We care about the worst case and the dominant term.

Notation   | Name                   | Example                                            | Notes
O(1)       | Constant               | Array access by index, hash table lookup           | Same time regardless of n
O(log n)   | Logarithmic            | Binary search, balanced BST operations             | Very efficient: doubling the input adds only one extra step
O(n)       | Linear                 | Linear search, traversing a linked list            | Proportional to input size
O(n log n) | Linearithmic           | Merge sort, quick sort (average)                   | Efficient for sorting
O(n²)      | Polynomial (quadratic) | Bubble sort, insertion sort, selection sort        | Nested loops; poor for large n
O(2ⁿ)      | Exponential            | Brute-force subset problems, recursive Fibonacci   | Impractical for large n

To determine Big O: identify the dominant loop structure. One loop over n = O(n). Nested loop = O(n²). Halving the problem each step = O(log n). Drop constants and lower-order terms.

2.3.1b  Sorting algorithms

Bubble sort - O(n²)
Repeatedly compare adjacent pairs, swap if out of order. Each pass bubbles the largest unsorted element to its correct position.
Best case: O(n) if already sorted, with the optimisation of stopping when a full pass makes no swaps. Simple but inefficient for large datasets.
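A sketch of bubble sort including the early-exit optimisation that gives the O(n) best case:

```python
def bubble_sort(data):
    """Bubble sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(data)                      # work on a copy
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # last i elements are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:                     # a pass with no swaps: already sorted
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```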
Insertion sort - O(n²)
Build sorted portion one element at a time. Take next element, insert it into correct position in already-sorted portion by shifting larger elements right.
Efficient for nearly-sorted data. Good for small datasets. Stable.
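The insert-by-shifting step can be sketched as:

```python
def insertion_sort(data):
    """Grow a sorted portion on the left, inserting one element at a time."""
    items = list(data)
    for i in range(1, len(items)):
        key = items[i]                      # next element to insert
        j = i - 1
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]         # shift larger elements right
            j -= 1
        items[j + 1] = key                  # drop key into its correct slot
    return items

print(insertion_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

On nearly-sorted data the inner while loop barely runs, which is why insertion sort approaches O(n) there.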
Merge sort - O(n log n)
Divide and conquer. Split list in half recursively until single elements. Merge pairs of sorted sublists. Always O(n log n), giving consistent performance.
Requires O(n) extra space. Stable sort.
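The split-then-merge structure in code (the O(n) extra space is the merged list built at each level):

```python
def merge_sort(items):
    """Split in half, sort each half recursively, merge the sorted halves."""
    if len(items) <= 1:
        return list(items)                  # base case: 0 or 1 elements
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:             # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]    # append whichever half has leftovers

print(merge_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```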
Quick sort - O(n log n) avg, O(n²) worst
Choose pivot. Partition: elements < pivot left, elements > pivot right. Recursively sort each partition.
Worst case O(n²) when the pivot is consistently a poor choice, e.g. always taking the first element of an already-sorted array. In-place. Not stable. Very fast in practice.
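An in-place sketch using the Lomuto partition scheme with the last element as pivot (pivot choice varies between implementations; this is just one common option):

```python
def quick_sort(items, low=0, high=None):
    """In-place quick sort: partition around a pivot, recurse on each side."""
    if high is None:
        high = len(items) - 1
    if low < high:
        pivot = items[high]                 # Lomuto scheme: last element as pivot
        i = low
        for j in range(low, high):
            if items[j] < pivot:            # smaller elements move left of the pivot
                items[i], items[j] = items[j], items[i]
                i += 1
        items[i], items[high] = items[high], items[i]  # pivot into final position
        quick_sort(items, low, i - 1)       # recursively sort each partition
        quick_sort(items, i + 1, high)
    return items

print(quick_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```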

Merge sort = guaranteed O(n log n) but needs extra memory. Quick sort = faster in practice but O(n²) worst case. Bubble/insertion sort fine for small or nearly-sorted data only.

2.3.1c  Search algorithms

Linear search - O(n)
Check each element one by one until found or end reached. Works on unsorted data. Simple. Inefficient for large datasets.
Binary search - O(log n)
Requires sorted data. Compare target with middle element. If equal: found. If less: search left half. If greater: search right half. Repeat.
Highly efficient, as it eliminates half the data each step.
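Both searches side by side; note binary search's precondition that the list is sorted:

```python
def linear_search(items, target):
    """O(n): check each element in turn; works on unsorted data."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1                               # not found

def binary_search(items, target):
    """O(log n): items must be sorted; halve the search range each step."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1                   # discard the left half
        else:
            high = mid - 1                  # discard the right half
    return -1                               # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```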

2.3.1d  Graph/tree traversal and pathfinding

Depth-first traversal (post-order)
Uses a stack. Explores as far as possible down each branch before backtracking. Post-order: Left → Right → Root. Used for: expression evaluation, topological sort, detecting cycles.
Breadth-first traversal
Uses a queue. Explores all nodes at current depth before moving deeper. Finds shortest path in unweighted graphs. Used for: shortest path, level-order traversal.
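Both traversals sketched on a small binary tree; the nested-dict representation is just an illustrative choice. Depth-first uses the call stack implicitly; breadth-first uses an explicit queue:

```python
from collections import deque

# A small binary tree:  A with children B and C (hypothetical example data)
tree = {"value": "A",
        "left":  {"value": "B", "left": None, "right": None},
        "right": {"value": "C", "left": None, "right": None}}

def post_order(node, visited):
    """Depth-first, post-order: Left -> Right -> Root (stack via recursion)."""
    if node is not None:
        post_order(node["left"], visited)
        post_order(node["right"], visited)
        visited.append(node["value"])       # root visited last
    return visited

def breadth_first(root):
    """Breadth-first: visit each depth level in full before going deeper."""
    visited, queue = [], deque([root])
    while queue:
        node = queue.popleft()              # FIFO queue drives the level order
        if node is not None:
            visited.append(node["value"])
            queue.append(node["left"])
            queue.append(node["right"])
    return visited

print(post_order(tree, []))  # ['B', 'C', 'A']
print(breadth_first(tree))   # ['A', 'B', 'C']
```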
Dijkstra's shortest path
Finds shortest path from a source node to all other nodes in a weighted graph with non-negative weights. Uses a priority queue, always processing the lowest-cost unvisited node first. Greedy algorithm.
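A sketch of Dijkstra's algorithm using Python's heapq as the priority queue; the graph is hypothetical example data (adjacency list of (neighbour, weight) pairs):

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every node (non-negative weights)."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]                    # priority queue of (cost, node)
    while heap:
        d, node = heapq.heappop(heap)       # lowest-cost node first (greedy)
        if d > dist[node]:
            continue                        # stale entry: a shorter path was found
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist[neighbour]:     # relax the edge
                dist[neighbour] = new_d
                heapq.heappush(heap, (new_d, neighbour))
    return dist

graph = {"A": [("B", 1), ("C", 4)],
         "B": [("C", 2), ("D", 6)],
         "C": [("D", 3)],
         "D": []}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note A reaches C more cheaply via B (1 + 2 = 3) than directly (4), which the edge-relaxation step discovers.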
A* algorithm
Extension of Dijkstra's. Uses a heuristic (estimated distance to goal) to guide search. f(n) = g(n) + h(n) where g = cost so far, h = heuristic estimate. Faster than Dijkstra for single-target pathfinding. Used in games and GPS navigation.

Dijkstra's vs A*: Dijkstra finds shortest path to ALL nodes. A* finds shortest path to ONE target node faster by using a heuristic to prioritise promising paths.