Complexity Flashcards Preview


Flashcards in Complexity Deck (29)
1
Q

Big O

A
  • O(g(n)) is the set of functions that are “upper-bounded” by g(n)
  • If f(n) is a function that describes the running time of the program (w.r.t. input)
  • Then f(n) is in O(g(n)) if g(n) dominates f(n) for some c after n0
  • Basically, f(n) will always be <= c*g(n)
2
Q

Big Omega

A
  • Ω(g(n)) is the set of functions that are “lower-bounded” by g(n)
  • If f(n) is a function that describes the running time of the program (w.r.t. input)
  • Then f(n) is in Ω(g(n)) if f(n) dominates g(n) for some c after n0
  • Basically, f(n) will always be >= c*g(n)
3
Q

Little O

A
  • More restrictive version of big O
  • o(g(n)) is the set of functions that are dominated by g(n) for all constants c after some n0
  • f(n) and g(n) can’t have the same dominant term (can’t both be squares or cubes, etc)
4
Q

Little Omega [ω]

A
  • More restrictive version of big Ω
  • ω(g(n)) is the set of functions that dominate g(n) for all constants c after some n0
  • f(n) and g(n) can’t have the same dominant term (can’t both be squares or cubes, etc)
5
Q

Theta

A
  • The set of functions for which g(n) is both an upper and lower bound
  • Θ(g(n)) is the set of functions that are in both O(g(n)) and Ω(g(n))
  • Provides a tight bound
  • c1*g(n) <= f(n) <= c2*g(n)
    • f(n) is sandwiched between g(n) on both sides (albeit with different constants c1 and c2)
  • Many functions do not have a simple Θ
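  • Ex (an added illustration, not from the original card): f(n) = 2n^2 + 3n is in Θ(n^2), since 2*n^2 <= 2n^2 + 3n <= 3*n^2 for all n >= 3 (c1 = 2, c2 = 3)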
6
Q

Big O

Formal Definition

A

f(n) is in the set O(g(n)) if and only if:

There exist positive constants c and n0 such that for all n > n0,

f(n) <= c*g(n)
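
A quick worked check of this definition (an added sketch in Python, not part of the original card), using f(n) = 3n + 5, g(n) = n, and the witnesses c = 4, n0 = 5:

    # Added sketch: verify f(n) <= c*g(n) for a range of n beyond n0.
    def f(n): return 3 * n + 5
    def g(n): return n

    c, n0 = 4, 5
    assert all(f(n) <= c * g(n) for n in range(n0 + 1, 10_000))
    # 3n + 5 <= 4n whenever n >= 5, so f(n) is in O(n).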

7
Q

Asymptotic

A
  • Basically, when n is really large
8
Q

Best Case/Worst Case

A
  • Describes how running time changes based on the type of input
  • A program has a different best case and worst case if running time differs depending on input
  • Ex: insertion sort runs in linear time when the input is already sorted (best case), but quadratic time when it is reverse-sorted (worst case); see the sketch below
  • Best/worst/average case each have their own Big O and Big Omega
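
A sketch of that example in Python (an added illustration, not part of the original card):

    # Insertion sort: running time depends heavily on the input's order.
    def insertion_sort(a):
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            # Shift larger elements right until key's slot is found.
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    # Best case: already-sorted input -> the while loop never runs -> O(n).
    # Worst case: reverse-sorted input -> every element shifts past all
    # earlier ones -> O(n^2).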
9
Q

Upper Bound/Lower Bound

A
  • An upper bound of f(n) is any function g(n) that is >= f(n) asymptotically
  • A function f(n) has multiple (infinite) upper bounds
  • Ex:
    • n^2/2 is an upper bound of n^2
    • n^3 is also an upper bound of n^2
  • Lower bound is basically the reverse
    • Any function g(n) <= f(n)
10
Q

Space Complexity

A
  • The maximum amount of space required at any given time for an algorithm to complete its task
  • Uses the same notation (Big O, Big Omega, etc) as time complexity
  • Includes:
  1. The input
  2. Any auxiliary space
  3. Space created in the stack (in recursive calls)
  • But, in general, when discussing space complexity, we only care about aux space and stack space
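
A small Python sketch (an added illustration) contrasting auxiliary space with stack space:

    # Both functions take O(n) input space; the difference is the extra space.
    def sum_iterative(nums):
        total = 0                      # O(1) auxiliary space
        for x in nums:
            total += x
        return total

    def sum_recursive(nums, i=0):
        if i == len(nums):
            return 0
        # Each call adds a stack frame, so this uses O(n) stack space.
        return nums[i] + sum_recursive(nums, i + 1)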
11
Q

Amortized Analysis

A
  • An alternative approach to calculating the worst-case running time
  • Used when your algorithm has more than one operation, some are cheap and some are expensive
  • Ex: “append” consists of both populating a cell and, occasionally, resizing an array (sketched below)
  • Provides an upper bound on a sequence of operations
  • Unlike average case, can’t get a higher running time (on a sequence) by being “unlucky”
  • Three approaches
    • aggregate method
    • accounting method
    • potential method
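
A sketch of the “append” example in Python (an added illustration; a simplified dynamic array, not any particular library's implementation):

    class DynamicArray:
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None] * self.capacity

        def append(self, value):
            if self.size == self.capacity:
                # Expensive, occasional step: double capacity and copy (O(n)).
                self.capacity *= 2
                new_data = [None] * self.capacity
                new_data[:self.size] = self.data
                self.data = new_data
            # Cheap, common step: fill the next cell (O(1)).
            self.data[self.size] = value
            self.size += 1

Amortized over a long sequence of appends, the occasional O(n) resizes average out to O(1) per append.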
12
Q

Average Case

A
  • The performance of an algorithm based on what input is “most likely” to look like
  • Based on probability of likely inputs
  • Often requires statistical analysis
  • Has its own upper bound/lower bound
13
Q

Averaging Method

A
  • Also known as the aggregate method; the most common type of amortized analysis
  • Basically, add up the cost of n operations, and divide by n
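
A small Python sketch (an added illustration) applying this idea to n appends with capacity doubling:

    # Sum the cost of n appends (1 per write, plus a copy of the existing
    # elements whenever the array must double), then divide by n.
    def average_append_cost(n):
        total, capacity = 0, 1
        for i in range(1, n + 1):
            if i > capacity:      # resize: copy the existing elements
                total += capacity
                capacity *= 2
            total += 1            # write the new element
        return total / n

    print(average_append_cost(1_000_000))  # ~2.05, i.e. O(1) amortized per append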
14
Q

Logarithms and Complexity

A

For an input size of n

  • log2(n) is (roughly) the number of times you can divide n by 2 before the result drops below 1
  • Number of “iterations/steps” to process all elements
  • Ex: input of size 32 can be processed with 5 steps
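
A quick Python sketch (an added illustration) of the halving idea:

    def halving_steps(n):
        steps = 0
        while n >= 2:    # keep halving until the result would drop below 1
            n //= 2
            steps += 1
        return steps

    print(halving_steps(32))   # 5, matching the size-32 example above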
15
Q

Recursion and Complexity

A
  • Time complexity is number of recursive calls (size of recursion tree) * complexity of each recursive call
  • Ex: naive fibonacci makes O(2^n) recursive calls, each doing constant work (see the call-counting sketch below)
  • Ex: mergesort has O(n) recursive calls, each of which does (on average) O(log2(n)) work
  • Space complexity is max height of stack (max height of tree) * aux space in each frame + maximum amount of space allocated by any individual call
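
A Python sketch (an added illustration) that counts the calls made by naive recursive Fibonacci:

    def fib(n, counter):
        counter[0] += 1          # one unit of constant work per call
        if n <= 1:
            return n
        return fib(n - 1, counter) + fib(n - 2, counter)

    counter = [0]
    fib(20, counter)
    print(counter[0])            # 21891 calls for n = 20; grows exponentially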
16
Q

decision problems

A
  • problems with yes/no answers
17
Q

P

A
  • set of all decision problems that can be solved in polynomial time
18
Q

EXP

A
  • the set of all decision problems solvable in exponential time
  • exponential ⇒ 2^(n^c)
19
Q

R

A
  • the set of all decision problems that can be solved in finite time
  • things out of R are “unsolvable”
20
Q

NP

A
  • non-deterministic polynomial
  • the set of decision problems that aren’t necessarily solvable in polynomial time, but whose ‘yes’ solutions can be confirmed (possibly via a different algorithm) in polynomial time (see the verifier sketch below)
  • or, the set of decision problems that can be solved in poly time by a non-deterministic turing machine
  • P is in NP
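
A Python sketch (an added illustration, using subset sum as an assumed example) of the “verify a yes-answer in polynomial time” idea:

    # Finding a subset of nums that sums to target may take exponential time,
    # but checking a proposed certificate (a list of indices) is polynomial.
    def verify_subset_sum(nums, target, certificate):
        return (len(set(certificate)) == len(certificate)
                and all(0 <= i < len(nums) for i in certificate)
                and sum(nums[i] for i in certificate) == target)

    print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))  # True: 4 + 5 = 9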
21
Q

turing machine

A
  • a hypothetical computer model that is powerful enough to compute anything that is computable

Consists of:

  • Arbitrarily long/endless line of tape
    • tape is divided into squares
    • every square holds a 1 or a 0
    • tape extends as far as needed in either direction
    • equivalent to unlimited RAM
  • A state variable
  • Set of rules
    • Describes what to do given a state and value on tape
  • Read-write head
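
A minimal Python sketch (an added illustration) of these pieces working together:

    BLANK = "B"

    # A rule maps (state, symbol read) -> (symbol to write, head move, next state).
    def run_turing_machine(rules, tape, state="start", halt="halt"):
        cells = dict(enumerate(tape))     # tape is unbounded in both directions
        head = 0
        while state != halt:
            symbol = cells.get(head, BLANK)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    # Assumed example: flip every bit, halting when a blank cell is reached.
    flip_rules = {
        ("start", "1"): ("0", "R", "start"),
        ("start", "0"): ("1", "R", "start"),
        ("start", BLANK): (BLANK, "R", "halt"),
    }
    print(run_turing_machine(flip_rules, ["1", "0", "1", "1"]))  # ['0', '1', '0', '0', 'B']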
22
Q

non-deterministic turing machine

A
  • hypothetical computer that can perform multiple actions at once
  • in essence, explores multiple “branches” in a decision problem simultaneously
23
Q

turing complete

A
  • it can do everything that a turing machine can do
24
Q

halting problem

A
  • the problem of determining, given an arbitrary computer program, whether it ever finishes running or loops forever
  • not in R (unsolvable)
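
A Python sketch (an added illustration) of the classic contradiction; halts() here is hypothetical and cannot actually be written:

    def halts(program, argument):
        # Hypothetical oracle: True iff program(argument) eventually stops.
        raise NotImplementedError("no such algorithm can exist")

    def paradox(program):
        # Loop forever exactly when the oracle claims the program halts on itself.
        if halts(program, program):
            while True:
                pass
        return "done"

    # paradox(paradox) would contradict whichever answer halts() gives
    # (it halts when told it loops, and loops when told it halts),
    # so a general halts() cannot exist.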
25
Q

P = NP

A
  • open question of whether all problems in NP are also in P
  • most people think P != NP
26
Q

reductions

A
  • converting from one type of problem to another
  • ex: converting from unweighted shortest paths to weighted shortest paths by giving all edges a weight of 1 (sketched below)
  • if you reduce A ⇒ B, then B is at least as hard as A
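
A Python sketch of the example above (an added illustration; the weighted solver's interface is an assumption):

    # Reduce unweighted shortest paths to weighted shortest paths by giving
    # every edge a weight of 1, then call the weighted solver.
    def unweighted_shortest_path(graph, source, target, weighted_solver):
        # graph: {node: [neighbor, ...]}
        # weighted_solver expects {node: [(neighbor, weight), ...]}
        weighted_graph = {u: [(v, 1) for v in nbrs] for u, nbrs in graph.items()}
        return weighted_solver(weighted_graph, source, target)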
27
Q

hardness

A
  • basically refers to complexity (difficulty)
  • also refers to the relationship between 2 problems
  • a problem B is hard for A if A can be reduced to B
  • therefore, a solution to B can be used to get a solution to A
  • we use the solutions for “harder” problems to solve the easier problems
  • generally reduce from easier problem to harder problem, or to a problem with equivalent hardness
28
Q

NP-hard

A
  • at least as “hard” as any other NP problem
  • every problem in NP can be reduced to it
  • if we could solve an NP-hard problem, we could use it to solve other problems of equivalent or lesser hardness
  • includes problems that are not in NP (e.g., harder problems in EXP or R, and even some undecidable problems)
29
Q

NP-complete

A
  • the set of decision problems that are:
  1. NP-hard
  2. in NP (unlike NP-hard problems that are only in EXP or beyond)