Final Flashcards

1
Q

What is the main purpose of memory management?

A

To control and coordinate computer memory by assigning portions called blocks to various processes to improve system performance.

2
Q

What are the roles of base and limit registers?

A

To protect memory, we must ensure that a process can access only addresses in its own address space. The base and limit registers define the logical address space of a process to provide this protection.

3
Q

How are hardware addresses protected?

A

The CPU must check every memory access generated in user mode to be sure it is between base and limit for that process.
-Instructions to load the base and limit registers are privileged.

4
Q

What are the different stages of binding instructions and data to memory?

A

Compile Time: If the memory location is known a priori, absolute code can be generated; code must be recompiled if the starting location changes

Load Time: Must generate relocatable code if memory location is not known at compile time

Execution Time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps

5
Q

What is the difference between absolute code and relocatable code?

A

Absolute code is placed exactly where you tell the assembler it will be.

Relocatable code means that the code can be placed anywhere that there is room for it.

6
Q

What is the difference between logical and physical address space?

A

Logical Address - Generated by the CPU, also known as a virtual address. LA Space is the set of all logical addresses generated by a program.

Physical Address - Address seen by the memory unit. PA space is the set of all physical addresses corresponding to those logical addresses.

The addresses are the same in compile-time and load-time address-binding schemes, but differ in execution-time address binding schemes.

7
Q

What is the MMU?

A

The Memory-Management Unit is a hardware device that maps virtual addresses to physical addresses at runtime.

8
Q

What is a simple MMU scheme?

A

The value in the relocation register (this is what the base register is now called) is added to every address generated by a user process at the time it is sent to memory.

The user program deals with logical addresses, and never sees the real physical addresses.
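
This scheme can be sketched as follows; the register values are made-up examples, not from the cards. The MMU first checks the logical address against the limit register, then adds the relocation register on the way to memory.

```python
# Hypothetical relocation-register MMU (register values are illustrative).

def translate(logical_addr, relocation, limit):
    # Protection check: the logical address must fall inside the process'
    # logical address space, i.e. be smaller than the limit register.
    if logical_addr >= limit:
        raise MemoryError("trap: addressing error")
    # Relocation: the relocation (base) register is added to every
    # logical address as it is sent to memory.
    return logical_addr + relocation

# A process relocated to 14000 with a 3000-byte logical address space:
print(translate(346, 14000, 3000))   # 14346
```

The user program only ever works with the logical address (346 here); the physical address (14346) exists only inside the MMU and memory unit.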

9
Q

What is dynamic loading?

A

The entire program does not need to be in memory to execute. Routine is not loaded until it is called.

Better memory-space utilization; unused routine is never loaded.

10
Q

What is static linking?

A

System libraries and program code combined by the loader into the binary program image

11
Q

What is dynamic linking?

A

Linking is postponed until execution time. Particularly useful for libraries.

12
Q

What is a stub?

A

A small piece of code, used to locate the appropriate memory-resident library routine.

13
Q

What is contiguous allocation?

A

Main memory must support both the OS and user processes with limited resources; contiguous allocation is one early method.

Main memory is usually split into two partitions: the resident operating system is held in low memory with the interrupt vector, and user processes are held in high memory. Each process is contained in a single contiguous section of memory.

14
Q

What is multiple-partition allocation?

A

Variable-partition sizes for efficiency (sized to a given process’ needs)

A hole is a block of available memory. When a process arrives it is allocated memory from a hole large enough to accommodate it.

Process exiting frees its partition and adjacent free partitions are combined.

OS maintains information about allocated partitions and free ones (holes).

15
Q

What are the methods of dynamic storage allocation?

A

How to satisfy a request of size n from a list of free holes?

First-fit: The first hole that is big enough

Best-Fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size. (Leaves smallest leftover hole)

Worst-fit: Allocate the largest hole; must also search the entire list (Leaves largest leftover hole)
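
A minimal sketch of the three strategies, assuming holes are kept as (start, size) pairs; the hole list below is invented for illustration.

```python
# Each hole is (start, size); return the hole chosen for a request of size n,
# or None if no hole is large enough.

def first_fit(holes, n):
    # First hole that is big enough, in list order.
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    # Smallest hole that is big enough (leaves the smallest leftover hole).
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, n):
    # Largest hole (leaves the largest leftover hole).
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 250)]
print(first_fit(holes, 200))   # (200, 500) - first hole big enough
print(best_fit(holes, 200))    # (800, 250) - smallest sufficient hole
print(worst_fit(holes, 200))   # (200, 500) - largest hole
```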

16
Q

What are the types of fragmentation?

A

External Fragmentation - Total memory space exists to satisfy a request but it is not contiguous.

Internal Fragmentation - Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.

17
Q

What is compaction?

A

Compaction aims to reduce external fragmentation.

  • Shuffles memory contents to place all free memory together in one large block
  • Possible only if relocation is dynamic and done at execution time
  • I/O problem - Latch job in memory while it is involved in I/O. Do I/O only into OS buffers.
18
Q

What is paging?

A

The physical address space of a process can be non-contiguous; the process is allocated physical memory wherever it is available. This avoids external fragmentation and the problem of varying-sized memory chunks.

19
Q

How does paging work?

A

Memory is divided into fixed-size blocks called frames, whose size is a power of 2 (512 bytes to 16 MB)

Logical memory is divided into blocks of the same size called pages.

All frames are kept track of, and running a program of size N pages requires N free frames.

Still has internal fragmentation.

20
Q

What is the address translation scheme?

A
Address generated by CPU is divided into:
page number (p) - Used as an index into a page table which contains base address of each page in physical memory
page offset (d) - Combined with base address to define the physical memory address that is sent to the memory unit.
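
Because the page size is a power of two, p and d fall out of simple integer arithmetic. A sketch with an invented page table, assuming 1 KB pages:

```python
PAGE_SIZE = 1024                     # 2^10 bytes (assumed for illustration)
page_table = {0: 5, 1: 6, 2: 1}      # hypothetical page -> frame mapping

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE    # page number: index into the page table
    d = logical_addr % PAGE_SIZE     # page offset within the page
    frame = page_table[p]            # base frame of that page in physical memory
    return frame * PAGE_SIZE + d     # physical address sent to the memory unit

print(translate(2 * 1024 + 17))      # page 2 -> frame 1, so 1*1024 + 17 = 1041
```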
21
Q

How is internal fragmentation calculated with paging?

A

Page size - n bytes
Process size - m bytes

Internal fragmentation = n - (m % n), when m is not an exact multiple of n (if it is, there is no internal fragmentation).

Smaller pages = less internal fragmentation, but each page takes memory to track.
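
The formula can be sketched directly, with the exact-multiple edge case handled; the sizes below are made up for illustration.

```python
def internal_fragmentation(m, n):
    # m = process size in bytes, n = page size in bytes.
    # The last page is only partially used unless m is a multiple of n.
    return 0 if m % n == 0 else n - (m % n)

print(internal_fragmentation(72766, 2048))   # last page holds 1086 bytes -> 962 wasted
print(internal_fragmentation(8192, 2048))    # exact fit -> 0
```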

22
Q

How is the page table implemented?

A

Page-table base register (PTBR) points to the page table

Page-table length register (PTLR) indicates the size of the page table

In this scheme every data/instruction access requires two memory accesses, one for the page table and one for the data / instruction

23
Q

What are translation look-aside buffers?

A

A special fast-lookup hardware cache used to solve the two memory access problem. (Also called TLBs or associative memory)

24
Q

How do translation look-aside buffers work?

A

Some TLBs store address-space identifiers (ASIDs) in each TLB entry which uniquely identifies each process to provide address-space protection for that process.

TLBs are typically small (64 to 1024 entries)

On a TLB miss the value is loaded into the TLB for faster access next time.
-Replacement policies must be considered but some entries can be wired down to permanently remain in TLB

25
Q

How is memory protection implemented?

A

By associating a protection bit with each frame to indicate whether read-only or read-write access is allowed.

Valid-invalid bit attached to each entry in the page table indicating if the associated page is in the process’ logical address space, and is thus a legal page (valid).

26
Q

What are shared pages?

A

Shared code

  • One copy of read-only code shared among processes
  • Similar to multiple threads sharing the same process
  • Useful for interprocess communication if sharing of read-write pages is allowed.

Private code and data

  • Each process keeps a separate copy of the code and data
  • The pages for the private code and data can appear anywhere in the logical address space
27
Q

What is the structure of the page table?

A

Memory structures can get huge using straightforward methods, which is why it is useful to divide the page table into smaller units.

28
Q

What is hierarchical paging?

A

Logical address space broken up into multiple page tables, most simply a two-level page table. The page table is then separated into pages.

29
Q

How does two-level paging work?

A

A logical address is divided into a page number consisting of 22 bits and a page offset consisting of 10 bits (on a 32-bit machine with 1 KB pages).

Since the page table is itself paged, the 22-bit page number is further divided into a 12-bit outer page number and a 10-bit inner page offset.
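
With those bit widths, the split can be sketched with shifts and masks; the address below is arbitrary, chosen only to show the three fields.

```python
# Split a 32-bit logical address for two-level paging with 1 KB pages:
# | p1: 12 bits | p2: 10 bits | offset d: 10 bits |

def split(addr):
    d  = addr & 0x3FF            # low 10 bits: offset within the page
    p2 = (addr >> 10) & 0x3FF    # next 10 bits: index within a page-table page
    p1 = addr >> 20              # top 12 bits: index into the outer page table
    return p1, p2, d

print(split(0x00ABC123))         # -> (10, 752, 291)
```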

30
Q

What are hashed page tables?

A

Common with address spaces > 32 bits

  • The virtual page number is hashed into a page table which contains a chain of elements hashing to the same location.
  • Each element contains the virtual page number, the value of the mapped page frame and a pointer to the next element.
  • Virtual page numbers are compared in this chain searching for a match, if a match is found the corresponding physical frame is extracted.
31
Q

What is an inverted page table?

A
  • Rather than each process having a page table and keeping track of all possible logical pages, track all physical pages.
  • One entry for each real page of memory
  • Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page.
  • Decreases memory needed to store each page table, but increases time needed to search the tables when a page reference occurs.
  • Use hash table to limit the search to at most a few page-table entries (TLB can accelerate access)
  • Implement shared memory by mapping a virtual address to the shared physical address.
32
Q

What is the basic concept of virtual memory?

A

Code needs to be in memory to execute, but the entire program is not needed at the same time, and some parts are rarely used (e.g., error-handling code, unusual routines, large data structures).

33
Q

What are the advantages of virtual memory?

A
  • Allows execution of partially-loaded program in memory.
  • Programs no longer constrained by limits of physical memory
  • Since each program takes less memory while running -> more programs run concurrently, resulting in increased CPU utilization and throughput with no increase in response time or turnaround time.
  • Allows for more efficient process creation
  • Less I/O needed to load or swap programs into memory -> each user program runs faster
34
Q

What is a virtual address space?

A

Logical view of how process is stored in memory

  • Usually start at address 0, contiguous addresses until end of space
  • Physical memory organized in page frames
  • MMU must map logical to physical
35
Q

What is demand paging?

A

Bringing a page into memory only when it is needed.

Page is needed => reference to it
• invalid reference => abort (the page does not exist in the process' address space)
• not-in-memory => bring to memory (the page does exist in the process' address space, but it is on disk)

36
Q

What is a lazy swapper?

A

Never swaps a page into memory unless page will be needed

37
Q

What is a pager?

A

A swapper that deals with pages is a pager.

-The pager guesses which pages will be used before the process is swapped out again
-The pager brings only those pages into memory
-How to determine that set of pages?
• New MMU functionality is needed to implement demand paging
-If the pages needed are already memory resident
• No demand paging is needed
-If a page is needed and not memory resident
• It must be detected and loaded into memory from storage

38
Q

How does the valid-invalid bit work?

A

With each page table entry a valid–invalid bit is associated
(v => in-memory – memory resident, i => not-in-memory)
Initially valid–invalid bit is set to i on all entries

During MMU address translation, if valid–invalid bit in page table entry is
i => page fault

39
Q

What is a page fault?

A

The first reference to a page will trap to the operating system: a page fault.
1. Operating system looks at another table to decide:
• Invalid reference => abort
• Just not in memory
2. Find free frame
3. Swap page into frame via scheduled disk operation
4. Reset tables to indicate page now in memory & set validation bit = v
5. Restart the instruction that caused the page fault

Extreme case: start a process with no pages in memory - pure demand paging

40
Q

What is copy-on-write?

A

Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory
If either process modifies a shared page, only then is the page copied

41
Q

What happens if there is no free frame?

A

• Used up by process pages
• Also in demand from the kernel, I/O buffers, etc.
• Page replacement – find some page in memory, but not really in use, page it out
o Algorithm – terminate? swap out? replace the page?
o Performance - want an algorithm which will result in minimum number of page faults
• Same page may be brought into memory several times
• Prevent over-allocation of memory by modifying page-fault service routine to include page replacement
• Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk

42
Q

How does basic page replacement work?

A
  1. Find the location of the desired page on disk
  2. Find a free frame:
    • If there is a free frame, use it
    • If there is no free frame, use a page replacement algorithm to select a
      victim frame
      - Write victim frame to disk if dirty
  3. Bring the desired page into the (newly) free frame; update the page and frame tables
  4. Continue the process by restarting the instruction that caused the trap
43
Q

What are the page and frame replacement algorithms?

A

Frame-allocation algorithm determines
• How many frames to give each process
• Which frames to replace

Page-replacement algorithm
• Want lowest page-fault rate on both first access and re-access

Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string
String is just page numbers, not full addresses
Repeated access to the same page does not cause a page fault
Results depend on number of frames available

Here is a sample reference string of referenced page numbers:

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

How many page faults?
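
The count depends on the replacement algorithm and the number of frames. As one hedged example, a FIFO simulation with 3 frames (a common textbook setup) gives 15 faults for this string:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:          # page fault: page not memory resident
            faults += 1
            if len(frames) == nframes:  # no free frame: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
        # repeated access to a resident page causes no fault
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))             # 15
```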

44
Q

What is the optimal paging algorithm?

A

Replace the page that will not be used for the longest period of time.
-9 page faults is optimal for the sample reference string (with 3 frames)
Hard to implement in practice, because you can't read the future to know which page won't be used for the longest period of time, but useful as a benchmark for measuring how well other algorithms perform.
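
A sketch of the optimal (OPT) algorithm on the sample string from the previous card, assuming 3 frames: on a fault with no free frame, evict the resident page whose next use lies farthest in the future.

```python
def opt_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:              # resident: no fault
            continue
        faults += 1
        if len(frames) == nframes:
            # Next use index of each resident page; never used again -> infinity.
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            # Evict the page used farthest in the future.
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # 9
```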

45
Q

What is thrashing?

A
If a process does not have "enough" pages, the page-fault rate is very high:
• Page fault to get page
• Replace existing frame
• But quickly need replaced frame back
• This leads to:
o Low CPU utilization
o Operating system thinking that it needs to increase the degree of multiprogramming
o Another process added to the system

Thrashing = a process is busy swapping pages in and out, resulting in overhead.

46
Q

What is the transfer rate?

A

Rate at which data flows between drive and computer

47
Q

What is seek time?

A

Time to move disk arm to desired cylinder

48
Q

What is rotational latency?

A

The time for desired sector to rotate under the disk head.

49
Q

What are solid state disks and how are they an improvement over HDD?

A
  • More reliable than HDDs
  • More expensive per MB
  • Shorter life span and less capacity
  • MUCH Faster
  • No moving parts, no seek time or rotational latency
  • USB drives, DRAM disk replacements and main storage in devices like smartphones are examples.
50
Q

How are disks structured?

A

• Disk drives are addressed as large 1-dimensional arrays of logical blocks, where the logical block is the smallest unit of transfer
o Low-level formatting creates logical blocks on physical media

• The 1-dimensional array of logical blocks is mapped into the sectors of the disk sequentially
o Sector 0 is the first sector of the first track on the outermost cylinder
o Mapping proceeds in order through that track, then the rest of the tracks in that cylinder, and then through the rest of the cylinders from outermost to innermost

51
Q

How are disks scheduled?

A

• There are many sources of disk I/O request
o OS
o System processes
o User processes

• I/O request includes input or output mode, disk address, memory address, number of sectors to transfer
• OS maintains queue of requests, per disk or device
• Idle disk can immediately work on I/O request, busy disk means work must queue
o Optimization algorithms only make sense when a queue exists
• In the past, operating system responsible for queue management, disk drive head scheduling
o Now, built into the storage devices, controllers
o Just provide LBAs, handle sorting of requests

  • Note that drive controllers have small buffers and can manage a queue of I/O requests (of varying “depth”)
  • Several algorithms exist to schedule the servicing of disk I/O requests
  • We illustrate scheduling algorithms with a request queue (0-199)
52
Q

What are the types of scheduling algorithms?

A

Request queue: 98, 183, 37, 122, 14, 124, 65, 67
Head starts at cylinder 53

FCFS - First-come, first-served: the head moves from 53 to 98 to 183 to 37, and so on, in arrival order. Lots of head movement.

SSTF - Shortest-seek-time-first: service the pending request closest to the current head position; less movement than FCFS, but distant requests can starve.

SCAN - Starts at one end of the disk and moves toward the other end, servicing requests that it runs into along the way. When it reaches one end it reverses and goes the other way.

C-SCAN - The head moves from one end of the disk to the other, servicing requests as it goes. When it reaches the other end it immediately returns to the beginning of the disk without servicing requests on the return trip. Treats the cylinders as a circular list that wraps around from the last cylinder to the first one.
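
Head-movement totals for the queue above can be sketched as follows. The SCAN variant here assumes the head sweeps toward cylinder 0 first, then reverses up to the highest pending request (one common textbook convention).

```python
def fcfs_distance(head, requests):
    # Service requests strictly in arrival order, summing seek distances.
    total = 0
    for r in requests:
        total += abs(head - r)
        head = r
    return total

def scan_distance(head, requests):
    # SCAN, head moving toward cylinder 0 first: go down to 0, then sweep
    # up to the highest pending request. Assumes requests exist on both
    # sides of the head, as in this queue.
    return head + max(requests)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_distance(53, queue))   # 640 cylinders of head movement
print(scan_distance(53, queue))   # 236 cylinders (53 down to 0, then up to 183)
```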

53
Q

How do you choose a disk-scheduling algorithm?

A

• SSTF is common and has a natural appeal
• SCAN and C-SCAN perform better for systems that place a heavy load on the disk
o Less starvation, but still possible
• To avoid starvation, Linux implements the deadline scheduler
o Maintains separate read and write queues, and gives reads priority (processes are more likely to block on read than on write)
o Implements four queues: 2 x read and 2 x write
- 1 read and 1 write queue sorted in LBA order, essentially implementing C-SCAN
- 1 read and 1 write queue sorted in FCFS order

54
Q

What is a raw disk?

A

Raw disk access for apps that want to do their own block management, keep OS out of the way (databases for example)

55
Q

What is RAID?

A

• RAID - redundant array of inexpensive disks
o Multiple disk drives provide reliability via redundancy
o Increases the mean time to failure
o RAID is arranged into six different levels