Threads - Operating Systems Group


Faculty of Computer Science, Institute of System Architecture, Operating Systems Group

THREADS

MICHAEL ROITZSCH


RECAP


MICROKERNEL

■ kernel:
  ■ provides system foundation
  ■ usually runs in privileged CPU mode
■ microkernel:
  ■ kernel provides mechanisms, no policies
  ■ most functionality implemented in user mode, unless dictated otherwise by
    ■ security
    ■ performance


ABSTRACTIONS

Resource        Mechanism
CPU             Thread
Memory          Task
Communication   IPC, IRQ
Platform        Virtual Machine
Rights          Capabilities


VIRTUAL MACHINE

■ provides an exclusive instance of a full system platform
■ may be a synthetic platform (bytecode)
■ full software implementations
■ hardware-assisted implementations in the kernel (hypervisor)
■ see virtualization lecture on Nov 27th


IPC

■ inter-process communication
■ between threads
■ two-way agreement, synchronous
■ memory mapping with flexpages
■ see communication lecture (past)


TASK

■ (virtual) address space
■ unit of memory management
■ provides spatial isolation
■ common memory content can be shared
  ■ shared libraries
  ■ kernel
■ see memory lecture next week


KERNEL AS

[Diagram: the address space split into a user address space and a kernel address space]


SHARED KERNEL

[Diagram: Task 1 and Task 2 each split into a user and a kernel part; each task has its own page tables mapping into physical RAM, and the kernel part is shared between the tasks]


ALTERNATIVES

[Diagram: kernel designs compared by how much code runs privileged, shared, or in user mode: Monolith, Exokernel, Microkernel, Software Isolation]


THREADS


BASICS

■ abstraction of code execution
■ unit of scheduling
■ provides temporal isolation
■ typically requires a stack
■ thread state:
  ■ instruction pointer
  ■ stack pointer
  ■ CPU registers, flags

[Diagram: CPU holding IP, SP, and registers; the IP points into the code, the SP into the stack]
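As a rough illustration of what this per-thread state amounts to, here is a minimal C sketch of a thread context; the field names and x86-64-sized registers are assumptions for illustration, not the layout of any particular L4 kernel:

    /* Illustrative thread context: the state that must be saved and
       restored when the CPU switches between threads. */
    #include <stdint.h>

    struct thread_context {
        uint64_t ip;        /* instruction pointer: where to resume */
        uint64_t sp;        /* stack pointer: top of the thread's stack */
        uint64_t flags;     /* CPU flags */
        uint64_t gpr[15];   /* general-purpose registers */
    };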


STACK

■ storage for function-local data
  ■ local variables
  ■ return address
■ one stack frame per function
■ grows and shrinks dynamically
■ grows from high to low addresses

[Diagram: stack frames 1–3 stacked on top of each other, newer frames at lower addresses]
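A small, hedged demonstration of the growth direction: on common x86/ARM ABIs the callee's frame lands at a lower address than the caller's, although the C standard does not guarantee this.

    /* Sketch: observe that a callee's stack frame typically sits below
       the caller's (platform- and compiler-dependent). */
    #include <stdio.h>

    static void callee(int *caller_local) {
        int callee_local = 0;
        printf("caller local at %p, callee local at %p\n",
               (void *)caller_local, (void *)&callee_local);
    }

    int main(void) {
        int caller_local = 0;
        callee(&caller_local);   /* usually prints a lower callee address */
        return 0;
    }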


KERNEL’S VIEW

■ maps user-level threads to kernel-level threads
  ■ often a 1:1 mapping
  ■ threads can be implemented in userland
■ assigns threads to hardware
  ■ one kernel-level thread per logical CPU
  ■ with hyper-threading and multicore, we have more than one hardware thread now
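For a concrete view of the common 1:1 mapping, here is a plain POSIX threads example, assuming a Linux-like system where each pthread is backed by one kernel-level thread:

    /* Sketch: with a 1:1 threading model, pthread_create asks the kernel
       for a new kernel-level thread sharing the task's address space. */
    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg) {
        (void)arg;
        printf("running on a kernel-backed thread\n");
        return NULL;
    }

    int main(void) {
        pthread_t t;
        if (pthread_create(&t, NULL, worker, NULL) != 0)
            return 1;
        pthread_join(t, NULL);   /* wait for the worker thread to finish */
        return 0;
    }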


KERNEL ENTRY

■ thread can enter kernel:
  ■ voluntarily
    ■ system call
  ■ forced
    ■ interrupt
    ■ exception

[Diagram: CPU with IP, SP, and registers; execution of the user stack and code is interrupted on kernel entry]
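A voluntary kernel entry is as mundane as any system call; this standalone example (using Linux/POSIX getpid purely as a stand-in) traps into the kernel and returns:

    /* Sketch: a voluntary kernel entry. getpid() executes a system call,
       so the CPU switches to kernel mode and back to deliver the answer. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = getpid();    /* syscall: enter kernel, return to userland */
        printf("my pid is %d\n", (int)pid);
        return 0;
    }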


KERNEL ENTRY

■ IP and SP point into kernel
■ user CPU state stored in TCB
  ■ old IP and SP
  ■ registers
  ■ flags
  ■ FPU state
    ■ MMX, SSE

[Diagram: CPU IP and SP now point into the kernel's stack and code; the link to the user stack and code is cut]


TCB

■ thread control block
■ kernel object, one per thread
■ stores thread’s userland state while it is not running
■ untrusted parts can be stored in user space
  ■ separation into KTCB (kernel TCB) and UTCB (user TCB)
  ■ UTCB also holds system call parameters
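A hedged sketch of how a TCB might look in C, with an illustrative KTCB/UTCB split; the field names are assumptions, not the layout of Fiasco, NOVA, or any other L4 kernel:

    /* Illustrative TCB layout: the kernel part (KTCB) holds trusted state,
       the user-visible part (UTCB) holds untrusted state such as system
       call / IPC parameters. */
    #include <stdint.h>

    struct utcb {                 /* lives in user-accessible memory */
        uint64_t mr[64];          /* message registers / syscall parameters */
    };

    struct ktcb {                 /* kernel object, one per thread */
        uint64_t saved_ip;        /* userland state, stored while */
        uint64_t saved_sp;        /*   the thread is not running  */
        uint64_t saved_flags;
        uint64_t saved_gpr[15];
        unsigned priority;        /* scheduling state */
        unsigned timeslice_left;
        struct utcb *utcb;        /* pointer to the user-level part */
    };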


KERNEL EXIT

■ once the kernel has provided its services, it returns to userland
  ■ by restoring the saved user IP and SP
■ the same thread or a different thread
  ■ the old thread may be blocking now
    ■ waiting for some resource
  ■ returning to a different thread might involve switching address spaces


SCHEDULING


BASICS

■ scheduling describes the decision which thread to run on a CPU at a given time
■ When do we schedule?
  ■ current thread blocks or yields
  ■ time quantum expired
■ How do we schedule?
  ■ RR, FIFO, RMS, EDF
  ■ based on thread priorities


POLICY

■ scheduling decisions are policies
  ■ should not be in a microkernel
■ L4 used to have facilities to implement scheduling in userland
  ■ each thread has an associated preempter
  ■ kernel sends an IPC when thread blocks
  ■ preempter tells kernel where to switch to
  ■ no efficient implementation yet
■ scheduling is the only in-kernel policy in L4


L4

■ scheduling in L4 is based on thread priorities
■ time-slice-based round robin within the same priority level
■ kernel manages priority and timeslice as part of the thread state
■ see scheduling lecture on Nov 6th
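A minimal sketch of priority-based round robin as described above; the data structures are illustrative only (L4 kernels keep per-priority ready queues in the kernel, but the exact layout differs per kernel):

    /* Sketch: pick the ready thread with the highest priority; threads of
       equal priority take turns because a preempted thread is re-enqueued
       at the tail of its level when its time slice expires. */
    #include <stddef.h>

    enum { PRIO_LEVELS = 256 };

    struct thread {
        struct thread *next;        /* link in the ready queue of its level */
        unsigned priority;          /* 0..255, higher runs first */
        unsigned timeslice_left;    /* remaining quantum in ticks */
    };

    struct scheduler {
        struct thread *ready[PRIO_LEVELS];   /* one FIFO queue per priority */
    };

    static struct thread *pick_next(struct scheduler *s) {
        for (int p = PRIO_LEVELS - 1; p >= 0; p--) {
            struct thread *t = s->ready[p];
            if (t) {
                s->ready[p] = t->next;   /* dequeue the head */
                return t;                /* re-enqueued at the tail later */
            }
        }
        return NULL;   /* nothing runnable: run the idle thread */
    }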


EXAMPLE

■ thread 1 is a high-priority driver thread, waiting for an interrupt (blocking)
■ threads 2 and 3 are ready with equal priority

[Diagram: threads 1–3 ordered by priority; repeated on the following slides as the scenario evolves]


EXAMPLE

■ 1 hardware thread
■ kernel fills time slices of threads 2 and 3
■ scheduler selects 2 to run


EXAMPLE

■ device interrupt arrives
■ thread 2 is forced into the kernel, where it unblocks thread 1 and fills its time slice
■ switch to thread 1 preempts thread 2


EXAMPLE

■ thread 1 blocks again (interrupt handled, waiting for next)
■ thread 2 has time left


EXAMPLE

■ thread 2’s time slice has expired
■ timer interrupt forces thread 2 into kernel
■ scheduler selects the next thread on the same priority level (round robin)


EXAMPLE

■ it’s really only one hardware thread being multiplexed

[Diagram: timeline of the single hardware thread alternating between threads 1–3 and the kernel]


SYNCHRONIZATION


BASICS

■ synchronization used for
  ■ mutual exclusion
  ■ producer-consumer scenarios
■ traditional approaches that do not work
  ■ spinning, busy waiting
  ■ disabling interrupts


ATOMIC OPS

■ for concurrent access to data structures
■ use atomic operations to protect manipulations
■ only suited for simple critical sections
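A short example of the idea, using C11 stdatomic as a stand-in for whatever atomic primitives the kernel or runtime offers: the shared value is updated with an indivisible read-modify-write instead of a lock.

    /* Sketch: an atomic operation protects a simple manipulation of shared
       data; fine for a counter, not for larger critical sections. */
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int hits = 0;

    static void record_hit(void) {
        atomic_fetch_add(&hits, 1);      /* indivisible increment */
    }

    int main(void) {
        record_hit();
        printf("hits = %d\n", atomic_load(&hits));
        return 0;
    }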


EXPECTATION

[Diagram: threads 1 and 2 take turns in the critical section, never overlapping]


SOLUTION

[Diagram: threads 1 and 2 each send an IPC call to a serializer thread; the serializer handles one call at a time and replies when done, so the critical sections of threads 1 and 2 are executed one after the other]


SEMAPHORES

■ serializer and atomic operations can be combined into a nice counting semaphore
■ semaphore
  ■ shared counter for correctness
  ■ wait queue for fairness
  ■ down (P) and up (V) operation
  ■ semaphore available iff counter > 0
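For reference, the down/up (P/V) interface as exposed by POSIX semaphores, used here purely as a familiar stand-in for the counting semaphore described above:

    /* Sketch: P/V on a counting semaphore (POSIX API as a stand-in). */
    #include <semaphore.h>
    #include <stdio.h>

    int main(void) {
        sem_t s;
        sem_init(&s, 0, 1);   /* counter = 1: semaphore available */
        sem_wait(&s);         /* down/P: decrement, block if counter == 0 */
        puts("in critical section");
        sem_post(&s);         /* up/V: increment, wake a waiter if any */
        sem_destroy(&s);
        return 0;
    }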


SEMAPHORES

■ counter increments and decrements using atomic operations
■ when necessary, call semaphore thread to block/unblock and enqueue/dequeue

[Diagram: threads 1 and 2 issue down and up operations; only in the contention case is the semaphore thread called to enqueue and block a thread and later to dequeue and unblock it]
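A hedged sketch of this scheme: the counter is changed with atomic operations, and only the contention case involves the semaphore thread, which in L4 would be an IPC call. block_in_semaphore_thread and wake_via_semaphore_thread are hypothetical placeholders for that IPC, not a real API.

    /* Sketch: user-level semaphore with an atomic fast path.
       Negative counter values count the waiting threads. */
    #include <stdatomic.h>

    struct usem {
        atomic_int counter;   /* > 0: available, <= 0: -counter waiters */
    };

    /* hypothetical IPC stubs to the semaphore thread */
    void block_in_semaphore_thread(struct usem *s);   /* enqueue and block   */
    void wake_via_semaphore_thread(struct usem *s);   /* dequeue and unblock */

    void usem_down(struct usem *s) {
        if (atomic_fetch_sub(&s->counter, 1) <= 0)    /* was it unavailable? */
            block_in_semaphore_thread(s);             /* contention: IPC     */
    }

    void usem_up(struct usem *s) {
        if (atomic_fetch_add(&s->counter, 1) < 0)     /* anyone waiting?     */
            wake_via_semaphore_thread(s);             /* contention: IPC     */
    }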


BENEFITS

■ cross-task semaphores, when counter is in shared memory
■ IPC only in the contention case
  ■ good for mutual exclusion when contention is rare
  ■ for producer-consumer scenarios, contention is the common case
■ solution for small critical sections in scheduling lecture


NOVA


INTRODUCTION

■ NOVA is a research microhypervisor currently developed by Udo Steinberg
■ explore technologies for a small and robust platform that hosts:
  ■ legacy operating systems
  ■ native NOVA applications
■ designed for virtualization and manycore


KERNEL STYLES

Process-Style
■ one kernel stack per thread
■ context switch: switch to kernel stack of target thread
■ target thread resumes at last context switch point
■ kernel state retained on stack at switch time
■ can switch anytime
■ examples: Fiasco, Linux

Interrupt-Style
■ one kernel stack per CPU
■ context switch: save kernel state of current thread, discard stack, restore state of target thread
■ target thread resumes with empty kernel stack in continuation function
■ kernel state must be explicitly serialized
■ lower thread overhead
■ examples: NOVA, (xnu)
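To make the interrupt-style column more concrete, here is a vastly simplified sketch of the continuation idea; the names and structure are illustrative and not NOVA's actual interfaces:

    /* Sketch: interrupt-style switching. Instead of resuming a saved
       kernel stack, a thread stores an explicit continuation function
       (plus any state it needs later) and is resumed on a fresh stack. */
    #include <stdio.h>

    struct kthread {
        const char *name;
        void (*continuation)(struct kthread *self);   /* where to resume */
    };

    static void finish_ipc(struct kthread *self) {
        printf("%s resumes in its continuation with an empty kernel stack\n",
               self->name);
    }

    static void dispatch(struct kthread *next) {
        /* the previous thread's kernel stack was discarded; its state
           was serialized explicitly before the switch */
        next->continuation(next);
    }

    int main(void) {
        struct kthread t = { "thread A", finish_ipc };
        dispatch(&t);
        return 0;
    }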


RECAP

■ repeated basic microkernel concepts
  ■ tasks, threads, IPC
■ closer look at threads
  ■ TCB, kernel entry
■ scheduling
  ■ time quanta, priorities, preemption
■ synchronization
  ■ atomic ops, serializer thread, semaphore
■ next up: memory management
