
Hardware Security

Security Threat Model

graph TB
Threats[Security Threats] --> Software[Software Attacks<br/>Buffer overflow, ROP]
Threats --> Hardware[Hardware Attacks<br/>Side-channels, fault injection]
Threats --> Physical[Physical Attacks<br/>Direct hardware access]

Software --> SoftMit[Software mitigations<br/>DEP, ASLR, CFI]
Hardware --> HardMit[Hardware mitigations<br/>Spectre/Meltdown patches]
Physical --> PhysMit[Physical security<br/>Tamper detection, encryption]

Speculative Execution Vulnerabilities

Spectre

Spectre - data leakage via speculative execution.

sequenceDiagram
participant Attacker
participant CPU
participant Cache
participant Victim

Attacker->>CPU: Train branch predictor<br/>(legitimate pattern)
Note over CPU: Predictor learns pattern

Attacker->>CPU: Trigger with malicious input
CPU->>CPU: Mispredicts branch
CPU->>CPU: Speculatively execute wrong path

CPU->>Victim: Read secret data (speculative)
CPU->>Cache: Load secret into cache

Note over CPU: Branch resolved<br/>Discard speculative results

Note over Cache: BUT cache still contains data!

Attacker->>Cache: Timing attack
Cache-->>Attacker: Leak secret via timing

Spectre Variants:

Variant 1 (Bounds Check Bypass):
if (x < array_size) {          // Trained to be true
    y = array[x];              // x can be malicious!
    z = array2[y * 4096];      // Speculative load based on y leaks it into the cache
}

Variant 2 (Branch Target Injection):
Poison indirect branch predictor
→ Speculative jump to attacker-controlled code

Spectre Example

// Victim code
uint8_t victim_array[256 * 4096];
uint8_t public_array[256];

if (x < public_array_size) {  // Boundary check
    uint8_t value = public_array[x];
    uint8_t dummy = victim_array[value * 4096];
}

// Attacker code
// 1. Train branch predictor: x always < size
for (int i = 0; i < 100; i++) {
    victim_function(i);  // x is valid
}

// 2. Attack: x is secret_address (out of bounds)
victim_function(secret_address);  // Mispredicts, executes speculatively
// Speculatively loads: victim_array[secret * 4096]
// secret is now in cache!

// 3. Timing attack to recover secret
for (int i = 0; i < 256; i++) {
    time1 = rdtsc();
    dummy = victim_array[i * 4096];
    time2 = rdtsc();
    if (time2 - time1 < threshold) {
        // Cache hit! i == secret
    }
}

Meltdown

Meltdown - reading kernel memory from user space.

sequenceDiagram
participant User as User Process
participant CPU
participant Kernel as Kernel Memory
participant Cache

User->>CPU: Load kernel address (illegal)
CPU->>CPU: Out-of-order execution<br/>(before permission check)

CPU->>Kernel: Speculatively read kernel data
CPU->>Cache: Load into cache

Note over CPU: Permission check fails<br/>Exception raised

Note over CPU: Discard results
Note over Cache: BUT cache still has data!

User->>Cache: Timing attack
Cache-->>User: Leak kernel data

Meltdown Example:

// Attacker code (pseudocode - C has no try/catch; real exploits
// suppress the fault with a signal handler or a TSX transaction)
char data;
try {
    // This will fault, but executes speculatively first
    data = *(char*)kernel_address;

    // Speculatively executed (before exception)
    dummy = probe_array[data * 4096];

} catch (exception) {
    // Exception caught, but cache already poisoned
}

// Timing attack to recover 'data'
for (int i = 0; i < 256; i++) {
    if (is_cached(probe_array + i * 4096)) {
        // i == kernel data!
    }
}

Mitigations

graph TB
Mit[Mitigations] --> SW[Software]
Mit --> HW[Hardware]
Mit --> Hybrid[Hybrid]

SW --> KPTI[KPTI<br/>Kernel Page Table Isolation]
SW --> Retpoline[Retpoline<br/>Indirect branch protection]
SW --> LFENCE[LFENCE barriers]

HW --> IBRS[IBRS<br/>Indirect Branch Restricted Speculation]
HW --> STIBP[STIBP<br/>Single Thread Indirect Branch Prediction]
HW --> SSBD[SSBD<br/>Speculative Store Bypass Disable]

Hybrid --> Microcode[Microcode updates]
Hybrid --> OSPatch[OS patches]

KPTI (Kernel Page Table Isolation):

Without KPTI:
User page tables map both user and kernel memory
→ Kernel memory accessible (but protected by permissions)
→ Meltdown can leak it

With KPTI:
Separate page tables for user and kernel
→ Kernel memory not mapped in user page tables
→ Meltdown blocked

Cost: 5-30% performance overhead (page tables switched on every syscall and interrupt)

Retpoline:

; Original indirect jump (vulnerable)
jmp *%rax

; Retpoline (safe)
call set_up_target
capture_spec:                 ; speculation is trapped in this loop
    pause
    lfence
    jmp capture_spec
set_up_target:
    mov %rax, (%rsp)          ; overwrite return address with the real target
    ret                       ; architectural ret jumps to *%rax

Check mitigations (Linux):

# Vulnerability status
grep . /sys/devices/system/cpu/vulnerabilities/*

# Output:
# meltdown: Mitigation: PTI
# spectre_v1: Mitigation: usercopy/swapgs barriers
# spectre_v2: Mitigation: Full generic retpoline, IBPB, IBRS_FW

Return-Oriented Programming (ROP)

ROP - a code-reuse attack: instead of injecting code, the attacker chains short existing instruction sequences ("gadgets") that end in ret.

graph LR
Buffer[Buffer Overflow] --> Overwrite[Overwrite return address]
Overwrite --> Chain[Chain of gadgets]

Chain --> G1[Gadget 1<br/>pop rax; ret]
Chain --> G2[Gadget 2<br/>pop rbx; ret]
Chain --> G3[Gadget 3<br/>syscall; ret]

G1 --> Execute[Execute arbitrary code]
G2 --> Execute
G3 --> Execute

ROP Gadget:

; Gadget 1 (exists in legitimate code)
pop rax
ret

; Gadget 2
pop rdi
pop rsi
ret

; Gadget 3
syscall
ret

ROP Chain:

Stack layout:
[address of gadget 1]
[value for rax]
[address of gadget 2]
[value for rdi]
[value for rsi]
[address of gadget 3] // execve syscall
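The chain above can be pictured with a toy analogy in C (illustrative only: real ROP reuses ret instructions inside existing code, not function pointers, and all names here are made up):

```c
#include <assert.h>

/* Toy analogy only: each "gadget" is a tiny function performing one
 * step, and the "chain" is an array of pointers walked in order - the
 * way overwritten return addresses walk the gadget list on the stack. */
typedef struct { long rax, rdi, rsi; } toy_regs;

static void gadget_pop_rax(toy_regs *r)     { r->rax = 59; }  /* execve nr */
static void gadget_pop_rdi_rsi(toy_regs *r) { r->rdi = 1; r->rsi = 2; }
static void gadget_syscall(toy_regs *r)     { (void)r; /* would trap */ }

long run_toy_chain(void) {
    void (*chain[])(toy_regs *) = {
        gadget_pop_rax, gadget_pop_rdi_rsi, gadget_syscall
    };
    toy_regs r = {0, 0, 0};
    for (unsigned i = 0; i < sizeof chain / sizeof chain[0]; i++)
        chain[i](&r);   /* each "ret" hands control to the next gadget */
    return r.rax;
}
```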

Mitigations

1. DEP (Data Execution Prevention) / NX bit:

Mark stack/heap as non-executable
→ Can't execute shellcode directly
BUT: ROP reuses existing code (still executable)

2. ASLR (Address Space Layout Randomization):

Randomize addresses of:
- Stack
- Heap
- Libraries
- Executable

→ Attacker can't predict gadget addresses

3. Control Flow Integrity (CFI):

Enforce valid control flow
- Check indirect jumps/calls
- Maintain shadow stack
- Validate return addresses

→ Prevent arbitrary ROP chains

4. Intel CET (Control-flow Enforcement Technology):

Hardware-enforced CFI

Shadow Stack:
- Parallel stack for return addresses
- Write-protected
- Compared on ret instruction

Indirect Branch Tracking:
- ENDBRANCH instruction marks valid targets
- Indirect jumps must land on ENDBRANCH

; CET-enabled code
function:
    endbranch64  ; Valid indirect branch target
    push rbp
    ; ... function body ...
    pop rbp
    ret          ; Checked against shadow stack
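The shadow-stack check that CFI and CET rely on can be sketched in plain C (a software toy with assumed names; real shadow stacks are maintained by hardware and are write-protected):

```c
#include <assert.h>

/* Software toy of a shadow stack. On call: push the return address to
 * a second stack. On return: compare with the value on the ordinary
 * stack. A mismatch means the return address was overwritten, e.g.
 * by a buffer overflow setting up a ROP chain. */
enum { SHADOW_MAX = 64 };
static unsigned long shadow[SHADOW_MAX];
static int shadow_top = 0;

void shadow_push(unsigned long ret_addr) {
    shadow[shadow_top++] = ret_addr;  /* hardware would write-protect this */
}

/* Returns 1 if the on-stack return address is intact, 0 if tampered. */
int shadow_check(unsigned long ret_addr_on_stack) {
    return shadow[--shadow_top] == ret_addr_on_stack;
}
```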

Hardware Security Modules (HSM)

HSM - dedicated, tamper-resistant cryptographic hardware.

graph TB
HSM[HSM] --> Crypto[Cryptographic Operations<br/>AES, RSA, ECC]
HSM --> KeyGen[Key Generation]
HSM --> KeyStore[Secure Key Storage]
HSM --> Random[True Random Number Generator]

HSM --> Tamper[Tamper Detection]
HSM --> Isolate[Physical Isolation]

Crypto --> Fast[Hardware acceleration<br/>100-1000x faster]
KeyStore --> Secure[Keys never leave HSM]

Use cases:

  • Certificate Authorities (CA)
  • Payment systems
  • Banking
  • TLS/SSL termination
  • Database encryption

Examples:

  • Thales nShield
  • Utimaco SecurityServer
  • AWS CloudHSM
  • YubiHSM

Trusted Platform Module (TPM)

TPM - secure crypto processor on motherboard.

graph TB
TPM[TPM Chip] --> PCR[Platform Configuration Registers<br/>PCRs]
TPM --> Keys[Endorsement Key<br/>Storage Root Key]
TPM --> RNG[Random Number Generator]
TPM --> NVRAM[Non-volatile RAM]

PCR --> Measure[Measure boot process<br/>Secure Boot]
Keys --> Seal[Seal/Unseal data<br/>Tied to PCR values]
NVRAM --> Store[Store secrets]

TPM PCRs (Platform Configuration Registers)

PCR 0: BIOS/UEFI firmware
PCR 1: BIOS/UEFI configuration
PCR 2: Option ROM code
PCR 3: Option ROM config
PCR 4: MBR/GPT
PCR 5: MBR/GPT config
PCR 6: State transitions
PCR 7: Secure Boot state
PCR 8-15: OS specific

PCRs are "extend-only":
PCR_new = Hash(PCR_old || measurement)
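The extend-only rule can be demonstrated with a toy model (FNV-1a stands in for the SHA-1/SHA-256 a real TPM uses; the point is that the result depends on every measurement and on their order):

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for TPM extend. The PCR can never be set directly -
 * it can only absorb new measurements, so reaching a given PCR value
 * requires replaying the exact measurement sequence. */
uint64_t toy_hash(uint64_t old, uint64_t measurement) {
    uint64_t h = 1469598103934665603ULL;        /* FNV-1a offset basis */
    uint64_t data[2] = { old, measurement };
    for (int i = 0; i < 2; i++)
        for (int b = 0; b < 8; b++) {
            h ^= (data[i] >> (8 * b)) & 0xff;
            h *= 1099511628211ULL;              /* FNV-1a prime */
        }
    return h;
}

/* PCR_new = Hash(PCR_old || measurement) */
uint64_t pcr_extend(uint64_t pcr, uint64_t measurement) {
    return toy_hash(pcr, measurement);
}
```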

Measured Boot

sequenceDiagram
participant BIOS
participant TPM
participant Bootloader
participant OS

BIOS->>TPM: Measure BIOS<br/>Extend PCR 0
BIOS->>Bootloader: Load bootloader

BIOS->>TPM: Measure bootloader<br/>Extend PCR 4
Bootloader->>OS: Load kernel

Bootloader->>TPM: Measure kernel<br/>Extend PCR 8

Note over TPM: PCRs reflect entire boot chain

OS->>TPM: Seal secret with PCRs
Note over TPM: Secret only unsealed<br/>if PCRs match

TPM Sealed Storage

// Seal data (can only unseal if PCR values match)
tpm_seal(data, pcr_selection, password, &sealed_blob);

// Later: Unseal (verifies PCR values)
if (tpm_unseal(sealed_blob, password, &data) == SUCCESS) {
    // PCRs match - boot chain unmodified
    use(data);
} else {
    // PCRs don't match - system compromised!
}
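The seal/unseal semantics can be sketched with a toy model (toy_seal/toy_unseal are made-up names; the real interface is TPM2_Create/TPM2_Unseal via a TSS library, and the sealing key never leaves the TPM):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of sealing: the blob records the PCR value it was sealed
 * against, and unsealing fails if the current PCR differs - i.e. if
 * the boot chain that produced the PCR has changed. */
typedef struct { uint32_t pcr_at_seal; uint8_t blob[16]; } sealed_t;
static const uint8_t toy_key[16] = "not-a-real-key!";

sealed_t toy_seal(const uint8_t data[16], uint32_t current_pcr) {
    sealed_t s;
    s.pcr_at_seal = current_pcr;
    for (int i = 0; i < 16; i++) s.blob[i] = data[i] ^ toy_key[i];
    return s;
}

int toy_unseal(const sealed_t *s, uint32_t current_pcr, uint8_t out[16]) {
    if (s->pcr_at_seal != current_pcr) return -1;  /* boot chain changed */
    for (int i = 0; i < 16; i++) out[i] = s->blob[i] ^ toy_key[i];
    return 0;
}
```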

Use cases:

  • Disk encryption (BitLocker, LUKS)
  • Secure Boot
  • Attestation (prove system state to remote party)
  • Key storage

# Check TPM (Linux)
cat /sys/class/tpm/tpm0/device/description
# TPM 2.0 Device

# Read PCRs
tpm2_pcrread sha256:0,1,2,3,4,5,6,7

Intel SGX (Software Guard Extensions)

SGX - secure enclaves in user-space.

graph TB
Process[Process] --> Normal[Normal Memory<br/>Accessible by OS/VMM]
Process --> Enclave[SGX Enclave<br/>Encrypted, Isolated]

Enclave --> Code[Enclave Code]
Enclave --> Data[Enclave Data]

Enclave --> Protect[Protected from:]
Protect --> OS[Operating System]
Protect --> Hypervisor[Hypervisor]
Protect --> BIOS[BIOS/SMM]
Protect --> Other[Other processes]

Note[Only enclave code<br/>can access enclave memory]

SGX Architecture

EPC (Enclave Page Cache):
- Reserved physical memory
- Encrypted with random key (per power-cycle)
- 128 MB (typical)

PRM (Processor Reserved Memory):
- Contains EPC
- Not accessible to OS/VMM
- Hardware-enforced

EPCM (Enclave Page Cache Map):
- Metadata for EPC pages
- Ownership, permissions

SGX Workflow

sequenceDiagram
participant App
participant Enclave
participant CPU
participant Memory

App->>CPU: ECREATE (create enclave)
CPU->>Memory: Allocate EPC pages

App->>CPU: EADD (add page to enclave)
App->>CPU: EINIT (finalize enclave)

App->>CPU: EENTER (enter enclave)
Note over CPU: Switch to enclave mode

Enclave->>Enclave: Execute sensitive code
Note over Memory: Memory encrypted

Enclave->>CPU: EEXIT (exit enclave)
Note over CPU: Switch back to app

SGX Example

// Enclave code (trusted)
int enclave_function(int* sealed_data) {
    // Decrypt sealed data
    int secret = unseal(sealed_data);

    // Process secret
    int result = process(secret);

    // Return result (secret never leaves enclave)
    return result;
}

// Application code (untrusted)
int main() {
    sgx_enclave_id_t eid;

    // Create enclave
    sgx_create_enclave("enclave.signed.so", &eid);

    // Call enclave function (ECALL)
    int result;
    enclave_function(eid, &result, sealed_data);

    // Destroy enclave
    sgx_destroy_enclave(eid);
}

SGX Attestation

sequenceDiagram
participant Enclave
participant App
participant Remote as Remote Party
participant IAS as Intel Attestation Service

Enclave->>Enclave: Generate report
Note over Enclave: Contains enclave measurement

Enclave->>App: Report
App->>IAS: Quote (signed report)

IAS->>IAS: Verify signature
IAS->>Remote: Attestation result

Note over Remote: Verify enclave is genuine<br/>and running correct code

Remote->>Enclave: Send secret

Limitations:

  • Small enclave size (128 MB)
  • Performance overhead (context switches)
  • Side-channel attacks (speculative execution)
  • Intel controls attestation

ARM TrustZone

TrustZone - hardware-enforced isolation between a Normal World and a Secure World.

graph TB
ARM[ARM Processor] --> Normal[Normal World<br/>Rich OS<br/>Linux, Android]
ARM --> Secure[Secure World<br/>Trusted OS<br/>OP-TEE, QSEE]

Normal --> Apps[Applications]
Normal --> Kernel[Linux Kernel]

Secure --> TA[Trusted Applications<br/>Crypto, DRM, Payment]
Secure --> SecKernel[Secure Kernel]

Apps -.SMC call.-> TA

Note[Hardware-enforced separation<br/>Normal World can't access Secure World]

TrustZone Architecture

Two virtual processors:
- Normal World (NS bit = 1)
- Secure World (NS bit = 0)

NS bit controls:
- Memory access (separate address spaces)
- Peripheral access
- Interrupts (FIQ → Secure, IRQ → Normal)

SMC (Secure Monitor Call):
- Switch between worlds
- Handled by Secure Monitor

TrustZone Example

// Normal World (Linux)
int normal_world_app() {
    // Call secure world
    int result = trustzone_call(CMD_DECRYPT, encrypted_data);
    return result;
}

// Secure World (OP-TEE)
int secure_world_handler(int cmd, void* data) {
    switch (cmd) {
    case CMD_DECRYPT:
        // Access secure keys (not accessible from Normal World)
        return aes_decrypt(secure_key, data);
    case CMD_SIGN:
        return rsa_sign(secure_key, data);
    }
}

SGX vs TrustZone

Feature      | Intel SGX                  | ARM TrustZone
-------------|----------------------------|---------------------------
Isolation    | Enclaves (per-process)     | Secure World (system-wide)
Trust        | Don't trust OS             | Trust Secure OS
Size         | Small (128 MB)             | Large (GB)
Performance  | Overhead (context switch)  | Fast (hardware switch)
Use case     | Cloud, untrusted env       | Mobile, embedded
Attestation  | Remote attestation         | Local attestation

Side-Channel Attacks

Cache Timing Attacks

sequenceDiagram
participant Attacker
participant Cache
participant Victim

Attacker->>Cache: Flush cache lines
Note over Attacker: Prime cache

Victim->>Cache: Access secret-dependent data
Note over Cache: Some lines loaded

Attacker->>Cache: Probe cache lines
Note over Attacker: Measure access time

Attacker->>Attacker: Fast = cache hit = victim accessed
Attacker->>Attacker: Slow = cache miss = victim didn't access

Note over Attacker: Infer secret from pattern

Flush+Reload:

// 1. Flush
clflush(shared_memory);

// 2. Wait for victim
sleep_a_bit();

// 3. Reload and measure
time1 = rdtsc();
access(shared_memory);
time2 = rdtsc();

if (time2 - time1 < threshold) {
    // Victim accessed this address
}

Mitigations

graph TB
Mit[Cache Attack Mitigations] --> Isolation[Cache Isolation]
Mit --> Noise[Add Noise]
Mit --> Disable[Disable Features]

Isolation --> Partition[Cache partitioning<br/>CAT]
Isolation --> Flush[Flush on context switch]

Noise --> Random[Randomize timing]
Noise --> Dummy[Dummy accesses]

Disable --> DisableCache[Disable cache sharing]
Disable --> DisableHT[Disable hyperthreading]

Intel CAT (Cache Allocation Technology):

# Partition L3 cache
# Give VM1 ways 0-7, VM2 ways 8-15
pqos -e "llc:0=0xff;llc:1=0xff00"

Row Hammer

Row Hammer - DRAM bit flips via repeated access.

sequenceDiagram
participant Attacker
participant DRAM

loop Millions of times
Attacker->>DRAM: Read row N
Attacker->>DRAM: Read row N+2
end

Note over DRAM: Row N+1 (victim row)<br/>Bit flips due to EM interference

Attacker->>DRAM: Read row N+1
Note over Attacker: Flip security-critical bit<br/>Escalate privileges

Mitigation:

ECC memory: Detect/correct bit flips
TRR (Target Row Refresh): Refresh neighboring rows

Secure Boot

graph LR
ROM[ROM<br/>Hardware Root of Trust] --> Bootloader[Bootloader]
Bootloader --> Kernel[Kernel]
Kernel --> Init[Init]

ROM -.Verify signature.-> Bootloader
Bootloader -.Verify signature.-> Kernel
Kernel -.Verify signature.-> Init

ROM --> Fail1[Signature fail<br/>Boot halted]
Bootloader --> Fail2[Signature fail<br/>Boot halted]

UEFI Secure Boot:

Platform Key (PK): Top-level key
Key Exchange Keys (KEK): Authorize signature databases
Signature Database (db): Allowed signatures
Forbidden Database (dbx): Revoked signatures

Boot flow:
1. UEFI firmware verifies bootloader signature against db
2. Bootloader verifies kernel signature
3. Kernel verifies module signatures
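The verify-then-hand-off flow can be sketched as a toy model (djb2 hashes stand in for the RSA/ECDSA signature checks UEFI actually performs against db/dbx):

```c
#include <assert.h>
#include <stdint.h>

/* Toy chain of trust: each stage only hands off when the next stage
 * matches the expected value it carries; otherwise boot halts there. */
uint64_t toy_digest(const char *image) {
    uint64_t h = 5381;                 /* djb2, stand-in for a signature check */
    while (*image) h = h * 33 + (uint8_t)*image++;
    return h;
}

/* Returns how many stages verified before a mismatch (3 = full chain OK). */
int verify_chain(const char *stages[3], const uint64_t expected[3]) {
    for (int i = 0; i < 3; i++)
        if (toy_digest(stages[i]) != expected[i])
            return i;                  /* halt boot here */
    return 3;
}
```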
# Check Secure Boot status (Linux)
mokutil --sb-state
# SecureBoot enabled

# List enrolled keys
mokutil --list-enrolled

Hardware Random Number Generators

#include <immintrin.h>

// Intel RDRAND instruction
unsigned long long random;
if (_rdrand64_step(&random)) {
    // random now contains a hardware random number
}

// Intel RDSEED (direct conditioned-entropy output for seeding PRNGs; slower)
unsigned long long seed;
if (_rdseed64_step(&seed)) {
    // seed from hardware entropy source
}

Why hardware RNG?

  • True randomness (thermal noise, quantum effects)
  • No internal state an attacker can reconstruct (unlike software PRNGs, which are faster but deterministic)
  • Critical for cryptography
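When the RDRAND/RDSEED intrinsics are unavailable, Linux exposes the kernel entropy pool (which mixes in hardware sources where present) via getrandom(2); a minimal sketch, assuming glibc 2.25+:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>
#include <sys/random.h>   /* Linux, glibc >= 2.25 */

/* Fill buf with len random bytes from the kernel pool. Loops because
 * getrandom may return fewer bytes than requested. */
int fill_random(unsigned char *buf, size_t len) {
    size_t got = 0;
    while (got < len) {
        ssize_t n = getrandom(buf + got, len - got, 0);
        if (n < 0) return -1;
        got += (size_t)n;
    }
    return 0;
}
```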

Memory Encryption

AMD SME/SEV

SME (Secure Memory Encryption):

  • Encrypt all DRAM
  • Transparent to OS

SEV (Secure Encrypted Virtualization):

  • Encrypt VM memory
  • Each VM has unique key
  • Hypervisor can't read VM memory

graph TB
CPU[AMD CPU] --> MemEnc[Memory Encryption Engine]

MemEnc --> VM1[VM 1<br/>Key A]
MemEnc --> VM2[VM 2<br/>Key B]
MemEnc --> Host[Hypervisor<br/>No encryption]

VM1 --> DRAM1[Encrypted DRAM<br/>Key A]
VM2 --> DRAM2[Encrypted DRAM<br/>Key B]

Note[Hypervisor can't read<br/>VM memory]

Intel TME/MKTME

TME (Total Memory Encryption):

  • Similar to AMD SME

MKTME (Multi-Key TME):

  • Similar to AMD SEV
  • Multiple encryption keys

Best Practices

Development

  1. Use constant-time algorithms

    // BAD: strcmp exits at the first mismatch - timing leak
    if (strcmp(password, correct_password) == 0) { ... }

    // GOOD: Constant time
    int equals = constant_time_compare(password, correct_password);
  2. Validate input

    // Prevent buffer overflows
    if (len > MAX_LEN) return ERROR;
  3. Use hardware security features

    // Use TPM for key storage
    // Use SGX/TrustZone for sensitive operations
  4. Enable mitigations

    # Compile with security flags
    gcc -fstack-protector-strong -D_FORTIFY_SOURCE=2 \
    -fPIE -pie -Wl,-z,relro,-z,now
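The constant_time_compare used in step 1 can be sketched as below (the name comes from the example above; production code should prefer a vetted primitive such as OpenSSL's CRYPTO_memcmp):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Always scans the full length and accumulates differences with OR,
 * so execution time does not depend on where the first mismatch
 * occurs - unlike strcmp, which exits early and leaks that position. */
int constant_time_compare(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];            /* no early exit on mismatch */
    return diff == 0;                   /* 1 if equal, 0 otherwise */
}
```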

System Administration

  1. Keep systems updated

    # Microcode updates
    apt install intel-microcode # or amd64-microcode
  2. Enable security features

    # Check Spectre/Meltdown mitigations
    grep . /sys/devices/system/cpu/vulnerabilities/*

    # Enable Secure Boot in BIOS
    # Enable TPM in BIOS
  3. Monitor for attacks

    # Check for suspicious activity
    # Audit logs
    # IDS/IPS
  4. Disable unnecessary features

    # Disable SMT (hyperthreading) for high-security environments
    echo off > /sys/devices/system/cpu/smt/control

Related Topics

  • CPU Architecture: Speculative execution, branch prediction
  • Cache Memory: Side-channel attacks
  • Memory Hierarchy: Memory encryption
  • Virtualization: VM isolation, SGX
  • Modern Architectures: Security features in new CPUs