author | Ard Biesheuvel <ardb@kernel.org> | 2021-09-23 09:15:53 +0200 |
---|---|---|
committer | Ard Biesheuvel <ardb@kernel.org> | 2021-12-03 15:11:33 +0100 |
commit | a1c510d0adc604bb143c86052bc5be48cbcfa17c (patch) | |
tree | 63087a2a99eb52707663706cb69a939eabb2b7de /arch/arm/kernel/entry-header.S | |
parent | ae5cc07da8f8e4c9802894c05f1d96802b9de5f2 (diff) | |
ARM: implement support for vmap'ed stacks
Wire up the generic support for managing task stack allocations via vmalloc,
and implement the entry code that detects whether we faulted because of a
stack overrun (or a future stack overrun caused by pushing the pt_regs array).
While this adds a fair amount of tricky entry asm code, it should be
noted that it only adds a TST + branch to the svc_entry path. The code
implementing the non-trivial handling of the overflow stack is emitted
out-of-line into the .text section.
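For illustration, the alignment trick behind that single TST can be sketched
in plain C as follows (the PAGE_SHIFT and THREAD_SIZE_ORDER values are
assumptions for the example, not taken from any particular config):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT        12  /* assumed: 4 KiB pages */
#define THREAD_SIZE_ORDER 1   /* assumed: 8 KiB stacks */
#define THREAD_SIZE       (1UL << (THREAD_SIZE_ORDER + PAGE_SHIFT))

/* Stacks are placed so that a valid SP always has the THREAD_SIZE bit
 * clear; running off the bottom of the stack sets it. The fast path is
 * therefore a single AND-and-test (the TST) plus a branch. */
static bool sp_overflowed(uintptr_t sp)
{
        return sp & THREAD_SIZE;
}

int main(void)
{
        uintptr_t base = 0x4000;  /* suitably aligned stack base */

        printf("%d\n", sp_overflowed(base + 0x100));  /* 0: within stack */
        printf("%d\n", sp_overflowed(base - 4));      /* 1: overflowed   */
        return 0;
}
```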
Since on ARM, we rely on do_translation_fault() to keep PMD level page
table entries that cover the vmalloc region up to date, we need to
ensure that we don't hit such a stale PMD entry when accessing the
stack. So we do a dummy read from the new stack while still running from
the old one on the context switch path, and bump the vmalloc_seq counter
when PMD level entries in the vmalloc range are modified, so that the MM
switch fetches the latest version of the entries.
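The dummy read amounts to something like the sketch below (the helper name
and the plain-C rendering are mine, for illustration only; the real code
lives in the ARM context-switch path):

```c
/* Illustrative sketch, not the kernel's actual code: read one word of
 * the next task's vmap'ed stack while still executing on the old one.
 * If the PMD covering the new stack is stale, the translation fault is
 * taken here, where it can be fixed up safely, rather than at an
 * unrecoverable point after SP has already moved. */
static inline void probe_new_stack(const unsigned long *new_stack)
{
        (void)*(volatile const unsigned long *)new_stack;
}
```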
Note that we need to increase the per-mode stack by 1 word, to gain some
space to stash a GPR until we know it is safe to touch the stack.
However, due to the cacheline alignment of the struct, this does not
actually increase the memory footprint of the struct stack array at all.
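To see why, consider this sketch (the array sizes and the 64-byte cacheline
are assumptions for the example, not the kernel's exact layout):

```c
#include <stdio.h>

/* Four per-mode stacks, each grown from 3 to 4 words, inside a
 * cacheline-aligned struct (cacheline size assumed to be 64 bytes). */
struct stack {
        unsigned int irq[4];  /* was [3]; the extra word stashes a GPR */
        unsigned int abt[4];
        unsigned int und[4];
        unsigned int fiq[4];
} __attribute__((aligned(64)));

int main(void)
{
        /* 48 bytes (3-word arrays) and 64 bytes (4-word arrays) both
         * fit in one 64-byte cacheline, so sizeof() is 64 either way. */
        printf("sizeof(struct stack) = %zu\n", sizeof(struct stack));
        return 0;
}
```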
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Keith Packard <keithpac@amazon.com>
Tested-by: Marc Zyngier <maz@kernel.org>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
Diffstat (limited to 'arch/arm/kernel/entry-header.S')
-rw-r--r-- | arch/arm/kernel/entry-header.S | 37 |
1 file changed, 37 insertions, 0 deletions
```diff
diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
index ae24dd54e9ef..81df2a3561ca 100644
--- a/arch/arm/kernel/entry-header.S
+++ b/arch/arm/kernel/entry-header.S
@@ -423,3 +423,40 @@ scno    .req    r7      @ syscall number
 tbl    .req    r8      @ syscall table pointer
 why    .req    r8      @ Linux syscall (!= 0)
 tsk    .req    r9      @ current thread_info
+
+       .macro  do_overflow_check, frame_size:req
+#ifdef CONFIG_VMAP_STACK
+       @
+       @ Test whether the SP has overflowed. Task and IRQ stacks are aligned
+       @ so that SP & BIT(THREAD_SIZE_ORDER + PAGE_SHIFT) should always be
+       @ zero.
+       @
+ARM(   tst     sp, #1 << (THREAD_SIZE_ORDER + PAGE_SHIFT)      )
+THUMB( tst     r1, #1 << (THREAD_SIZE_ORDER + PAGE_SHIFT)      )
+THUMB( it      ne                                              )
+       bne     .Lstack_overflow_check\@
+
+       .pushsection    .text
+.Lstack_overflow_check\@:
+       @
+       @ The stack pointer is not pointing to a valid vmap'ed stack, but it
+       @ may be pointing into the linear map instead, which may happen if we
+       @ are already running from the overflow stack. We cannot detect overflow
+       @ in such cases so just carry on.
+       @
+       str     ip, [r0, #12]                   @ Stash IP on the mode stack
+       ldr_l   ip, high_memory                 @ Start of VMALLOC space
+ARM(   cmp     sp, ip                  )       @ SP in vmalloc space?
+THUMB( cmp     r1, ip                  )
+THUMB( itt     lo                      )
+       ldrlo   ip, [r0, #12]                   @ Restore IP
+       blo     .Lout\@                         @ Carry on
+
+THUMB( sub     r1, sp, r1              )       @ Restore original R1
+THUMB( sub     sp, r1                  )       @ Restore original SP
+       add     sp, sp, #\frame_size            @ Undo svc_entry's SP change
+       b       __bad_stack                     @ Handle VMAP stack overflow
+       .popsection
+.Lout\@:
+#endif
+       .endm
```