From: Catalin Marinas
Date: Tue, 31 Aug 2021 08:10:00 +0000 (+0100)
Subject: Merge remote-tracking branch 'tip/sched/arm64' into for-next/core
X-Git-Tag: microblaze-v5.16~29^2~2
X-Git-Url: http://git.monstr.eu/?p=linux-2.6-microblaze.git;a=commitdiff_plain;h=65266a7c6abfa1ad915a362c41bf38576607f1f9

Merge remote-tracking branch 'tip/sched/arm64' into for-next/core

* tip/sched/arm64: (785 commits)
  Documentation: arm64: describe asymmetric 32-bit support
  arm64: Remove logic to kill 32-bit tasks on 64-bit-only cores
  arm64: Hook up cmdline parameter to allow mismatched 32-bit EL0
  arm64: Advertise CPUs capable of running 32-bit applications in sysfs
  arm64: Prevent offlining first CPU with 32-bit EL0 on mismatched system
  arm64: exec: Adjust affinity for compat tasks with mismatched 32-bit EL0
  arm64: Implement task_cpu_possible_mask()
  sched: Introduce dl_task_check_affinity() to check proposed affinity
  sched: Allow task CPU affinity to be restricted on asymmetric systems
  sched: Split the guts of sched_setaffinity() into a helper function
  sched: Introduce task_struct::user_cpus_ptr to track requested affinity
  sched: Reject CPU affinity changes based on task_cpu_possible_mask()
  cpuset: Cleanup cpuset_cpus_allowed_fallback() use in select_fallback_rq()
  cpuset: Honour task_cpu_possible_mask() in guarantee_online_cpus()
  cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1
  sched: Introduce task_cpu_possible_mask() to limit fallback rq selection
  sched: Cgroup SCHED_IDLE support
  sched/topology: Skip updating masks for non-online nodes
  Linux 5.14-rc6
  lib: use PFN_PHYS() in devmem_is_allowed()
  ...
---

65266a7c6abfa1ad915a362c41bf38576607f1f9
diff --cc arch/arm64/kernel/process.c
index 5464d575192b,e0e7f4e9b607..2bd270cd603e
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@@ -472,22 -469,7 +473,13 @@@ static void erratum_1418040_thread_swit
  	write_sysreg(val, cntkctl_el1);
  }
  
- static void compat_thread_switch(struct task_struct *next)
- {
- 	if (!is_compat_thread(task_thread_info(next)))
- 		return;
- 
- 	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
- 		set_tsk_thread_flag(next, TIF_NOTIFY_RESUME);
- }
- 
 -static void update_sctlr_el1(u64 sctlr)
 +/*
 + * __switch_to() checks current->thread.sctlr_user as an optimisation. Therefore
 + * this function must be called with preemption disabled and the update to
 + * sctlr_user must be made in the same preemption disabled block so that
 + * __switch_to() does not see the variable update before the SCTLR_EL1 one.
 + */
 +void update_sctlr_el1(u64 sctlr)
  {
  	/*
  	 * EnIA must not be cleared while in the kernel as this is necessary for
diff --cc arch/arm64/kernel/signal.c
index e93ffd7d38e1,22899c86711a..fb54fb76e17f
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@@ -916,20 -912,8 +917,7 @@@ static void do_signal(struct pt_regs *r
  		restore_saved_sigmask();
  }
  
- static bool cpu_affinity_invalid(struct pt_regs *regs)
- {
- 	if (!compat_user_mode(regs))
- 		return false;
- 
- 	/*
- 	 * We're preemptible, but a reschedule will cause us to check the
- 	 * affinity again.
- 	 */
- 	return !cpumask_test_cpu(raw_smp_processor_id(),
- 				 system_32bit_el0_cpumask());
- }
- 
 -asmlinkage void do_notify_resume(struct pt_regs *regs,
 -				 unsigned long thread_flags)
 +void do_notify_resume(struct pt_regs *regs, unsigned long thread_flags)
  {
  	do {
  		if (thread_flags & _TIF_NEED_RESCHED) {