  1. Oct 10, 2013
    • kvm: ppc: booke: check range page invalidation progress on page setup · 40fde70d
      Bharat Bhushan authored
      
      When the MM code is invalidating a range of pages, it calls the KVM
      kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
      kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
      However, the Linux PTEs for the range being flushed are still valid at
      that point.  We are not supposed to establish any new references to pages
      in the range until the ...range_end() notifier gets called.
      The PPC-specific KVM code doesn't get any explicit notification of that;
      instead, we are supposed to use mmu_notifier_retry() to test whether we
      are or have been inside a range flush notifier pair while we have been
      referencing a page.
      
      This patch calls mmu_notifier_retry() while mapping the guest
      page, to ensure we do not reference a page while it is within a
      range invalidation.
      
      This call is made inside a region locked with kvm->mmu_lock, which is
      the same lock taken by the KVM MMU notifier functions, thus
      ensuring that no new notification can proceed while we are in the
      locked region.
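
      The resulting pattern looks roughly like this (a sketch of the idiom,
      not the literal patch; map_guest_page() and the TLB-install step are
      illustrative placeholders):

      	/* Sketch: map one guest page while honouring the
      	 * invalidate_range_start/end window. */
      	static int map_guest_page(struct kvm *kvm, gfn_t gfn)
      	{
      		unsigned long mmu_seq;
      		pfn_t pfn;

      		mmu_seq = kvm->mmu_notifier_seq;	/* snapshot first */
      		smp_rmb();

      		pfn = gfn_to_pfn(kvm, gfn);	/* reference the page */

      		spin_lock(&kvm->mmu_lock);
      		if (mmu_notifier_retry(kvm, mmu_seq)) {
      			/* An invalidation ran since the snapshot: drop
      			 * the reference and let the caller retry. */
      			spin_unlock(&kvm->mmu_lock);
      			kvm_release_pfn_clean(pfn);
      			return -EAGAIN;
      		}
      		/* ...safe to install the guest TLB entry for pfn... */
      		spin_unlock(&kvm->mmu_lock);
      		return 0;
      	}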
      
      Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Acked-by: Alexander Graf <agraf@suse.de>
      [Backported to 3.12 - Paolo]
      Reviewed-by: Bharat Bhushan <bharat.bhushan@freescale.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: PPC: Book3S HV: Fix typo in saving DSCR · cfc86025
      Paul Mackerras authored
      
      This fixes a typo in the code that saves the guest DSCR (Data Stream
      Control Register) into the kvm_vcpu_arch struct on guest exit.  The
      effect of the typo was that the DSCR value was saved in the wrong place,
      so changes to the DSCR by the guest didn't persist across guest exit
      and entry, and some host kernel memory got corrupted.
      
      Cc: stable@vger.kernel.org [v3.1+]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Alexander Graf <agraf@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: fix shadow on EPT · d0d538b9
      Gleb Natapov authored
      
      Commit 72f85795 broke shadow on EPT. This patch reverts it and fixes
      PAE on nEPT (which the reverted commit fixed) in a different way.
      
      Shadow on EPT is now broken because while L1 builds a shadow page table
      for L2 (which is PAE while L2 is in real mode) it never loads L2's
      GUEST_PDPTR[0-3].  They do not need to be loaded explicitly because,
      without nested virtualization, HW does this during guest entry if EPT is
      disabled; but in our case L0 emulates L2's vmentry while EPT is enabled,
      so we cannot rely on vmcs12->guest_pdptr[0-3] to contain up-to-date
      values and need to re-read the PDPTEs from L2 memory. This is what
      kvm_set_cr3() does, but by clearing cache bits during L2 vmentry we drop
      the values that kvm_set_cr3() read from memory.
      
      So why does the same code not work for PAE on nEPT? kvm_set_cr3()
      reads the pdptes into vcpu->arch.walk_mmu->pdptrs[]. walk_mmu points to
      vcpu->arch.nested_mmu while the nested guest is running, but
      ept_load_pdptrs() uses vcpu->arch.mmu, which contains incorrect values.
      Fix that by using walk_mmu in ept_(load|save)_pdptrs.
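
      The shape of the fix on the load side looks like this (a sketch; the
      guard conditions of the real function are abbreviated):

      	static void ept_load_pdptrs(struct kvm_vcpu *vcpu)
      	{
      		/* walk_mmu points at the nested MMU while L2 runs, and its
      		 * pdptrs[] were filled from L2 memory by kvm_set_cr3(). */
      		struct kvm_mmu *mmu = vcpu->arch.walk_mmu;

      		if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
      			vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
      			vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
      			vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
      			vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]);
      		}
      	}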
      
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Tested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. Sep 27, 2013
    • sparc64: Fix buggy strlcpy() conversion in ldom_reboot(). · 2bd161a6
      David S. Miller authored
      
      Commit 117a0c5f ("sparc: kernel: using
      strlcpy() instead of strcpy()") added a bug to ldom_reboot in
      arch/sparc/kernel/ds.c:
      
      -	strcpy(full_boot_str + strlen("boot "), boot_command);
      +	strlcpy(full_boot_str + strlen("boot "), boot_command,
      +		sizeof(full_boot_str + strlen("boot ")));
      
      That last sizeof() expression evaluates to the size of a pointer
      (the sum decays to a char *), which is not what was intended.
      
      Also, even the corrected:

           sizeof(full_boot_str) + strlen("boot ")

      is not right, as the destination buffer length is just plain
      "sizeof(full_boot_str)", and that's what the final argument
      should be.
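
      To see the pitfall concretely (a standalone userspace demo, not kernel
      code; the buffer size is arbitrary):

      	#include <stdio.h>
      	#include <string.h>

      	int main(void)
      	{
      		char full_boot_str[256];

      		/* Buggy: the sum decays to a char *, so this is the
      		 * size of a pointer, e.g. 8 on 64-bit. */
      		printf("%zu\n", sizeof(full_boot_str + strlen("boot ")));

      		/* Still wrong: 256 + 5 = 261, larger than the buffer. */
      		printf("%zu\n", sizeof(full_boot_str) + strlen("boot "));

      		/* What the final strlcpy() argument should be: 256. */
      		printf("%zu\n", sizeof(full_boot_str));
      		return 0;
      	}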
      
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • arm: Fix build error with context tracking calls · 0c06a5d4
      Frederic Weisbecker authored
      
      Commit ad65782f (context_tracking: Optimize main APIs off case
      with static key) converted the context tracking main APIs to inline
      functions and left the ARM asm callers behind.

      This can easily be fixed by making ARM call the post-static-keys
      context tracking functions. We just need to replicate the static
      key checks there. We'll remove these later, once ARM supports the
      context tracking static keys.
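
      Roughly, the replicated check is shaped like this (hypothetical wrapper
      name; the static key and out-of-line function follow the 3.12-era API,
      but the details may differ from the actual patch):

      	/* Out-of-line wrapper the ARM assembly can branch to, redoing
      	 * the static-key check the inline API normally performs. */
      	void arm_ct_user_enter(void)
      	{
      		if (static_key_false(&context_tracking_enabled))
      			context_tracking_user_enter();
      	}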
      
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Reported-by: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Tested-by: Kevin Hilman <khilman@linaro.org>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Anil Kumar <anilk4.v@gmail.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Benoit Cousson <b-cousson@ti.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Kevin Hilman <khilman@linaro.org>
    • ARC: Use clockevents_config_and_register over clockevents_register_device · 55c2e262
      Uwe Kleine-König authored
      
      clockevents_config_and_register is more clever and correct than doing it
      by hand; so use it.
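
      For illustration (a sketch with made-up timer parameters, not the ARC
      patch itself), the conversion typically looks like:

      	static struct clock_event_device clk;	/* callbacks omitted */

      	static void timer_setup_old(u32 timer_freq)
      	{
      		/* Before: compute mult/shift and delta bounds by hand. */
      		clockevents_calc_mult_shift(&clk, timer_freq, 5);
      		clk.max_delta_ns = clockevent_delta2ns(0xffffffff, &clk);
      		clk.min_delta_ns = clockevent_delta2ns(1, &clk);
      		clockevents_register_device(&clk);
      	}

      	static void timer_setup_new(u32 timer_freq)
      	{
      		/* After: one call does the arithmetic, the bounds (in
      		 * timer ticks) and the registration. */
      		clockevents_config_and_register(&clk, timer_freq,
      						1, 0xffffffff);
      	}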
      
      [vgupta: fixed build failure due to missing ; in patch]
      
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: Workaround spinlock livelock in SMP SystemC simulation · 6c00350b
      Vineet Gupta authored
      
      Some ARC SMP systems lack native atomic R-M-W (LLOCK/SCOND) insns and
      can only use atomic EX insn (reg with mem) to build higher level R-M-W
      primitives. This includes a SystemC based SMP simulation model.
      
      So rwlocks need to use a protecting spinlock for atomic cmp-n-exchange
      operation to update reader(s)/writer count.
      
      The spinlock operation itself looks as follows:
      
      	mov reg, 1		; 1=locked, 0=unlocked
      retry:
      	EX reg, [lock]		; load existing, store 1, atomically
      	BREQ reg, 1, retry	; if already locked, retry
      
      In single-threaded simulation, SystemC alternates between the 2 cores,
      scheduling "N" insns on each in turn. Additionally, for an insn with a
      global side effect, such as EX writing to shared mem, a core switch is
      enforced too.

      Given that, with 2 cores doing repeated EX on the same location, Linux
      often got into a livelock, e.g. when both cores were fiddling with the
      tasklist lock (gdbserver / hackbench) for read and write respectively,
      as the sequence diagram below shows:
      
                 core1                                   core2
               --------                                --------
      1. spin lock [EX r=0, w=1] - LOCKED
      2. rwlock(Read)            - LOCKED
      3. spin unlock  [ST 0]     - UNLOCKED
      4.                                       spin lock [EX r=0,w=1] - LOCKED
                            -- resched core 1----
      
      5. spin lock [EX r=1] - ALREADY-LOCKED
      
                            -- resched core 2----
      6.                                       rwlock(Write) - READER-LOCKED
      7.                                       spin unlock [ST 0]
      8.                                       rwlock failed, retry again
      
      9.                                       spin lock  [EX r=0, w=1]
                            -- resched core 1----
      
      10. spinlock locked in #9, retry #5
      11. spin lock [EX gets 1]
                            -- resched core 2----
      ...
      ...
      
      The fix was to unlock using the EX insn too (step 7), to trigger
      another SystemC scheduling pass which would let core1 proceed,
      breaking the livelock.
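
      I.e. the unlock becomes (a sketch, mirroring the lock sequence above):

      	mov reg, 0		; 0=unlocked
      	EX reg, [lock]		; atomic store of 0; being an EX, it also
      				; forces a core switch in the simulator,
      				; letting the other core make progress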
      
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: Fix 32-bit wrap around in access_ok() · 0752adfd
      Vineet Gupta authored
      
      Anton reported
      
       | LTP tests syscalls/process_vm_readv01 and process_vm_writev01 fail
       | similarly in one testcase test_iov_invalid -> lvec->iov_base.
       | Testcase expects errno EFAULT and return code -1,
       | but it gets return code 1 and ERRNO is 0 what means success.
      
      Essentially the test case was passing a pointer of -1, which
      access_ok() was not catching. It was doing [@addr + @sz <= TASK_SIZE],
      which would pass for @addr == -1 because the unsigned addition wraps
      around to a small value.

      Fixed that by rewriting the check as [@addr <= TASK_SIZE - @sz]
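
      A standalone demo of the wraparound (TASK_SIZE value is illustrative,
      not the kernel macro itself):

      	#include <stdio.h>

      	#define TASK_SIZE 0x80000000UL

      	/* Old check: @addr + @sz wraps for @addr == -1 and passes. */
      	static int old_ok(unsigned long addr, unsigned long sz)
      	{
      		return addr + sz <= TASK_SIZE;
      	}

      	/* New check: no addition on the untrusted side, no wraparound. */
      	static int new_ok(unsigned long addr, unsigned long sz)
      	{
      		return addr <= TASK_SIZE - sz;
      	}

      	int main(void)
      	{
      		unsigned long bad = (unsigned long)-1;

      		/* prints "old: 1  new: 0" */
      		printf("old: %d  new: %d\n",
      		       old_ok(bad, 16), new_ok(bad, 16));
      		return 0;
      	}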
      
      Reported-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: Handle zero-overhead-loop in unaligned access handler · c11eb222
      Mischa Jonker authored
      
      If a load or store is the last instruction in a zero-overhead loop
      and it is misaligned, the loop would execute only once: the unaligned
      access handler advances the return address past the instruction,
      skipping the hardware's loop-back at the loop end.

      This fixes that problem.
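
      A sketch of the fix-up (field names per ARC's struct pt_regs; the real
      handler covers more cases):

      	/* After emulating the access, also emulate the zero-overhead-
      	 * loop semantics the hardware would have applied at lp_end. */
      	static void fixup_zol(struct pt_regs *regs, unsigned long next_pc)
      	{
      		if (next_pc == regs->lp_end && regs->lp_count > 1) {
      			regs->lp_count--;
      			regs->ret = regs->lp_start;	/* loop back */
      		} else {
      			regs->ret = next_pc;		/* fall through */
      		}
      	}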
      
      Signed-off-by: Mischa Jonker <mjonker@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>