  1. May 15, 2014
    • x86-64, modify_ldt: Make support for 16-bit segments a runtime option · fa81511b
      Linus Torvalds authored
      
      Checkin:
      
      b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
      
      disabled 16-bit segments on 64-bit kernels due to an information
      leak.  However, it does seem that people are genuinely using Wine to
      run old 16-bit Windows programs on Linux.
      
      A proper fix for this ("espfix64") is coming in the upcoming merge
      window, but as a temporary fix, create a sysctl to allow the
      administrator to re-enable support for 16-bit segments.
      
      It adds a "/proc/sys/abi/ldt16" sysctl that defaults to zero (off). If
      you hit this issue and care about your old Windows program more than
      you care about a kernel stack address information leak, you can do
      
         echo 1 > /proc/sys/abi/ldt16
      
      as root (add it to your startup scripts), and you should be ok.
      
      The sysctl table is only added if you have COMPAT support enabled on
      x86-64, but I assume anybody who runs old Windows binaries very much
      does that ;)
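
      For reference, the plumbing behind such a knob is small.  A sketch
      along these lines (illustrative, not the literal diff; the table
      and init-function names here are assumptions):

         static int sysctl_ldt16;

         /* surfaces the knob as /proc/sys/abi/ldt16 */
         static struct ctl_table abi_table2[] = {
             {
                 .procname     = "ldt16",
                 .data         = &sysctl_ldt16,
                 .maxlen       = sizeof(int),
                 .mode         = 0644,
                 .proc_handler = proc_dointvec
             },
             {}
         };

         static struct ctl_table abi_root_table[] = {
             { .procname = "abi", .mode = 0555, .child = abi_table2 },
             {}
         };

         static int __init abi_register_sysctl(void)
         {
             register_sysctl_table(abi_root_table);
             return 0;
         }
         __initcall(abi_register_sysctl);

         /* write_ldt() can then refuse 16-bit entries unless set:  */
         /*   if (!ldt_info.seg_32bit && !sysctl_ldt16)             */
         /*       return -EINVAL;                                   */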
      
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/CA%2B55aFw9BPoD10U1LfHbOMpHWZkvJTkMcfCs9s3urPr1YyWBxw@mail.gmail.com
      Cc: <stable@vger.kernel.org>
  2. Sep 24, 2012
    • time: Convert x86_64 to using new update_vsyscall · 650ea024
      John Stultz authored
      
      Switch x86_64 to the new, sub-ns precise update_vsyscall interface.
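
      The gain is that the vsyscall data now keeps the base time in
      shifted, sub-ns units, so the read side needs only one multiply and
      one shift.  Roughly (an illustrative sketch; the gtod field names
      are assumptions, not the exact patch):

         /* 'wall_time_snsec' holds the shifted remainder: ns << shift */
         cycles = vread_tsc();
         delta  = (cycles - gtod->cycle_last) & gtod->mask;
         ns     = gtod->wall_time_snsec + delta * gtod->mult;
         ns   >>= gtod->shift;              /* back to plain ns */
         ts->tv_sec = gtod->wall_time_sec;
         timespec_add_ns(ts, ns);           /* handles ns overflow */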
      
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  3. Mar 24, 2012
    • coredump: remove VM_ALWAYSDUMP flag · 909af768
      Jason Baron authored
      The motivation for this patchset was that I was looking for a way
      for a qemu-kvm process to exclude the guest memory from its core
      dump, which can be quite large.  There are already a number of
      filter flags in /proc/<pid>/coredump_filter; however, these allow
      one to specify 'types' of kernel memory, not specific address
      ranges (which is needed in this case).
      
      Since there are no more vma flags available, the first patch eliminates
      the need for the 'VM_ALWAYSDUMP' flag.  The flag is used internally by
      the kernel to mark vdso and vsyscall pages.  However, it is simple
      enough to check if a vma covers a vdso or vsyscall page without the need
      for this flag.
      
      The second patch then replaces the 'VM_ALWAYSDUMP' flag with a new
      'VM_NODUMP' flag, which can be set by userspace using the new
      madvise flag 'MADV_DONTDUMP', and unset via 'MADV_DODUMP'.  The
      core dump filters continue to work the same as before unless
      'MADV_DONTDUMP' is set on the region.
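
      From userspace this is just madvise(2).  A minimal sketch, with
      hypothetical addr/len placeholders standing in for the guest RAM
      mapping:

         #include <stdio.h>
         #include <sys/mman.h>

         /* exclude a mapping (e.g. guest RAM) from core dumps */
         static void exclude_from_coredump(void *addr, size_t len)
         {
             if (madvise(addr, len, MADV_DONTDUMP) != 0)
                 perror("madvise(MADV_DONTDUMP)");
             /* madvise(addr, len, MADV_DODUMP) re-includes it */
         }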
      
      The qemu code which implements this feature is at:

        http://people.redhat.com/~jbaron/qemu-dump/qemu-dump.patch

      In my testing, the qemu core dump shrank from 383MB to 13MB with
      this patch.
      
      I also believe that the 'MADV_DONTDUMP' flag might be useful for
      security-sensitive apps, which might want to select which areas
      are dumped.
      
      This patch:
      
      The VM_ALWAYSDUMP flag is currently used by the coredump code to
      indicate that a vma is part of a vsyscall or vdso section.  However,
      we can determine if a vma is in one of these sections by checking it
      against the gate_vma and checking for a non-NULL return value from
      arch_vma_name(), thus freeing a valuable vma bit.
      
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Acked-by: Roland McGrath <roland@hack.frob.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Avi Kivity <avi@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86-64: Inline vdso clock_gettime helpers · 5f293474
      Andy Lutomirski authored
      
      This is about a 3% speedup on Sandy Bridge.
      
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • x86-64: Simplify and optimize vdso clock_gettime monotonic variants · 91ec87d5
      Andy Lutomirski authored
      
      We used to store the wall-to-monotonic offset and the realtime base.
      It's faster to precompute the monotonic base.
      
      This is about a 3% speedup on Sandy Bridge for CLOCK_MONOTONIC.
      It's much more impressive for CLOCK_MONOTONIC_COARSE.
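
      In effect, the wall-to-monotonic addition moves out of every read
      and into the (comparatively rare) timekeeper update.  An
      illustrative before/after, with assumed names:

         /* before: combine two bases on every clock_gettime() */
         ns = wall_base_ns + cycles_to_ns(delta) + wtom_offset_ns;

         /* after: fold the offset in once, on the update path... */
         monotonic_base_ns = wall_base_ns + wtom_offset_ns;
         /* ...so the hot read path does a single addition */
         ns = monotonic_base_ns + cycles_to_ns(delta);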
      
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  4. Mar 16, 2012
    • x86: vdso: Use seqcount instead of seqlock · 2ab51657
      Thomas Gleixner authored
      
      The update of the vdso data happens under xtime_lock, so adding a
      nested lock is pointless. Just use a seqcount to sync the readers.
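
      The reader side is then the standard seqcount retry loop;
      schematically (the gtod->seq field name is an assumption):

         unsigned seq;
         do {
             seq = read_seqcount_begin(&gtod->seq);
             /* ... copy the vdso time snapshot ... */
         } while (read_seqcount_retry(&gtod->seq, seq));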
      
      Reviewed-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
    • time: x86: Fix race switching from vsyscall to non-vsyscall clock · a939e817
      John Stultz authored
      
      When switching from a vsyscall-capable to a non-vsyscall-capable
      clocksource, there was a small race where the last vsyscall
      gettimeofday before the switch might return an invalid time value,
      computed using the new non-vsyscall-enabled clocksource's values
      after the switch completed.
      
      This is due to the vsyscall code checking the vclock_mode once,
      outside of the seqcount-protected section.  After it reads the
      vclock mode, it doesn't re-check that the clock data sampled in
      the seqcount critical section still matches.
      
      The fix is to sample vclock_mode inside the protected section,
      and as long as it isn't VCLOCK_NONE, return the calculated
      value. If it has changed and is now VCLOCK_NONE, fall back
      to the syscall gettime calculation.
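
      Schematically, the corrected reader samples the mode together with
      the data (a sketch, not the literal patch):

         do {
             seq  = read_seqcount_begin(&gtod->seq);
             mode = gtod->clock.vclock_mode; /* now inside the section */
             /* ... sample cycle_last, mult, shift, time base ... */
         } while (read_seqcount_retry(&gtod->seq, seq));

         if (mode == VCLOCK_NONE)   /* clocksource switched away */
             return vdso_fallback_gettime(clock, ts);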
      
      v2:
        * Cleanup checks as suggested by tglx
        * Also fix same issue present in gettimeofday path
      
      CC: Andy Lutomirski <luto@amacapital.net>
      CC: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
  5. Feb 20, 2012
    • x32: Add x32 VDSO support · 1a21d4e0
      H. J. Lu authored
      
      Add support for the x32 VDSO.  The x32 VDSO takes advantage of the
      similarity between the x86-64 and x32 ABIs to contain the same
      content; only the container is different, as the x32 VDSO obviously
      is an x32 shared object.
      
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  6. Aug 05, 2011
    • x86, amd: Avoid cache aliasing penalties on AMD family 15h · dfb09f9b
      Borislav Petkov authored
      
      This patch provides performance tuning for the "Bulldozer" CPU. With its
      shared instruction cache there is a chance of generating an excessive
      number of cache cross-invalidates when running specific workloads on the
      cores of a compute module.
      
      This excessive amount of cross-invalidations can be observed if cache
      lines backed by shared physical memory alias in bits [14:12] of their
      virtual addresses, as those bits are used for the index generation.
      
      This patch addresses the issue by clearing all the bits in the
      [14:12] slice of the file mapping's virtual address at generation
      time, thus forcing those bits to be the same for all mappings of a
      single shared library across processes and, in doing so, avoiding
      instruction cache aliases.
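
      Concretely, forcing bits [14:12] of a page-aligned mmap address to
      zero is the same as rounding it up to a 32 KiB boundary; an
      illustrative helper (not the kernel's exact code):

         /* round a page-aligned address up so bits [14:12] are zero */
         static unsigned long align_va(unsigned long addr)
         {
             return (addr + 0x7fffUL) & ~0x7fffUL;  /* 32 KiB */
         }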
      
      It also adds the command line option "align_va_addr=(32|64|on|off)" with
      which virtual address alignment can be enabled for 32-bit or 64-bit x86
      individually, or both, or be completely disabled.
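
      For example, on the kernel command line:

         align_va_addr=on    (align for both 32-bit and 64-bit tasks)
         align_va_addr=32    (align for 32-bit tasks only)
         align_va_addr=off   (disable the alignment entirely)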
      
      This change leaves virtual region address allocation on other families
      and/or vendors unaffected.
      
      Signed-off-by: default avatarBorislav Petkov <borislav.petkov@amd.com>
      Link: http://lkml.kernel.org/r/1312550110-24160-2-git-send-email-bp@amd64.org
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>