  1. Jul 17, 2013
    • x86: Make sure IDT is page aligned · 4df05f36
      Kees Cook authored
      
      Since the IDT is referenced from a fixmap, make sure it is page aligned.
      Merge it with the 32-bit version, which was already aligned to deal with the
      F00F bug. Since the bss is cleared before IDT setup, the table can live there.
      This also moves the other *_idt_table variables into common locations.
      
      This avoids the risk of the IDT ever being moved in the bss and having
      the mapping be offset, resulting in calling incorrect handlers. In the
      current upstream kernel this is not a manifested bug, but heavily patched
      kernels (such as those using the PaX patch series) did encounter this bug.
      
      The tables other than idt_table technically do not need to be page
      aligned, at least not at the current time, but using a common
      declaration avoids mistakes.  On 64 bits the table is exactly one page
      long, anyway.
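      
      A minimal sketch of the kind of declaration this results in (assuming the
      usual gate_desc/NR_VECTORS types and the __page_aligned_bss annotation;
      exact file placement may differ):
      
        /* Page-aligned so the fixmap alias starts at the same offset as the
         * table itself; on 64-bit, 256 vectors * 16 bytes = exactly one page. */
        gate_desc idt_table[NR_VECTORS] __page_aligned_bss;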
      
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Link: http://lkml.kernel.org/r/20130716183441.GA14232@www.outflux.net
      Reported-by: PaX Team <pageexec@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  2. Jul 15, 2013
    • x86, suspend: Handle CPUs which fail to #GP on RDMSR · 5ff560fd
      H. Peter Anvin authored
      
      There are CPUs which have errata causing RDMSR of a nonexistent MSR to
      not fault.  We would then try to WRMSR to restore the value of that
      MSR, causing a crash.  Specifically, some Pentium M variants would
      have this problem trying to save and restore the non-existent EFER,
      causing a crash on resume.
      
      Work around this by making sure we can write back the result at
      suspend time.
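      
      A hedged sketch of the approach at suspend time (the header field and
      flag names here are assumptions for illustration, not necessarily the
      ones the patch uses):
      
        u32 lo, hi;
        
        /* Arm the EFER restore only if a fault-safe read *and* a write-back of
         * the same value both succeed; on CPUs with the erratum, RDMSR of a
         * nonexistent MSR returns garbage instead of faulting, and the later
         * WRMSR would crash the resume path. */
        if (!rdmsr_safe(MSR_EFER, &lo, &hi) && !wrmsr_safe(MSR_EFER, lo, hi))
                header->pmode_behavior |= (1 << WAKEUP_BEHAVIOR_RESTORE_EFER);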
      
      Huge thanks to Christian Sünkenberg for finding the offending erratum
      that finally deciphered the mystery.
      
      Reported-and-tested-by: Johan Heinrich <onny@project-insanity.org>
      Debugged-by: Christian Sünkenberg <christian.suenkenberg@student.kit.edu>
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Link: http://lkml.kernel.org/r/51DDC972.3010005@student.kit.edu
      Cc: <stable@vger.kernel.org> # v3.7+
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Paul Gortmaker authored
      
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  The fix in commit
      5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good
      example of the nasty type of bugs that can be created with
      improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove __cpuinit from the
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly without any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
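      
      A representative example of the resulting change (the function name below
      is illustrative, not a specific hunk from this commit):
      
        /* Before: __cpuinit placed the function in a discardable section. */
        static void __cpuinit setup_secondary_cpu(struct cpuinfo_x86 *c);
        
        /* After: a plain function, kept resident like any other kernel text. */
        static void setup_secondary_cpu(struct cpuinfo_x86 *c);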
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  3. Jul 04, 2013
    • consolidate per-arch stack overflow debugging options · d1a1dc0b
      Dave Hansen authored
      Original posting:
      
      	http://lkml.kernel.org/r/20121214184202.F54094D9@kernel.stglabs.ibm.com
      
      Several architectures have similar stack debugging config options.
      They all pretty much do the same thing, some with slightly
      differing help text.
      
      This patch changes the architectures to instead enable a Kconfig
      boolean, and then use that boolean in the generic Kconfig.debug
      to present the actual menu option.  This removes a bunch of
      duplication and adds consistency across arches.
      
      Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
      Reviewed-by: H. Peter Anvin <hpa@zytor.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Chris Metcalf <cmetcalf@tilera.com> [for tile]
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • KVM: VMX: mark unusable segment as nonpresent · 03617c18
      Gleb Natapov authored
      
      Some userspaces do not preserve the unusable property. Since a usable
      segment has to be present according to the VMX spec, we can use the
      present property to work around the userspace bug by making an unusable
      segment always nonpresent. vmx_segment_access_rights() already marks a
      nonpresent segment as unusable.
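      
      A hedged sketch of the idea, shown in the segment-read path (the exact
      placement and surrounding code may differ from the real patch):
      
        /* Keep "unusable" and "present" consistent before the state reaches
         * userspace: an unusable segment is reported as nonpresent, and on the
         * way back vmx_segment_access_rights() turns a nonpresent segment into
         * an unusable one again, so the property survives a userspace that
         * drops it. */
        var->present = !var->unusable;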
      
      Cc: stable@vger.kernel.org # 3.9+
      Reported-by: Stefan Pietsch <stefan.pietsch@lsexperts.de>
      Tested-by: Stefan Pietsch <stefan.pietsch@lsexperts.de>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • rapidio: add modular build option for the subsystem core · fdf90abc
      Alexandre Bounine authored
      
      Add a configuration option to build the RapidIO subsystem core code as a
      loadable kernel module.  Currently this option is available only for
      x86-based platforms; an additional patch for PowerPC is planned to be
      provided later.
      
      This patch replaces the kernel command line parameter "riohdid=" with its
      module-specific analog "rapidio.hdid=".
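      
      A hedged sketch of how such a module parameter is declared (the variable
      names and the array form are illustrative, not necessarily the driver's):
      
        /* In a module named "rapidio", a parameter called "hdid" shows up on
         * the kernel command line as "rapidio.hdid=..." when built in, or as a
         * regular module option when the core is loaded as a .ko. */
        static int hdid[RIO_MAX_MPORTS];
        static int ids_num;
        module_param_array(hdid, int, &ids_num, 0);
        MODULE_PARM_DESC(hdid, "Destination ID assignment for local RapidIO controllers");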
      
      Signed-off-by: Alexandre Bounine <alexandre.bounine@idt.com>
      Cc: Matt Porter <mporter@kernel.crashing.org>
      Cc: Li Yang <leoli@freescale.com>
      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Andre van Herk <andre.van.herk@Prodrive.nl>
      Cc: Micha Nelissen <micha.nelissen@Prodrive.nl>
      Cc: Stef van Os <stef.van.os@Prodrive.nl>
      Cc: Jean Delvare <jdelvare@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86: kill TIF_DEBUG · 37f07655
      Oleg Nesterov authored
      
      Because it is not used.
      
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jan Kratochvil <jan.kratochvil@redhat.com>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/x86: prepare for removing num_physpages and simplify mem_init() · 46a84132
      Jiang Liu authored
      
      Prepare for removing num_physpages and simplify mem_init().
      
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andreas Herrmann <andreas.herrmann3@amd.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: concentrate modification of totalram_pages into the mm core · 0c988534
      Jiang Liu authored
      
      Concentrate the code that modifies totalram_pages into the mm core, so
      arch memory initialization code doesn't need to take care of it.  With
      these changes applied, only the following functions from the mm core
      modify the global variable totalram_pages: free_bootmem_late(),
      free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count().
      
      With this patch applied, it will be much easier for us to keep
      totalram_pages and zone->managed_pages consistent.
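      
      A hedged sketch of the central helper this funnels through (the lock name
      is taken from the related patches in this series; details may differ):
      
        void adjust_managed_page_count(struct page *page, long count)
        {
                /* A single place updates both counters, so they cannot drift. */
                spin_lock(&managed_page_count_lock);
                page_zone(page)->managed_pages += count;
                totalram_pages += count;
                spin_unlock(&managed_page_count_lock);
        }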
      
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: make __free_pages_bootmem() only available at boot time · 170a5a7e
      Jiang Liu authored
      
      In order to simplify management of totalram_pages and
      zone->managed_pages, make __free_pages_bootmem() only available at boot
      time.  With this change applied, __free_pages_bootmem() will only be
      used by bootmem.c and nobootmem.c at boot time, so mark it as __init.
      Other callers of __free_pages_bootmem() have been converted to use
      free_reserved_page(), which handles totalram_pages and
      zone->managed_pages in a safer way.
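      
      A hedged sketch of the helper those callers now use (roughly its
      include/linux/mm.h form; details may differ):
      
        static inline void free_reserved_page(struct page *page)
        {
                ClearPageReserved(page);
                init_page_count(page);
                __free_page(page);
                /* Keeps totalram_pages and zone->managed_pages in step. */
                adjust_managed_page_count(page, 1);
        }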
      
      This patch also fixes a bug in free_pagetable() for x86_64, which should
      increase zone->managed_pages instead of zone->present_pages when freeing
      reserved pages.
      
      And now that we have managed_pages_count_lock to protect totalram_pages
      and zone->managed_pages, remove the redundant ppb_lock in
      put_page_bootmem().  This greatly simplifies the locking rules.
      
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: accurately calculate zone->managed_pages for highmem zones · 7b4b2a0d
      Jiang Liu authored
      
      Commit "mm: introduce new field 'managed_pages' to struct zone" assumes
      that all highmem pages will be freed into the buddy system by function
      mem_init().  But that's not always true, some architectures may reserve
      some highmem pages during boot.  For example PPC may allocate highmem
      pages for giagant HugeTLB pages, and several architectures have code to
      check PageReserved flag to exclude highmem pages allocated during boot
      when freeing highmem pages into the buddy system.
      
      So treat highmem pages in the same way as normal pages, that is to:
      1) reset zone->managed_pages to zero in mem_init().
      2) recalculate managed_pages when freeing pages into the buddy system.
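      
      A hedged sketch of that second step, applied when a highmem page is
      finally released into the buddy allocator (helper shape from the same
      patch series; details may differ):
      
        void free_highmem_page(struct page *page)
        {
                __free_reserved_page(page);
                totalram_pages++;
                /* Count the page as managed only once it reaches the buddy. */
                page_zone(page)->managed_pages++;
                totalhigh_pages++;
        }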
      
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/x86: use free_reserved_area() to simplify code · c88442ec
      Jiang Liu authored
      
      Use the common helper function free_reserved_area() to simplify the code.
      
      Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Jianguo Wu <wujianguo@huawei.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: <sworddragon2@aol.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Michel Lespinasse <walken@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: soft-dirty bits for user memory changes tracking · 0f8975ec
      Pavel Emelyanov authored
      
      Soft-dirty is a bit on a PTE which helps to track which pages a task
      writes to.  In order to do this tracking one should (a userspace sketch
      follows the list):
      
        1. Clear the soft-dirty bits from the PTEs ("echo 4 > /proc/PID/clear_refs").
        2. Wait some time.
        3. Read the soft-dirty bits (bit 55 in /proc/PID/pagemap2 entries).
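      
      A minimal userspace sketch of those three steps (the function name is
      illustrative and error handling is omitted; the pagemap2 name is taken
      from this message):
      
        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <unistd.h>
        
        int page_soft_dirty(pid_t pid, unsigned long vaddr, long page_size)
        {
                char path[64];
                uint64_t entry;
                int fd;
        
                snprintf(path, sizeof(path), "/proc/%d/clear_refs", (int)pid);
                fd = open(path, O_WRONLY);
                write(fd, "4", 1);              /* 1. clear soft-dirty bits */
                close(fd);
        
                /* ... 2. let the task run for a while ... */
        
                snprintf(path, sizeof(path), "/proc/%d/pagemap2", (int)pid);
                fd = open(path, O_RDONLY);
                pread(fd, &entry, sizeof(entry),
                      (vaddr / page_size) * sizeof(entry));
                close(fd);
                return (entry >> 55) & 1;       /* 3. bit 55 is soft-dirty */
        }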
      
      To do this tracking, the writable bit is cleared from the PTEs at the
      same time the soft-dirty bit is.  Thus, after this, when the task tries
      to modify a page at some virtual address, a #PF occurs and the kernel
      sets the soft-dirty bit on the respective PTE.
      
      Note that although all of the task's address space is marked read-only
      after the soft-dirty bits are cleared, the #PFs that occur after that are
      processed quickly.  This is because the pages are still mapped to physical
      memory, so all the kernel does is notice this and put the writable, dirty
      and soft-dirty bits back on the PTE.
      
      Another thing to note is that when mremap moves PTEs they are marked
      with soft-dirty as well, since from the user perspective mremap modifies
      the virtual memory at mremap's new address.
      
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Cc: Matt Mackall <mpm@selenic.com>
      Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. Jul 02, 2013
    • x86/tracing: Add irq_enter/exit() in smp_trace_reschedule_interrupt() · 4787c368
      Seiji Aguchi authored
      
      Reschedule vector tracepoints may be called in CPU idle state.
      This causes the lockdep warning below.
      
      The tracepoint requires RCU, but for accuracy it also requires
      irq_enter() (tracepoints record the irq context); thus the tracepoint
      interrupt handler should call irq_enter() and not just rcu_irq_enter()
      (irq_enter() calls rcu_irq_enter()).
      
      So, add irq_enter/exit() to smp_trace_reschedule_interrupt() with common
      pre/post processing functions, smp_entering_irq() and exiting_irq()
      (exiting_irq() just calls irq_exit(), in arch/x86/include/asm/apic.h),
      because these can be shared among the reschedule, call_function, and
      call_function_single vectors.
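      
      Roughly the resulting shape (a sketch based on the description above; the
      real helpers live in arch/x86/kernel/smp.c and asm/apic.h and may differ
      in detail):
      
        static inline void smp_entering_irq(void)
        {
                ack_APIC_irq();
                irq_enter();            /* also does rcu_irq_enter() */
        }
        
        void smp_trace_reschedule_interrupt(struct pt_regs *regs)
        {
                smp_entering_irq();
                trace_reschedule_entry(RESCHEDULE_VECTOR);
                __smp_reschedule_interrupt();
                trace_reschedule_exit(RESCHEDULE_VECTOR);
                exiting_irq();          /* just irq_exit() */
        }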
      
      [   50.720557] Testing event reschedule_exit:
      [   50.721349]
      [   50.721502] ===============================
      [   50.721835] [ INFO: suspicious RCU usage. ]
      [   50.722169] 3.10.0-rc6-00004-gcf910e8 #190 Not tainted
      [   50.722582] -------------------------------
      [   50.722915] /c/kernel-tests/src/linux/arch/x86/include/asm/trace/irq_vectors.h:50 suspicious rcu_dereference_check() usage!
      [   50.723770]
      [   50.723770] other info that might help us debug this:
      [   50.723770]
      [   50.724385]
      [   50.724385] RCU used illegally from idle CPU!
      [   50.724385] rcu_scheduler_active = 1, debug_locks = 0
      [   50.725232] RCU used illegally from extended quiescent state!
      [   50.725690] no locks held by swapper/0/0.
      [   50.726010]
      [   50.726010] stack backtrace:
      [...]
      
      Signed-off-by: Seiji Aguchi <seiji.aguchi@hds.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/51CDCFA3.9080101@hds.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. Jun 27, 2013
    • x86/mce: Update MCE severity condition check · 33d7885b
      Chen Gong authored
      
      Update some SRAR severity condition checks to make them clearer,
      according to the latest Intel SDM Vol. 3 (June 2013), table 15-20.
      
      Signed-off-by: Chen Gong <gong.chen@linux.intel.com>
      Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • KVM: Fix RTC interrupt coalescing tracking · 24f7bb52
      Gleb Natapov authored
      
      This reverts most of commit f1ed0450.  After that commit,
      kvm_apic_set_irq() no longer returns accurate information about the
      interrupt injection status if injection is done into a disabled APIC.
      RTC interrupt coalescing tracking relies on that information being
      accurate and cannot recover if it is not.
      
      Signed-off-by: Gleb Natapov <gleb@redhat.com>
    • kvm: Add a tracepoint write_tsc_offset · 489223ed
      Yoshihiro YUNOMAE authored
      
      Add a tracepoint, write_tsc_offset, for tracing TSC offset changes.
      We want to merge ftrace trace data of guest OSes and the host OS, using
      the TSC for timestamps, in chronological order.  We need the "TSC offset"
      value for each guest when merging, because the TSC value on a guest is
      always the host TSC plus the guest's TSC offset.  Given the TSC offset,
      we can calculate the host TSC value for each guest event from the offset
      and the event's TSC value.  The host TSC values of the guest events are
      used when we want to merge trace data of guests and the host in
      chronological order.  (Note: the trace_clock of both the host and the
      guest must be set to x86-tsc in this case.)
      
      This tracepoint also records the vcpu_id, which can be used to merge
      trace data for SMP guests.  A merge tool will read the TSC offset for
      each vcpu and then convert the guest TSC values to host TSC values per
      vcpu.
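      
      The per-vcpu conversion such a tool applies is then simply (a sketch; the
      helper name is illustrative):
      
        /* guest_tsc = host_tsc + tsc_offset, with tsc_offset taken from the
         * kvm_write_tsc_offset tracepoint, so: */
        static inline uint64_t guest_to_host_tsc(uint64_t guest_tsc,
                                                 uint64_t tsc_offset)
        {
                return guest_tsc - tsc_offset;
        }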
      
      The TSC offset is stored in the VMCS by vmx_write_tsc_offset() or
      vmx_adjust_tsc_offset().  KVM executes the former function when a guest
      boots; the latter is executed when the kvm clock is updated.  Only the
      host can read the TSC offset value from the VMCS, so the host needs to
      output the TSC offset value whenever it changes.
      
      Since the TSC offset is not changed often, it could be overwritten by
      other, more frequent events while tracing.  To avoid that, I recommend
      using a dedicated trace instance for capturing this event:
      
      1. Set up a trace instance before booting a guest
       # cd /sys/kernel/debug/tracing/instances
       # mkdir tsc_offset
       # cd tsc_offset
       # echo x86-tsc > trace_clock
       # echo 1 > events/kvm/kvm_write_tsc_offset/enable
      
      2. Boot the guest
      
      Signed-off-by: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Gleb Natapov <gleb@redhat.com>