- Apr 14, 2014
-
-
Feng Wu authored
This patch exposes the SMAP feature to the guest.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Feng Wu authored
SMAP is disabled in hardware if the CPU is in non-paging mode. However, KVM always uses paging mode to emulate guest non-paging mode with TDP. To emulate the hardware behavior, SMAP needs to be manually disabled when the guest switches to non-paging mode.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
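The idea, as a minimal hedged sketch (guest_cr4 and guest_paging stand in for the real vCPU state; this is not the actual KVM code):

    #include <stdint.h>

    #define X86_CR4_SMEP (1ULL << 20)
    #define X86_CR4_SMAP (1ULL << 21)

    /* Minimal sketch: compute the CR4 value loaded into hardware. Under
     * TDP the host keeps paging enabled even while the guest has
     * CR0.PG=0, so SMEP/SMAP are masked out to preserve the guest's
     * non-paging semantics. */
    static uint64_t hw_cr4_for_guest(uint64_t guest_cr4, int guest_paging)
    {
            uint64_t hw_cr4 = guest_cr4;

            if (!guest_paging)
                    hw_cr4 &= ~(X86_CR4_SMEP | X86_CR4_SMAP);
            return hw_cr4;
    }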
-
Feng Wu authored
This patch adds SMAP handling logic when setting CR4 for guests. Thanks a lot to Paolo Bonzini for his suggestion to use the branchless way to detect SMAP violations.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
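As an illustration of the branchless idea (a toy model only; the actual KVM code folds the test into a precomputed permission bitmask): with each input reduced to a 0/1 flag, the fault condition becomes pure bit arithmetic.

    #include <stdio.h>

    /* Toy branchless SMAP-violation test. A supervisor-mode data access
     * to a user-accessible page faults when CR4.SMAP=1 and EFLAGS.AC=0.
     * (Implicit supervisor accesses, which fault even with AC=1, are
     * ignored here for simplicity.) */
    static unsigned smap_violation(unsigned cr4_smap, unsigned supervisor,
                                   unsigned user_page, unsigned eflags_ac)
    {
            return cr4_smap & supervisor & user_page & (eflags_ac ^ 1);
    }

    int main(void)
    {
            printf("%u\n", smap_violation(1, 1, 1, 0)); /* 1: faults  */
            printf("%u\n", smap_violation(1, 1, 1, 1)); /* 0: allowed */
            return 0;
    }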
-
Feng Wu authored
This patch removes the SMAP bit from CR4_RESERVED_BITS.

Signed-off-by: Feng Wu <feng.wu@intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
- Apr 02, 2014
-
-
Steven Rostedt (Red Hat) authored
Commit 2223f6f6 "x86: Clean up dumpstack_64.c code" changed the irq_stack processing a little from what it was before. The irq_stack_end variable needed to be cleared after its first use. By setting irq_stack to the per cpu irq_stack and passing that to analyze_stack(), and then clearing it after it is processed, we can get back the original behavior. Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Steven Rostedt (Red Hat) authored
Commit 2223f6f6 "x86: Clean up dumpstack_64.c code" moved the used variable to a local within the loop, but the in_exception_stack() depended on being non-volatile with the ability to change it. By always re-initializing the "used" variable to zero, it would cause the in_exception_stack() to return the same thing each time, and cause the dump_stack loop to go into an infinite loop. Reported-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Steven Rostedt <rostedt@goodmis.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
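The shape of the bug, in a runnable toy model (in_exception_stack() here is a simplified stand-in for the real dumpstack_64.c helper):

    #include <stdio.h>

    /* Stand-in helper: marks the stack it returns in *used so a later
     * pass skips it, and ends the walk once both fake exception stacks
     * have been visited. */
    static void *in_exception_stack(void *stack, unsigned *used)
    {
            for (unsigned slot = 1; slot <= 2; slot++) {
                    if (!(*used & slot)) {
                            *used |= slot;
                            return stack;   /* "switched" to stack 'slot' */
                    }
            }
            return NULL;                    /* all visited: walk ends */
    }

    int main(void)
    {
            void *stack = &stack;
            unsigned used = 0;  /* fixed: declared once, outside the loop */

            while (stack)
                    stack = in_exception_stack(stack, &used);
            /* with "unsigned used = 0;" inside the loop, as in the bug
             * this commit fixes, the walk would never terminate */
            printf("stack walk finished\n");
            return 0;
    }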
-
- Mar 31, 2014
-
-
Neil Horman authored
Commit 03bbcb2e ("iommu/vt-d: add quirk for broken interrupt remapping on 55XX chipsets") properly disables irq remapping on the 5500/5520 chipsets that don't correctly perform that feature. However, when I wrote it, I followed the errata sheet linked in that commit too closely, and explicitly tied the activation of the quirk to revision 0x13 of the chip, under the assumption that earlier revisions were not in the field. Recently a system was reported to be suffering from this remap bug and the quirk hadn't triggered, because the revision id register read at a lower value than 0x13, so the quirk test failed improperly. Given this, it seems only prudent to adjust this quirk so that any revision less than 0x13 has the quirk asserted.

[ tglx: Removed the 0x12 comparison of pci id 3405 as this is covered by the <= 0x13 check already ]

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1394649873-14913-1-git-send-email-nhorman@tuxdriver.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
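A hedged sketch of the widened test, in the style of a PCI quirk (treat the details, including the flagging helper, as illustrative of whatever the real quirk does):

    #include <linux/pci.h>

    /* Hedged sketch: assert the quirk for every revision at or below
     * the one the errata sheet documents, not for revision 0x13 only. */
    static void intel_irqremap_quirk(struct pci_dev *dev)
    {
            /* any stepping at or below 0x13 misbehaves with interrupt
             * remapping enabled, so flag them all as broken */
            if (dev->revision <= 0x13)
                    set_irq_remapping_broken();
    }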
-
- Mar 30, 2014
-
-
Andy Lutomirski authored
The new symbols provide the same API as the 64-bit variants, so they should have the same symbol version name. This can't break userspace, since these symbols are new for 32-bit Linux.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/0a869bce03d25619565b1eee7d69a4fd15fd203a.1396124118.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- Mar 28, 2014
-
-
Artem Fetishev authored
On x86 uniprocessor systems topology_physical_package_id() returns -1, which causes rapl_cpu_prepare() to leave the rapl_pmu variable uninitialized, which leads to a GPF in rapl_pmu_init(). See arch/x86/kernel/cpu/perf_event_intel_rapl.c. It turns out that physical_package_id and core_id can actually be retrieved for uniprocessor systems too. Enabling them also fixes the rapl_pmu code.

Signed-off-by: Artem Fetishev <artem_fetishev@epam.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- Mar 27, 2014
-
-
Jason Wang authored
This patch bypasses the timer_irq_works() check for Hyper-V guests since:

- It was guaranteed to work.
- timer_irq_works() may sometimes fail because the lpj calibration is inaccurate in a Hyper-V guest or on a buggy host.

In the future, we should get the tsc frequency from the hypervisor and use a preset lpj instead.

[ hpa: I would prefer to not defer things to "the future" in the future... ]

Cc: K. Y. Srinivasan <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: <stable@vger.kernel.org>
Acked-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Link: http://lkml.kernel.org/r/1393558229-14755-1-git-send-email-jasowang@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Paolo Bonzini authored
kvm_x86_ops is still NULL at this point. Since kvm_init_msr_list cannot fail, it is safe to initialize it before the call.

Fixes: 93c4adc7
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Jet Chen <jet.chen@intel.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Mar 26, 2014
-
-
Matt Fleming authored
The ARM EFI boot stub doesn't need to care about the efi_early infrastructure that x86 requires in order to do mixed mode thunking. So wrap everything up in an efi_call_early() macro. This allows x86 to do the necessary indirection jumps to call whatever firmware interface is necessary (native or mixed mode), but also allows the ARM folks to mask the fact that they don't support relocation in the boot stub and need to pass 'sys_table_arg' to every function.

[ hpa: there are no object code changes from this patch ]

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Link: http://lkml.kernel.org/r/20140326091011.GB2958@console-pimps.org
Cc: Roy Franz <roy.franz@linaro.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
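A hedged sketch of how such a per-arch macro can look (the field and variable names are illustrative approximations, not necessarily the kernel's exact ones):

    #ifdef CONFIG_X86
    /* x86: indirect through a boot-time descriptor so native and
     * mixed-mode firmware calls share a single call site */
    #define efi_call_early(f, ...) \
            efi_early->call(efi_early->f, __VA_ARGS__)
    #else
    /* ARM: the stub is not relocated, so the system table pointer is
     * threaded through every helper as sys_table_arg */
    #define efi_call_early(f, ...) \
            sys_table_arg->boottime->f(__VA_ARGS__)
    #endif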
-
Andy Lutomirski authored
vdso32/vclock_gettime.o was confusing kbuild.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/d741449340642213744dd659471a35bb970a0c4c.1395789923.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- Mar 25, 2014
-
-
H. Peter Anvin authored
The .discard/.discard.* sections are used to generate intermediate results for the assembler (effectively "test assembly"). The output is waste and should not be retained.

Cc: Stefani Seibold <stefani@seibold.net>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-psizrnant8x3nrhbgvq2vekr@git.kernel.org
-
Thomas Gleixner authored
destroy_timer_on_stack() is hardly the right thing for a delayed work. We leak a tracking object for the work itself when DEBUG_OBJECTS is enabled.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/20140323141940.034005322@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
David Vrabel authored
This reverts commit a9c8e4be. PTEs in Xen PV guests must contain machine addresses if _PAGE_PRESENT is set, and pseudo-physical addresses if _PAGE_PRESENT is clear. This is because during a domain save/restore (migration) the page table entries are "canonicalised" and "uncanonicalised", i.e., MFNs are converted to PFNs during domain save so that on a restore the page table entries may be rewritten with the new MFNs on the destination. This canonicalisation is only done for PTEs that are present. This change resulted in writing PTEs with MFNs if _PAGE_PROTNONE (or _PAGE_NUMA) was set but _PAGE_PRESENT was clear. These PTEs would be migrated as-is, which would result in unexpected behaviour in the destination domain. Either a) the MFN would be translated to the wrong PFN/page; b) setting the _PAGE_PRESENT bit would clear the PTE because the MFN is no longer owned by the domain; or c) the present bit would not get set. Symptoms include "Bad page" reports when munmapping after migrating a domain.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: <stable@vger.kernel.org> [3.12+]
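A toy model of the save-side invariant (mfn_to_pfn() is a stand-in for the hypervisor's machine-to-physical lookup; the real toolstack logic is more involved):

    #include <stdint.h>

    #define _PAGE_PRESENT   0x001ULL
    #define PAGE_SHIFT      12
    #define PTE_ADDR_MASK   0x000ffffffffff000ULL

    /* stand-in: the real mapping comes from the hypervisor's M2P table */
    static uint64_t mfn_to_pfn(uint64_t mfn) { return mfn; }

    /* Only present PTEs are assumed to hold an MFN, so only they are
     * canonicalised to PFNs at save time. A PROTNONE PTE that holds an
     * MFN anyway slips through untranslated, which is the migration bug
     * the revert above addresses. */
    static uint64_t canonicalise_pte(uint64_t pte)
    {
            if (!(pte & _PAGE_PRESENT))
                    return pte;     /* assumed to already hold a PFN */

            uint64_t mfn = (pte & PTE_ADDR_MASK) >> PAGE_SHIFT;
            return (pte & ~PTE_ADDR_MASK) | (mfn_to_pfn(mfn) << PAGE_SHIFT);
    }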
-
- Mar 24, 2014
-
-
Kees Cook authored
There was a potential lock ordering problem with the module kASLR patch ("x86, kaslr: randomize module base load address"). This patch removes the usage of the module_mutex and creates a new mutex to protect the module base address offset value.

Chain exists of:
  text_mutex --> kprobe_insn_slots.mutex --> module_mutex

[    0.515561]  Possible unsafe locking scenario:
[    0.515561]
[    0.515561]        CPU0                    CPU1
[    0.515561]        ----                    ----
[    0.515561]   lock(module_mutex);
[    0.515561]                                lock(kprobe_insn_slots.mutex);
[    0.515561]                                lock(module_mutex);
[    0.515561]   lock(text_mutex);
[    0.515561]
[    0.515561]  *** DEADLOCK ***

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Andy Honig <ahonig@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
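A hedged sketch of the dedicated-mutex arrangement (the names approximate the x86 module-loading code; treat them as illustrative):

    #include <linux/mutex.h>
    #include <linux/random.h>
    #include <linux/mm.h>

    /* A private mutex guards only the cached offset, so computing the
     * randomized module base never has to take the heavily-shared
     * module_mutex, and so never nests inside text_mutex's lock chain. */
    static DEFINE_MUTEX(module_kaslr_mutex);
    static unsigned long module_load_offset;

    static unsigned long get_module_load_offset(void)
    {
            mutex_lock(&module_kaslr_mutex);
            /* choose the random offset once, on first use */
            if (module_load_offset == 0)
                    module_load_offset =
                            (get_random_int() % 1024 + 1) * PAGE_SIZE;
            mutex_unlock(&module_kaslr_mutex);

            return module_load_offset;
    }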
-
Stefani Seibold authored
The size of the reserved memory for a 32 bit vdso must be the size of the 32 bit vDSO in pages + HPET page + VVAR page. One page is not enough for this. Grrrr.... silly copy and paste bug, was right in previous patch.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Cc: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/1395592694-20571-1-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
- Mar 21, 2014
-
-
Andy Lutomirski authored
It's a declaration of a nonexistent symbol. We can get rid of the 64-bit versions, too, but that's more intrusive.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/2ce2ce18447d8a0b78d44a278a066b6c0af06b32.1395366931.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Andy Lutomirski authored
This fixes the Xen build and gets rid of a silly header file.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1df77311795aff75f5742c787d277518314a38d3.1395366931.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Chris Bainbridge authored
Many Pentium M systems disable PAE but may have a functionally usable PAE implementation. This adds the "forcepae" parameter which bypasses the boot check for PAE, and sets the CPU as being PAE capable. Using this parameter will taint the kernel with TAINT_CPU_OUT_OF_SPEC.

Signed-off-by: Chris Bainbridge <chris.bainbridge@gmail.com>
Link: http://lkml.kernel.org/r/20140307114040.GA4997@localhost
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Dave Jones authored
Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC, so we can repurpose the flag to encompass a wider range of pushing the CPU beyond its warranty.

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Link: http://lkml.kernel.org/r/20140226154949.GA770@redhat.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
- Mar 20, 2014
-
-
Andy Lutomirski authored
This replaces a decent amount of incomprehensible and buggy code with much more straightforward code. It also brings the 32-bit vdso more in line with the 64-bit vdsos, so maybe someday they can share even more code. This wastes a small amount of kernel .data and .text space, but it avoids a couple of allocations on startup, so it should be more or less a wash memory-wise.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/b8093933fad09ce181edb08a61dcd5d2592e9814.1395352498.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Jan Beulich authored
Minor cleanups:

- simplify switch statement
- add __init annotation to setup_arch_fast_hash()

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F09CE020000780011FBEF@nat28.tlf.novell.com
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Jan Beulich authored
... to match the function's parameters. While reportedly commutative, using the proper order allows for leveraging the instruction permitting the source operand to be in memory.

[ hpa: This code originated in the dpdk toolkit. This was a bug in dpdk which has recently been fixed in part due to an earlier version of this patch. ]

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F09B6020000780011FBEB@nat28.tlf.novell.com
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
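Presumably this refers to the CRC32 instruction behind the x86 fast-hash code. A hedged inline-asm sketch of why the order matters: CRC32's r/m source operand may live in memory, which the "rm" constraint lets the compiler exploit.

    #include <stdint.h>

    /* CRC32 r32, r/m32: the *source* may be a register or memory, the
     * destination must be a register. With AT&T order (source first),
     * "rm" lets the compiler feed the data straight from memory instead
     * of loading it into a register first. (Requires SSE4.2.) */
    static inline uint32_t crc32_step(uint32_t seed, uint32_t data)
    {
            asm("crc32l %1, %0" : "+r"(seed) : "rm"(data));
            return seed;
    }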
-
Jan Beulich authored
Just like for other ISA extension instruction uses we should check whether the assembler actually supports them. The fallback here simply is to encode an instruction with fixed operands (%eax and %ecx).

[ hpa: tagging for -stable as a build fix ]

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Link: http://lkml.kernel.org/r/530F0996020000780011FBE7@nat28.tlf.novell.com
Cc: Francesco Fusco <ffusco@redhat.com>
Cc: Thomas Graf <tgraf@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.14
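A hedged sketch of the fixed-operand fallback (the macro and config names are invented for illustration): when the assembler predates the mnemonic, emit the opcode bytes for the one fixed form "crc32l %ecx, %eax" and marshal values through eax/ecx.

    /* ASSEMBLER_HAS_CRC32 is an invented stand-in for the real
     * build-system probe of assembler support. */
    #ifdef ASSEMBLER_HAS_CRC32
    # define CRC32_EAX_ECX "crc32l %%ecx, %%eax"
    #else   /* old assembler: hand-encode crc32l %ecx, %eax */
    # define CRC32_EAX_ECX ".byte 0xf2, 0x0f, 0x38, 0xf1, 0xc1"
    #endif

    static inline unsigned int crc32_u32(unsigned int seed, unsigned int v)
    {
            /* operands pinned to eax/ecx to match the fixed encoding */
            asm(CRC32_EAX_ECX : "+a"(seed) : "c"(v));
            return seed;
    }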
-
- Mar 19, 2014
-
-
Vivek Goyal authored
Currently compressed/misc.c needs to link against memset(). I think one of the reasons for this need is the inclusion of various header files which define static inline functions and use memset() inside them, for example include/linux/bitmap.h. I think trying to include "../string.h" and using the builtin version of memset does not work, because by the time "#define memset" shows up, it is too late. Some other header file has already used memset() and expects to find a definition during the link phase. Currently we have a C definition of memset() in misc.c. Move it to compressed/string.c so that others can use it if need be.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-6-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Vivek Goyal authored
Try to treat memcmp() in the same way as memcpy() and memset(): provide a declaration in boot/string.h, and by default the user gets a memcmp() which maps to the builtin function. Move the optimized definition of memcmp() into boot/string.c. Now a user can do #undef memcmp and link against string.c to use the optimized memcmp(). It also simplifies boot/compressed/string.c, where we had to redefine memcmp(). That extra definition is gone now.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-5-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Vivek Goyal authored
Move the optimized versions of memcpy to compressed/string.c. This will allow any other code to use these functions too, if need be in the future. Again, this puts the definition in a common place instead of hiding it in misc.c.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-4-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Vivek Goyal authored
Create a separate arch/x86/boot/string.h file to provide declarations of some of the common string functions. By default memcpy, memset and memcmp will map to the gcc builtin functions. If code wants to use an optimized version of any of these functions, it needs to #undef the respective macro and link against a local file providing a definition of the undefed function. For example, arch/x86/boot/* code links against copy.S to get memcpy() and memcmp() definitions. arch/x86/boot/compressed/* links against compressed/string.c. There are quite a few places in arch/x86/ where these functions are used. The idea is to try to consolidate their declarations and possibly definitions so that they can be reused. I am planning to reuse boot/string.h in arch/x86/purgatory/ and use the gcc builtin functions for memcpy, memset and memcmp.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-3-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
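The pattern, as a hedged sketch (condensed from what the series describes; the real boot/string.h may differ in detail):

    /* --- boot/string.h, sketched: default everyone to the builtins --- */
    #include <stddef.h>

    void *memcpy(void *dst, const void *src, size_t len);
    void *memset(void *dst, int c, size_t len);
    int memcmp(const void *s1, const void *s2, size_t len);

    #define memcpy(d, s, l) __builtin_memcpy(d, s, l)
    #define memset(d, c, l) __builtin_memset(d, c, l)
    #define memcmp          __builtin_memcmp

    /* --- a .c file that wants its own optimized version instead --- */
    #undef memcmp                     /* opt out of the builtin mapping */
    int memcmp(const void *s1, const void *s2, size_t len)
    {
            /* toy byte-wise body; the real string.c version is optimized */
            const unsigned char *a = s1, *b = s2;

            while (len--) {
                    if (*a != *b)
                            return *a - *b;
                    a++, b++;
            }
            return 0;
    }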
-
Vivek Goyal authored
With CONFIG_X86_32=y, string_32.h gets pulled into compressed/string.c by "misc.h". string_32.h defines a macro to map memcmp to __builtin_memcmp(), and that macro in turn changes the name of the memcmp() defined here, converting it to __builtin_memcmp(). That's not the intention, though: we probably want to provide our own optimized definition of memcmp(). If so, undef memcmp before we define the new memcmp.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Link: http://lkml.kernel.org/r/1395170800-11059-2-git-send-email-vgoyal@redhat.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Bjorn Helgaas authored
This reverts commit 56dd669a, which makes the GART visible in /proc/iomem. This fixes a regression: e501b3d8 ("agp: Support 64-bit APBASE") exposed an existing problem with a conflict between the GART region and a PCI BAR region. The GART addresses are bus addresses, not CPU addresses, and therefore should not be inserted in iomem_resource. On many machines, the GART region is addressable by the CPU as well as by an AGP master, but CPU addressability is not required by the spec. On some of these machines, the GART is mapped by a PCI BAR, and in that case, the PCI core automatically inserts it into iomem_resource, just as it does for all BARs. Inserting it here means we'll have a conflict if the PCI core later tries to claim the GART region, so let's drop the insertion here. The conflict indirectly causes X failures, as reported by Jouni in the bugzilla below. We detected the conflict even before e501b3d8, but after it the AGP code (fix_northbridge()) uses the PCI resource (which is zeroed because of the conflict) instead of reading the BAR again.

Conflicts:
	arch/x86_64/kernel/aperture.c

Fixes: e501b3d8 ("agp: Support 64-bit APBASE")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=72201
Reported-and-tested-by: Jouni Mettälä <jtmettala@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
Viresh Kumar authored
Two cpufreq notifiers CPUFREQ_RESUMECHANGE and CPUFREQ_SUSPENDCHANGE have not been used for some time, so remove them to clean up code a bit.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
[rjw: Changelog]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- Mar 18, 2014
-
-
Stefani Seibold authored
This patch enables 32-bit vDSOs which are larger than a page.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1395094933-14252-14-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
H. Peter Anvin authored
For the 32-bit VDSO, match the 64-bit VDSO in:

1. Disable the stack protector.
2. Use -fno-omit-frame-pointer for user space debugging sanity.
3. Use -foptimize-sibling-calls like the 64-bit VDSO does.

Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1395094933-14252-13-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Andy Lutomirski authored
By coincidence, the VVAR page is at the end of an ELF segment. As a result, if it ends up being a partial page, the kernel loader will leave garbage behind at the end of the vvar page. Zero-pad it to a full page to fix this issue. This has probably been broken since the VVAR page was introduced. On QEMU, if you dump the run-time contents of the VVAR page, you can find entertaining strings from seabios left behind. It's remotely possible that this is a security bug -- conceivably there's some BIOS out there that leaves something sensitive in the few K of memory that is exposed to userspace.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1395094933-14252-12-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Stefani Seibold authored
This patch adds VDSO time support for the IA32 emulation layer. Due to the nature of the kernel headers and the LP64 compiler, where the sizes of a long and a pointer differ from those under a 32-bit compiler, some type hacking is necessary for optimal performance. The vsyscall_gtod_data structure must be rearranged to serve 32- and 64-bit code access at the same time:

- The seqcount_t was replaced by an unsigned; this makes vsyscall_gtod_data independent of the kernel configuration and internal functions.
- All kernel-internal structures are replaced by fixed-size elements, which work for both 32- and 64-bit access.
- The inner struct clock was removed to pack the whole struct.

The "unsigned seq" is handled by functions derived from seqcount_t.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1395094933-14252-11-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
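A hedged sketch of such a layout plus an open-coded seqcount reader (fields abridged and illustrative, not the kernel's exact definitions):

    #include <stdint.h>

    /* Fixed-size fields only, so 32-bit userspace and the 64-bit kernel
     * agree on the layout; the leading 'seq' is the open-coded seqcount. */
    struct gtod_data_sketch {
            unsigned int seq;
            int vclock_mode;
            uint64_t cycle_last;
            uint64_t mask;
            uint32_t mult;
            uint32_t shift;
            /* ...wall/monotonic time fields, likewise fixed-size... */
    };

    /* Reader helpers derived from the usual seqcount pattern: spin while
     * a writer is active (odd count), retry if the count moved. */
    static unsigned gtod_read_begin(const volatile unsigned *seq)
    {
            unsigned s;

            while ((s = *seq) & 1)
                    ;                       /* writer in progress */
            __atomic_thread_fence(__ATOMIC_ACQUIRE);
            return s;
    }

    static int gtod_read_retry(const volatile unsigned *seq, unsigned start)
    {
            __atomic_thread_fence(__ATOMIC_ACQUIRE);
            return *seq != start;
    }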
-
Stefani Seibold authored
This patch adds time support to the 32-bit VDSO on a 32-bit kernel. For 32-bit programs running on a 32-bit kernel, the same mechanism is used as for 64-bit programs running on a 64-bit kernel.

Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1395094933-14252-10-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-
Andy Lutomirski authored
We need the alternatives mechanism for rdtsc_barrier() to work.

Signed-off-by: Stefani Seibold <stefani@seibold.net>
Link: http://lkml.kernel.org/r/1395094933-14252-9-git-send-email-stefani@seibold.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
-