Compare commits


108 Commits

Author SHA1 Message Date
Alexander Boettcher
00bad9bee5 sel4: increase resources for fb_bench
Issue #5423
2025-01-24 16:46:05 +01:00
Alexander Boettcher
ca7bcc2d80 fixup "sel4: add MSI support for x86"
Issue #5423
2025-01-24 16:46:00 +01:00
Christian Helmuth
8e0fe39248 libdrm: explicitly convert values to __u64
Prevent errors like follows.

  error: invalid cast from type ‘size_t’ {aka ‘long unsigned int’} to type ‘__u64’ {aka ‘long long unsigned int’}

Issue #5431
2025-01-24 15:39:37 +01:00
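The error arises because, on LP64 targets, `size_t` is `unsigned long` while the kernel's `__u64` is `unsigned long long`: same width, but distinct types to the compiler. A minimal sketch of the explicit conversion (the struct and field names below are hypothetical, not libdrm's actual ABI):

```cpp
#include <cstddef>

// Hypothetical kernel-ABI style struct: 64-bit fields are declared as
// 'unsigned long long' (__u64), while size_t on LP64 is 'unsigned long'.
struct drm_args { unsigned long long size; };

// Explicitly convert between the distinct (though same-width) integer
// types instead of relying on a cast the compiler rejects.
unsigned long long to_u64(std::size_t v)
{
	return static_cast<unsigned long long>(v);
}
```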
Alexander Boettcher
a376ebafa7 libc: add missing header for qemu port 2025-01-24 12:49:16 +01:00
Norman Feske
f9f874d7e4 fixup "libc: unify base types for arm_64 and riscv" (revert unintended __int64_t modification)
Fixes #5431
2025-01-24 11:50:45 +01:00
Josef Söntgen
9b61f00187 libnl: use fixed_stint.h for typedefs
Issue #5431.
2025-01-23 19:28:11 +01:00
Stefan Kalkowski
8730657e08 libusb: don't freeze when device vanishes
Instead of freezing, return a corresponding libusb error code if the
USB device got disconnected, so that components using the library
can otherwise continue to work.

Fix genodelabs/genode#5434
2025-01-23 16:04:09 +01:00
Stefan Kalkowski
35e0a2b144 base: add missing wakeup signal in child framework
In Child::deliver_session_cap, a signal to wake up a service after
altering its session ROM was missing when the requesting client
no longer exists.

Fix genodelabs/genode#5435
2025-01-23 15:48:55 +01:00
Alexander Boettcher
d12b491a5c sel4: add MSI support for x86
Fixes #5423
2025-01-23 15:45:50 +01:00
Alexander Boettcher
130efed0cb base: support specifying PCI bdf on irq creation
Required by the seL4 kernel interface for MSI creation and by another upstream
kernel.

Issue #5423
2025-01-23 15:45:34 +01:00
Benjamin Lamowski
004aaf0235 fixup; hw: always serialize rdtsc reads
Since rdtsc() provides ordered timestamps now, we should prevent the
reordering of statements by the compiler too.

Issues #5215, #5430
2025-01-23 08:37:57 +01:00
Alexander Boettcher
27c2064a3c foc: support more caps
by increasing the width of the base-foc internal cap indices from
16 bits to 28 bits.

Issue #5406
2025-01-22 15:31:36 +01:00
Christian Helmuth
be7df4fa82 fixup "hw: always serialize rdtsc reads" (final cosmetic touches) 2025-01-22 14:41:48 +01:00
Benjamin Lamowski
6793143d31 hw: always serialize rdtsc reads
While implementing TSC calibration in #5215, the issue of properly serializing
TSC reads came up. Some learnings of the discussion were noted in #5430.

Using `cpuid` for serialization as in Trace::timestamp() is portable,
but will cause VM exits on VMX and SVM and is therefore unsuitable to
retain a roughly working calibration loop while running virtualized.
On the other hand on most AMD systems, dispatch serializing `lfence`
needs to be explicitly enabled via a non-architectural MSR.

Enable setting up dispatch serializing lfence on AMD systems and always
serialize rdtsc accesses in Hw::Tsc::rdtsc() for maximum reliability.

Issues #5215, #5430
2025-01-22 14:41:48 +01:00
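The combination described above can be sketched as follows. This is a hedged illustration, not Genode's actual `Hw::Tsc::rdtsc()` code: on x86_64, an `lfence` in front of `rdtsc` orders the timestamp against preceding instructions (once dispatch-serializing `lfence` is enabled on AMD), and the `"memory"` clobber doubles as a compiler barrier against statement reordering; the non-x86 branch is a placeholder.

```cpp
#include <cstdint>

// Sketch of a serialized timestamp read. The 'lfence' keeps rdtsc from
// executing ahead of earlier instructions; the memory clobber prevents
// the compiler from reordering surrounding statements across the read.
inline std::uint64_t serialized_rdtsc()
{
#if defined(__x86_64__)
	std::uint32_t lo, hi;
	asm volatile("lfence; rdtsc" : "=a"(lo), "=d"(hi) : : "memory");
	return (std::uint64_t(hi) << 32) | lo;
#else
	return 0; /* placeholder on non-x86 builds */
#endif
}
```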
Christian Helmuth
28ecbbbb71 fixup "hw: calibrate Local APIC via ACPI timer" (final cosmetic touches)
and one bug fix
2025-01-22 14:41:48 +01:00
Benjamin Lamowski
8bdddbd46a hw: calibrate Local APIC via ACPI timer
Up to now, bootstrap used the Programmable Interval Timer to set a
suitable divider and determine the frequency of the Local APIC.
The PIT is no longer available on recent x86_64 hardware.

Move Local APIC calibration to bootstrap and use the ACPI timer as a
reference. Clean up hw's timer implementation a little and disable the
PIT in bootstrap.

Fixes #5215
2025-01-22 14:41:48 +01:00
Christian Helmuth
103d03b590 fixup "hw: calibrate TSC via ACPI timer" (final cosmetic touches) 2025-01-22 14:41:48 +01:00
Benjamin Lamowski
5def882001 hw: calibrate TSC via ACPI timer
To get the Time Stamp Counter's frequency, hw relied on a complex and
incomplete algorithm.

Since this is a one-time initialization issue, move TSC calibration to
bootstrap and implement it using the ACPI timer.

Issue #5215
2025-01-22 14:41:48 +01:00
Norman Feske
da65626cdd libc: unify base types for arm_64 and riscv
Fixes #5431
2025-01-22 10:12:55 +01:00
Norman Feske
cdbc31add8 fixup "Enable -ffreestanding by default" (libgcov, x86emu) 2025-01-22 10:12:47 +01:00
Norman Feske
b455139e8c Enable -ffreestanding by default
Fixes #5429
2025-01-21 12:30:57 +01:00
Christian Prochaska
b4f4a6db09 qt6: install the SVG image format plugin
Fixes #5427
2025-01-20 17:34:01 +01:00
Christian Prochaska
d3d901886f qt6: fix dangling pointer in QGenodeWindowSurface
Fixes #5426
2025-01-20 17:33:57 +01:00
Stefan Kalkowski
0d4d23a161 fixup "hw: implement helping of pager threads" (fix bomb) 2025-01-20 16:33:56 +01:00
Stefan Kalkowski
827401ee2d fixup "hw: implement helping of pager threads" (managed ds pf faults) 2025-01-17 14:38:09 +01:00
Christian Prochaska
fa9373cbd3 qt6: use window title as label for GUI session
Fixes #5424
2025-01-17 08:56:22 +01:00
Alexander Boettcher
bfaacb1ada fixup "base: use Dataspace_attr in io_mem _map_local(...)" - fiasco nightly failure
Issue #5406
2025-01-16 13:28:56 +01:00
Alexander Boettcher
8eae6501b7 tool: use bender hwp options also for foc and seL4
Issue #5406
2025-01-16 11:37:04 +01:00
Alexander Boettcher
e840a3f8f9 sculpt: use rom_fs for fiasco and foc
Issue #5406
2025-01-16 11:36:27 +01:00
Alexander Boettcher
7c5f879b91 squash "base: use Dataspace_attr in io_mem _map_local(...)" - rename to "base: use Map_local_result in io_mem _map_local(...)"
Issue #5406
2025-01-16 11:34:47 +01:00
Norman Feske
2b566042ad fixup "gems: ABI and depot recipe for dialog API" 2025-01-15 17:54:33 +01:00
Norman Feske
6e82520165 fixup "vm_session: use Callable for with_state" 2025-01-15 17:52:30 +01:00
Norman Feske
1677a36225 fixup "monitor/sandbox: use Callable" (more callables) 2025-01-15 17:44:29 +01:00
Norman Feske
c564013a5d vm_session: use Callable for with_state
Issue #5420
2025-01-15 16:59:19 +01:00
Norman Feske
d033e4c153 sandbox: don't use Xml_node as return value
Issue #5411
2025-01-15 16:07:59 +01:00
Norman Feske
c5e6d071e5 monitor/sandbox: use Callable
Issue #5420
2025-01-15 15:54:42 +01:00
Norman Feske
68613c8265 cpu_sampler: propagate Create_thread_error
The accounting of caps for the UTCB allocation on base-hw puts pressure
on the out-of-ram/caps handling of Create_thread_result in the CPU
sampler. This patch implements the formerly missing error handling.

Issue #5408
2025-01-15 15:13:03 +01:00
Stefan Kalkowski
67985e2ba9 hw: remove unused object pool of pager objects
Fix genodelabs/genode#5417
2025-01-15 14:54:43 +01:00
Stefan Kalkowski
673c97b3e6 hw: sanitize kernel's signal datastructures
* Move all Kernel::Signal_* structures to kernel/signal.*
* Remove return value of kill_signal_context, which wasn't evaluated
* Remove Kernel::Signal_context::can_kill
* Remove Kernel::Signal_context::can_submit
* Remove Kernel::Signal_receiver::can_add_handler
* Use C++ nullptr instead of plain zero
* Use boolean true/false instead of one/zero
* Always add to the signal FIFO, even if the submit counter
  cannot be increased far enough

Fix genodelabs/genode#5416
2025-01-15 14:54:39 +01:00
Stefan Kalkowski
3bdfb078bf hw: implement helping of pager threads
Instead of blocking in case of exceptions and MMU faults, delegate
the faulter's scheduling context to the assigned pager thread.

Fix genodelabs/genode#5318
2025-01-15 14:54:28 +01:00
Stefan Kalkowski
84cabaf9b7 hw: move remaining pager code to pager.cc
Consolidate core's hw-specific pager code in one and the same compilation unit.

Ref genodelabs/genode#5318
2025-01-15 14:54:12 +01:00
Stefan Kalkowski
f2d91fd56a hw: pager thread per cpu
Instantiate a separate pager thread for each cpu core. Every time a pager
object gets managed by the Pager_entrypoint, assign it to the pager thread
on the same cpu core.

Ref genodelabs/genode#5318
2025-01-15 14:53:54 +01:00
Stefan Kalkowski
a5664ba5c0 hw: use enums in stack base definition
Ref genodelabs/genode#5319
2025-01-15 14:53:35 +01:00
Stefan Kalkowski
7e4decfacc hw: sanitize cpu context
* Rename Kernel::Cpu_job to Kernel::Cpu_context (alias Kernel::Cpu::Context)
* Set the initial Cpu affinity of Cpu::Context at construction time
* Move the cpu-affinity argument from the kernel syscall create_thread to start_thread
* Ensure that the Cpu pointer is always valid

Fix genodelabs/genode#5319
2025-01-15 14:52:40 +01:00
Stefan Kalkowski
d9ca56e1ed hw: add missing six-arguments syscall for riscv
Ref genodelabs/genode#5319
2025-01-15 14:52:34 +01:00
Stefan Kalkowski
7010dbbbbb hw: generalize IPC-helping to a common mechanism
* Removes helping from Ipc_node and Thread class
* Implement helping as mechanism of Scheduler::Context

Ref genodelabs/genode#5318
2025-01-15 14:52:25 +01:00
Stefan Kalkowski
b7d3d8237b hw: ensure board_name is set as depot build
Even when no BOARD variable is set via the build environment,
the board_name in the platform_info ROM still needs to be set.

Ref genodelabs/genode#5360
Fix genodelabs/genode#5414
2025-01-15 14:47:07 +01:00
Norman Feske
f4c255955f fixup "base: remove base/internal/unmanaged_singleton.h" (armv6) 2025-01-15 14:32:40 +01:00
Alexander Boettcher
13abc33ec9 platform: be robust on IRQ creation failure
In case invalid IRQ numbers are used (255 on x86), the IRQ session-creation
request may be denied, and the platform driver would be killed by the
resulting uncaught exception, which must be avoided.

Issue #5406
2025-01-15 12:24:45 +01:00
Alexander Boettcher
2e3e57f50a foc: increase max count for RPC caps in core
to boot Sculpt

Issue #5406
2025-01-15 08:58:10 +01:00
Alexander Boettcher
b451f4c45c foc: add support to add ACPI rsdp in platform info
which is required for UEFI boots.

Issue #5406
2025-01-15 08:58:10 +01:00
Alexander Boettcher
5e2ba4d859 acpi: be more robust on IO_MEM session denial
Issue #5406
2025-01-15 08:58:10 +01:00
Alexander Boettcher
896e652b4c platform/pc: be robust on IOMMU claiming
Issue #5406
2025-01-15 08:58:10 +01:00
Alexander Boettcher
0d641600a4 base: use Dataspace_attr in io_mem _map_local(...)
Issue #5406
2025-01-15 08:58:10 +01:00
Alexander Boettcher
42c78b946b nova: add x2apic support
Fixes #5413
2025-01-15 08:46:51 +01:00
Johannes Schlatow
2743e625f6 base-nova: allow access to IOAPIC
This allows the platform driver to control remapping of legacy
interrupts.

genodelabs/genode#5066
2025-01-15 08:41:05 +01:00
Johannes Schlatow
0ebf11f143 platform/pc: re-use init code during resume
genodelabs/genode#5066
2025-01-15 08:37:17 +01:00
Johannes Schlatow
d25a9a0e21 platform/pc: remove Env from generate() methods
This was a relic from a time where we dumped the page tables from NOVA
and therefore needed to attach the corresponding dataspaces.

genodelabs/genode#5066
2025-01-15 08:37:08 +01:00
Johannes Schlatow
e3cc8274ba platform/pc: implement IRQ remapping for Intel
genodelabs/genode#5066
2025-01-15 08:36:48 +01:00
Johannes Schlatow
c1eafc39ab platform/pc: use queued invalidation interface
genodelabs/genode#5066
2025-01-15 08:36:42 +01:00
Johannes Schlatow
a392b93187 platform/pc: move register-based invalidation
This is in preparation of implementing the queued-invalidation
interface.

genodelabs/genode#5066
2025-01-15 08:36:33 +01:00
Johannes Schlatow
de7e11c122 platform/pc: implement IOAPIC
genodelabs/genode#5066
2025-01-15 08:36:12 +01:00
Johannes Schlatow
54b036d61c platform: add IRQ remapping support
genodelabs/genode#5066
2025-01-15 08:36:03 +01:00
Johannes Schlatow
a392e258cd pci_decode: report IOAPIC devices
genodelabs/genode#5066
2025-01-15 08:35:47 +01:00
Johannes Schlatow
54153be983 acpi_drv: report IOAPIC devices
genodelabs/genode#5066
2025-01-15 08:35:15 +01:00
Johannes Schlatow
811e51a24e acpi_drv: reflect DMAR properties in report
genodelabs/genode#5066
2025-01-15 08:35:13 +01:00
Christian Prochaska
64e0c5413f qt6: add 'i18n' example
Fixes #5421
2025-01-15 08:29:48 +01:00
Norman Feske
1d315bc355 sculpt_manager: use Callable
Issue #5420
2025-01-14 17:59:44 +01:00
Norman Feske
cc7501cee7 base: add util/callable.h
Fixes #5420
2025-01-14 17:39:26 +01:00
Christian Prochaska
311c7a8cd0 qt6: add 'SvgWidgets' files
Fixes #5419
2025-01-14 16:54:12 +01:00
Norman Feske
891cc5bf5f base: remove base/internal/unmanaged_singleton.h
Fixes #5418
2025-01-14 15:21:34 +01:00
Norman Feske
19163e44b4 test-libc_fifo_pipe: adjust cap_quota
The proper accounting of caps consumed for allocation of UTCBs on
base-hw slightly increases the costs per thread.

Issue #5408
2025-01-14 14:46:39 +01:00
Norman Feske
94ece697e1 fixup "core: don't rely on Core_env in platform.cc" (base-hw: account caps for utcb) 2025-01-14 14:39:48 +01:00
Benjamin Lamowski
40b00e2bcd fixup: hw: x86_64: refactor Vm_session_component
Remove unused vCPU priority support.

Issue #5221
2025-01-14 13:37:04 +01:00
Norman Feske
bd79ee0805 fixup "hw: factor out Vmid_allocator" (fixes build for imx53_qsb_tz) 2025-01-14 11:47:40 +01:00
Benjamin Lamowski
cea5a16abb fixup: hw: x86_64: refactor Vm_session_component
Use Constrained_ram_allocator for physical memory.

Issue #5221
2025-01-14 11:39:26 +01:00
Norman Feske
142ddd4d3b libc: remove use of unmanaged_singleton
Issue #5418
2025-01-13 20:58:05 +01:00
Norman Feske
5a2b6ee101 libc: host fd alloc as part of Libc::Kernel
This avoids the need to construct the fd_alloc out of thin air using
unmanaged_singleton.

Issue #5418
2025-01-13 19:56:14 +01:00
Benjamin Lamowski
9950fecf49 hw: x86_64: refactor Vm_session_component
On x86, the `Vm_session_component` obscured the differences between SVM
and VMX.

Separate the implementations, factor out common functionality and
address a number of long-standing issues in the process:

- Allocate nested page tables from the Core_ram_allocator as a more
  suitable abstraction and account for the required memory: subtract the
  necessary amount of RAM from the session's `Ram_quota` *before*
  constructing the session object, to make sure that the memory
  allocated from the `Core_ram_allocator` is available from the VMM's
  RAM quota.
- Move the allocation of Vcpu_state and Vcpu_data into the Core::Vcpu
  class and use the Core RAM Allocator to allocate memory with a known
  physical address.
- Remove the fixed number of virtual CPUs and the associated reservation
  of memory by using a Registry for a flexible amount of vCPUs.

Issue #5221
2025-01-07 17:44:30 +01:00
Benjamin Lamowski
65e78497cb hw: factor out Vmid_allocator
Move the static `Vmid_allocator` in each `Vm_session_component` into a
common header file.

Issue #5221
2025-01-07 17:04:09 +01:00
Norman Feske
9f6a6b33db os/session_policy.h: avoid use of Xml_node assign
Issue #5411
2025-01-07 11:29:45 +01:00
Norman Feske
d3f5015c3a xml_node.h: construct from Const_byte_range_ptr
This patch allows for the construction of 'Xml_node' objects from a
'Const_byte_range_ptr' argument as a safer alternative to the pair
of addr, max_len arguments.

Issue #5411
2025-01-07 11:27:24 +01:00
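The safety gain can be sketched as follows: bundling pointer and length into one type prevents callers from accidentally swapping or mismatching the two values of an (addr, max_len) pair. The types below are simplified stand-ins, not Genode's actual definitions.

```cpp
#include <cstddef>

// Simplified stand-in for a pointer-plus-length byte range.
struct Const_byte_range_ptr
{
	char const *start;
	std::size_t num_bytes;
};

// A consumer takes the range as one argument instead of (addr, max_len),
// so the pointer and its bound cannot drift apart at the call site.
bool starts_with_tag(Const_byte_range_ptr const &bytes)
{
	return bytes.num_bytes > 0 && bytes.start[0] == '<';
}
```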
Norman Feske
3f8d9e5953 ldso: avoid use of Xml_node assign operator
Issue #5411
2025-01-07 11:10:45 +01:00
Christian Helmuth
2d8efcec1e tool/port/metadata: fix usage output
Variable ECHO is not set as common.inc is not included.
2025-01-07 08:23:04 +01:00
Josef Söntgen
4a2aa95099 symbols/libc: add open_memstream function 2025-01-06 16:33:02 +01:00
Norman Feske
a3c05bd793 base/thread.h: guard deref of '_logger()'
The pointer returned by '_logger()' can be a nullptr, in particular
while tracing is (temporarily) inhibited. This patch ensures that
the 'Thread::trace' accessors never operate on a nullptr.

Fixes #5410
2025-01-06 15:16:44 +01:00
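The guard pattern can be sketched as follows (a simplification, not the actual base/thread.h code; the names are illustrative):

```cpp
// Simplified sketch of guarding a possibly-null logger pointer: the
// accessor dereferences only while a logger is installed, so tracing
// can be temporarily inhibited without risking a null dereference.
struct Logger { unsigned events = 0; void log() { ++events; } };

Logger *current_logger = nullptr; /* null while tracing is inhibited */

void trace_event()
{
	if (Logger * const l = current_logger) /* guard the dereference */
		l->log();
}
```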
Norman Feske
7fce3b0767 util/construct_at.h: ensure legit sizeof(Placeable)
If the memory for the designated object is allocated as char[sizeof(T)],
the size of 'Placeable' is expected to equal the size of T. However, in
principle, the compiler has the freedom to inflate the 'Placeable'
object. The static assertion gives us the assurance that the compiler
does not violate our assumption.
2025-01-06 14:22:43 +01:00
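The idea can be sketched as follows (a simplified construct_at, not Genode's exact implementation): the wrapper is placement-constructed into `char[sizeof(T)]` memory, so the assertion pins down that the compiler did not inflate it beyond `sizeof(T)`.

```cpp
#include <new>

// Sketch of the construct_at pattern: a 'Placeable' wrapper is
// placement-constructed into memory sized as char[sizeof(T)], which is
// only sound if the wrapper is exactly as large as T.
template <typename T, typename... Args>
T *construct_at_sketch(void *at, Args &&... args)
{
	struct Placeable : T
	{
		Placeable(Args &&... args) : T(static_cast<Args &&>(args)...) { }
	};

	static_assert(sizeof(Placeable) == sizeof(T),
	              "compiler must not inflate Placeable");

	return ::new (at) Placeable(static_cast<Args &&>(args)...);
}
```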
Norman Feske
03f18e1dfe fixup "core: don't rely on Core_env in platform.cc" (okl4) 2025-01-06 14:03:47 +01:00
Norman Feske
68f3e54738 fixup "gems: ABI and depot recipe for dialog API" (file_vault_gui, test/dialog) 2025-01-06 11:44:05 +01:00
Norman Feske
b84d6b95ae gems: ABI and depot recipe for dialog API
This makes the dialog API usable for users of Goa. It turns the
former static dialog library into a shared object with an accompanying
symbols file, and adds depot recipes for the library and API.

Issue #5409
2025-01-03 15:14:00 +01:00
Norman Feske
3531bfc4c7 fixup "base: split Pd_account from Pd_session" (test/rm_fault, test/resource_yield, cpu_balancer build, sequence, test/fault_detection, launchpad) 2024-12-19 11:40:00 +01:00
Norman Feske
e72915c13e nitpicker: send pointer pos to global key handler
To enable a global key handler to implement motion gestures while a
global key is held, it needs to know the current pointer position at the
time when the global key sequence starts. This is prerequisite for
the window layouter's ability to drag windows by clicking anywhere
within the window while holding a global key.

Issue #5403
2024-12-18 18:32:09 +01:00
Norman Feske
5eba0d68e0 wm: revoke curr focus if new focus is undefined
This allows the window layouter to ensure that input entered after
switching to an empty screen won't be routed to the old focused but
no longer visible window.

Issue #5390
2024-12-18 18:30:04 +01:00
Norman Feske
38d99c6dd1 nitpicker: no absolute motion without hover
This patch enforces the invariant that absolute motion events are
delivered to the hovered client only. If no client is hovered, the event
is discarded.

Otherwise, in a situation where no client is hovered (i.e., due to a
background that does not cover the entire screen) but a focus is
defined, absolute motion events would be delivered to the focused
session. From a client's perspective, when moving the pointer from the
client to emptiness, the client would observe a leave event followed by
absolute motion. This, in turn, confuses the window manager, which
expects that the receiver of an absolute motion event is hovered.

Fixes #5375
2024-12-18 18:28:19 +01:00
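The routing invariant can be sketched as follows (a bare-bones illustration, not nitpicker's actual code):

```cpp
struct Gui_client { unsigned motion_events = 0; };

// Absolute motion goes to the hovered client only. With no client
// hovered, the event is discarded rather than sent to the focused one.
void route_abs_motion(Gui_client *hovered, Gui_client * /* focused */)
{
	if (hovered)
		hovered->motion_events++;
}
```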
Norman Feske
5d2b57118d fixup "base: split Pd_account from Pd_session" (revert quota donation to core) 2024-12-18 15:29:35 +01:00
Norman Feske
6578639bb9 fixup "base: split Pd_account from Pd_session" (monitor) 2024-12-18 13:52:14 +01:00
Norman Feske
44860e89be core: remove Core_env
This patch adjusts the last remaining callers of 'core_env' and removes
the 'Core_env' interface.

- Core's RAM/cap accounts are now represented by 'Core_account'
  implementing the 'Pd_account' interface.

- The former parts of 'Core_env' are now initialized in sequence
  in 'bootstrap_component'.

- 'Core_child' has been moved to a header to reduce the code in
  'main.cc' to a bare minimum. This as a preparation for the
  plan of making 'main.cc' specific for each kernel.

Fixes #5408
2024-12-18 12:41:06 +01:00
Norman Feske
4aabe39e36 base: split Pd_account from Pd_session
Core uses an instance of 'Pd_session_component' as a representative
for RAM/cap quota accounts used whenever session resources are
donated to core's services. All other facets of 'Pd_sesson_component'
remain unused. Core's instance of 'Pd_session_component' is hosted
at 'Core_env'. Upon its construction, all unused facets of
'Pd_session_component' are initialized by dummy arguments in 'Core_env'.

To overcome the need for dummy arguments, this patch splits the
accounting part of the PD-session interface into a separate
'Pd_account' interface. This gives us the prospect of narrowing
core's current use of 'Pd_session_component' by 'Pd_account',
alleviating dead code and the need for any dummy arguments.

Issue #5408
2024-12-18 12:41:06 +01:00
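The shape of the split can be sketched as follows (hypothetical method names and quota type, not the real interfaces): the accounting facet becomes its own base interface, so core can reference an account without touching the unused PD facets.

```cpp
// Narrow accounting facet factored out of the full session interface.
struct Pd_account
{
	virtual void transfer_quota(unsigned long amount) = 0;
	virtual ~Pd_account() { }
};

// The full PD session implements the account plus its other facets,
// which core no longer needs to see (or supply dummy arguments for).
struct Pd_session_sketch : Pd_account
{
	unsigned long quota = 0;
	void transfer_quota(unsigned long amount) override { quota += amount; }
	/* ...address-space and signalling facets would follow here... */
};
```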
Norman Feske
404e21017a core: pass ram, rm, io-ports to local services
This patch replaces the use of 'core_env()' in 'platform_services.cc' by
the function arguments 'core_ram', 'core_rm', and 'io_port_ranges'.

It also removes the 'Pd_session' argument from 'Io_port_root' and
'Irq_root' to avoid the reliance on the 'Pd_session' interface within
core.

Issue #5408
2024-12-18 12:41:06 +01:00
Norman Feske
f9e6c3aa0e core: don't rely on Core_env in platform.cc
Replace the use of the global 'core_env()' accessor by the explicit
delegation of interfaces.

- For allocating UTCBs in base-hw, 'Platform_thread' requires
  a way to allocate dataspaces ('Ram_allocator') accounted to the
  corresponding CPU session, a way to locally map the allocated
  dataspaces (core's 'Region_map'), and a way to determine the
  physical address (via 'Rpc_entrypoint') used for the initial
  UTCB mapping of main threads. Hence those interfaces must be
  passed to 'Platform_thread'.

- NOVA's pager code needs to look up 'Cpu_thread_component'
  objects using a map item as key. The lookup requires the
  'Rpc_entrypoint' that holds the 'Cpu_thread_component' objects.
  To make this 'Rpc_entrypoint' available, this patch adds
  the 'init_page_fault_handing' function.

- The 'Region_map_mmap' for Linux requires a way to look up
  'Linux_dataspace' objects for given dataspace capabilities.
  This lookup requires the 'Rpc_entrypoint' holding the dataspaces,
  which is now passed to 'platform.cc' via the new Linux-specific
  'Core_region_map::init' function.

Issue #5408
2024-12-18 12:41:06 +01:00
Sebastian Sumpf
c156c6012c dde_bsd: support microphone selection
Make the preferred microphone configurable when a headset is plugged in
by introducing the 'mic_priority' attribute for the <config> node. Valid
values are "internal" and "external".
2024-12-16 15:35:57 +01:00
Alexander Boettcher
3f2a867a49 intel/display: report no inuse mode on disabled connector
Fixes #5404
2024-12-12 16:30:36 +01:00
Benjamin Lamowski
7fcfa4ede5 squash "base: make Ram_allocator noncopyable" (new commit message)
Prevent erratic runtime behavior stemming from accidentally passing a
copy to a `Ram_allocator` by making the interface noncopyable.

In consequence, we had to provide an explicit copy constructor for
`Session_env` in server/nic_router, which will be reconsidered in
issue #5405.

Issue #5221
2024-12-12 13:59:26 +01:00
Benjamin Lamowski
294ba9d30a base: add create_vcpu to Vm_session interface
`Vm_session_component::create_vcpu()` is present across all supported
kernels, yet until now it was not part of the `Vm_session` interface.

Add the method to the `Vm_session` interface. This unifies calls in the
base library and is the basis to remove the need for a common base class
for separate `Vm_session` implementations for SVM and VMX on x86_64.

Issue #5221
2024-12-11 15:13:01 +01:00
Benjamin Lamowski
ccdbefd158 base: make Ram_allocator noncopyable
Prevent erratic runtime behavior stemming from accidentally passing a
copy to a `Ram_allocator` by making the interface noncopyable.

Issue #5221
2024-12-11 15:04:16 +01:00
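The noncopyable idiom behind this commit can be sketched as: deleting the copy operations turns an accidental pass-by-value into a compile-time error instead of erratic runtime behavior (the type name below is illustrative).

```cpp
#include <type_traits>

// Deleting copy construction and copy assignment makes every attempt
// to pass the allocator by value a compile-time error.
struct Noncopyable_allocator
{
	Noncopyable_allocator() = default;
	Noncopyable_allocator(Noncopyable_allocator const &) = delete;
	Noncopyable_allocator &operator = (Noncopyable_allocator const &) = delete;
};

static_assert(!std::is_copy_constructible<Noncopyable_allocator>::value,
              "copies are rejected at compile time");
```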
Christian Prochaska
dd64164ed6 qt6: split port into modules
Fixes #5402
2024-12-11 12:22:00 +01:00
Stefan Kalkowski
d3002b26ac libusb: warn once and freeze when device vanishes
Instead of repeatedly printing error messages when a device is not
available anymore, print an error once and then sleep forever.
There is no dynamic behaviour with respect to device availability
implemented in libusb yet. Instead, libusb components can be
managed externally.

Ref genodelabs/genode#5401
2024-12-11 12:20:37 +01:00
Stefan Kalkowski
ebb159d32d usb webcam: turn run-scripts into sculpt tests
Ref genodelabs/genode#5401
2024-12-11 12:20:31 +01:00
643 changed files with 8416 additions and 22713 deletions


@@ -1,78 +0,0 @@
{
"configurations": [
{
"name": "EalánOS",
"includePath": [
"${workspaceFolder}/depot/genodelabs/api/libc/**",
"${workspaceFolder}/depot/genodelabs/api/stdcxx/**",
"${workspaceFolder}/repos/**",
"${workspaceFolder}/repos/mml/**",
"${workspaceFolder}/repos/libports/include/**",
"${workspaceFolder}/contrib/mxtasking-07a3844690ae8eb15832d93e29567a5a8e6e45af/include/**",
"${workspaceFolder}/contrib/libpfm4-b0ec09148c2be9f4a96203a3d2de4ebed6ce2da0/include/**",
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/libc/**",
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/spec/x86_64/libc",
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/libc/sys/**",
"${workspaceFolder}/contrib/stdcxx-d2865c41fafbbf66051d38e7b742c4d5bc2f05a3/include/stdcxx/",
"${workspaceFolder}/contrib/stdcxx-d2865c41fafbbf66051d38e7b742c4d5bc2f05a3/include/stdcxx/std",
"${workspaceFolder}/contrib/stdcxx-d2865c41fafbbf66051d38e7b742c4d5bc2f05a3/include/stdcxx/c_std",
"${workspaceFolder}/repos/libports/include/spec/x86_64/stdcxx",
"${workspaceFolder}/repos/base-nova/src/core/include/**",
"${workspaceFolder}/repos/base-nova/src/include/**",
"${workspaceFolder}/repos/base-nova/include/**",
"${workspaceFolder}/repos/base/src/core/include/**",
"${workspaceFolder}/repos/base/src/include/**",
"${workspaceFolder}/repos/base/include/**",
"/usr/local/genode/tool/21.05/lib/gcc/x86_64-pc-elf/10.3.0/include",
"/home/mml/loopbench/**"
],
"defines": [
"__GENODE__",
"__FreeBSD__=12",
"_GLIBCXX_HAVE_MBSTATE_T",
"_GLIBCXX_ATOMIC_BUILTINS_4",
"_GLIBCXX_NO_OBSOLETE_ISINF_ISNAN_DYNAMIC"
],
"compilerPath": "/usr/local/genode/tool/21.05/bin/genode-x86-gcc",
"cStandard": "gnu17",
"cppStandard": "gnu++17",
"intelliSenseMode": "linux-gcc-x64",
"compilerArgs": [
"-nostdinc",
"-m64"
],
"configurationProvider": "ms-vscode.makefile-tools",
"forcedInclude": [
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/libc/stdint.h"
],
"mergeConfigurations": true,
"browse": {
"limitSymbolsToIncludedHeaders": true,
"path": [
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/libc/**",
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/spec/x86_64/libc",
"${workspaceFolder}/contrib/libc-c7cd230b11ca71979f32950803bc78b45adfa0ce/include/libc/sys/**",
"${workspaceFolder}/contrib/stdcxx-d2865c41fafbbf66051d38e7b742c4d5bc2f05a3/include/stdcxx/",
"${workspaceFolder}/contrib/stdcxx-d2865c41fafbbf66051d38e7b742c4d5bc2f05a3/include/stdcxx/std",
"${workspaceFolder}/contrib/stdcxx-d2865c41fafbbf66051d38e7b742c4d5bc2f05a3/include/stdcxx/c_std",
"${workspaceFolder}/repos/libports/include/spec/x86_64/stdcxx"
]
}
},
{
"name": "Genode",
"includePath": [
"${workspaceFolder}/**",
"${workspaceFolder}/repos/base/**"
],
"defines": [],
"compilerPath": "/usr/local/genode/tool/21.05/bin/genode-x86-gcc",
"cStandard": "c17",
"cppStandard": "c++20",
"intelliSenseMode": "${default}",
"configurationProvider": "ms-vscode.makefile-tools",
"mergeConfigurations": true
}
],
"version": 4
}

.vscode/settings.json vendored

@@ -1,167 +0,0 @@
{
"files.associations": {
"*.rasi": "css",
"*.bbmodel": "json",
"*.sublime-snippet": "xml",
"*.hbs": "html",
"*.ejs": "html",
"*.emu": "html",
"lesskey": "lesskey",
"*.Xresources": "xdefaults",
"i3/config": "i3",
"i3/*.conf": "i3",
"polybar/config": "ini",
"polybar/*.conf": "ini",
"*.S": "gas",
"*.html.en": "html",
"*.html.de": "html",
"stop_token": "cpp",
"*.tcc": "cpp",
"initializer_list": "cpp",
"streambuf": "cpp",
"tuple": "cpp",
"memory": "cpp",
"*.def": "cpp",
"array": "cpp",
"deque": "cpp",
"forward_list": "cpp",
"list": "cpp",
"string": "cpp",
"vector": "cpp",
"any": "cpp",
"executor": "cpp",
"internet": "cpp",
"io_context": "cpp",
"memory_resource": "cpp",
"socket": "cpp",
"string_view": "cpp",
"timer": "cpp",
"functional": "cpp",
"rope": "cpp",
"slist": "cpp",
"coroutine": "cpp",
"future": "cpp",
"scoped_allocator": "cpp",
"valarray": "cpp",
"regex": "cpp",
"cstdint": "cpp",
"bitset": "cpp",
"random": "cpp",
"optional": "cpp",
"dynamic_bitset": "cpp",
"mutex": "cpp",
"shared_mutex": "cpp",
"algorithm": "cpp",
"atomic": "cpp",
"bit": "cpp",
"cassert": "cpp",
"cctype": "cpp",
"cerrno": "cpp",
"chrono": "cpp",
"ciso646": "cpp",
"clocale": "cpp",
"cmath": "cpp",
"compare": "cpp",
"concepts": "cpp",
"cstddef": "cpp",
"cstdio": "cpp",
"cstdlib": "cpp",
"cstring": "cpp",
"ctime": "cpp",
"cwchar": "cpp",
"cwctype": "cpp",
"map": "cpp",
"unordered_map": "cpp",
"exception": "cpp",
"fstream": "cpp",
"ios": "cpp",
"iosfwd": "cpp",
"iostream": "cpp",
"istream": "cpp",
"iterator": "cpp",
"limits": "cpp",
"new": "cpp",
"numeric": "cpp",
"ostream": "cpp",
"queue": "cpp",
"ranges": "cpp",
"ratio": "cpp",
"sstream": "cpp",
"stdexcept": "cpp",
"system_error": "cpp",
"thread": "cpp",
"type_traits": "cpp",
"typeinfo": "cpp",
"utility": "cpp",
"variant": "cpp",
"charconv": "cpp",
"cfenv": "cpp",
"cinttypes": "cpp",
"csetjmp": "cpp",
"csignal": "cpp",
"cstdarg": "cpp",
"cuchar": "cpp",
"set": "cpp",
"unordered_set": "cpp",
"codecvt": "cpp",
"condition_variable": "cpp",
"iomanip": "cpp",
"*.run": "xml",
"span": "cpp",
"config.h": "c",
"bench.h": "c",
"hash_map": "cpp",
"hash_set": "cpp",
"strstream": "cpp",
"decimal": "cpp",
"buffer": "cpp",
"netfwd": "cpp",
"propagate_const": "cpp",
"source_location": "cpp",
"complex": "cpp",
"numbers": "cpp",
"typeindex": "cpp",
"bool_set": "cpp"
},
"vscode-as-git-mergetool.settingsAssistantOnStartup": false,
"makefile.makeDirectory": "build/x86_64",
"C_Cpp.errorSquiggles": "enabledIfIncludesResolve",
"C_Cpp.default.cppStandard": "gnu++17",
"C_Cpp.default.cStandard": "gnu17",
"C_Cpp.workspaceSymbols": "Just My Code",
"C_Cpp.inlayHints.parameterNames.enabled": true,
"C_Cpp.inlayHints.autoDeclarationTypes.showOnLeft": true,
"C_Cpp.intelliSenseMemoryLimit": 16384,
"makefile.makefilePath": "",
"makefile.dryrunSwitches": [
"--keep-going",
"--print-directory",
"KERNEL=nova",
"BOARD=pc",
"run/vscode",
"VERBOSE="
],
"C_Cpp.default.intelliSenseMode": "linux-gcc-x64",
"C_Cpp.default.mergeConfigurations": true,
"C_Cpp.autocompleteAddParentheses": true,
"C_Cpp.intelliSenseCacheSize": 20480,
"makefile.buildBeforeLaunch": false,
"makefile.extensionOutputFolder": ".vscode",
"makefile.configurationCachePath": ".vscode/configurationCache.log",
"explorer.excludeGitIgnore": true,
"makefile.buildLog": ".vscode/build.log",
"definition-autocompletion.update_index_on_change": true,
"definition-autocompletion.update_index_interval": 5,
"C_Cpp.intelliSenseEngineFallback": "enabled",
"makefile.extensionLog": ".vscode/extension.log",
"makefile.ignoreDirectoryCommands": false,
"html.format.wrapLineLength": 80,
"editor.wordWrap": "bounded",
"editor.wordWrapColumn": 90,
"editor.fontSize": 13,
"terminal.integrated.shellIntegration.suggestEnabled": true,
"git.mergeEditor": true,
"merge-conflict.autoNavigateNextConflict.enabled": true,
"git.ignoreLimitWarning": true,
"customizeUI.statusBarPosition": "under-panel"
}


@@ -1,24 +0,0 @@
# EalánOS — An Operating System for Heterogeneous Many-core Systems
EalánOS is a research operating system, based on the [Genode OS Framework](https://genode.org/), that explores new architectural designs and resource management strategies for many-core systems with heterogeneous computing and memory resources. It is a reference implementation of the [MxKernel](https://mxkernel.org/) architecture.
## MxKernel Architecture
The MxKernel is a new operating system architecture inspired by many-core operating systems, such as [FOS](https://dl.acm.org/doi/abs/10.1145/1531793.1531805) and [Tessellation](https://www.usenix.org/event/hotpar09/tech/full_papers/liu/liu_html/), as well as hypervisors, exokernels and unikernels.
Novel approaches of the MxKernel include the use of tasks (short-lived, closed units of work) instead of threads as the control-flow abstraction, and the concept of elastic cells as the process abstraction. The architecture was first described in the paper [MxKernel: Rethinking Operating System Architecture for Many-core Hardware](https://ess.cs.uos.de/research/projects/MxKernel/sfma-mxkernel.pdf) presented at the [9th Workshop on Systems for Multi-core and Heterogeneous Architectures](https://sites.google.com/site/sfma2019eurosys/).
## Task-based programming
EalánOS promotes task-parallel programming by including the [MxTasking](https://github.com/jmuehlig/mxtasking.git) task-parallel runtime library. MxTasking improves on the common task-parallel programming paradigm by allowing tasks to be annotated with hints about a task's behavior, such as its memory accesses. The runtime environment uses these annotations to implement advanced features such as automatic prefetching of data and automatic synchronization of concurrent memory accesses.
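To make the annotation idea concrete, here is a minimal sketch of a task runtime that consumes a memory-access hint before dispatching each task. All names (`Task`, `Runtime`, `spawn`) are illustrative, not MxTasking's actual API; the real library is far richer.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <utility>

/* Hypothetical sketch in the spirit of MxTasking: a task carries an
 * annotation describing the memory it expects to touch, and the runtime
 * uses that hint (here: a software prefetch) before dispatching it. */
struct Task {
    std::function<void()> work;
    void const *hinted_addr = nullptr;  /* annotation: expected access */
    std::size_t hinted_size = 0;        /* annotation: expected extent */
};

class Runtime {
    std::deque<Task> _queue;
public:
    void spawn(Task t) { _queue.push_back(std::move(t)); }

    /* run all queued tasks FIFO, honoring each annotation first */
    std::size_t run() {
        std::size_t executed = 0;
        while (!_queue.empty()) {
            Task t = std::move(_queue.front());
            _queue.pop_front();
            if (t.hinted_addr)            /* act on the hint */
                __builtin_prefetch(t.hinted_addr);
            t.work();
            ++executed;
        }
        return executed;
    }
};
```

In a real runtime the hint could instead drive NUMA-aware placement or automatic synchronization, as the paragraph above describes.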
## Documentation
Because EalánOS is based on Genode, the primary documentation, for now, can be found in the book [Genode Foundations](https://genode.org/documentation/genode-foundations-22-05.pdf).
## Features added to Genode
EalánOS extends the Genode OS framework with functionality that is needed or helpful for many-core systems with non-uniform memory access (NUMA), such as
- A topology service that allows querying NUMA information from within a Genode component.
- A port of [MxTasking](https://github.com/jmuehlig/mxtasking.git), a task-based framework designed to aid in developing parallel applications.
- (WiP) An extension of Genode's RAM service that enables applications to allocate memory from a specific NUMA region, similar to libnuma's `numa_alloc_onnode`, and thus improve the NUMA locality of internal data objects.
- (WiP) An interface for using Hardware Performance Monitoring Counters inside Genode components. Currently, performance counters are only implemented for AMD's Zen1 microarchitecture.
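The node-local allocation idea can be sketched as an allocator front-end with one arena per NUMA node, in the spirit of libnuma's `numa_alloc_onnode()`. This is not EalánOS's actual RAM-service API; `Node_allocator` and its members are hypothetical names used only to illustrate the concept.

```cpp
#include <cstddef>
#include <map>
#include <vector>

/* Illustrative sketch: each NUMA node owns an arena, and a request
 * names the node whose memory should back the allocation, improving
 * locality for the caller's data structures. */
class Node_allocator {
    struct Arena { std::vector<char> backing; std::size_t used = 0; };
    std::map<unsigned, Arena> _arenas;   /* node id -> arena */
public:
    Node_allocator(unsigned nodes, std::size_t bytes_per_node) {
        for (unsigned n = 0; n < nodes; ++n)
            _arenas[n].backing.resize(bytes_per_node);
    }

    /* allocate 'size' bytes backed by node 'node'; nullptr if exhausted */
    void *alloc_on_node(unsigned node, std::size_t size) {
        auto it = _arenas.find(node);
        if (it == _arenas.end()) return nullptr;
        Arena &a = it->second;
        if (a.used + size > a.backing.size()) return nullptr;
        void *p = a.backing.data() + a.used;
        a.used += size;
        return p;
    }

    std::size_t used_on_node(unsigned node) const {
        auto it = _arenas.find(node);
        return it == _arenas.end() ? 0 : it->second.used;
    }
};
```

A real implementation would hand out physical RAM dataspaces constrained to a node's address ranges rather than a heap-backed arena.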
### Acknowledgement
The work on EalánOS and the MxKernel architecture is supported by the German Research Foundation (DFG) as part of the priority program 2037 "[Scalable Data Management on Future Hardware](https://dfg-spp2037.de/)" under Grant numbers SP968/9-1 and SP968/9-2.
The MxTasking framework is developed as part of the same DFG project at the [DBIS group at TU Dortmund University](http://dbis.cs.tu-dortmund.de/cms/de/home/index.html) and funded under Grant number TE1117/2-1.

View File

@@ -61,8 +61,9 @@ class Core::Platform_thread : Interface
/**
* Constructor
*/
Platform_thread(Platform_pd &pd, size_t, const char *name,
unsigned, Affinity::Location, addr_t)
Platform_thread(Platform_pd &pd, Rpc_entrypoint &, Ram_allocator &,
Region_map &, size_t, const char *name, unsigned,
Affinity::Location, addr_t)
: _name(name), _pd(pd) { }
/**

View File

@@ -38,8 +38,11 @@ static inline bool can_use_super_page(addr_t, size_t)
}
addr_t Io_mem_session_component::_map_local(addr_t phys_base, size_t size)
Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const phys_base,
size_t const size_in)
{
size_t const size = size_in;
auto map_io_region = [] (addr_t phys_base, addr_t local_base, size_t size)
{
using namespace Fiasco;
@@ -91,14 +94,16 @@ addr_t Io_mem_session_component::_map_local(addr_t phys_base, size_t size)
size_t align = (size >= get_super_page_size()) ? get_super_page_size_log2()
: get_page_size_log2();
return platform().region_alloc().alloc_aligned(size, align).convert<addr_t>(
return platform().region_alloc().alloc_aligned(size, align).convert<Map_local_result>(
[&] (void *ptr) {
addr_t const core_local_base = (addr_t)ptr;
map_io_region(phys_base, core_local_base, size);
return core_local_base; },
return Map_local_result { .core_local_addr = core_local_base, .success = true };
},
[&] (Range_allocator::Alloc_error) -> addr_t {
[&] (Range_allocator::Alloc_error) {
error("core-local mapping of memory-mapped I/O range failed");
return 0; });
return Map_local_result();
});
}

View File

@@ -103,3 +103,6 @@ Untyped_capability Pager_entrypoint::_pager_object_cap(unsigned long badge)
{
return Capability_space::import(native_thread().l4id, Rpc_obj_key(badge));
}
void Core::init_page_fault_handling(Rpc_entrypoint &) { }

View File

@@ -20,7 +20,6 @@
/* core includes */
#include <platform.h>
#include <core_env.h>
using namespace Core;

View File

@@ -26,7 +26,7 @@ namespace Genode { struct Foc_thread_state; }
struct Genode::Foc_thread_state : Thread_state
{
Foc::l4_cap_idx_t kcap { Foc::L4_INVALID_CAP }; /* thread's gate cap in its PD */
uint16_t id { }; /* ID of gate capability */
uint32_t id { }; /* ID of gate capability */
addr_t utcb { }; /* thread's UTCB in its PD */
};

View File

@@ -30,17 +30,15 @@ class Core::Cap_id_allocator
{
public:
using id_t = uint16_t;
enum { ID_MASK = 0xffff };
using id_t = unsigned;
private:
enum {
CAP_ID_RANGE = ~0UL,
CAP_ID_MASK = ~3UL,
CAP_ID_NUM_MAX = CAP_ID_MASK >> 2,
CAP_ID_OFFSET = 1 << 2
CAP_ID_OFFSET = 1 << 2,
CAP_ID_MASK = CAP_ID_OFFSET - 1,
CAP_ID_RANGE = 1u << 28,
ID_MASK = CAP_ID_RANGE - 1,
};
Synced_range_allocator<Allocator_avl> _id_alloc;

View File

@@ -75,8 +75,8 @@ class Core::Platform_thread : Interface
/**
* Constructor for non-core threads
*/
Platform_thread(Platform_pd &, size_t, const char *name, unsigned priority,
Affinity::Location, addr_t);
Platform_thread(Platform_pd &, Rpc_entrypoint &, Ram_allocator &, Region_map &,
size_t, const char *name, unsigned priority, Affinity::Location, addr_t);
/**
* Constructor for core main-thread

View File

@@ -125,7 +125,7 @@ class Core::Vm_session_component
** Vm session interface **
**************************/
Capability<Native_vcpu> create_vcpu(Thread_capability);
Capability<Native_vcpu> create_vcpu(Thread_capability) override;
void attach_pic(addr_t) override { /* unused on Fiasco.OC */ }
void attach(Dataspace_capability, addr_t, Attach_attr) override; /* vm_session_common.cc */

View File

@@ -6,7 +6,7 @@
*/
/*
* Copyright (C) 2006-2017 Genode Labs GmbH
* Copyright (C) 2006-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
@@ -21,31 +21,37 @@
using namespace Core;
void Io_mem_session_component::_unmap_local(addr_t base, size_t, addr_t)
void Io_mem_session_component::_unmap_local(addr_t base, size_t size, addr_t)
{
if (!base)
return;
unmap_local(base, size >> 12);
platform().region_alloc().free(reinterpret_cast<void *>(base));
}
addr_t Io_mem_session_component::_map_local(addr_t base, size_t size)
Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const base,
size_t const size)
{
/* align large I/O dataspaces on a super-page boundary within core */
size_t alignment = (size >= get_super_page_size()) ? get_super_page_size_log2()
: get_page_size_log2();
/* find appropriate region for mapping */
return platform().region_alloc().alloc_aligned(size, (unsigned)alignment).convert<addr_t>(
/* find appropriate region and map it locally */
return platform().region_alloc().alloc_aligned(size, (unsigned)alignment).convert<Map_local_result>(
[&] (void *local_base) {
if (!map_local_io(base, (addr_t)local_base, size >> get_page_size_log2())) {
error("map_local_io failed");
error("map_local_io failed ", Hex_range(base, size));
platform().region_alloc().free(local_base, base);
return 0UL;
return Map_local_result();
}
return (addr_t)local_base;
return Map_local_result { .core_local_addr = addr_t(local_base),
.success = true };
},
[&] (Range_allocator::Alloc_error) {
error("allocation of virtual memory for local I/O mapping failed");
return 0UL; });
return Map_local_result(); });
}

View File

@@ -153,3 +153,6 @@ Pager_capability Pager_entrypoint::manage(Pager_object &obj)
},
[&] (Cpu_session::Create_thread_error) { return Pager_capability(); });
}
void Core::init_page_fault_handling(Rpc_entrypoint &) { }

View File

@@ -18,6 +18,7 @@
#include <dataspace/capability.h>
#include <trace/source_registry.h>
#include <util/misc_math.h>
#include <util/mmio.h>
#include <util/xml_generator.h>
/* base-internal includes */
@@ -342,6 +343,76 @@ void Core::Platform::_setup_irq_alloc()
}
struct Acpi_rsdp : public Genode::Mmio<32>
{
using Mmio<32>::Mmio;
struct Signature : Register< 0, 64> { };
struct Revision : Register<15, 8> { };
struct Rsdt : Register<16, 32> { };
struct Length : Register<20, 32> { };
struct Xsdt : Register<24, 64> { };
bool valid() const
{
const char sign[] = "RSD PTR ";
return read<Signature>() == *(Genode::uint64_t *)sign;
}
} __attribute__((packed));
static void add_acpi_rsdp(auto &region_alloc, auto &xml)
{
using namespace Foc;
using Foc::L4::Kip::Mem_desc;
l4_kernel_info_t const &kip = sigma0_map_kip();
Mem_desc const * const desc = Mem_desc::first(&kip);
if (!desc)
return;
for (unsigned i = 0; i < Mem_desc::count(&kip); ++i) {
if (desc[i].type() != Mem_desc::Mem_type::Info ||
desc[i].sub_type() != Mem_desc::Info_sub_type::Info_acpi_rsdp)
continue;
auto offset = desc[i].start() & 0xffful;
auto pages = align_addr(offset + desc[i].size(), 12) >> 12;
region_alloc.alloc_aligned(pages * 4096, 12).with_result([&] (void *core_local_ptr) {
if (!map_local_io(desc[i].start(), (addr_t)core_local_ptr, pages))
return;
Byte_range_ptr const ptr((char *)(addr_t(core_local_ptr) + offset),
pages * 4096 - offset);
auto const rsdp = Acpi_rsdp(ptr);
if (!rsdp.valid())
return;
xml.node("acpi", [&] {
xml.attribute("revision", rsdp.read<Acpi_rsdp::Revision>());
if (rsdp.read<Acpi_rsdp::Rsdt>())
xml.attribute("rsdt", String<32>(Hex(rsdp.read<Acpi_rsdp::Rsdt>())));
if (rsdp.read<Acpi_rsdp::Xsdt>())
xml.attribute("xsdt", String<32>(Hex(rsdp.read<Acpi_rsdp::Xsdt>())));
});
unmap_local(addr_t(core_local_ptr), pages);
region_alloc.free(core_local_ptr);
pages = 0;
}, [&] (Range_allocator::Alloc_error) { });
if (!pages)
return;
}
}
void Core::Platform::_setup_basics()
{
using namespace Foc;
@@ -412,6 +483,10 @@ void Core::Platform::_setup_basics()
/* image is accessible by core */
add_region(Region(img_start, img_end), _core_address_ranges());
/* requested as I/O memory by the VESA driver and ACPI (rsdp search) */
_io_mem_alloc.add_range (0, 0x2000);
ram_alloc() .remove_range(0, 0x2000);
}
@@ -517,7 +592,10 @@ Core::Platform::Platform()
xml.node("affinity-space", [&] {
xml.attribute("width", affinity_space().width());
xml.attribute("height", affinity_space().height()); });
xml.attribute("height", affinity_space().height());
});
add_acpi_rsdp(region_alloc(), xml);
});
}
);

View File

@@ -18,7 +18,6 @@
/* core includes */
#include <platform_thread.h>
#include <platform.h>
#include <core_env.h>
/* Fiasco.OC includes */
#include <foc/syscall.h>
@@ -210,7 +209,7 @@ Foc_thread_state Platform_thread::state()
s = _pager_obj->state.state;
s.kcap = _gate.remote;
s.id = (uint16_t)_gate.local.local_name();
s.id = Cap_index::id_t(_gate.local.local_name());
s.utcb = _utcb;
return s;
@@ -278,7 +277,8 @@ void Platform_thread::_finalize_construction()
}
Platform_thread::Platform_thread(Platform_pd &pd, size_t, const char *name, unsigned prio,
Platform_thread::Platform_thread(Platform_pd &pd, Rpc_entrypoint &, Ram_allocator &,
Region_map &, size_t, const char *name, unsigned prio,
Affinity::Location location, addr_t)
:
_name(name),

View File

@@ -38,7 +38,7 @@ using namespace Core;
Cap_index_allocator &Genode::cap_idx_alloc()
{
static Cap_index_allocator_tpl<Core_cap_index,10*1024> alloc;
static Cap_index_allocator_tpl<Core_cap_index, 128 * 1024> alloc;
return alloc;
}
@@ -190,7 +190,7 @@ Cap_id_allocator::Cap_id_allocator(Allocator &alloc)
:
_id_alloc(&alloc)
{
_id_alloc.add_range(CAP_ID_OFFSET, CAP_ID_RANGE);
_id_alloc.add_range(CAP_ID_OFFSET, unsigned(CAP_ID_RANGE) - unsigned(CAP_ID_OFFSET));
}
@@ -213,7 +213,7 @@ void Cap_id_allocator::free(id_t id)
Mutex::Guard lock_guard(_mutex);
if (id < CAP_ID_RANGE)
_id_alloc.free((void*)(id & CAP_ID_MASK), CAP_ID_OFFSET);
_id_alloc.free((void*)(addr_t(id & CAP_ID_MASK)), CAP_ID_OFFSET);
}

View File

@@ -12,7 +12,6 @@
*/
/* core includes */
#include <core_env.h>
#include <platform_services.h>
#include <vm_root.h>
#include <io_port_root.h>
@@ -24,15 +23,15 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &heap,
Registry<Service> &services,
Trace::Source_registry &trace_sources,
Ram_allocator &)
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &io_port_ranges)
{
static Vm_root vm_root(ep, heap, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, heap, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm(services, vm_root);
static Io_port_root io_root(*core_env().pd_session(),
platform().io_port_alloc(), heap);
static Io_port_root io_root(io_port_ranges, heap);
static Core_service<Io_port_session_component> io_port(services, io_root);
}

View File

@@ -22,7 +22,6 @@
/* core includes */
#include <platform.h>
#include <core_env.h>
/* Fiasco.OC includes */
#include <foc/syscall.h>

View File

@@ -30,12 +30,13 @@ class Genode::Native_capability::Data : public Avl_node<Data>
{
public:
using id_t = uint16_t;
using id_t = unsigned;
constexpr static id_t INVALID_ID = ~0u;
private:
constexpr static uint16_t INVALID_ID = ~0;
constexpr static uint16_t UNUSED = 0;
constexpr static id_t UNUSED = 0;
uint8_t _ref_cnt; /* reference counter */
id_t _id; /* global capability id */
@@ -46,8 +47,8 @@ class Genode::Native_capability::Data : public Avl_node<Data>
bool valid() const { return _id != INVALID_ID; }
bool used() const { return _id != UNUSED; }
uint16_t id() const { return _id; }
void id(uint16_t id) { _id = id; }
id_t id() const { return _id; }
void id(id_t id) { _id = id; }
uint8_t inc();
uint8_t dec();
addr_t kcap() const;

View File

@@ -3,11 +3,11 @@
* \author Stefan Kalkowski
* \date 2010-12-06
*
* This is a Fiasco.OC-specific addition to the process enviroment.
* This is a Fiasco.OC-specific addition to the process environment.
*/
/*
* Copyright (C) 2010-2017 Genode Labs GmbH
* Copyright (C) 2010-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
@@ -59,7 +59,7 @@ static volatile int _cap_index_spinlock = SPINLOCK_UNLOCKED;
bool Cap_index::higher(Cap_index *n) { return n->_id > _id; }
Cap_index* Cap_index::find_by_id(uint16_t id)
Cap_index* Cap_index::find_by_id(id_t id)
{
if (_id == id) return this;
@@ -116,8 +116,8 @@ Cap_index* Capability_map::insert(Cap_index::id_t id)
{
Spin_lock::Guard guard(_lock);
ASSERT(!_tree.first() || !_tree.first()->find_by_id(id),
"Double insertion in cap_map()!");
if (_tree.first() && _tree.first()->find_by_id(id))
return { };
Cap_index * const i = cap_idx_alloc().alloc_range(1);
if (i) {
@@ -184,9 +184,16 @@ Cap_index* Capability_map::insert_map(Cap_index::id_t id, addr_t kcap)
_tree.insert(i);
/* map the given cap to our registry entry */
l4_task_map(L4_BASE_TASK_CAP, L4_BASE_TASK_CAP,
l4_obj_fpage(kcap, 0, L4_FPAGE_RWX),
i->kcap() | L4_ITEM_MAP | L4_MAP_ITEM_GRANT);
auto const msg = l4_task_map(L4_BASE_TASK_CAP, L4_BASE_TASK_CAP,
l4_obj_fpage(kcap, 0, L4_FPAGE_RWX),
i->kcap() | L4_ITEM_MAP | L4_MAP_ITEM_GRANT);
if (l4_error(msg)) {
_tree.remove(i);
cap_idx_alloc().free(i, 1);
return 0;
}
return i;
}

View File

@@ -55,9 +55,6 @@ static inline bool ipc_error(l4_msgtag_t tag, bool print)
}
static constexpr Cap_index::id_t INVALID_BADGE = 0xffff;
/**
* Representation of a capability during UTCB marshalling/unmarshalling
*/
@@ -116,7 +113,7 @@ static int extract_msg_from_utcb(l4_msgtag_t tag,
Cap_index::id_t const badge = (Cap_index::id_t)(*msg_words++);
if (badge == INVALID_BADGE)
if (badge == Cap_index::INVALID_ID)
continue;
/* received a delegated capability */
@@ -227,7 +224,7 @@ static l4_msgtag_t copy_msgbuf_to_utcb(Msgbuf_base &snd_msg,
for (unsigned i = 0; i < num_caps; i++) {
/* store badge as normal message word */
*msg_words++ = caps[i].valid ? caps[i].badge : INVALID_BADGE;
*msg_words++ = caps[i].valid ? caps[i].badge : Cap_index::INVALID_ID;
/* setup flexpage for valid capability to delegate */
if (caps[i].valid) {

View File

@@ -42,7 +42,6 @@ namespace Foc {
using namespace Genode;
using Exit_config = Vm_connection::Exit_config;
using Call_with_state = Vm_connection::Call_with_state;
enum Virt { VMX, SVM, UNKNOWN };
@@ -72,8 +71,7 @@ struct Foc_native_vcpu_rpc : Rpc_client<Vm_session::Native_vcpu>, Noncopyable
Capability<Vm_session::Native_vcpu> _create_vcpu(Vm_connection &vm,
Thread_capability &cap)
{
return vm.with_upgrade([&] {
return vm.call<Vm_session::Rpc_create_vcpu>(cap); });
return vm.create_vcpu(cap);
}
public:
@@ -1342,7 +1340,7 @@ struct Foc_vcpu : Thread, Noncopyable
_wake_up.up();
}
void with_state(Call_with_state &cw)
void with_state(auto const &fn)
{
if (!_dispatching) {
if (Thread::myself() != _ep_handler) {
@@ -1375,7 +1373,7 @@ struct Foc_vcpu : Thread, Noncopyable
_state_ready.down();
}
if (cw.call_with_state(_vcpu_state)
if (fn(_vcpu_state)
|| _extra_dispatch_up)
resume();
@@ -1417,7 +1415,10 @@ static enum Virt virt_type(Env &env)
** vCPU API **
**************/
void Vm_connection::Vcpu::_with_state(Call_with_state &cw) { static_cast<Foc_native_vcpu_rpc &>(_native_vcpu).vcpu.with_state(cw); }
void Vm_connection::Vcpu::_with_state(With_state::Ft const &fn)
{
static_cast<Foc_native_vcpu_rpc &>(_native_vcpu).vcpu.with_state(fn);
}
Vm_connection::Vcpu::Vcpu(Vm_connection &vm, Allocator &alloc,

View File

@@ -382,13 +382,10 @@ namespace Kernel {
* Halt processing of a signal context synchronously
*
* \param context capability ID of the targeted signal context
*
* \retval 0 succeeded
* \retval -1 failed
*/
inline int kill_signal_context(capid_t const context)
inline void kill_signal_context(capid_t const context)
{
return (int)call(call_id_kill_signal_context(), context);
call(call_id_kill_signal_context(), context);
}
/**

View File

@@ -11,13 +11,15 @@
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__SPEC__X86_64__PORT_IO_H_
#define _CORE__SPEC__X86_64__PORT_IO_H_
#ifndef _INCLUDE__SPEC__X86_64__PORT_IO_H_
#define _INCLUDE__SPEC__X86_64__PORT_IO_H_
/* core includes */
#include <types.h>
#include <base/fixed_stdint.h>
namespace Core {
namespace Hw {
using Genode::uint8_t;
using Genode::uint16_t;
/**
* Read byte from I/O port
@@ -38,4 +40,4 @@ namespace Core {
}
}
#endif /* _CORE__SPEC__X86_64__PORT_IO_H_ */
#endif /* _INCLUDE__SPEC__X86_64__PORT_IO_H_ */

View File

@@ -10,7 +10,6 @@ SRC_CC += lib/base/allocator_avl.cc
SRC_CC += lib/base/avl_tree.cc
SRC_CC += lib/base/elf_binary.cc
SRC_CC += lib/base/heap.cc
SRC_CC += lib/base/regional_heap.cc
SRC_CC += lib/base/registry.cc
SRC_CC += lib/base/log.cc
SRC_CC += lib/base/output.cc

View File

@@ -46,7 +46,6 @@ SRC_CC += ram_dataspace_factory.cc
SRC_CC += signal_transmitter_noinit.cc
SRC_CC += thread_start.cc
SRC_CC += env.cc
SRC_CC += region_map_support.cc
SRC_CC += pager.cc
SRC_CC += _main.cc
SRC_CC += kernel/cpu.cc
@@ -55,13 +54,14 @@ SRC_CC += kernel/ipc_node.cc
SRC_CC += kernel/irq.cc
SRC_CC += kernel/main.cc
SRC_CC += kernel/object.cc
SRC_CC += kernel/signal_receiver.cc
SRC_CC += kernel/signal.cc
SRC_CC += kernel/thread.cc
SRC_CC += kernel/timer.cc
SRC_CC += capability.cc
SRC_CC += stack_area_addr.cc
SRC_CC += heartbeat.cc
BOARD ?= unknown
CC_OPT_platform += -DBOARD_NAME="\"$(BOARD)\""
# provide Genode version information

View File

@@ -22,12 +22,9 @@ SRC_CC += kernel/vm_thread_on.cc
SRC_CC += spec/x86_64/virtualization/kernel/vm.cc
SRC_CC += spec/x86_64/virtualization/kernel/svm.cc
SRC_CC += spec/x86_64/virtualization/kernel/vmx.cc
SRC_CC += spec/x86_64/virtualization/vm_session_component.cc
SRC_CC += vm_session_common.cc
SRC_CC += vm_session_component.cc
SRC_CC += kernel/lock.cc
SRC_CC += spec/x86_64/pic.cc
SRC_CC += spec/x86_64/pit.cc
SRC_CC += spec/x86_64/timer.cc
SRC_CC += spec/x86_64/kernel/thread_exception.cc
SRC_CC += spec/x86_64/platform_support.cc
SRC_CC += spec/x86_64/virtualization/platform_services.cc

View File

@@ -200,6 +200,7 @@ generalize_target_names: $(CONTENT)
# supplement BOARD definition that normally comes from the build dir
sed -i "s/\?= unknown/:= $(BOARD)/" src/core/hw/target.mk
sed -i "s/\?= unknown/:= $(BOARD)/" src/bootstrap/hw/target.mk
sed -i "s/\?= unknown/:= $(BOARD)/" lib/mk/core-hw.inc
# discharge targets when building for mismatching architecture
sed -i "1aREQUIRES := $(ARCH)" src/core/hw/target.mk
sed -i "1aREQUIRES := $(ARCH)" src/bootstrap/hw/target.mk

View File

@@ -16,7 +16,6 @@
/* base includes */
#include <base/internal/globals.h>
#include <base/internal/unmanaged_singleton.h>
using namespace Genode;
@@ -26,13 +25,23 @@ size_t bootstrap_stack_size = STACK_SIZE;
uint8_t bootstrap_stack[Board::NR_OF_CPUS][STACK_SIZE]
__attribute__((aligned(get_page_size())));
Bootstrap::Platform & Bootstrap::platform() {
return *unmanaged_singleton<Bootstrap::Platform>(); }
Bootstrap::Platform & Bootstrap::platform()
{
/*
* Don't use static local variable because cmpxchg cannot be executed
* w/o MMU on ARMv6.
*/
static long _obj[(sizeof(Bootstrap::Platform)+sizeof(long))/sizeof(long)];
static Bootstrap::Platform *ptr;
if (!ptr)
ptr = construct_at<Bootstrap::Platform>(_obj);
return *ptr;
}
extern "C" void init() __attribute__ ((noreturn));
extern "C" void init()
{
Bootstrap::Platform & p = Bootstrap::platform();

View File

@@ -20,7 +20,6 @@
#include <base/internal/globals.h>
#include <base/internal/output.h>
#include <base/internal/raw_write_string.h>
#include <base/internal/unmanaged_singleton.h>
#include <board.h>
@@ -55,7 +54,11 @@ struct Buffer
};
Genode::Log &Genode::Log::log() { return unmanaged_singleton<Buffer>()->log; }
Genode::Log &Genode::Log::log()
{
static Buffer buffer { };
return buffer.log;
}
void Genode::raw_write_string(char const *str) { log(str); }

View File

@@ -27,6 +27,7 @@ namespace Bootstrap {
using Genode::addr_t;
using Genode::size_t;
using Genode::uint32_t;
using Boot_info = Hw::Boot_info<::Board::Boot_info>;
using Hw::Mmio_space;
using Hw::Mapping;

View File

@@ -18,10 +18,12 @@
#include <platform.h>
#include <multiboot.h>
#include <multiboot2.h>
#include <port_io.h>
#include <hw/memory_consts.h>
#include <hw/spec/x86_64/acpi.h>
#include <hw/spec/x86_64/apic.h>
#include <hw/spec/x86_64/x86_64.h>
using namespace Genode;
@@ -66,6 +68,108 @@ static Hw::Acpi_rsdp search_rsdp(addr_t area, addr_t area_size)
}
static uint32_t calibrate_tsc_frequency(addr_t fadt_addr)
{
uint32_t const default_freq = 2'400'000;
if (!fadt_addr) {
warning("FADT not found, returning fixed TSC frequency of ", default_freq, "kHz");
return default_freq;
}
uint32_t const sleep_ms = 10;
Hw::Acpi_fadt fadt(reinterpret_cast<Hw::Acpi_generic *>(fadt_addr));
uint32_t const freq = fadt.calibrate_freq_khz(sleep_ms, []() { return Hw::Tsc::rdtsc(); });
if (!freq) {
warning("Unable to calibrate TSC, returning fixed TSC frequency of ", default_freq, "kHz");
return default_freq;
}
return freq;
}
static Hw::Local_apic::Calibration calibrate_lapic_frequency(addr_t fadt_addr)
{
uint32_t const default_freq = TIMER_MIN_TICKS_PER_MS;
if (!fadt_addr) {
warning("FADT not found, setting minimum Local APIC frequency of ", default_freq, "kHz");
return { default_freq, 1 };
}
uint32_t const sleep_ms = 10;
Hw::Acpi_fadt fadt(reinterpret_cast<Hw::Acpi_generic *>(fadt_addr));
Hw::Local_apic lapic(Hw::Cpu_memory_map::lapic_phys_base());
auto const result =
lapic.calibrate_divider([&] {
return fadt.calibrate_freq_khz(sleep_ms, [&] {
return lapic.read<Hw::Local_apic::Tmr_current>(); }, true); });
if (!result.freq_khz) {
warning("Local APIC calibration failed, setting minimum frequency of ", default_freq, "kHz");
return { default_freq, 1 };
}
return result;
}
static void disable_pit()
{
using Hw::outb;
enum {
/* PIT constants */
PIT_CH0_DATA = 0x40,
PIT_MODE = 0x43,
};
/*
* Disable PIT timer channel. This is necessary since BIOS sets up
* channel 0 to fire periodically.
*/
outb(PIT_MODE, 0x30);
outb(PIT_CH0_DATA, 0);
outb(PIT_CH0_DATA, 0);
}
/*
* Enable dispatch serializing lfence instruction on AMD processors
*
* See Software techniques for managing speculation on AMD processors
* Revision 5.09.23
* Mitigation G-2
*/
static void amd_enable_serializing_lfence()
{
using Cpu = Hw::X86_64_cpu;
if (Hw::Vendor::get_vendor_id() != Hw::Vendor::Vendor_id::AMD)
return;
unsigned const family = Hw::Vendor::get_family();
/*
* In family 0Fh and 11h, lfence is always dispatch serializing and
* "AMD plans support for this MSR and access to this bit for all future
* processors." from family 14h on.
*/
if ((family == 0x10) || (family == 0x12) || (family >= 0x14)) {
Cpu::Amd_lfence::access_t amd_lfence = Cpu::Amd_lfence::read();
Cpu::Amd_lfence::Enable_dispatch_serializing::set(amd_lfence);
Cpu::Amd_lfence::write(amd_lfence);
}
}
Bootstrap::Platform::Board::Board()
:
core_mmio(Memory_region { 0, 0x1000 },
@@ -250,6 +354,21 @@ Bootstrap::Platform::Board::Board()
cpus = !cpus ? 1 : max_cpus;
}
/*
* Enable serializing lfence on supported AMD processors
*
* For APs this will be set up later, but we need it already to obtain
the most accurate results when calibrating the TSC frequency.
*/
amd_enable_serializing_lfence();
auto r = calibrate_lapic_frequency(info.acpi_fadt);
info.lapic_freq_khz = r.freq_khz;
info.lapic_div = r.div;
info.tsc_freq_khz = calibrate_tsc_frequency(info.acpi_fadt);
disable_pit();
/* copy 16 bit boot code for AP CPUs and for ACPI resume */
addr_t ap_code_size = (addr_t)&_start - (addr_t)&_ap;
memcpy((void *)AP_BOOT_CODE_PAGE, &_ap, ap_code_size);
@@ -319,9 +438,12 @@ unsigned Bootstrap::Platform::enable_mmu()
if (board.cpus <= 1)
return (unsigned)cpu_id;
if (!Cpu::IA32_apic_base::Bsp::get(lapic_msr))
if (!Cpu::IA32_apic_base::Bsp::get(lapic_msr)) {
/* AP - done */
/* enable serializing lfence on supported AMD processors. */
amd_enable_serializing_lfence();
return (unsigned)cpu_id;
}
/* BSP - we're primary CPU - wake now all other CPUs */

View File

@@ -21,7 +21,7 @@
/* base-hw core includes */
#include <spec/x86_64/pic.h>
#include <spec/x86_64/pit.h>
#include <spec/x86_64/timer.h>
#include <spec/x86_64/cpu.h>
namespace Board {

View File

@@ -0,0 +1,275 @@
/*
* \brief Guest memory abstraction
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2024-11-25
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__GUEST_MEMORY_H_
#define _CORE__GUEST_MEMORY_H_
/* base includes */
#include <base/allocator.h>
#include <base/allocator_avl.h>
#include <vm_session/vm_session.h>
#include <dataspace/capability.h>
/* core includes */
#include <dataspace_component.h>
#include <region_map_component.h>
namespace Core { class Guest_memory; }
using namespace Core;
class Core::Guest_memory
{
private:
using Avl_region = Allocator_avl_tpl<Rm_region>;
using Attach_attr = Genode::Vm_session::Attach_attr;
Sliced_heap _sliced_heap;
Avl_region _map { &_sliced_heap };
uint8_t _remaining_print_count { 10 };
void _with_region(addr_t const addr, auto const &fn)
{
Rm_region *region = _map.metadata((void *)addr);
if (region)
fn(*region);
else
if (_remaining_print_count) {
error(__PRETTY_FUNCTION__, " unknown region");
_remaining_print_count--;
}
}
public:
enum class Attach_result {
OK,
INVALID_DS,
OUT_OF_RAM,
OUT_OF_CAPS,
REGION_CONFLICT,
};
Attach_result attach(Region_map_detach &rm_detach,
Dataspace_component &dsc,
addr_t const guest_phys,
Attach_attr attr,
auto const &map_fn)
{
/*
* unsupported - deny otherwise arbitrary physical
* memory can be mapped to a VM
*/
if (dsc.managed())
return Attach_result::INVALID_DS;
if (guest_phys & 0xffful || attr.offset & 0xffful ||
attr.size & 0xffful)
return Attach_result::INVALID_DS;
if (!attr.size) {
attr.size = dsc.size();
if (attr.offset < attr.size)
attr.size -= attr.offset;
}
if (attr.size > dsc.size())
attr.size = dsc.size();
if (attr.offset >= dsc.size() ||
attr.offset > dsc.size() - attr.size)
return Attach_result::INVALID_DS;
using Alloc_error = Range_allocator::Alloc_error;
Attach_result const retval = _map.alloc_addr(attr.size, guest_phys).convert<Attach_result>(
[&] (void *) {
Rm_region::Attr const region_attr
{
.base = guest_phys,
.size = attr.size,
.write = dsc.writeable() && attr.writeable,
.exec = attr.executable,
.off = attr.offset,
.dma = false,
};
/* store attachment info in meta data */
try {
_map.construct_metadata((void *)guest_phys,
dsc, rm_detach, region_attr);
} catch (Allocator_avl_tpl<Rm_region>::Assign_metadata_failed) {
if (_remaining_print_count) {
error("failed to store attachment info");
_remaining_print_count--;
}
return Attach_result::INVALID_DS;
}
Rm_region &region = *_map.metadata((void *)guest_phys);
/* inform dataspace about attachment */
dsc.attached_to(region);
return Attach_result::OK;
},
[&] (Alloc_error error) {
switch (error) {
case Alloc_error::OUT_OF_RAM:
return Attach_result::OUT_OF_RAM;
case Alloc_error::OUT_OF_CAPS:
return Attach_result::OUT_OF_CAPS;
case Alloc_error::DENIED:
{
/*
* Handle attach after partial detach
*/
Rm_region *region_ptr = _map.metadata((void *)guest_phys);
if (!region_ptr)
return Attach_result::REGION_CONFLICT;
Rm_region &region = *region_ptr;
bool conflict = false;
region.with_dataspace([&] (Dataspace_component &dataspace) {
(void)dataspace;
if (!(dsc.cap() == dataspace.cap()))
conflict = true;
});
if (conflict)
return Attach_result::REGION_CONFLICT;
if (guest_phys < region.base() ||
guest_phys > region.base() + region.size() - 1)
return Attach_result::REGION_CONFLICT;
}
};
return Attach_result::OK;
}
);
if (retval == Attach_result::OK) {
addr_t phys_addr = dsc.phys_addr() + attr.offset;
size_t size = attr.size;
map_fn(guest_phys, phys_addr, size);
}
return retval;
}
void detach(addr_t guest_phys,
size_t size,
auto const &unmap_fn)
{
if (!size || (guest_phys & 0xffful) || (size & 0xffful)) {
if (_remaining_print_count) {
warning("vm_session: skipping invalid memory detach addr=",
(void *)guest_phys, " size=", (void *)size);
_remaining_print_count--;
}
return;
}
addr_t const guest_phys_end = guest_phys + (size - 1);
addr_t addr = guest_phys;
do {
Rm_region *region = _map.metadata((void *)addr);
/* walk region holes page-by-page */
size_t iteration_size = 0x1000;
if (region) {
iteration_size = region->size();
detach_at(region->base(), unmap_fn);
}
if (addr >= guest_phys_end - (iteration_size - 1))
break;
addr += iteration_size;
} while (true);
}
Guest_memory(Constrained_ram_allocator &constrained_md_ram_alloc,
Region_map &region_map)
:
_sliced_heap(constrained_md_ram_alloc, region_map)
{
/* configure managed VM area */
_map.add_range(0UL, ~0UL);
}
~Guest_memory()
{
/* detach all regions */
while (true) {
addr_t out_addr = 0;
if (!_map.any_block_addr(&out_addr))
break;
detach_at(out_addr, [](addr_t, size_t) { });
}
}
void detach_at(addr_t addr,
auto const &unmap_fn)
{
_with_region(addr, [&] (Rm_region &region) {
if (!region.reserved())
reserve_and_flush(addr, unmap_fn);
/* free the reserved region */
_map.free(reinterpret_cast<void *>(region.base()));
});
}
void reserve_and_flush(addr_t addr,
auto const &unmap_fn)
{
_with_region(addr, [&] (Rm_region &region) {
/* inform dataspace */
region.with_dataspace([&] (Dataspace_component &dataspace) {
dataspace.detached_from(region);
});
region.mark_as_reserved();
unmap_fn(region.base(), region.size());
});
}
};
#endif /* _CORE__GUEST_MEMORY_H_ */

View File

@@ -21,5 +21,7 @@ using namespace Core;
void Io_mem_session_component::_unmap_local(addr_t, size_t, addr_t) { }
addr_t Io_mem_session_component::_map_local(addr_t base, size_t) { return base; }
Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const base, size_t)
{
return { .core_local_addr = base, .success = true };
}

View File

@@ -18,7 +18,7 @@
/* core includes */
#include <kernel/irq.h>
#include <irq_root.h>
#include <core_env.h>
#include <platform.h>
/* base-internal includes */
#include <base/internal/capability_space.h>

View File

@@ -66,6 +66,7 @@ namespace Kernel {
constexpr Call_arg call_id_set_cpu_state() { return 125; }
constexpr Call_arg call_id_exception_state() { return 126; }
constexpr Call_arg call_id_single_step() { return 127; }
constexpr Call_arg call_id_ack_pager_signal() { return 128; }
/**
* Invalidate TLB entries for the `pd` in region `addr`, `sz`
@@ -137,10 +138,9 @@ namespace Kernel {
* \retval 0 succeeded
* \retval !=0 failed
*/
inline int start_thread(Thread & thread, unsigned const cpu_id,
Pd & pd, Native_utcb & utcb)
inline int start_thread(Thread & thread, Pd & pd, Native_utcb & utcb)
{
return (int)call(call_id_start_thread(), (Call_arg)&thread, cpu_id,
return (int)call(call_id_start_thread(), (Call_arg)&thread,
(Call_arg)&pd, (Call_arg)&utcb);
}
@@ -148,13 +148,16 @@ namespace Kernel {
/**
* Set or unset the handler of an event that can be triggered by a thread
*
* \param thread pointer to thread kernel object
* \param thread reference to thread kernel object
* \param pager reference to pager kernel object
* \param signal_context_id capability id of the page-fault handler
*/
inline void thread_pager(Thread & thread,
inline void thread_pager(Thread &thread,
Thread &pager,
capid_t const signal_context_id)
{
call(call_id_thread_pager(), (Call_arg)&thread, signal_context_id);
call(call_id_thread_pager(), (Call_arg)&thread, (Call_arg)&pager,
signal_context_id);
}
@@ -203,6 +206,18 @@ namespace Kernel {
{
call(call_id_single_step(), (Call_arg)&thread, (Call_arg)&on);
}
/**
* Acknowledge a signal transmitted to a pager
*
* \param context signal context to acknowledge
* \param thread reference to faulting thread kernel object
 * \param resolved whether the fault got resolved
*/
inline void ack_pager_signal(capid_t const context, Thread &thread, bool resolved)
{
call(call_id_ack_pager_signal(), context, (Call_arg)&thread, resolved);
}
}
#endif /* _CORE__KERNEL__CORE_INTERFACE_H_ */

View File

@@ -27,35 +27,35 @@
using namespace Kernel;
/*************
** Cpu_job **
*************/
/*****************
** Cpu_context **
*****************/
void Cpu_job::_activate_own_share() { _cpu->schedule(this); }
void Cpu_context::_activate() { _cpu().schedule(*this); }
void Cpu_job::_deactivate_own_share()
void Cpu_context::_deactivate()
{
assert(_cpu->id() == Cpu::executing_id());
_cpu->scheduler().unready(*this);
assert(_cpu().id() == Cpu::executing_id());
_cpu().scheduler().unready(*this);
}
void Cpu_job::_yield()
void Cpu_context::_yield()
{
assert(_cpu->id() == Cpu::executing_id());
_cpu->scheduler().yield();
assert(_cpu().id() == Cpu::executing_id());
_cpu().scheduler().yield();
}
void Cpu_job::_interrupt(Irq::Pool &user_irq_pool, unsigned const /* cpu_id */)
void Cpu_context::_interrupt(Irq::Pool &user_irq_pool)
{
/* let the IRQ controller take a pending IRQ for handling, if any */
unsigned irq_id;
if (_cpu->pic().take_request(irq_id))
if (_cpu().pic().take_request(irq_id))
/* let the CPU of this job handle the IRQ if it is a CPU-local one */
if (!_cpu->handle_if_cpu_local_interrupt(irq_id)) {
/* let the CPU of this context handle the IRQ if it is a CPU-local one */
if (!_cpu().handle_if_cpu_local_interrupt(irq_id)) {
/* it isn't a CPU-local IRQ, so, it must be a user IRQ */
User_irq * irq = User_irq::object(user_irq_pool, irq_id);
@@ -64,38 +64,37 @@ void Cpu_job::_interrupt(Irq::Pool &user_irq_pool, unsigned const /* cpu_id */)
}
/* let the IRQ controller finish the currently taken IRQ */
_cpu->pic().finish_request();
_cpu().pic().finish_request();
}
void Cpu_job::affinity(Cpu &cpu)
void Cpu_context::affinity(Cpu &cpu)
{
_cpu = &cpu;
_cpu->scheduler().insert(*this);
_cpu().scheduler().remove(*this);
_cpu_ptr = &cpu;
_cpu().scheduler().insert(*this);
}
void Cpu_job::quota(unsigned const q)
void Cpu_context::quota(unsigned const q)
{
if (_cpu)
_cpu->scheduler().quota(*this, q);
else
Context::quota(q);
_cpu().scheduler().quota(*this, q);
}
Cpu_job::Cpu_job(Priority const p, unsigned const q)
Cpu_context::Cpu_context(Cpu &cpu,
Priority const priority,
unsigned const quota)
:
Context(p, q), _cpu(0)
{ }
Cpu_job::~Cpu_job()
Context(priority, quota), _cpu_ptr(&cpu)
{
if (!_cpu)
return;
_cpu().scheduler().insert(*this);
}
_cpu->scheduler().remove(*this);
Cpu_context::~Cpu_context()
{
_cpu().scheduler().remove(*this);
}
@@ -112,19 +111,17 @@ Cpu::Idle_thread::Idle_thread(Board::Address_space_id_allocator &addr_space_id_a
Cpu &cpu,
Pd &core_pd)
:
Thread { addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd,
Priority::min(), 0, "idle", Thread::IDLE }
Thread { addr_space_id_alloc, user_irq_pool, cpu_pool, cpu,
core_pd, Priority::min(), 0, "idle", Thread::IDLE }
{
regs->ip = (addr_t)&idle_thread_main;
affinity(cpu);
Thread::_pd = &core_pd;
}
void Cpu::schedule(Job * const job)
void Cpu::schedule(Context &context)
{
_scheduler.ready(job->context());
_scheduler.ready(static_cast<Scheduler::Context&>(context));
if (_id != executing_id() && _scheduler.need_to_schedule())
trigger_ip_interrupt();
}
@@ -142,33 +139,34 @@ bool Cpu::handle_if_cpu_local_interrupt(unsigned const irq_id)
}
Cpu_job & Cpu::schedule()
Cpu::Context & Cpu::handle_exception_and_schedule()
{
/* update scheduler */
Job & old_job = scheduled_job();
old_job.exception(*this);
Context &context = current_context();
context.exception();
if (_state == SUSPEND || _state == HALT)
return _halt_job;
/* update schedule if necessary */
if (_scheduler.need_to_schedule()) {
_timer.process_timeouts();
_scheduler.update(_timer.time());
time_t t = _scheduler.current_time_left();
_timer.set_timeout(&_timeout, t);
time_t duration = _timer.schedule_timeout();
old_job.update_execution_time(duration);
context.update_execution_time(duration);
}
/* return new job */
return scheduled_job();
/* return current context */
return current_context();
}
addr_t Cpu::stack_start()
{
return Abi::stack_align(Hw::Mm::cpu_local_memory().base +
(1024*1024*_id) + (64*1024));
(Hw::Mm::CPU_LOCAL_MEMORY_SLOT_SIZE*_id)
+ Hw::Mm::KERNEL_STACK_SIZE);
}

View File

@@ -39,9 +39,11 @@ namespace Kernel {
class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
public Genode::List<Cpu>::Element
{
private:
public:
using Job = Cpu_job;
using Context = Cpu_context;
private:
/**
* Inter-processor-interrupt object of the cpu
@@ -83,16 +85,14 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
Pd &core_pd);
};
struct Halt_job : Job
struct Halt_job : Cpu_context
{
Halt_job() : Job (0, 0) { }
Halt_job(Cpu &cpu)
: Cpu_context(cpu, 0, 0) { }
void exception(Kernel::Cpu &) override { }
void proceed(Kernel::Cpu &) override;
Kernel::Cpu_job* helping_destination() override { return this; }
} _halt_job { };
void exception() override { }
void proceed() override;
} _halt_job { *this };
enum State { RUN, HALT, SUSPEND };
@@ -143,14 +143,14 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
bool handle_if_cpu_local_interrupt(unsigned const irq_id);
/**
* Schedule 'job' at this CPU
* Schedule 'context' at this CPU
*/
void schedule(Job * const job);
void schedule(Context& context);
/**
* Return the job that should be executed at next
* Return the context that should be executed next
*/
Cpu_job& schedule();
Context& handle_exception_and_schedule();
Board::Pic & pic() { return _pic; }
Timer & timer() { return _timer; }
@@ -158,10 +158,10 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
addr_t stack_start();
/**
* Returns the currently active job
* Returns the currently scheduled context
*/
Job & scheduled_job() {
return *static_cast<Job *>(&_scheduler.current())->helping_destination(); }
Context & current_context() {
return static_cast<Context&>(_scheduler.current().helping_destination()); }
unsigned id() const { return _id; }
Scheduler &scheduler() { return _scheduler; }

View File

@@ -22,46 +22,39 @@
namespace Kernel {
class Cpu;
/**
* Context of a job (thread, VM, idle) that shall be executed by a CPU
*/
class Cpu_job;
class Cpu_context;
}
class Kernel::Cpu_job : private Scheduler::Context
/**
* Context (thread, vcpu) that shall be executed by a CPU
*/
class Kernel::Cpu_context : private Scheduler::Context
{
private:
friend class Cpu; /* static_cast from 'Scheduler::Context' to 'Cpu_job' */
friend class Cpu;
time_t _execution_time { 0 };
Cpu *_cpu_ptr;
/*
* Noncopyable
*/
Cpu_job(Cpu_job const &);
Cpu_job &operator = (Cpu_job const &);
Cpu_context(Cpu_context const &);
Cpu_context &operator = (Cpu_context const &);
protected:
Cpu * _cpu;
Cpu &_cpu() const { return *_cpu_ptr; }
/**
 * Handle interrupt exception that occurred during execution on CPU 'id'
* Handle interrupt exception
*/
void _interrupt(Irq::Pool &user_irq_pool, unsigned const id);
void _interrupt(Irq::Pool &user_irq_pool);
/**
* Activate our own CPU-share
*/
void _activate_own_share();
/**
* Deactivate our own CPU-share
*/
void _deactivate_own_share();
void _activate();
void _deactivate();
/**
* Yield the currently scheduled CPU share of this context
@@ -69,55 +62,37 @@ class Kernel::Cpu_job : private Scheduler::Context
void _yield();
/**
 * Return whether we are allowed to help job 'j' with our CPU-share
 * Return whether helping context 'j' is possible scheduling-wise
*/
bool _helping_possible(Cpu_job const &j) const { return j._cpu == _cpu; }
bool _helping_possible(Cpu_context const &j) const {
return j._cpu_ptr == _cpu_ptr; }
void _help(Cpu_context &context) { Context::help(context); }
using Context::ready;
using Context::helping_finished;
public:
using Context = Scheduler::Context;
using Priority = Scheduler::Priority;
/**
 * Handle exception that occurred during execution on CPU 'id'
*/
virtual void exception(Cpu & cpu) = 0;
Cpu_context(Cpu &cpu,
Priority const priority,
unsigned const quota);
virtual ~Cpu_context();
/**
* Continue execution on CPU 'id'
*/
virtual void proceed(Cpu & cpu) = 0;
/**
* Return which job currently uses our CPU-share
*/
virtual Cpu_job * helping_destination() = 0;
/**
* Construct a job with scheduling priority 'p' and time quota 'q'
*/
Cpu_job(Priority const p, unsigned const q);
/**
* Destructor
*/
virtual ~Cpu_job();
/**
* Link job to CPU 'cpu'
* Link context to CPU 'cpu'
*/
void affinity(Cpu &cpu);
/**
* Set CPU quota of the job to 'q'
* Set CPU quota of the context to 'q'
*/
void quota(unsigned const q);
/**
 * Return whether our CPU-share is currently active
*/
bool own_share_active() { return Context::ready(); }
/**
* Update total execution time
*/
@@ -128,14 +103,15 @@ class Kernel::Cpu_job : private Scheduler::Context
*/
time_t execution_time() const { return _execution_time; }
/**
 * Handle exception that occurred during execution of this context
*/
virtual void exception() = 0;
/***************
** Accessors **
***************/
void cpu(Cpu &cpu) { _cpu = &cpu; }
Context &context() { return *this; }
/**
* Continue execution of this context
*/
virtual void proceed() = 0;
};
#endif /* _CORE__KERNEL__CPU_CONTEXT_H_ */

View File

@@ -11,8 +11,8 @@
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__KERNEL__SMP_H_
#define _CORE__KERNEL__SMP_H_
#ifndef _CORE__KERNEL__INTER_PROCESSOR_WORK_H_
#define _CORE__KERNEL__INTER_PROCESSOR_WORK_H_
#include <util/interface.h>
@@ -32,11 +32,11 @@ class Kernel::Inter_processor_work : Genode::Interface
{
public:
virtual void execute(Cpu &) = 0;
virtual void execute(Cpu & cpu) = 0;
protected:
Genode::List_element<Inter_processor_work> _le { this };
};
#endif /* _CORE__KERNEL__SMP_H_ */
#endif /* _CORE__KERNEL__INTER_PROCESSOR_WORK_H_ */

View File

@@ -57,19 +57,13 @@ void Ipc_node::_cancel_send()
}
bool Ipc_node::_helping() const
{
return _out.state == Out::SEND_HELPING && _out.node;
}
bool Ipc_node::ready_to_send() const
{
return _out.state == Out::READY && !_in.waiting();
}
void Ipc_node::send(Ipc_node &node, bool help)
void Ipc_node::send(Ipc_node &node)
{
node._in.queue.enqueue(_queue_item);
@@ -78,13 +72,7 @@ void Ipc_node::send(Ipc_node &node, bool help)
node._thread.ipc_await_request_succeeded();
}
_out.node = &node;
_out.state = help ? Out::SEND_HELPING : Out::SEND;
}
Thread &Ipc_node::helping_destination()
{
return _helping() ? _out.node->helping_destination() : _thread;
_out.state = Out::SEND;
}

View File

@@ -50,14 +50,14 @@ class Kernel::Ipc_node
struct Out
{
enum State { READY, SEND, SEND_HELPING, DESTRUCT };
enum State { READY, SEND, DESTRUCT };
State state { READY };
Ipc_node *node { nullptr };
bool sending() const
{
return state == SEND_HELPING || state == SEND;
return state == SEND;
}
};
@@ -76,11 +76,6 @@ class Kernel::Ipc_node
*/
void _cancel_send();
/**
 * Return whether this IPC node is helping another one
*/
bool _helping() const;
/**
* Noncopyable
*/
@@ -102,28 +97,8 @@ class Kernel::Ipc_node
* Send a message and wait for the according reply
*
* \param node targeted IPC node
 * \param help whether the request implies a helping relationship
*/
void send(Ipc_node &node, bool help);
/**
* Return final destination of the helping-chain
* this IPC node is part of, or its own thread otherwise
*/
Thread &helping_destination();
/**
* Call 'fn' of type 'void (Ipc_node *)' for each helper
*/
void for_each_helper(auto const &fn)
{
_in.queue.for_each([fn] (Queue_item &item) {
Ipc_node &node { item.object() };
if (node._helping())
fn(node._thread);
});
}
void send(Ipc_node &node);
/**
* Return whether this IPC node is ready to wait for messages

View File

@@ -20,7 +20,7 @@
#include <util/avl_tree.h>
/* core includes */
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
namespace Board {
@@ -161,9 +161,7 @@ class Kernel::User_irq : public Kernel::Irq
*/
void occurred() override
{
if (_context.can_submit(1)) {
_context.submit(1);
}
_context.submit(1);
disable();
}

View File

@@ -63,16 +63,16 @@ Kernel::Main *Kernel::Main::_instance;
void Kernel::Main::_handle_kernel_entry()
{
Cpu &cpu = _cpu_pool.cpu(Cpu::executing_id());
Cpu_job * new_job;
Cpu::Context * context;
{
Lock::Guard guard(_data_lock);
new_job = &cpu.schedule();
context =
&_cpu_pool.cpu(Cpu::executing_id()).handle_exception_and_schedule();
}
new_job->proceed(cpu);
context->proceed();
}

View File

@@ -19,6 +19,38 @@
using namespace Kernel;
void Scheduler::Context::help(Scheduler::Context &c)
{
_destination = &c;
c._helper_list.insert(&_helper_le);
}
void Scheduler::Context::helping_finished()
{
if (!_destination)
return;
_destination->_helper_list.remove(&_helper_le);
_destination = nullptr;
}
Scheduler::Context& Scheduler::Context::helping_destination()
{
return (_destination) ? _destination->helping_destination() : *this;
}
Scheduler::Context::~Context()
{
helping_finished();
for (Context::List_element *h = _helper_list.first(); h; h = h->next())
h->object()->helping_finished();
}
void Scheduler::_consumed(unsigned const time)
{
if (_super_period_left > time) {
@@ -149,7 +181,10 @@ void Scheduler::update(time_t time)
void Scheduler::ready(Context &c)
{
assert(!c.ready() && &c != &_idle);
assert(&c != &_idle);
if (c.ready())
return;
c._ready = true;
@@ -170,23 +205,33 @@ void Scheduler::ready(Context &c)
_slack_list.insert_head(&c._slack_le);
if (!keep_current && _state == UP_TO_DATE) _state = OUT_OF_DATE;
for (Context::List_element *helper = c._helper_list.first();
helper; helper = helper->next())
if (!helper->object()->ready()) ready(*helper->object());
}
void Scheduler::unready(Context &c)
{
assert(c.ready() && &c != &_idle);
assert(&c != &_idle);
if (!c.ready())
return;
if (&c == _current && _state == UP_TO_DATE) _state = OUT_OF_DATE;
c._ready = false;
_slack_list.remove(&c._slack_le);
if (!c._quota)
return;
if (c._quota) {
_rpl[c._priority].remove(&c._priotized_le);
_upl[c._priority].insert_tail(&c._priotized_le);
}
_rpl[c._priority].remove(&c._priotized_le);
_upl[c._priority].insert_tail(&c._priotized_le);
for (Context::List_element *helper = c._helper_list.first();
helper; helper = helper->next())
if (helper->object()->ready()) unready(*helper->object());
}

View File

@@ -65,6 +65,7 @@ class Kernel::Scheduler
friend class Scheduler_test::Context;
using List_element = Genode::List_element<Context>;
using List = Genode::List<List_element>;
unsigned _priority;
unsigned _quota;
@@ -74,10 +75,20 @@ class Kernel::Scheduler
List_element _slack_le { this };
unsigned _slack_time_left { 0 };
List_element _helper_le { this };
List _helper_list {};
Context *_destination { nullptr };
bool _ready { false };
void _reset() { _priotized_time_left = _quota; }
/**
* Noncopyable
*/
Context(const Context&) = delete;
Context& operator=(const Context&) = delete;
public:
Context(Priority const priority,
@@ -85,9 +96,14 @@ class Kernel::Scheduler
:
_priority(priority.value),
_quota(quota) { }
~Context();
bool ready() const { return _ready; }
void quota(unsigned const q) { _quota = q; }
void help(Context &c);
void helping_finished();
Context& helping_destination();
};
private:

View File

@@ -1,18 +1,19 @@
/*
* \brief Kernel backend for asynchronous inter-process communication
* \author Martin Stein
* \author Stefan Kalkowski
* \date 2012-11-30
*/
/*
* Copyright (C) 2012-2019 Genode Labs GmbH
* Copyright (C) 2012-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
/* core includes */
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <kernel/thread.h>
using namespace Kernel;
@@ -26,7 +27,7 @@ void Signal_handler::cancel_waiting()
{
if (_receiver) {
_receiver->_handler_cancelled(*this);
_receiver = 0;
_receiver = nullptr;
}
}
@@ -71,28 +72,20 @@ void Signal_context::_deliverable()
void Signal_context::_delivered()
{
_submits = 0;
_ack = 0;
_ack = false;
}
void Signal_context::_killer_cancelled() { _killer = 0; }
bool Signal_context::can_submit(unsigned const n) const
{
if (_killed || _submits >= (unsigned)~0 - n)
return false;
return true;
}
void Signal_context::_killer_cancelled() { _killer = nullptr; }
void Signal_context::submit(unsigned const n)
{
if (_killed || _submits >= (unsigned)~0 - n)
if (_killed)
return;
_submits += n;
if (_submits < ((unsigned)~0 - n))
_submits += n;
if (_ack)
_deliverable();
@@ -105,32 +98,19 @@ void Signal_context::ack()
return;
if (!_killed) {
_ack = 1;
_ack = true;
_deliverable();
return;
}
if (_killer) {
_killer->_context = 0;
_killer->_context = nullptr;
_killer->_thread.signal_context_kill_done();
_killer = 0;
_killer = nullptr;
}
}
bool Signal_context::can_kill() const
{
/* check if in a kill operation or already killed */
if (_killed) {
if (_ack)
return true;
return false;
}
return true;
}
void Signal_context::kill(Signal_context_killer &k)
{
/* check if in a kill operation or already killed */
@@ -139,13 +119,13 @@ void Signal_context::kill(Signal_context_killer &k)
/* kill directly if there is no unacknowledged delivery */
if (_ack) {
_killed = 1;
_killed = true;
return;
}
/* wait for delivery acknowledgement */
_killer = &k;
_killed = 1;
_killed = true;
_killer->_context = this;
_killer->_thread.signal_context_kill_pending();
}
@@ -231,24 +211,17 @@ void Signal_receiver::_add_context(Signal_context &c) {
_contexts.enqueue(c._contexts_fe); }
bool Signal_receiver::can_add_handler(Signal_handler const &h) const
bool Signal_receiver::add_handler(Signal_handler &h)
{
if (h._receiver)
return false;
return true;
}
void Signal_receiver::add_handler(Signal_handler &h)
{
if (h._receiver)
return;
_handlers.enqueue(h._handlers_fe);
h._receiver = this;
h._thread.signal_wait_for_signal();
_listen();
return true;
}

View File

@@ -1,18 +1,19 @@
/*
* \brief Kernel backend for asynchronous inter-process communication
* \author Martin Stein
* \author Stefan Kalkowski
* \date 2012-11-30
*/
/*
* Copyright (C) 2012-2017 Genode Labs GmbH
* Copyright (C) 2012-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__KERNEL__SIGNAL_RECEIVER_H_
#define _CORE__KERNEL__SIGNAL_RECEIVER_H_
#ifndef _CORE__KERNEL__SIGNAL_H_
#define _CORE__KERNEL__SIGNAL_H_
/* Genode includes */
#include <base/signal.h>
@@ -165,11 +166,7 @@ class Kernel::Signal_context
* Submit the signal
*
* \param n number of submits
*
* \retval 0 succeeded
* \retval -1 failed
*/
bool can_submit(unsigned const n) const;
void submit(unsigned const n);
/**
@@ -180,12 +177,8 @@ class Kernel::Signal_context
/**
* Destruct context or prepare to do it as soon as delivery is done
*
* \param killer object that shall receive progress reports
*
* \retval 0 succeeded
* \retval -1 failed
* \param k object that shall receive progress reports
*/
bool can_kill() const;
void kill(Signal_context_killer &k);
/**
@@ -270,8 +263,7 @@ class Kernel::Signal_receiver
* \retval 0 succeeded
* \retval -1 failed
*/
bool can_add_handler(Signal_handler const &h) const;
void add_handler(Signal_handler &h);
bool add_handler(Signal_handler &h);
/**
* Syscall to create a signal receiver

View File

@@ -169,7 +169,7 @@ Thread::Destroy::Destroy(Thread & caller, Core::Kernel_object<Thread> & to_delet
:
caller(caller), thread_to_destroy(to_delete)
{
thread_to_destroy->_cpu->work_list().insert(&_le);
thread_to_destroy->_cpu().work_list().insert(&_le);
caller._become_inactive(AWAITS_RESTART);
}
@@ -177,7 +177,7 @@ Thread::Destroy::Destroy(Thread & caller, Core::Kernel_object<Thread> & to_delet
void
Thread::Destroy::execute(Cpu &)
{
thread_to_destroy->_cpu->work_list().remove(&_le);
thread_to_destroy->_cpu().work_list().remove(&_le);
thread_to_destroy.destruct();
caller._restart();
}
@@ -239,7 +239,8 @@ void Thread::ipc_send_request_succeeded()
assert(_state == AWAITS_IPC);
user_arg_0(0);
_state = ACTIVE;
if (!Cpu_job::own_share_active()) { _activate_used_shares(); }
_activate();
helping_finished();
}
@@ -248,7 +249,8 @@ void Thread::ipc_send_request_failed()
assert(_state == AWAITS_IPC);
user_arg_0(-1);
_state = ACTIVE;
if (!Cpu_job::own_share_active()) { _activate_used_shares(); }
_activate();
helping_finished();
}
@@ -268,32 +270,16 @@ void Thread::ipc_await_request_failed()
}
void Thread::_deactivate_used_shares()
{
Cpu_job::_deactivate_own_share();
_ipc_node.for_each_helper([&] (Thread &thread) {
thread._deactivate_used_shares(); });
}
void Thread::_activate_used_shares()
{
Cpu_job::_activate_own_share();
_ipc_node.for_each_helper([&] (Thread &thread) {
thread._activate_used_shares(); });
}
void Thread::_become_active()
{
if (_state != ACTIVE && !_paused) { _activate_used_shares(); }
if (_state != ACTIVE && !_paused) Cpu_context::_activate();
_state = ACTIVE;
}
void Thread::_become_inactive(State const s)
{
if (_state == ACTIVE && !_paused) { _deactivate_used_shares(); }
if (_state == ACTIVE && !_paused) Cpu_context::_deactivate();
_state = s;
}
@@ -301,17 +287,13 @@ void Thread::_become_inactive(State const s)
void Thread::_die() { _become_inactive(DEAD); }
Cpu_job * Thread::helping_destination() {
return &_ipc_node.helping_destination(); }
size_t Thread::_core_to_kernel_quota(size_t const quota) const
{
using Genode::Cpu_session;
/* we assert at timer construction that cpu_quota_us in ticks fits size_t */
size_t const ticks = (size_t)
_cpu->timer().us_to_ticks(Kernel::cpu_quota_us);
_cpu().timer().us_to_ticks(Kernel::cpu_quota_us);
return Cpu_session::quota_lim_downscale(quota, ticks);
}
@@ -319,24 +301,20 @@ size_t Thread::_core_to_kernel_quota(size_t const quota) const
void Thread::_call_thread_quota()
{
Thread * const thread = (Thread *)user_arg_1();
thread->Cpu_job::quota((unsigned)(_core_to_kernel_quota(user_arg_2())));
thread->Cpu_context::quota((unsigned)(_core_to_kernel_quota(user_arg_2())));
}
void Thread::_call_start_thread()
{
/* lookup CPU */
Cpu & cpu = _cpu_pool.cpu((unsigned)user_arg_2());
user_arg_0(0);
Thread &thread = *(Thread*)user_arg_1();
assert(thread._state == AWAITS_START);
thread.affinity(cpu);
/* join protection domain */
thread._pd = (Pd *) user_arg_3();
switch (thread._ipc_init(*(Native_utcb *)user_arg_4(), *this)) {
thread._pd = (Pd *) user_arg_2();
switch (thread._ipc_init(*(Native_utcb *)user_arg_3(), *this)) {
case Ipc_alloc_result::OK:
break;
case Ipc_alloc_result::EXHAUSTED:
@@ -356,7 +334,8 @@ void Thread::_call_start_thread()
* semantic changes, and additional core threads are started
* across cpu cores.
*/
if (thread._pd == &_core_pd && cpu.id() != _cpu_pool.primary_cpu().id())
if (thread._pd == &_core_pd &&
thread._cpu().id() != _cpu_pool.primary_cpu().id())
Genode::raw("Error: do not start core threads"
" on CPU cores different than boot cpu");
@@ -367,8 +346,8 @@ void Thread::_call_start_thread()
void Thread::_call_pause_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
if (thread._state == ACTIVE && !thread._paused) {
thread._deactivate_used_shares(); }
if (thread._state == ACTIVE && !thread._paused)
thread._deactivate();
thread._paused = true;
}
@@ -377,8 +356,8 @@ void Thread::_call_pause_thread()
void Thread::_call_resume_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
if (thread._state == ACTIVE && thread._paused) {
thread._activate_used_shares(); }
if (thread._state == ACTIVE && thread._paused)
thread._activate();
thread._paused = false;
}
@@ -406,6 +385,7 @@ void Thread::_call_restart_thread()
_die();
return;
}
user_arg_0(thread._restart());
}
@@ -413,7 +393,10 @@ void Thread::_call_restart_thread()
bool Thread::_restart()
{
assert(_state == ACTIVE || _state == AWAITS_RESTART);
if (_state != AWAITS_RESTART) { return false; }
if (_state == ACTIVE && _exception_state == NO_EXCEPTION)
return false;
_exception_state = NO_EXCEPTION;
_become_active();
return true;
@@ -451,7 +434,7 @@ void Thread::_cancel_blocking()
void Thread::_call_yield_thread()
{
Cpu_job::_yield();
Cpu_context::_yield();
}
@@ -461,12 +444,11 @@ void Thread::_call_delete_thread()
*(Core::Kernel_object<Thread>*)user_arg_1();
/**
* Delete a thread immediately if it has no cpu assigned yet,
 * or it is assigned to this cpu, or the assigned cpu did not schedule it.
 * Delete a thread immediately if it is assigned to this cpu,
 * or the assigned cpu did not schedule it.
*/
if (!to_delete->_cpu ||
(to_delete->_cpu->id() == Cpu::executing_id() ||
&to_delete->_cpu->scheduled_job() != &*to_delete)) {
if (to_delete->_cpu().id() == Cpu::executing_id() ||
&to_delete->_cpu().current_context() != &*to_delete) {
_call_delete<Thread>();
return;
}
@@ -475,7 +457,7 @@ void Thread::_call_delete_thread()
* Construct a cross-cpu work item and send an IPI
*/
_destroy.construct(*this, to_delete);
to_delete->_cpu->trigger_ip_interrupt();
to_delete->_cpu().trigger_ip_interrupt();
}
@@ -484,8 +466,8 @@ void Thread::_call_delete_pd()
Core::Kernel_object<Pd> & pd =
*(Core::Kernel_object<Pd>*)user_arg_1();
if (_cpu->active(pd->mmu_regs))
_cpu->switch_to(_core_pd.mmu_regs);
if (_cpu().active(pd->mmu_regs))
_cpu().switch_to(_core_pd.mmu_regs);
_call_delete<Pd>();
}
@@ -517,7 +499,7 @@ void Thread::_call_await_request_msg()
void Thread::_call_timeout()
{
Timer & t = _cpu->timer();
Timer & t = _cpu().timer();
_timeout_sigid = (Kernel::capid_t)user_arg_2();
t.set_timeout(this, t.us_to_ticks(user_arg_1()));
}
@@ -525,13 +507,13 @@ void Thread::_call_timeout()
void Thread::_call_timeout_max_us()
{
user_ret_time(_cpu->timer().timeout_max_us());
user_ret_time(_cpu().timer().timeout_max_us());
}
void Thread::_call_time()
{
Timer & t = _cpu->timer();
Timer & t = _cpu().timer();
user_ret_time(t.ticks_to_us(t.time()));
}
@@ -540,11 +522,8 @@ void Thread::timeout_triggered()
{
Signal_context * const c =
pd().cap_tree().find<Signal_context>(_timeout_sigid);
if (!c || !c->can_submit(1)) {
Genode::raw(*this, ": failed to submit timeout signal");
return;
}
c->submit(1);
if (c) c->submit(1);
else Genode::warning(*this, ": failed to submit timeout signal");
}
@@ -558,7 +537,7 @@ void Thread::_call_send_request_msg()
_become_inactive(DEAD);
return;
}
bool const help = Cpu_job::_helping_possible(*dst);
bool const help = Cpu_context::_helping_possible(*dst);
oir = oir->find(dst->pd());
if (!_ipc_node.ready_to_send()) {
@@ -572,11 +551,12 @@ void Thread::_call_send_request_msg()
return;
}
_ipc_capid = oir ? oir->capid() : cap_id_invalid();
_ipc_node.send(dst->_ipc_node, help);
_ipc_node.send(dst->_ipc_node);
}
_state = AWAITS_IPC;
if (!help || !dst->own_share_active()) { _deactivate_used_shares(); }
if (help) Cpu_context::_help(*dst);
if (!help || !dst->ready()) _deactivate();
}
@@ -593,7 +573,9 @@ void Thread::_call_pager()
{
/* override event route */
Thread &thread = *(Thread *)user_arg_1();
thread._pager = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_2());
Thread &pager = *(Thread *)user_arg_2();
Signal_context &sc = *pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_3());
thread._fault_context.construct(pager, sc);
}
@@ -617,12 +599,11 @@ void Thread::_call_await_signal()
return;
}
/* register handler at the receiver */
if (!r->can_add_handler(_signal_handler)) {
if (!r->add_handler(_signal_handler)) {
Genode::raw("failed to register handler at signal receiver");
user_arg_0(-1);
return;
}
r->add_handler(_signal_handler);
user_arg_0(0);
}
@@ -639,11 +620,10 @@ void Thread::_call_pending_signal()
}
/* register handler at the receiver */
if (!r->can_add_handler(_signal_handler)) {
if (!r->add_handler(_signal_handler)) {
user_arg_0(-1);
return;
}
r->add_handler(_signal_handler);
if (_state == AWAITS_SIGNAL) {
_cancel_blocking();
@@ -678,20 +658,7 @@ void Thread::_call_submit_signal()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if(!c) {
/* cannot submit unknown signal context */
user_arg_0(-1);
return;
}
/* trigger signal context */
if (!c->can_submit((unsigned)user_arg_2())) {
Genode::raw("failed to submit signal context");
user_arg_0(-1);
return;
}
c->submit((unsigned)user_arg_2());
user_arg_0(0);
if(c) c->submit((unsigned)user_arg_2());
}
@@ -699,13 +666,8 @@ void Thread::_call_ack_signal()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if (!c) {
Genode::raw(*this, ": cannot ack unknown signal context");
return;
}
/* acknowledge */
c->ack();
if (c) c->ack();
else Genode::warning(*this, ": cannot ack unknown signal context");
}
@@ -713,19 +675,8 @@ void Thread::_call_kill_signal_context()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if (!c) {
Genode::raw(*this, ": cannot kill unknown signal context");
user_arg_0(-1);
return;
}
/* kill signal context */
if (!c->can_kill()) {
Genode::raw("failed to kill signal context");
user_arg_0(-1);
return;
}
c->kill(_signal_context_killer);
if (c) c->kill(_signal_context_killer);
else Genode::warning(*this, ": cannot kill unknown signal context");
}
@@ -744,7 +695,7 @@ void Thread::_call_new_irq()
(Genode::Irq_session::Polarity) (user_arg_3() & 0b11);
_call_new<User_irq>((unsigned)user_arg_2(), trigger, polarity, *c,
_cpu->pic(), _user_irq_pool);
_cpu().pic(), _user_irq_pool);
}
@@ -845,6 +796,25 @@ void Thread::_call_single_step() {
}
void Thread::_call_ack_pager_signal()
{
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if (!c)
Genode::raw(*this, ": cannot ack unknown signal context");
else
c->ack();
Thread &thread = *(Thread*)user_arg_2();
thread.helping_finished();
bool resolved = user_arg_3() ||
thread._exception_state == NO_EXCEPTION;
if (resolved) thread._restart();
else thread._become_inactive(AWAITS_RESTART);
}
void Thread::_call()
{
/* switch over unrestricted kernel calls */
@@ -886,13 +856,15 @@ void Thread::_call()
switch (call_id) {
case call_id_new_thread():
_call_new<Thread>(_addr_space_id_alloc, _user_irq_pool, _cpu_pool,
_core_pd, (unsigned) user_arg_2(),
(unsigned) _core_to_kernel_quota(user_arg_3()),
(char const *) user_arg_4(), USER);
_cpu_pool.cpu((unsigned)user_arg_2()),
_core_pd, (unsigned) user_arg_3(),
(unsigned) _core_to_kernel_quota(user_arg_4()),
(char const *) user_arg_5(), USER);
return;
case call_id_new_core_thread():
_call_new<Thread>(_addr_space_id_alloc, _user_irq_pool, _cpu_pool,
_core_pd, (char const *) user_arg_2());
_cpu_pool.cpu((unsigned)user_arg_2()),
_core_pd, (char const *) user_arg_3());
return;
case call_id_thread_quota(): _call_thread_quota(); return;
case call_id_delete_thread(): _call_delete_thread(); return;
@@ -925,6 +897,7 @@ void Thread::_call()
case call_id_set_cpu_state(): _call_set_cpu_state(); return;
case call_id_exception_state(): _call_exception_state(); return;
case call_id_single_step(): _call_single_step(); return;
case call_id_ack_pager_signal(): _call_ack_pager_signal(); return;
default:
Genode::raw(*this, ": unknown kernel call");
_die();
@@ -933,18 +906,37 @@ void Thread::_call()
}
void Thread::_signal_to_pager()
{
if (!_fault_context.constructed()) {
Genode::warning(*this, " could not send signal to pager");
_die();
return;
}
/* first signal to pager to wake it up */
_fault_context->sc.submit(1);
/* only help pager thread if runnable and scheduler allows it */
bool const help = Cpu_context::_helping_possible(_fault_context->pager)
&& (_fault_context->pager._state == ACTIVE);
if (help) Cpu_context::_help(_fault_context->pager);
else _become_inactive(AWAITS_RESTART);
}
void Thread::_mmu_exception()
{
using namespace Genode;
using Genode::log;
_become_inactive(AWAITS_RESTART);
_exception_state = MMU_FAULT;
Cpu::mmu_fault(*regs, _fault);
_fault.ip = regs->ip;
if (_fault.type == Thread_fault::UNKNOWN) {
Genode::warning(*this, " raised unhandled MMU fault ", _fault);
_die();
return;
}
@@ -959,17 +951,16 @@ void Thread::_mmu_exception()
Hw::Mm::core_stack_area().size };
regs->for_each_return_address(stack, [&] (void **p) {
log(*p); });
_die();
return;
}
if (_pager && _pager->can_submit(1)) {
_pager->submit(1);
}
_signal_to_pager();
}
void Thread::_exception()
{
_become_inactive(AWAITS_RESTART);
_exception_state = EXCEPTION;
if (_type != USER) {
@@ -977,18 +968,14 @@ void Thread::_exception()
_die();
}
if (_pager && _pager->can_submit(1)) {
_pager->submit(1);
} else {
Genode::raw(*this, " could not send signal to pager on exception");
_die();
}
_signal_to_pager();
}
Thread::Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
Cpu &cpu,
Pd &core_pd,
unsigned const priority,
unsigned const quota,
@@ -996,7 +983,7 @@ Thread::Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Type type)
:
Kernel::Object { *this },
Cpu_job { priority, quota },
Cpu_context { cpu, priority, quota },
_addr_space_id_alloc { addr_space_id_alloc },
_user_irq_pool { user_irq_pool },
_cpu_pool { cpu_pool },
@@ -1033,8 +1020,8 @@ Core_main_thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Cpu_pool &cpu_pool,
Pd &core_pd)
:
Core_object<Thread>(
core_pd, addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd, "core")
Core_object<Thread>(core_pd, addr_space_id_alloc, user_irq_pool, cpu_pool,
cpu_pool.primary_cpu(), core_pd, "core")
{
using namespace Core;
@@ -1050,7 +1037,6 @@ Core_main_thread(Board::Address_space_id_allocator &addr_space_id_alloc,
regs->sp = (addr_t)&__initial_stack_base[0] + DEFAULT_STACK_SIZE;
regs->ip = (addr_t)&_core_start;
affinity(_cpu_pool.primary_cpu());
_utcb = &_utcb_instance;
Thread::_pd = &core_pd;
_become_active();

View File

@@ -20,7 +20,7 @@
/* base-hw core includes */
#include <kernel/cpu_context.h>
#include <kernel/inter_processor_work.h>
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <kernel/ipc_node.h>
#include <object.h>
#include <kernel/interface.h>
@@ -53,7 +53,7 @@ struct Kernel::Thread_fault
/**
* Kernel back-end for userland execution-contexts
*/
class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
class Kernel::Thread : private Kernel::Object, public Cpu_context, private Timeout
{
public:
@@ -173,7 +173,15 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
size_t _ipc_rcv_caps { 0 };
Genode::Native_utcb *_utcb { nullptr };
Pd *_pd { nullptr };
Signal_context *_pager { nullptr };
struct Fault_context
{
Thread &pager;
Signal_context &sc;
};
Genode::Constructible<Fault_context> _fault_context {};
Thread_fault _fault { };
State _state;
Signal_handler _signal_handler { *this };
@@ -216,21 +224,16 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
*/
void _become_inactive(State const s);
/**
* Activate our CPU-share and those of our helpers
*/
void _activate_used_shares();
/**
* Deactivate our CPU-share and those of our helpers
*/
void _deactivate_used_shares();
/**
* Suspend unrecoverably from execution
*/
void _die();
/**
* In case of fault, signal to pager, and help or block
*/
void _signal_to_pager();
/**
* Handle an exception thrown by the memory management unit
*/
@@ -306,6 +309,7 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
void _call_set_cpu_state();
void _call_exception_state();
void _call_single_step();
void _call_ack_pager_signal();
template <typename T>
void _call_new(auto &&... args)
@@ -345,6 +349,7 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
Cpu &cpu,
Pd &core_pd,
unsigned const priority,
unsigned const quota,
@@ -359,11 +364,12 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
Cpu &cpu,
Pd &core_pd,
char const *const label)
:
Thread(addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd,
Scheduler::Priority::min(), 0, label, CORE)
Thread(addr_space_id_alloc, user_irq_pool, cpu_pool, cpu,
core_pd, Scheduler::Priority::min(), 0, label, CORE)
{ }
~Thread();
@@ -400,13 +406,14 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
* \retval capability id of the new kernel object
*/
static capid_t syscall_create(Core::Kernel_object<Thread> &t,
unsigned const cpu_id,
unsigned const priority,
size_t const quota,
char const * const label)
{
return (capid_t)call(call_id_new_thread(), (Call_arg)&t,
(Call_arg)priority, (Call_arg)quota,
(Call_arg)label);
(Call_arg)cpu_id, (Call_arg)priority,
(Call_arg)quota, (Call_arg)label);
}
/**
@@ -418,10 +425,11 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
* \retval capability id of the new kernel object
*/
static capid_t syscall_create(Core::Kernel_object<Thread> &t,
unsigned const cpu_id,
char const * const label)
{
return (capid_t)call(call_id_new_core_thread(), (Call_arg)&t,
(Call_arg)label);
(Call_arg)cpu_id, (Call_arg)label);
}
/**
@@ -458,13 +466,12 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
void signal_receive_signal(void * const base, size_t const size);
/*************
** Cpu_job **
*************/
/*****************
** Cpu_context **
*****************/
void exception(Cpu & cpu) override;
void proceed(Cpu & cpu) override;
Cpu_job * helping_destination() override;
void exception() override;
void proceed() override;
/*************

View File

@@ -18,7 +18,7 @@
/* core includes */
#include <kernel/cpu_context.h>
#include <kernel/pd.h>
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <board.h>
@@ -31,7 +31,7 @@ namespace Kernel {
}
class Kernel::Vm : private Kernel::Object, public Cpu_job
class Kernel::Vm : private Kernel::Object, public Cpu_context
{
public:
@@ -66,7 +66,7 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
void _pause_vcpu()
{
if (_scheduled != INACTIVE)
Cpu_job::_deactivate_own_share();
Cpu_context::_deactivate();
_scheduled = INACTIVE;
}
@@ -135,7 +135,7 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
void run()
{
_sync_from_vmm();
if (_scheduled != ACTIVE) Cpu_job::_activate_own_share();
if (_scheduled != ACTIVE) Cpu_context::_activate();
_scheduled = ACTIVE;
}
@@ -146,13 +146,12 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
}
/*************
** Cpu_job **
*************/
/*****************
** Cpu_context **
*****************/
void exception(Cpu & cpu) override;
void proceed(Cpu & cpu) override;
Cpu_job * helping_destination() override { return this; }
void exception() override;
void proceed() override;
};
#endif /* _CORE__KERNEL__VM_H_ */

View File

@@ -19,9 +19,30 @@
/* base-internal includes */
#include <base/internal/capability_space.h>
#include <base/internal/native_thread.h>
using namespace Core;
static unsigned _nr_of_cpus = 0;
static void *_pager_thread_memory = nullptr;
void Core::init_pager_thread_per_cpu_memory(unsigned const cpus, void * mem)
{
_nr_of_cpus = cpus;
_pager_thread_memory = mem;
}
void Core::init_page_fault_handling(Rpc_entrypoint &) { }
/*************
** Mapping **
*************/
void Mapping::prepare_map_operation() const { }
/***************
** Ipc_pager **
@@ -51,13 +72,11 @@ void Pager_object::wake_up()
}
void Pager_object::start_paging(Kernel_object<Kernel::Signal_receiver> & receiver)
void Pager_object::start_paging(Kernel_object<Kernel::Signal_receiver> &receiver,
Platform_thread &pager_thread)
{
using Object = Kernel_object<Kernel::Signal_context>;
using Entry = Object_pool<Pager_object>::Entry;
create(*receiver, (unsigned long)this);
Entry::cap(Object::_cap);
_pager_thread = &pager_thread;
}
@@ -75,11 +94,11 @@ void Pager_object::print(Output &out) const
Pager_object::Pager_object(Cpu_session_capability cpu_session_cap,
Thread_capability thread_cap, addr_t const badge,
Affinity::Location, Session_label const &,
Affinity::Location location, Session_label const &,
Cpu_session::Name const &)
:
Object_pool<Pager_object>::Entry(Kernel_object<Kernel::Signal_context>::_cap),
_badge(badge), _cpu_session_cap(cpu_session_cap), _thread_cap(thread_cap)
_badge(badge), _location(location),
_cpu_session_cap(cpu_session_cap), _thread_cap(thread_cap)
{ }
@@ -87,27 +106,115 @@ Pager_object::Pager_object(Cpu_session_capability cpu_session_cap,
** Pager_entrypoint **
**********************/
void Pager_entrypoint::dissolve(Pager_object &o)
void Pager_entrypoint::Thread::entry()
{
Kernel::kill_signal_context(Capability_space::capid(o.cap()));
remove(&o);
while (1) {
/* receive fault */
if (Kernel::await_signal(Capability_space::capid(_kobj.cap())))
continue;
Pager_object *po = *(Pager_object**)Thread::myself()->utcb()->data();
if (!po)
continue;
Untyped_capability cap = po->cap();
/* fetch fault data */
Platform_thread * const pt = (Platform_thread *)po->badge();
if (!pt) {
warning("failed to get platform thread of faulter");
Kernel::ack_signal(Capability_space::capid(cap));
continue;
}
if (pt->exception_state() ==
Kernel::Thread::Exception_state::EXCEPTION) {
if (!po->submit_exception_signal())
warning("unresolvable exception: "
"pd='", pt->pd().label(), "', "
"thread='", pt->label(), "', "
"ip=", Hex(pt->state().cpu.ip));
pt->fault_resolved(cap, false);
continue;
}
_fault = pt->fault_info();
/* try to resolve fault directly via local region managers */
if (po->pager(*this) == Pager_object::Pager_result::STOP) {
pt->fault_resolved(cap, false);
continue;
}
/* apply mapping that was determined by the local region managers */
{
Locked_ptr<Address_space> locked_ptr(pt->address_space());
if (!locked_ptr.valid()) {
pt->fault_resolved(cap, false);
continue;
}
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
Cache cacheable = Genode::CACHED;
if (!_mapping.cached)
cacheable = Genode::UNCACHED;
if (_mapping.write_combined)
cacheable = Genode::WRITE_COMBINED;
Hw::Page_flags const flags {
.writeable = _mapping.writeable ? Hw::RW : Hw::RO,
.executable = _mapping.executable ? Hw::EXEC : Hw::NO_EXEC,
.privileged = Hw::USER,
.global = Hw::NO_GLOBAL,
.type = _mapping.io_mem ? Hw::DEVICE : Hw::RAM,
.cacheable = cacheable
};
as->insert_translation(_mapping.dst_addr, _mapping.src_addr,
1UL << _mapping.size_log2, flags);
}
pt->fault_resolved(cap, true);
}
}
Pager_entrypoint::Pager_entrypoint(Rpc_cap_factory &)
Pager_entrypoint::Thread::Thread(Affinity::Location cpu)
:
Thread(Weight::DEFAULT_WEIGHT, "pager_ep", PAGER_EP_STACK_SIZE,
Type::NORMAL),
Genode::Thread(Weight::DEFAULT_WEIGHT, "pager_ep", PAGER_EP_STACK_SIZE, cpu),
_kobj(_kobj.CALLED_FROM_CORE)
{
start();
}
void Pager_entrypoint::dissolve(Pager_object &o)
{
Kernel::kill_signal_context(Capability_space::capid(o.cap()));
}
Pager_capability Pager_entrypoint::manage(Pager_object &o)
{
o.start_paging(_kobj);
insert(&o);
unsigned const cpu = o.location().xpos();
if (cpu >= _cpus) {
error("Invalid location of pager object ", cpu);
} else {
o.start_paging(_threads[cpu]._kobj,
*_threads[cpu].native_thread().platform_thread);
}
return reinterpret_cap_cast<Pager_object>(o.cap());
}
Pager_entrypoint::Pager_entrypoint(Rpc_cap_factory &)
:
_cpus(_nr_of_cpus),
_threads((Thread*)_pager_thread_memory)
{
for (unsigned i = 0; i < _cpus; i++)
construct_at<Thread>((void*)&_threads[i], Affinity::Location(i, 0));
}

View File

@@ -17,12 +17,11 @@
/* Genode includes */
#include <base/session_label.h>
#include <base/thread.h>
#include <base/object_pool.h>
#include <base/signal.h>
#include <pager/capability.h>
/* core includes */
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <hw/mapping.h>
#include <mapping.h>
#include <object.h>
@@ -30,6 +29,9 @@
namespace Core {
class Platform;
class Platform_thread;
/**
* Interface used by generic region_map code
*/
@@ -53,6 +55,10 @@ namespace Core {
using Pager_capability = Capability<Pager_object>;
enum { PAGER_EP_STACK_SIZE = sizeof(addr_t) * 2048 };
extern void init_page_fault_handling(Rpc_entrypoint &);
void init_pager_thread_per_cpu_memory(unsigned const cpus, void * mem);
}
@@ -93,17 +99,17 @@ class Core::Ipc_pager
};
class Core::Pager_object : private Object_pool<Pager_object>::Entry,
private Kernel_object<Kernel::Signal_context>
class Core::Pager_object : private Kernel_object<Kernel::Signal_context>
{
friend class Pager_entrypoint;
friend class Object_pool<Pager_object>;
private:
unsigned long const _badge;
Affinity::Location _location;
Cpu_session_capability _cpu_session_cap;
Thread_capability _thread_cap;
Platform_thread *_pager_thread { nullptr };
/**
* User-level signal handler registered for this pager object via
@@ -111,6 +117,12 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
*/
Signal_context_capability _exception_sigh { };
/*
* Noncopyable
*/
Pager_object(const Pager_object&) = delete;
Pager_object& operator=(const Pager_object&) = delete;
public:
/**
@@ -123,11 +135,15 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
Affinity::Location, Session_label const&,
Cpu_session::Name const&);
virtual ~Pager_object() {}
/**
* User identification of pager object
*/
unsigned long badge() const { return _badge; }
Affinity::Location location() { return _location; }
/**
* Resume faulter
*/
@@ -158,7 +174,8 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
*
* \param receiver signal receiver that receives the page faults
*/
void start_paging(Kernel_object<Kernel::Signal_receiver> & receiver);
void start_paging(Kernel_object<Kernel::Signal_receiver> &receiver,
Platform_thread &pager_thread);
/**
* Called when a page-fault finally could not be resolved
@@ -167,6 +184,11 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
void print(Output &out) const;
void with_pager(auto const &fn)
{
if (_pager_thread) fn(*_pager_thread);
}
/******************
** Pure virtual **
@@ -192,24 +214,44 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
Cpu_session_capability cpu_session_cap() const { return _cpu_session_cap; }
Thread_capability thread_cap() const { return _thread_cap; }
using Object_pool<Pager_object>::Entry::cap;
Untyped_capability cap() {
return Kernel_object<Kernel::Signal_context>::_cap; }
};
class Core::Pager_entrypoint : public Object_pool<Pager_object>,
public Thread,
private Ipc_pager
class Core::Pager_entrypoint
{
private:
Kernel_object<Kernel::Signal_receiver> _kobj;
friend class Platform;
class Thread : public Genode::Thread,
private Ipc_pager
{
private:
friend class Pager_entrypoint;
Kernel_object<Kernel::Signal_receiver> _kobj;
public:
explicit Thread(Affinity::Location);
/**********************
** Thread interface **
**********************/
void entry() override;
};
unsigned const _cpus;
Thread *_threads;
public:
/**
* Constructor
*/
Pager_entrypoint(Rpc_cap_factory &);
explicit Pager_entrypoint(Rpc_cap_factory &);
/**
* Associate pager object 'obj' with entry point
@@ -220,13 +262,6 @@ class Core::Pager_entrypoint : public Object_pool<Pager_object>,
* Dissolve pager object 'obj' from entry point
*/
void dissolve(Pager_object &obj);
/**********************
** Thread interface **
**********************/
void entry() override;
};
#endif /* _CORE__PAGER_H_ */

View File

@@ -0,0 +1,79 @@
/*
* \brief Allocate an object with a physical address
* \author Norman Feske
* \author Benjamin Lamowski
* \date 2024-12-02
*/
/*
* Copyright (C) 2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__PHYS_ALLOCATED_H_
#define _CORE__PHYS_ALLOCATED_H_
/* base includes */
#include <base/allocator.h>
#include <base/attached_ram_dataspace.h>
#include <util/noncopyable.h>
/* core-local includes */
#include <types.h>
namespace Core {
template <typename T>
class Phys_allocated;
}
using namespace Core;
template <typename T>
class Core::Phys_allocated : Genode::Noncopyable
{
private:
Rpc_entrypoint &_ep;
Ram_allocator &_ram;
Region_map &_rm;
Attached_ram_dataspace _ds { _ram, _rm, sizeof(T) };
public:
T &obj = *_ds.local_addr<T>();
Phys_allocated(Rpc_entrypoint &ep,
Ram_allocator &ram,
Region_map &rm)
:
_ep(ep), _ram(ram), _rm(rm)
{
construct_at<T>(&obj);
}
Phys_allocated(Rpc_entrypoint &ep,
Ram_allocator &ram,
Region_map &rm,
auto const &construct_fn)
:
_ep(ep), _ram(ram), _rm(rm)
{
construct_fn(*this, &obj);
}
~Phys_allocated() { obj.~T(); }
addr_t phys_addr() {
addr_t phys_addr { };
_ep.apply(_ds.cap(), [&](Dataspace_component *dsc) {
phys_addr = dsc->phys_addr();
});
return phys_addr;
}
};
#endif /* _CORE__PHYS_ALLOCATED_H_ */

View File

@@ -19,6 +19,7 @@
/* base-hw core includes */
#include <map_local.h>
#include <pager.h>
#include <platform.h>
#include <platform_pd.h>
#include <kernel/main.h>
@@ -31,7 +32,6 @@
/* base internal includes */
#include <base/internal/crt0.h>
#include <base/internal/stack_area.h>
#include <base/internal/unmanaged_singleton.h>
/* base includes */
#include <trace/source_registry.h>
@@ -60,8 +60,9 @@ Hw::Page_table::Allocator & Platform::core_page_table_allocator()
using Allocator = Hw::Page_table::Allocator;
using Array = Allocator::Array<Hw::Page_table::CORE_TRANS_TABLE_COUNT>;
addr_t virt_addr = Hw::Mm::core_page_tables().base + sizeof(Hw::Page_table);
return *unmanaged_singleton<Array::Allocator>(_boot_info().table_allocator,
virt_addr);
static Array::Allocator alloc { _boot_info().table_allocator, virt_addr };
return alloc;
}
@@ -70,6 +71,7 @@ addr_t Platform::core_main_thread_phys_utcb()
return core_phys_addr(_boot_info().core_main_thread_utcb);
}
void Platform::_init_io_mem_alloc()
{
/* add entire address space minus the RAM memory regions */
@@ -81,8 +83,9 @@ void Platform::_init_io_mem_alloc()
Hw::Memory_region_array const & Platform::_core_virt_regions()
{
return *unmanaged_singleton<Hw::Memory_region_array>(
Hw::Memory_region(stack_area_virtual_base(), stack_area_virtual_size()));
static Hw::Memory_region_array array {
Hw::Memory_region(stack_area_virtual_base(), stack_area_virtual_size()) };
return array;
}
@@ -251,6 +254,10 @@ Platform::Platform()
);
}
unsigned const cpus = _boot_info().cpus;
size_t size = cpus * sizeof(Pager_entrypoint::Thread);
init_pager_thread_per_cpu_memory(cpus, _core_mem_alloc.alloc(size));
class Idle_thread_trace_source : public Trace::Source::Info_accessor,
private Trace::Control,
private Trace::Source

View File

@@ -15,7 +15,6 @@
/* core includes */
#include <platform_thread.h>
#include <platform_pd.h>
#include <core_env.h>
#include <rm_session_component.h>
#include <map_local.h>
@@ -30,48 +29,19 @@
using namespace Core;
Ram_dataspace_capability Platform_thread::Utcb::_allocate_utcb(bool core_thread)
addr_t Platform_thread::Utcb::_attach(Region_map &core_rm)
{
Ram_dataspace_capability ds;
if (core_thread)
return ds;
try {
ds = core_env().pd_session()->alloc(sizeof(Native_utcb), CACHED);
} catch (...) {
error("failed to allocate UTCB");
throw Out_of_ram();
}
return ds;
}
addr_t Platform_thread::Utcb::_core_local_address(addr_t utcb_addr,
bool core_thread)
{
if (core_thread)
return utcb_addr;
addr_t ret = 0;
Region_map::Attr attr { };
attr.writeable = true;
core_env().rm_session()->attach(_ds, attr).with_result(
[&] (Region_map::Range range) {
ret = range.start; },
[&] (Region_map::Attach_error) {
error("failed to attach UTCB of new thread within core"); });
return ret;
return core_rm.attach(_ds, attr).convert<addr_t>(
[&] (Region_map::Range range) { return range.start; },
[&] (Region_map::Attach_error) {
error("failed to attach UTCB of new thread within core");
return 0ul; });
}
Platform_thread::Utcb::Utcb(addr_t pd_addr, bool core_thread)
:
_ds(_allocate_utcb(core_thread)),
_core_addr(_core_local_address(pd_addr, core_thread))
static addr_t _alloc_core_local_utcb(addr_t core_addr)
{
/*
* All non-core threads use the typical dataspace/rm_session
@@ -80,27 +50,25 @@ Platform_thread::Utcb::Utcb(addr_t pd_addr, bool core_thread)
* physical and virtual memory allocators to create/attach its
* UTCBs. Therefore, we have to allocate and map those here.
*/
if (core_thread) {
platform().ram_alloc().try_alloc(sizeof(Native_utcb)).with_result(
return platform().ram_alloc().try_alloc(sizeof(Native_utcb)).convert<addr_t>(
[&] (void *utcb_phys) {
map_local((addr_t)utcb_phys, _core_addr,
sizeof(Native_utcb) / get_page_size());
},
[&] (Range_allocator::Alloc_error) {
error("failed to allocate UTCB for core/kernel thread!");
throw Out_of_ram();
}
);
}
[&] (void *utcb_phys) {
map_local((addr_t)utcb_phys, core_addr,
sizeof(Native_utcb) / get_page_size());
return addr_t(utcb_phys);
},
[&] (Range_allocator::Alloc_error) {
error("failed to allocate UTCB for core/kernel thread!");
return 0ul;
});
}
Platform_thread::Utcb::~Utcb()
{
/* detach UTCB from core/kernel */
core_env().rm_session()->detach((addr_t)_core_addr);
}
Platform_thread::Utcb::Utcb(addr_t core_addr)
:
core_addr(core_addr),
phys_addr(_alloc_core_local_utcb(core_addr))
{ }
void Platform_thread::_init() { }
@@ -122,28 +90,33 @@ Platform_thread::Platform_thread(Label const &label, Native_utcb &utcb)
_label(label),
_pd(_kernel_main_get_core_platform_pd()),
_pager(nullptr),
_utcb((addr_t)&utcb, true),
_utcb((addr_t)&utcb),
_main_thread(false),
_location(Affinity::Location()),
_kobj(_kobj.CALLED_FROM_CORE, _label.string()) { }
_kobj(_kobj.CALLED_FROM_CORE, _location.xpos(), _label.string())
{ }
Platform_thread::Platform_thread(Platform_pd &pd,
Rpc_entrypoint &ep,
Ram_allocator &ram,
Region_map &core_rm,
size_t const quota,
Label const &label,
unsigned const virt_prio,
Affinity::Location const location,
addr_t const utcb)
addr_t /* utcb */)
:
_label(label),
_pd(pd),
_pager(nullptr),
_utcb(utcb, false),
_utcb(ep, ram, core_rm),
_priority(_scale_priority(virt_prio)),
_quota((unsigned)quota),
_main_thread(!pd.has_any_thread),
_location(location),
_kobj(_kobj.CALLED_FROM_CORE, _priority, _quota, _label.string())
_kobj(_kobj.CALLED_FROM_CORE, _location.xpos(),
_priority, _quota, _label.string())
{
_address_space = pd.weak_ptr();
pd.has_any_thread = true;
@@ -165,9 +138,6 @@ Platform_thread::~Platform_thread()
locked_ptr->flush(user_utcb_main_thread(), sizeof(Native_utcb),
Address_space::Core_local_addr{0});
}
/* free UTCB */
core_env().pd_session()->free(_utcb._ds);
}
@@ -185,35 +155,23 @@ void Platform_thread::start(void * const ip, void * const sp)
/* attach UTCB in case of a main thread */
if (_main_thread) {
/* lookup dataspace component for physical address */
auto lambda = [&] (Dataspace_component *dsc) {
if (!dsc) return -1;
/* lock the address space */
Locked_ptr<Address_space> locked_ptr(_address_space);
if (!locked_ptr.valid()) {
error("invalid RM client");
return -1;
};
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
if (!as->insert_translation(user_utcb_main_thread(), dsc->phys_addr(),
sizeof(Native_utcb), Hw::PAGE_FLAGS_UTCB)) {
error("failed to attach UTCB");
return -1;
}
return 0;
};
if (core_env().entrypoint().apply(_utcb._ds, lambda))
Locked_ptr<Address_space> locked_ptr(_address_space);
if (!locked_ptr.valid()) {
error("unable to start thread in invalid address space");
return;
};
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
if (!as->insert_translation(user_utcb_main_thread(), _utcb.phys_addr,
sizeof(Native_utcb), Hw::PAGE_FLAGS_UTCB)) {
error("failed to attach UTCB");
return;
}
}
/* initialize thread registers */
_kobj->regs->ip = reinterpret_cast<addr_t>(ip);
_kobj->regs->sp = reinterpret_cast<addr_t>(sp);
/* start executing new thread */
unsigned const cpu = _location.xpos();
Native_utcb &utcb = *Thread::myself()->utcb();
/* reset capability counter */
@@ -223,16 +181,20 @@ void Platform_thread::start(void * const ip, void * const sp)
utcb.cap_add(Capability_space::capid(_pd.parent()));
utcb.cap_add(Capability_space::capid(_utcb._ds));
}
Kernel::start_thread(*_kobj, cpu, _pd.kernel_pd(), *(Native_utcb*)_utcb._core_addr);
Kernel::start_thread(*_kobj, _pd.kernel_pd(),
*(Native_utcb*)_utcb.core_addr);
}
void Platform_thread::pager(Pager_object &pager)
void Platform_thread::pager(Pager_object &po)
{
using namespace Kernel;
thread_pager(*_kobj, Capability_space::capid(pager.cap()));
_pager = &pager;
po.with_pager([&] (Platform_thread &pt) {
thread_pager(*_kobj, *pt._kobj,
Capability_space::capid(po.cap())); });
_pager = &po;
}
@@ -278,3 +240,9 @@ void Platform_thread::restart()
{
Kernel::restart_thread(Capability_space::capid(_kobj.cap()));
}
void Platform_thread::fault_resolved(Untyped_capability cap, bool resolved)
{
Kernel::ack_pager_signal(Capability_space::capid(cap), *_kobj, resolved);
}

View File

@@ -19,6 +19,7 @@
#include <base/ram_allocator.h>
#include <base/thread.h>
#include <base/trace/types.h>
#include <base/rpc_server.h>
/* base-internal includes */
#include <base/internal/native_utcb.h>
@@ -26,6 +27,7 @@
/* core includes */
#include <address_space.h>
#include <object.h>
#include <dataspace_component.h>
/* kernel includes */
#include <kernel/core_interface.h>
@@ -55,17 +57,59 @@ class Core::Platform_thread : Noncopyable
using Label = String<32>;
struct Utcb
struct Utcb : Noncopyable
{
struct {
Ram_allocator *_ram_ptr = nullptr;
Region_map *_core_rm_ptr = nullptr;
};
Ram_dataspace_capability _ds { }; /* UTCB ds of non-core threads */
addr_t const _core_addr; /* UTCB address within core/kernel */
addr_t const core_addr; /* UTCB address within core/kernel */
addr_t const phys_addr;
Ram_dataspace_capability _allocate_utcb(bool core_thread);
addr_t _core_local_address(addr_t utcb_addr, bool core_thread);
/*
* \throw Out_of_ram
* \throw Out_of_caps
*/
Ram_dataspace_capability _allocate(Ram_allocator &ram)
{
return ram.alloc(sizeof(Native_utcb), CACHED);
}
Utcb(addr_t pd_addr, bool core_thread);
~Utcb();
addr_t _attach(Region_map &);
static addr_t _ds_phys(Rpc_entrypoint &ep, Dataspace_capability ds)
{
return ep.apply(ds, [&] (Dataspace_component *dsc) {
return dsc ? dsc->phys_addr() : 0; });
}
/**
* Constructor used for core-local threads
*/
Utcb(addr_t core_addr);
/**
* Constructor used for threads outside of core
*/
Utcb(Rpc_entrypoint &ep, Ram_allocator &ram, Region_map &core_rm)
:
_core_rm_ptr(&core_rm),
_ds(_allocate(ram)),
core_addr(_attach(core_rm)),
phys_addr(_ds_phys(ep, _ds))
{ }
~Utcb()
{
if (_core_rm_ptr)
_core_rm_ptr->detach(core_addr);
if (_ram_ptr && _ds.valid())
_ram_ptr->free(_ds);
}
};
Label const _label;
@@ -126,7 +170,8 @@ class Core::Platform_thread : Noncopyable
* \param virt_prio unscaled processor-scheduling priority
* \param utcb core local pointer to userland stack
*/
Platform_thread(Platform_pd &, size_t const quota, Label const &label,
Platform_thread(Platform_pd &, Rpc_entrypoint &, Ram_allocator &,
Region_map &, size_t const quota, Label const &label,
unsigned const virt_prio, Affinity::Location,
addr_t const utcb);
@@ -171,6 +216,8 @@ class Core::Platform_thread : Noncopyable
void restart();
void fault_resolved(Untyped_capability, bool);
/**
* Pause this thread
*/

View File

@@ -1,94 +0,0 @@
/*
* \brief RM- and pager implementations specific for base-hw and core
* \author Martin Stein
* \author Stefan Kalkowski
* \date 2012-02-12
*/
/*
* Copyright (C) 2012-2017 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
/* base-hw core includes */
#include <pager.h>
#include <platform_pd.h>
#include <platform_thread.h>
using namespace Core;
void Pager_entrypoint::entry()
{
Untyped_capability cap;
while (1) {
if (cap.valid()) Kernel::ack_signal(Capability_space::capid(cap));
/* receive fault */
if (Kernel::await_signal(Capability_space::capid(_kobj.cap()))) continue;
Pager_object *po = *(Pager_object**)Thread::myself()->utcb()->data();
cap = po->cap();
if (!po) continue;
/* fetch fault data */
Platform_thread * const pt = (Platform_thread *)po->badge();
if (!pt) {
warning("failed to get platform thread of faulter");
continue;
}
if (pt->exception_state() ==
Kernel::Thread::Exception_state::EXCEPTION) {
if (!po->submit_exception_signal())
warning("unresolvable exception: "
"pd='", pt->pd().label(), "', "
"thread='", pt->label(), "', "
"ip=", Hex(pt->state().cpu.ip));
continue;
}
_fault = pt->fault_info();
/* try to resolve fault directly via local region managers */
if (po->pager(*this) == Pager_object::Pager_result::STOP)
continue;
/* apply mapping that was determined by the local region managers */
{
Locked_ptr<Address_space> locked_ptr(pt->address_space());
if (!locked_ptr.valid()) continue;
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
Cache cacheable = Genode::CACHED;
if (!_mapping.cached)
cacheable = Genode::UNCACHED;
if (_mapping.write_combined)
cacheable = Genode::WRITE_COMBINED;
Hw::Page_flags const flags {
.writeable = _mapping.writeable ? Hw::RW : Hw::RO,
.executable = _mapping.executable ? Hw::EXEC : Hw::NO_EXEC,
.privileged = Hw::USER,
.global = Hw::NO_GLOBAL,
.type = _mapping.io_mem ? Hw::DEVICE : Hw::RAM,
.cacheable = cacheable
};
as->insert_translation(_mapping.dst_addr, _mapping.src_addr,
1UL << _mapping.size_log2, flags);
}
/* let pager object go back to no-fault state */
po->wake_up();
}
}
void Mapping::prepare_map_operation() const { }

View File

@@ -19,7 +19,7 @@
/* core includes */
#include <object.h>
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <assertion.h>
namespace Core {

View File

@@ -23,32 +23,35 @@
using namespace Kernel;
extern "C" void kernel_to_user_context_switch(Cpu::Context*, Cpu::Fpu_context*);
extern "C" void kernel_to_user_context_switch(Core::Cpu::Context*,
Core::Cpu::Fpu_context*);
void Thread::_call_suspend() { }
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
using Ctx = Core::Cpu::Context;
switch (regs->cpu_exception) {
case Cpu::Context::SUPERVISOR_CALL:
case Ctx::SUPERVISOR_CALL:
_call();
return;
case Cpu::Context::PREFETCH_ABORT:
case Cpu::Context::DATA_ABORT:
case Ctx::PREFETCH_ABORT:
case Ctx::DATA_ABORT:
_mmu_exception();
return;
case Cpu::Context::INTERRUPT_REQUEST:
case Cpu::Context::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool, cpu.id());
case Ctx::INTERRUPT_REQUEST:
case Ctx::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool);
return;
case Cpu::Context::UNDEFINED_INSTRUCTION:
case Ctx::UNDEFINED_INSTRUCTION:
Genode::raw(*this, ": undefined instruction at ip=",
Genode::Hex(regs->ip));
_die();
return;
case Cpu::Context::RESET:
case Ctx::RESET:
return;
default:
Genode::raw(*this, ": triggered an unknown exception ",
@@ -71,17 +74,17 @@ void Kernel::Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
void Cpu::Halt_job::proceed() { }
void Thread::proceed(Cpu & cpu)
void Thread::proceed()
{
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(pd().mmu_regs);
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(pd().mmu_regs);
regs->cpu_exception = cpu.stack_start();
kernel_to_user_context_switch((static_cast<Cpu::Context*>(&*regs)),
(static_cast<Cpu::Fpu_context*>(&*regs)));
regs->cpu_exception = _cpu().stack_start();
kernel_to_user_context_switch((static_cast<Core::Cpu::Context*>(&*regs)),
(static_cast<Core::Cpu::Fpu_context*>(&*regs)));
}

View File

@@ -16,12 +16,11 @@
/* core includes */
#include <platform.h>
#include <platform_pd.h>
#include <platform_services.h>
#include <core_env.h>
#include <core_service.h>
#include <map_local.h>
#include <vm_root.h>
#include <platform.h>
using namespace Core;
@@ -32,11 +31,13 @@ extern addr_t hypervisor_exception_vector;
/*
* Add ARM virtualization specific vm service
*/
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sh,
Registry<Service> &services,
Core::Trace::Source_registry &trace_sources,
Ram_allocator &)
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sh,
Registry<Service> &services,
Trace::Source_registry &trace_sources,
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &)
{
map_local(Platform::core_phys_addr((addr_t)&hypervisor_exception_vector),
Hw::Mm::hypervisor_exception_vector().base,
@@ -51,8 +52,7 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
Hw::Mm::hypervisor_stack().size / get_page_size(),
Hw::PAGE_FLAGS_KERN_DATA);
static Vm_root vm_root(ep, sh, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, sh, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm_service(services, vm_root);
},
[&] (Range_allocator::Alloc_error) {

View File

@@ -14,15 +14,11 @@
/* Genode includes */
#include <util/construct_at.h>
/* base internal includes */
#include <base/internal/unmanaged_singleton.h>
/* core includes */
#include <kernel/core_interface.h>
#include <vm_session_component.h>
#include <platform.h>
#include <cpu_thread_component.h>
#include <core_env.h>
using namespace Core;
@@ -87,29 +83,14 @@ void * Vm_session_component::_alloc_table()
}
using Vmid_allocator = Bit_allocator<256>;
static Vmid_allocator &alloc()
{
static Vmid_allocator * allocator = nullptr;
if (!allocator) {
allocator = unmanaged_singleton<Vmid_allocator>();
/* reserve VM ID 0 for the hypervisor */
addr_t id = allocator->alloc();
assert (id == 0);
}
return *allocator;
}
Genode::addr_t Vm_session_component::_alloc_vcpu_data(Genode::addr_t ds_addr)
{
return ds_addr;
}
Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
Vm_session_component::Vm_session_component(Vmid_allocator & vmid_alloc,
Rpc_entrypoint &ds_ep,
Resources resources,
Label const &,
Diag,
@@ -127,7 +108,8 @@ Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
_table(*construct_at<Board::Vm_page_table>(_alloc_table())),
_table_array(*(new (cma()) Board::Vm_page_table_array([] (void * virt) {
return (addr_t)cma().phys_addr(virt);}))),
_id({(unsigned)alloc().alloc(), cma().phys_addr(&_table)})
_vmid_alloc(vmid_alloc),
_id({(unsigned)_vmid_alloc.alloc(), cma().phys_addr(&_table)})
{
/* configure managed VM area */
_map.add_range(0, 0UL - 0x1000);
@@ -162,5 +144,5 @@ Vm_session_component::~Vm_session_component()
/* free guest-to-host page tables */
destroy(platform().core_mem_alloc(), &_table);
destroy(platform().core_mem_alloc(), &_table_array);
alloc().free(_id.id);
_vmid_alloc.free(_id.id);
}

View File

@@ -28,14 +28,13 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
affinity(cpu);
/* once constructed, exit with a startup exception */
pause();
_state.cpu_exception = Genode::VCPU_EXCEPTION_STARTUP;
@@ -46,12 +45,12 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Vm::~Vm() {}
void Vm::exception(Cpu & cpu)
void Vm::exception()
{
switch(_state.cpu_exception) {
case Genode::Cpu_state::INTERRUPT_REQUEST: [[fallthrough]];
case Genode::Cpu_state::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
return;
case Genode::Cpu_state::DATA_ABORT:
_state.dfar = Cpu::Dfar::read();
@@ -69,19 +68,19 @@ bool secure_irq(unsigned const i);
extern "C" void monitor_mode_enter_normal_world(Genode::Vcpu_state&, void*);
void Vm::proceed(Cpu & cpu)
void Vm::proceed()
{
unsigned const irq = _state.irq_injection;
if (irq) {
if (cpu.pic().secure(irq)) {
if (_cpu().pic().secure(irq)) {
Genode::raw("Refuse to inject secure IRQ into VM");
} else {
cpu.pic().trigger(irq);
_cpu().pic().trigger(irq);
_state.irq_injection = 0;
}
}
monitor_mode_enter_normal_world(_state, (void*) cpu.stack_start());
monitor_mode_enter_normal_world(_state, (void*) _cpu().stack_start());
}

View File

@@ -17,7 +17,6 @@
/* core includes */
#include <platform.h>
#include <platform_services.h>
#include <core_env.h>
#include <core_service.h>
#include <vm_root.h>
#include <map_local.h>
@@ -29,11 +28,13 @@ extern int monitor_mode_exception_vector;
/*
* Add TrustZone specific vm service
*/
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sliced_heap,
Registry<Service> &local_services,
Core::Trace::Source_registry &trace_sources,
Ram_allocator &)
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sliced_heap,
Registry<Service> &services,
Trace::Source_registry &trace_sources,
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &)
{
static addr_t const phys_base =
Platform::core_phys_addr((addr_t)&monitor_mode_exception_vector);
@@ -41,8 +42,7 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
map_local(phys_base, Hw::Mm::system_exception_vector().base, 1,
Hw::PAGE_FLAGS_KERN_TEXT);
static Vm_root vm_root(ep, sliced_heap, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, sliced_heap, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm_service(local_services, vm_root);
static Core_service<Vm_session_component> vm_service(services, vm_root);
}

View File

@@ -58,7 +58,7 @@ Genode::addr_t Vm_session_component::_alloc_vcpu_data(Genode::addr_t ds_addr)
}
Vm_session_component::Vm_session_component(Rpc_entrypoint &ep,
Vm_session_component::Vm_session_component(Vmid_allocator &vmids, Rpc_entrypoint &ep,
Resources resources,
Label const &,
Diag,
@@ -74,6 +74,7 @@ Vm_session_component::Vm_session_component(Rpc_entrypoint &ep,
_region_map(region_map),
_table(*construct_at<Board::Vm_page_table>(_alloc_table())),
_table_array(dummy_array()),
_vmid_alloc(vmids),
_id({id_alloc++, nullptr})
{
if (_id.id) {

View File

@@ -101,7 +101,7 @@ void Board::Vcpu_context::Vm_irq::handle(Vm & vm, unsigned irq) {
void Board::Vcpu_context::Vm_irq::occurred()
{
Vm *vm = dynamic_cast<Vm*>(&_cpu.scheduled_job());
Vm *vm = dynamic_cast<Vm*>(&_cpu.current_context());
if (!vm) Genode::raw("VM interrupt while VM is not running!");
else handle(*vm, _irq_nr);
}
@@ -140,14 +140,13 @@ Kernel::Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
affinity(cpu);
/* once constructed, exit with a startup exception */
pause();
_state.cpu_exception = Genode::VCPU_EXCEPTION_STARTUP;
@@ -164,29 +163,29 @@ Kernel::Vm::~Vm()
}
void Kernel::Vm::exception(Cpu & cpu)
void Kernel::Vm::exception()
{
switch(_state.cpu_exception) {
case Genode::Cpu_state::INTERRUPT_REQUEST:
case Genode::Cpu_state::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
break;
default:
pause();
_context.submit(1);
}
if (cpu.pic().ack_virtual_irq(_vcpu_context.pic))
if (_cpu().pic().ack_virtual_irq(_vcpu_context.pic))
inject_irq(Board::VT_MAINTAINANCE_IRQ);
_vcpu_context.vtimer_irq.disable();
}
void Kernel::Vm::proceed(Cpu & cpu)
void Kernel::Vm::proceed()
{
if (_state.timer.irq) _vcpu_context.vtimer_irq.enable();
cpu.pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
_cpu().pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
/*
* the following values have to be enforced by the hypervisor
@@ -202,7 +201,7 @@ void Kernel::Vm::proceed(Cpu & cpu)
_state.esr_el2 = Cpu::Hstr::init();
_state.hpfar_el2 = Cpu::Hcr::init();
Hypervisor::switch_world(_state, host_context(cpu));
Hypervisor::switch_world(_state, host_context(_cpu()));
}

View File

@@ -27,7 +27,7 @@ using namespace Kernel;
void Thread::_call_suspend() { }
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
switch (regs->exception_type) {
case Cpu::RESET: return;
@@ -35,7 +35,7 @@ void Thread::exception(Cpu & cpu)
case Cpu::IRQ_LEVEL_EL1: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL0: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL1:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
return;
case Cpu::SYNC_LEVEL_EL0: [[fallthrough]];
case Cpu::SYNC_LEVEL_EL1:
@@ -94,51 +94,51 @@ void Kernel::Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
void Cpu::Halt_job::proceed() { }
bool Kernel::Pd::invalidate_tlb(Cpu & cpu, addr_t addr, size_t size)
{
using namespace Genode;
bool Kernel::Pd::invalidate_tlb(Cpu & cpu, addr_t addr, size_t size)
{
using namespace Genode;
/* only apply to the active cpu */
if (cpu.id() != Cpu::executing_id())
return false;
/* only apply to the active cpu */
if (cpu.id() != Cpu::executing_id())
return false;
/**
* The kernel part of the address space is mapped as global
* therefore we have to invalidate it differently
*/
if (addr >= Hw::Mm::supervisor_exception_vector().base) {
for (addr_t end = addr+size; addr < end; addr += get_page_size())
asm volatile ("tlbi vaae1is, %0" :: "r" (addr >> 12));
return false;
}
/**
* Too big mappings will result in long running invalidation loops,
* just invalidate the whole tlb for the ASID then.
*/
if (size > 8 * get_page_size()) {
asm volatile ("tlbi aside1is, %0"
:: "r" ((uint64_t)mmu_regs.id() << 48));
return false;
}
/**
* The kernel part of the address space is mapped as global
* therefore we have to invalidate it differently
*/
if (addr >= Hw::Mm::supervisor_exception_vector().base) {
for (addr_t end = addr+size; addr < end; addr += get_page_size())
asm volatile ("tlbi vaae1is, %0" :: "r" (addr >> 12));
asm volatile ("tlbi vae1is, %0"
:: "r" (addr >> 12 | (uint64_t)mmu_regs.id() << 48));
return false;
}
/**
* Too big mappings will result in long running invalidation loops,
* just invalidate the whole tlb for the ASID then.
*/
if (size > 8 * get_page_size()) {
asm volatile ("tlbi aside1is, %0"
:: "r" ((uint64_t)mmu_regs.id() << 48));
return false;
}
for (addr_t end = addr+size; addr < end; addr += get_page_size())
asm volatile ("tlbi vae1is, %0"
:: "r" (addr >> 12 | (uint64_t)mmu_regs.id() << 48));
return false;
}
void Thread::proceed()
{
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(pd().mmu_regs);
void Thread::proceed(Cpu & cpu)
{
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(pd().mmu_regs);
kernel_to_user_context_switch((static_cast<Cpu::Context*>(&*regs)),
(void*)cpu.stack_start());
kernel_to_user_context_switch((static_cast<Core::Cpu::Context*>(&*regs)),
(void*)_cpu().stack_start());
}

View File

@@ -76,7 +76,7 @@ void Board::Vcpu_context::Vm_irq::handle(Vm & vm, unsigned irq) {
void Board::Vcpu_context::Vm_irq::occurred()
{
Vm *vm = dynamic_cast<Vm*>(&_cpu.scheduled_job());
Vm *vm = dynamic_cast<Vm*>(&_cpu.current_context());
if (!vm) Genode::raw("VM interrupt while VM is not running!");
else handle(*vm, _irq_nr);
}
@@ -115,15 +115,13 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
affinity(cpu);
_state.id_aa64isar0_el1 = Cpu::Id_aa64isar0_el1::read();
_state.id_aa64isar1_el1 = Cpu::Id_aa64isar1_el1::read();
_state.id_aa64mmfr0_el1 = Cpu::Id_aa64mmfr0_el1::read();
@@ -167,14 +165,14 @@ Vm::~Vm()
}
void Vm::exception(Cpu & cpu)
void Vm::exception()
{
switch (_state.exception_type) {
case Cpu::IRQ_LEVEL_EL0: [[fallthrough]];
case Cpu::IRQ_LEVEL_EL1: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL0: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL1:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
break;
case Cpu::SYNC_LEVEL_EL0: [[fallthrough]];
case Cpu::SYNC_LEVEL_EL1: [[fallthrough]];
@@ -188,17 +186,17 @@ void Vm::exception(Cpu & cpu)
" not implemented!");
};
if (cpu.pic().ack_virtual_irq(_vcpu_context.pic))
if (_cpu().pic().ack_virtual_irq(_vcpu_context.pic))
inject_irq(Board::VT_MAINTAINANCE_IRQ);
_vcpu_context.vtimer_irq.disable();
}
void Vm::proceed(Cpu & cpu)
void Vm::proceed()
{
if (_state.timer.irq) _vcpu_context.vtimer_irq.enable();
cpu.pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
_cpu().pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
/*
* the following values have to be enforced by the hypervisor
@@ -208,7 +206,7 @@ void Vm::proceed(Cpu & cpu)
Cpu::Vttbr_el2::Asid::set(vttbr_el2, _id.id);
addr_t guest = Hw::Mm::el2_addr(&_state);
addr_t pic = Hw::Mm::el2_addr(&_vcpu_context.pic);
addr_t host = Hw::Mm::el2_addr(&host_context(cpu));
addr_t host = Hw::Mm::el2_addr(&host_context(_cpu()));
Hypervisor::switch_world(guest, host, pic, vttbr_el2);
}

View File

@@ -49,6 +49,10 @@ using namespace Kernel;
CALL_4_FILL_ARG_REGS \
register Call_arg arg_4_reg asm("a4") = arg_4;
#define CALL_6_FILL_ARG_REGS \
CALL_5_FILL_ARG_REGS \
register Call_arg arg_5_reg asm("a5") = arg_5;
extern Genode::addr_t _kernel_entry;
/*
@@ -75,6 +79,7 @@ extern Genode::addr_t _kernel_entry;
#define CALL_3_SWI CALL_2_SWI, "r" (arg_2_reg)
#define CALL_4_SWI CALL_3_SWI, "r" (arg_3_reg)
#define CALL_5_SWI CALL_4_SWI, "r" (arg_4_reg)
#define CALL_6_SWI CALL_5_SWI, "r" (arg_5_reg)
/******************
@@ -137,3 +142,16 @@ Call_ret Kernel::call(Call_arg arg_0,
asm volatile(CALL_5_SWI : "ra");
return arg_0_reg;
}
Call_ret Kernel::call(Call_arg arg_0,
Call_arg arg_1,
Call_arg arg_2,
Call_arg arg_3,
Call_arg arg_4,
Call_arg arg_5)
{
CALL_6_FILL_ARG_REGS
asm volatile(CALL_6_SWI : "ra");
return arg_0_reg;
}

View File

@@ -25,21 +25,21 @@ void Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
void Cpu::Halt_job::proceed() { }
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
using Context = Core::Cpu::Context;
using Stval = Core::Cpu::Stval;
if (regs->is_irq()) {
/* cpu-local timer interrupt */
if (regs->irq() == cpu.timer().interrupt_id()) {
cpu.handle_if_cpu_local_interrupt(cpu.timer().interrupt_id());
if (regs->irq() == _cpu().timer().interrupt_id()) {
_cpu().handle_if_cpu_local_interrupt(_cpu().timer().interrupt_id());
} else {
/* interrupt controller */
_interrupt(_user_irq_pool, 0);
_interrupt(_user_irq_pool);
}
return;
}
@@ -113,7 +113,7 @@ void Kernel::Thread::_call_cache_line_size()
}
void Kernel::Thread::proceed(Cpu & cpu)
void Kernel::Thread::proceed()
{
/*
* The sstatus register defines to which privilege level
@@ -123,8 +123,8 @@ void Kernel::Thread::proceed(Cpu & cpu)
Cpu::Sstatus::Spp::set(v, (type() == USER) ? 0 : 1);
Cpu::Sstatus::write(v);
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(_pd->mmu_regs);
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(_pd->mmu_regs);
asm volatile("csrw sscratch, %1 \n"
"mv x31, %0 \n"

View File

@@ -55,9 +55,9 @@ void Kernel::Thread::Flush_and_stop_cpu::execute(Cpu &cpu)
}
void Kernel::Cpu::Halt_job::Halt_job::proceed(Kernel::Cpu &cpu)
void Kernel::Cpu::Halt_job::Halt_job::proceed()
{
switch (cpu.state()) {
switch (_cpu().state()) {
case HALT:
while (true) {
asm volatile ("hlt"); }
@@ -83,7 +83,7 @@ void Kernel::Cpu::Halt_job::Halt_job::proceed(Kernel::Cpu &cpu)
/* adhere to ACPI specification */
asm volatile ("wbinvd" : : : "memory");
fadt.suspend(cpu.suspend.typ_a, cpu.suspend.typ_b);
fadt.suspend(_cpu().suspend.typ_a, _cpu().suspend.typ_b);
Genode::raw("kernel: unexpected resume");
});
@@ -143,7 +143,7 @@ void Kernel::Thread::_call_suspend()
/* single core CPU case */
if (cpu_count == 1) {
/* current CPU triggers final ACPI suspend outside kernel lock */
_cpu->next_state_suspend();
_cpu().next_state_suspend();
return;
}
@@ -176,12 +176,12 @@ void Kernel::Thread::_call_cache_line_size()
}
void Kernel::Thread::proceed(Cpu & cpu)
void Kernel::Thread::proceed()
{
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(pd().mmu_regs);
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(pd().mmu_regs);
cpu.switch_to(*regs);
_cpu().switch_to(*regs);
asm volatile("fxrstor (%1) \n"
"mov %0, %%rsp \n"

View File

@@ -20,7 +20,7 @@
using namespace Kernel;
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
using Genode::Cpu_state;
@@ -45,7 +45,7 @@ void Thread::exception(Cpu & cpu)
if (regs->trapno >= Cpu_state::INTERRUPTS_START &&
regs->trapno <= Cpu_state::INTERRUPTS_END) {
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
return;
}

View File

@@ -47,6 +47,8 @@ Local_interrupt_controller(Global_interrupt_controller &global_irq_ctrl)
void Local_interrupt_controller::init()
{
using Hw::outb;
/* Start initialization sequence in cascade mode */
outb(PIC_CMD_MASTER, 0x11);
outb(PIC_CMD_SLAVE, 0x11);

View File

@@ -1,80 +0,0 @@
/*
* \brief Timer driver for core
* \author Adrian-Ken Rueegsegger
* \author Reto Buerki
* \date 2015-02-06
*/
/*
* Copyright (C) 2015-2017 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _SRC__CORE__SPEC__ARM__PIT_H_
#define _SRC__CORE__SPEC__ARM__PIT_H_
/* Genode includes */
#include <util/mmio.h>
#include <base/stdint.h>
/* core includes */
#include <port_io.h>
namespace Board { class Timer; }
/**
* LAPIC-based timer driver for core
*/
struct Board::Timer: Genode::Mmio<Hw::Cpu_memory_map::LAPIC_SIZE>
{
enum {
/* PIT constants */
PIT_TICK_RATE = 1193182ul,
PIT_SLEEP_MS = 50,
PIT_SLEEP_TICS = (PIT_TICK_RATE / 1000) * PIT_SLEEP_MS,
PIT_CH0_DATA = 0x40,
PIT_CH2_DATA = 0x42,
PIT_CH2_GATE = 0x61,
PIT_MODE = 0x43,
};
/* Timer registers */
struct Tmr_lvt : Register<0x320, 32>
{
struct Vector : Bitfield<0, 8> { };
struct Delivery : Bitfield<8, 3> { };
struct Mask : Bitfield<16, 1> { };
struct Timer_mode : Bitfield<17, 2> { };
};
struct Tmr_initial : Register <0x380, 32> { };
struct Tmr_current : Register <0x390, 32> { };
struct Divide_configuration : Register <0x03e0, 32>
{
struct Divide_value_0_2 : Bitfield<0, 2> { };
struct Divide_value_2_1 : Bitfield<3, 1> { };
struct Divide_value :
Genode::Bitset_2<Divide_value_0_2, Divide_value_2_1>
{
enum { MAX = 6 };
};
};
struct Calibration_failed : Genode::Exception { };
Divide_configuration::access_t divider = 0;
Genode::uint32_t ticks_per_ms = 0;
/* Measure LAPIC timer frequency using PIT channel 2 */
Genode::uint32_t pit_calc_timer_freq(void);
Timer(unsigned);
void init();
};
#endif /* _SRC__CORE__SPEC__ARM__PIT_H_ */

View File

@@ -59,8 +59,8 @@ void Platform::_init_additional_platform_info(Xml_generator &xml)
xml.attribute("vmx", Hw::Virtualization_support::has_vmx());
});
xml.node("tsc", [&] {
xml.attribute("invariant", Hw::Lapic::invariant_tsc());
xml.attribute("freq_khz", Hw::Lapic::tsc_freq());
xml.attribute("invariant", Hw::Tsc::invariant_tsc());
xml.attribute("freq_khz", _boot_info().plat_info.tsc_freq_khz);
});
});
}

View File

@@ -15,9 +15,6 @@
#include <hw/spec/x86_64/x86_64.h>
/* Genode includes */
#include <drivers/timer/util.h>
/* core includes */
#include <kernel/timer.h>
#include <platform.h>
@@ -25,37 +22,9 @@
using namespace Core;
using namespace Kernel;
uint32_t Board::Timer::pit_calc_timer_freq(void)
{
uint32_t t_start, t_end;
/* set channel gate high and disable speaker */
outb(PIT_CH2_GATE, (uint8_t)((inb(0x61) & ~0x02) | 0x01));
/* set timer counter (mode 0, binary count) */
outb(PIT_MODE, 0xb0);
outb(PIT_CH2_DATA, PIT_SLEEP_TICS & 0xff);
outb(PIT_CH2_DATA, PIT_SLEEP_TICS >> 8);
write<Tmr_initial>(~0U);
t_start = read<Tmr_current>();
while ((inb(PIT_CH2_GATE) & 0x20) == 0)
{
asm volatile("pause" : : : "memory");
}
t_end = read<Tmr_current>();
write<Tmr_initial>(0);
return (t_start - t_end) / PIT_SLEEP_MS;
}
Board::Timer::Timer(unsigned)
:
Mmio({(char *)Platform::mmio_to_virt(Hw::Cpu_memory_map::lapic_phys_base()), Mmio::SIZE})
Local_apic(Platform::mmio_to_virt(Hw::Cpu_memory_map::lapic_phys_base()))
{
init();
}
@@ -75,28 +44,10 @@ void Board::Timer::init()
return;
}
/* calibrate LAPIC frequency to fulfill our requirements */
for (Divide_configuration::access_t div = Divide_configuration::Divide_value::MAX;
div && ticks_per_ms < TIMER_MIN_TICKS_PER_MS; div--)
{
if (!div){
raw("Failed to calibrate timer frequency");
throw Calibration_failed();
}
write<Divide_configuration::Divide_value>((uint8_t)div);
/* Calculate timer frequency */
ticks_per_ms = pit_calc_timer_freq();
divider = div;
}
/**
* Disable PIT timer channel. This is necessary since BIOS sets up
* channel 0 to fire periodically.
*/
outb(Board::Timer::PIT_MODE, 0x30);
outb(Board::Timer::PIT_CH0_DATA, 0);
outb(Board::Timer::PIT_CH0_DATA, 0);
Platform::apply_with_boot_info([&](auto const &boot_info) {
ticks_per_ms = boot_info.plat_info.lapic_freq_khz;
divider = boot_info.plat_info.lapic_div;
});
}

View File

@@ -0,0 +1,40 @@
/*
* \brief Timer driver for core
* \author Adrian-Ken Rueegsegger
* \author Reto Buerki
* \date 2015-02-06
*/
/*
* Copyright (C) 2015-2017 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _SRC__CORE__SPEC__ARM__PIT_H_
#define _SRC__CORE__SPEC__ARM__PIT_H_
/* Genode includes */
#include <base/stdint.h>
/* hw includes */
#include <hw/spec/x86_64/apic.h>
namespace Board { class Timer; }
/**
* LAPIC-based timer driver for core
*/
struct Board::Timer: public Hw::Local_apic
{
Divide_configuration::access_t divider = 0;
Genode::uint32_t ticks_per_ms = 0;
Timer(unsigned);
void init();
};
#endif /* _SRC__CORE__SPEC__ARM__PIT_H_ */

View File

@@ -0,0 +1,128 @@
/*
* \brief Vm_session vCPU
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2024-11-26
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__VCPU_H_
#define _CORE__VCPU_H_
/* base includes */
#include <base/attached_dataspace.h>
#include <vm_session/vm_session.h>
/* base-hw includes */
#include <hw_native_vcpu/hw_native_vcpu.h>
#include <kernel/vm.h>
/* core includes */
#include <phys_allocated.h>
#include <region_map_component.h>
namespace Core { struct Vcpu; }
class Core::Vcpu : public Rpc_object<Vm_session::Native_vcpu, Vcpu>
{
private:
struct Data_pages {
uint8_t _[Vcpu_data::size()];
};
Kernel::Vm::Identity &_id;
Rpc_entrypoint &_ep;
Vcpu_data _vcpu_data { };
Kernel_object<Kernel::Vm> _kobj { };
Constrained_ram_allocator &_ram;
Ram_dataspace_capability _ds_cap { };
Region_map &_region_map;
Affinity::Location _location;
Phys_allocated<Data_pages> _vcpu_data_pages;
constexpr size_t vcpu_state_size()
{
return align_addr(sizeof(Board::Vcpu_state),
get_page_size_log2());
}
public:
Vcpu(Kernel::Vm::Identity &id,
Rpc_entrypoint &ep,
Constrained_ram_allocator &constrained_ram_alloc,
Region_map &region_map,
Affinity::Location location)
:
_id(id),
_ep(ep),
_ram(constrained_ram_alloc),
_ds_cap( {_ram.alloc(vcpu_state_size(), Cache::UNCACHED)} ),
_region_map(region_map),
_location(location),
_vcpu_data_pages(ep, constrained_ram_alloc, region_map)
{
Region_map::Attr attr { };
attr.writeable = true;
_vcpu_data.vcpu_state = _region_map.attach(_ds_cap, attr).convert<Vcpu_state *>(
[&] (Region_map::Range range) { return (Vcpu_state *)range.start; },
[&] (Region_map::Attach_error) -> Vcpu_state * {
error("failed to attach VCPU data within core");
return nullptr;
});
if (!_vcpu_data.vcpu_state) {
_ram.free(_ds_cap);
throw Attached_dataspace::Region_conflict();
}
_vcpu_data.virt_area = &_vcpu_data_pages.obj;
_vcpu_data.phys_addr = _vcpu_data_pages.phys_addr();
ep.manage(this);
}
~Vcpu()
{
_region_map.detach((addr_t)_vcpu_data.vcpu_state);
_ram.free(_ds_cap);
_ep.dissolve(this);
}
/*******************************
** Native_vcpu RPC interface **
*******************************/
Capability<Dataspace> state() const { return _ds_cap; }
Native_capability native_vcpu() { return _kobj.cap(); }
void exception_handler(Signal_context_capability handler)
{
using Genode::warning;
if (!handler.valid()) {
warning("invalid signal");
return;
}
if (_kobj.constructed()) {
warning("Cannot register vcpu handler twice");
return;
}
unsigned const cpu = _location.xpos();
if (!_kobj.create(cpu, (void *)&_vcpu_data,
Capability_space::capid(handler), _id))
warning("Cannot instantiate vm kernel object, invalid signal context?");
}
};
#endif /* _CORE__VCPU_H_ */

View File

@@ -22,7 +22,6 @@
#include <cpu.h>
#include <cpu/vcpu_state_virtualization.h>
#include <hw/spec/x86_64/x86_64.h>
#include <spec/x86_64/virtualization/vm_page_table.h>
#include <spec/x86_64/virtualization/svm.h>
#include <spec/x86_64/virtualization/vmx.h>
@@ -34,10 +33,6 @@ namespace Board {
using Vcpu_data = Genode::Vcpu_data;
using Vcpu_state = Genode::Vcpu_state;
enum {
VCPU_MAX = 16
};
enum Platform_exitcodes : uint64_t {
EXIT_NPF = 0xfc,
EXIT_INIT = 0xfd,

View File

@@ -267,7 +267,7 @@ void Vmcb::write_vcpu_state(Vcpu_state &state)
/* Guest activity state (actv) not used by SVM */
state.actv_state.set_charged();
state.tsc.charge(Hw::Lapic::rdtsc());
state.tsc.charge(Hw::Tsc::rdtsc());
state.tsc_offset.charge(v.read<Vmcb_buf::Tsc_offset>());
state.efer.charge(v.read<Vmcb_buf::Efer>());

View File

@@ -41,15 +41,12 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(*data.vcpu_state),
_context(context),
_id(id),
_vcpu_context(id.id, data)
{
affinity(cpu);
}
_vcpu_context(id.id, data) { }
Vm::~Vm()
@@ -57,10 +54,10 @@ Vm::~Vm()
}
void Vm::proceed(Cpu & cpu)
void Vm::proceed()
{
using namespace Board;
cpu.switch_to(*_vcpu_context.regs);
_cpu().switch_to(*_vcpu_context.regs);
if (_vcpu_context.exit_reason == EXIT_INIT) {
_vcpu_context.regs->trapno = TRAP_VMSKIP;
@@ -83,7 +80,7 @@ void Vm::proceed(Cpu & cpu)
}
void Vm::exception(Cpu & cpu)
void Vm::exception()
{
using namespace Board;
@@ -121,18 +118,18 @@ void Vm::exception(Cpu & cpu)
* it needs to handle an exit.
*/
if (_vcpu_context.exit_reason == EXIT_PAUSED)
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
else
pause = true;
break;
case Cpu_state::INTERRUPTS_START ... Cpu_state::INTERRUPTS_END:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
break;
case TRAP_VMSKIP:
/* vCPU is running for the first time */
_vcpu_context.initialize(cpu,
_vcpu_context.initialize(_cpu(),
reinterpret_cast<addr_t>(_id.table));
_vcpu_context.tsc_aux_host = cpu.id();
_vcpu_context.tsc_aux_host = _cpu().id();
/*
* We set the artificial startup exit code, stop the
* vCPU thread and ask the VMM to handle it.
@@ -256,7 +253,7 @@ void Board::Vcpu_context::write_vcpu_state(Vcpu_state &state)
state.r14.charge(regs->r14);
state.r15.charge(regs->r15);
state.tsc.charge(Hw::Lapic::rdtsc());
state.tsc.charge(Hw::Tsc::rdtsc());
tsc_aux_guest = Cpu::Ia32_tsc_aux::read();
state.tsc_aux.charge(tsc_aux_guest);

View File

@@ -599,7 +599,7 @@ void Vmcs::write_vcpu_state(Genode::Vcpu_state &state)
state.actv_state.charge(
static_cast<uint32_t>(read(E_GUEST_ACTIVITY_STATE)));
state.tsc.charge(Hw::Lapic::rdtsc());
state.tsc.charge(Hw::Tsc::rdtsc());
state.tsc_offset.charge(read(E_TSC_OFFSET));
state.efer.charge(read(E_GUEST_IA32_EFER));

View File

@@ -16,7 +16,6 @@
#include <base/service.h>
/* core includes */
#include <core_env.h>
#include <platform.h>
#include <platform_services.h>
#include <vm_root.h>
@@ -30,16 +29,15 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sliced_heap,
Registry<Service> &local_services,
Trace::Source_registry &trace_sources,
Ram_allocator &)
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &io_port_ranges)
{
static Io_port_root io_port_root(*core_env().pd_session(),
platform().io_port_alloc(), sliced_heap);
static Io_port_root io_port_root(io_port_ranges, sliced_heap);
static Vm_root vm_root(ep, sliced_heap, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, sliced_heap, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm_service(local_services, vm_root);
static Core_service<Session_object<Vm_session>> vm_service(local_services, vm_root);
static Core_service<Io_port_session_component>
io_port_ls(local_services, io_port_root);
static Core_service<Io_port_session_component> io_port_ls(local_services, io_port_root);
}

View File

@@ -0,0 +1,234 @@
/*
* \brief SVM VM session component for 'base-hw'
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2024-09-20
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__SVM_VM_SESSION_COMPONENT_H_
#define _CORE__SVM_VM_SESSION_COMPONENT_H_
/* base includes */
#include <base/allocator.h>
#include <base/session_object.h>
#include <base/registry.h>
#include <vm_session/vm_session.h>
#include <dataspace/capability.h>
/* base-hw includes */
#include <spec/x86_64/virtualization/hpt.h>
/* core includes */
#include <cpu_thread_component.h>
#include <region_map_component.h>
#include <kernel/vm.h>
#include <trace/source_registry.h>
#include <vcpu.h>
#include <vmid_allocator.h>
#include <guest_memory.h>
#include <phys_allocated.h>
namespace Core { class Svm_session_component; }
class Core::Svm_session_component
:
public Session_object<Vm_session>
{
private:
using Vm_page_table = Hw::Hpt;
using Vm_page_table_array =
Vm_page_table::Allocator::Array<Kernel::DEFAULT_TRANSLATION_TABLE_MAX>;
/*
* Noncopyable
*/
Svm_session_component(Svm_session_component const &);
Svm_session_component &operator = (Svm_session_component const &);
struct Detach : Region_map_detach
{
Svm_session_component &_session;
Detach(Svm_session_component &session) : _session(session)
{ }
void detach_at(addr_t at) override
{
_session._detach_at(at);
}
void reserve_and_flush(addr_t at) override
{
_session._reserve_and_flush(at);
}
void unmap_region(addr_t base, size_t size) override
{
Genode::error(__func__, " unimplemented ", base, " ", size);
}
} _detach { *this };
Registry<Registered<Vcpu>> _vcpus { };
Rpc_entrypoint &_ep;
Constrained_ram_allocator _constrained_ram_alloc;
Region_map &_region_map;
Heap _heap;
Phys_allocated<Vm_page_table> _table;
Phys_allocated<Vm_page_table_array> _table_array;
Guest_memory _memory;
Vmid_allocator &_vmid_alloc;
Kernel::Vm::Identity _id;
uint8_t _remaining_print_count { 10 };
void _detach_at(addr_t addr)
{
_memory.detach_at(addr,
[&](addr_t vm_addr, size_t size) {
_table.obj.remove_translation(vm_addr, size, _table_array.obj.alloc()); });
}
void _reserve_and_flush(addr_t addr)
{
_memory.reserve_and_flush(addr, [&](addr_t vm_addr, size_t size) {
_table.obj.remove_translation(vm_addr, size, _table_array.obj.alloc()); });
}
public:
Svm_session_component(Vmid_allocator & vmid_alloc,
Rpc_entrypoint &ds_ep,
Resources resources,
Label const &label,
Diag diag,
Ram_allocator &ram_alloc,
Region_map &region_map,
Trace::Source_registry &)
:
Session_object(ds_ep, resources, label, diag),
_ep(ds_ep),
_constrained_ram_alloc(ram_alloc, _ram_quota_guard(), _cap_quota_guard()),
_region_map(region_map),
_heap(_constrained_ram_alloc, region_map),
_table(_ep, _constrained_ram_alloc, _region_map),
_table_array(_ep, _constrained_ram_alloc, _region_map,
[] (Phys_allocated<Vm_page_table_array> &table_array, auto *obj_ptr) {
construct_at<Vm_page_table_array>(obj_ptr, [&] (void *virt) {
return table_array.phys_addr() + ((addr_t) obj_ptr - (addr_t)virt);
});
}),
_memory(_constrained_ram_alloc, region_map),
_vmid_alloc(vmid_alloc),
_id({(unsigned)_vmid_alloc.alloc(), (void *)_table.phys_addr()})
{ }
~Svm_session_component()
{
_vcpus.for_each([&] (Registered<Vcpu> &vcpu) {
destroy(_heap, &vcpu); });
_vmid_alloc.free(_id.id);
}
/**************************
** Vm session interface **
**************************/
void attach(Dataspace_capability cap, addr_t guest_phys, Attach_attr attr) override
{
bool out_of_tables = false;
bool invalid_mapping = false;
auto const &map_fn = [&](addr_t vm_addr, addr_t phys_addr, size_t size) {
Page_flags const pflags { RW, EXEC, USER, NO_GLOBAL, RAM, CACHED };
try {
_table.obj.insert_translation(vm_addr, phys_addr, size, pflags, _table_array.obj.alloc());
} catch(Hw::Out_of_tables &) {
if (_remaining_print_count) {
Genode::error("Translation table needs too much RAM");
_remaining_print_count--;
}
out_of_tables = true;
} catch(...) {
if (_remaining_print_count) {
Genode::error("Invalid mapping ", Genode::Hex(phys_addr), " -> ",
Genode::Hex(vm_addr), " (", size, ")");
}
invalid_mapping = true;
}
};
if (!cap.valid())
throw Invalid_dataspace();
/* check dataspace validity */
_ep.apply(cap, [&] (Dataspace_component *ptr) {
if (!ptr)
throw Invalid_dataspace();
Dataspace_component &dsc = *ptr;
Guest_memory::Attach_result result =
_memory.attach(_detach, dsc, guest_phys, attr, map_fn);
if (out_of_tables)
throw Out_of_ram();
if (invalid_mapping)
throw Invalid_dataspace();
switch (result) {
case Guest_memory::Attach_result::OK : break;
case Guest_memory::Attach_result::INVALID_DS : throw Invalid_dataspace(); break;
case Guest_memory::Attach_result::OUT_OF_RAM : throw Out_of_ram(); break;
case Guest_memory::Attach_result::OUT_OF_CAPS : throw Out_of_caps(); break;
case Guest_memory::Attach_result::REGION_CONFLICT: throw Region_conflict(); break;
}
});
}
void attach_pic(addr_t) override
{ }
void detach(addr_t guest_phys, size_t size) override
{
_memory.detach(guest_phys, size, [&](addr_t vm_addr, size_t size) {
_table.obj.remove_translation(vm_addr, size, _table_array.obj.alloc()); });
}
Capability<Native_vcpu> create_vcpu(Thread_capability tcap) override
{
Affinity::Location vcpu_location;
_ep.apply(tcap, [&] (Cpu_thread_component *ptr) {
if (!ptr) return;
vcpu_location = ptr->platform_thread().affinity();
});
Vcpu &vcpu = *new (_heap)
Registered<Vcpu>(_vcpus,
_id,
_ep,
_constrained_ram_alloc,
_region_map,
vcpu_location);
return vcpu.cap();
}
};
#endif /* _CORE__SVM_VM_SESSION_COMPONENT_H_ */


@@ -1,106 +0,0 @@
/*
* \brief VM page table abstraction between VMX and SVM for x86
* \author Benjamin Lamowski
* \date 2024-04-23
*/
/*
* Copyright (C) 2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__SPEC__PC__VIRTUALIZATION__VM_PAGE_TABLE_H_
#define _CORE__SPEC__PC__VIRTUALIZATION__VM_PAGE_TABLE_H_
#include <base/log.h>
#include <util/construct_at.h>
#include <spec/x86_64/virtualization/ept.h>
#include <spec/x86_64/virtualization/hpt.h>
namespace Board {
using namespace Genode;
struct Vm_page_table
{
/* Both Ept and Hpt need to actually use this allocator */
using Allocator = Genode::Page_table_allocator<1UL << SIZE_LOG2_4KB>;
template <class T, class U>
struct is_same {
static const bool value = false;
};
template <class T>
struct is_same <T, T> {
static const bool value = true;
};
static_assert(is_same<Allocator, Hw::Ept::Allocator>::value,
"Ept uses different allocator");
static_assert(is_same<Allocator, Hw::Hpt::Allocator>::value,
"Hpt uses different allocator");
static constexpr size_t ALIGNM_LOG2 = Hw::SIZE_LOG2_4KB;
enum Virt_type {
VIRT_TYPE_NONE,
VIRT_TYPE_VMX,
VIRT_TYPE_SVM
};
union {
Hw::Ept ept;
Hw::Hpt hpt;
};
void insert_translation(addr_t vo,
addr_t pa,
size_t size,
Page_flags const & flags,
Allocator & alloc)
{
if (virt_type() == VIRT_TYPE_VMX)
ept.insert_translation(vo, pa, size, flags, alloc);
else if (virt_type() == VIRT_TYPE_SVM)
hpt.insert_translation(vo, pa, size, flags, alloc);
}
void remove_translation(addr_t vo, size_t size, Allocator & alloc)
{
if (virt_type() == VIRT_TYPE_VMX)
ept.remove_translation(vo, size, alloc);
else if (virt_type() == VIRT_TYPE_SVM)
hpt.remove_translation(vo, size, alloc);
}
static Virt_type virt_type() {
static Virt_type virt_type { VIRT_TYPE_NONE };
if (virt_type == VIRT_TYPE_NONE) {
if (Hw::Virtualization_support::has_vmx())
virt_type = VIRT_TYPE_VMX;
else if (Hw::Virtualization_support::has_svm())
virt_type = VIRT_TYPE_SVM;
else
error("Failed to detect Virtualization technology");
}
return virt_type;
}
Vm_page_table()
{
if (virt_type() == VIRT_TYPE_VMX)
Genode::construct_at<Hw::Ept>(this);
else if (virt_type() == VIRT_TYPE_SVM)
Genode::construct_at<Hw::Hpt>(this);
}
};
using Vm_page_table_array =
Vm_page_table::Allocator::Array<Kernel::DEFAULT_TRANSLATION_TABLE_MAX>;
};
#endif /* _CORE__SPEC__PC__VIRTUALIZATION__VM_PAGE_TABLE_H_ */


@@ -1,196 +0,0 @@
/*
* \brief VM session component for 'base-hw'
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2015-02-17
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
/* Genode includes */
#include <util/construct_at.h>
/* base internal includes */
#include <base/internal/unmanaged_singleton.h>
/* core includes */
#include <kernel/core_interface.h>
#include <vm_session_component.h>
#include <platform.h>
#include <cpu_thread_component.h>
#include <core_env.h>
using namespace Core;
static Core_mem_allocator & cma() {
return static_cast<Core_mem_allocator&>(platform().core_mem_alloc()); }
void Vm_session_component::_attach(addr_t phys_addr, addr_t vm_addr, size_t size)
{
using namespace Hw;
Page_flags pflags { RW, EXEC, USER, NO_GLOBAL, RAM, CACHED };
try {
_table.insert_translation(vm_addr, phys_addr, size, pflags,
_table_array.alloc());
return;
} catch(Hw::Out_of_tables &) {
Genode::error("Translation table needs to much RAM");
} catch(...) {
Genode::error("Invalid mapping ", Genode::Hex(phys_addr), " -> ",
Genode::Hex(vm_addr), " (", size, ")");
}
}
void Vm_session_component::_attach_vm_memory(Dataspace_component &dsc,
addr_t const vm_addr,
Attach_attr const attribute)
{
_attach(dsc.phys_addr() + attribute.offset, vm_addr, attribute.size);
}
void Vm_session_component::attach_pic(addr_t )
{ }
void Vm_session_component::_detach_vm_memory(addr_t vm_addr, size_t size)
{
_table.remove_translation(vm_addr, size, _table_array.alloc());
}
void * Vm_session_component::_alloc_table()
{
/* get some aligned space for the translation table */
return cma().alloc_aligned(sizeof(Board::Vm_page_table),
Board::Vm_page_table::ALIGNM_LOG2).convert<void *>(
[&] (void *table_ptr) {
return table_ptr; },
[&] (Range_allocator::Alloc_error) -> void * {
/* XXX handle individual error conditions */
error("failed to allocate kernel object");
throw Insufficient_ram_quota(); }
);
}
using Vmid_allocator = Genode::Bit_allocator<256>;
static Vmid_allocator &alloc()
{
static Vmid_allocator * allocator = nullptr;
if (!allocator) {
allocator = unmanaged_singleton<Vmid_allocator>();
/* reserve VM ID 0 for the hypervisor */
addr_t id = allocator->alloc();
assert (id == 0);
}
return *allocator;
}
Genode::addr_t Vm_session_component::_alloc_vcpu_data(Genode::addr_t ds_addr)
{
/*
* XXX these allocations currently leak memory on VM Session
* destruction. This cannot be easily fixed because the
* Core Mem Allocator does not implement free().
*
* Normally we would use constrained_md_ram_alloc to make the allocation,
* but to get the physical address of the pages in virt_area, we need
* to use the Core Mem Allocator.
*/
Vcpu_data * vcpu_data = (Vcpu_data *) cma()
.try_alloc(sizeof(Board::Vcpu_data))
.convert<void *>(
[&](void *ptr) { return ptr; },
[&](Range_allocator::Alloc_error) -> void * {
/* XXX handle individual error conditions */
error("failed to allocate kernel object");
throw Insufficient_ram_quota();
});
vcpu_data->virt_area = cma()
.alloc_aligned(Vcpu_data::size(), 12)
.convert<void *>(
[&](void *ptr) { return ptr; },
[&](Range_allocator::Alloc_error) -> void * {
/* XXX handle individual error conditions */
error("failed to allocate kernel object");
throw Insufficient_ram_quota();
});
vcpu_data->vcpu_state = (Vcpu_state *) ds_addr;
vcpu_data->phys_addr = (addr_t)cma().phys_addr(vcpu_data->virt_area);
return (Genode::addr_t) vcpu_data;
}
Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
Resources resources,
Label const &,
Diag,
Ram_allocator &ram_alloc,
Region_map &region_map,
unsigned,
Trace::Source_registry &)
:
Ram_quota_guard(resources.ram_quota),
Cap_quota_guard(resources.cap_quota),
_ep(ds_ep),
_constrained_md_ram_alloc(ram_alloc, _ram_quota_guard(), _cap_quota_guard()),
_sliced_heap(_constrained_md_ram_alloc, region_map),
_region_map(region_map),
_table(*construct_at<Board::Vm_page_table>(_alloc_table())),
_table_array(*(new (cma()) Board::Vm_page_table_array([] (void * virt) {
return (addr_t)cma().phys_addr(virt);}))),
_id({(unsigned)alloc().alloc(), cma().phys_addr(&_table)})
{
/* configure managed VM area */
_map.add_range(0UL, ~0UL);
}
Vm_session_component::~Vm_session_component()
{
/* detach all regions */
while (true) {
addr_t out_addr = 0;
if (!_map.any_block_addr(&out_addr))
break;
detach_at(out_addr);
}
/* free region in allocator */
for (unsigned i = 0; i < _vcpu_id_alloc; i++) {
if (!_vcpus[i].constructed())
continue;
Vcpu & vcpu = *_vcpus[i];
if (vcpu.ds_cap.valid()) {
_region_map.detach(vcpu.ds_addr);
_constrained_md_ram_alloc.free(vcpu.ds_cap);
}
}
/* free guest-to-host page tables */
destroy(platform().core_mem_alloc(), &_table);
destroy(platform().core_mem_alloc(), &_table_array);
alloc().free(_id.id);
}


@@ -0,0 +1,234 @@
/*
* \brief VMX VM session component for 'base-hw'
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2024-09-20
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__VMX_VM_SESSION_COMPONENT_H_
#define _CORE__VMX_VM_SESSION_COMPONENT_H_
/* base includes */
#include <base/allocator.h>
#include <base/session_object.h>
#include <base/registry.h>
#include <vm_session/vm_session.h>
#include <dataspace/capability.h>
/* base-hw includes */
#include <spec/x86_64/virtualization/ept.h>
/* core includes */
#include <cpu_thread_component.h>
#include <region_map_component.h>
#include <kernel/vm.h>
#include <trace/source_registry.h>
#include <vcpu.h>
#include <vmid_allocator.h>
#include <guest_memory.h>
#include <phys_allocated.h>
namespace Core { class Vmx_session_component; }
class Core::Vmx_session_component
:
public Session_object<Vm_session>
{
private:
using Vm_page_table = Hw::Ept;
using Vm_page_table_array =
Vm_page_table::Allocator::Array<Kernel::DEFAULT_TRANSLATION_TABLE_MAX>;
/*
* Noncopyable
*/
Vmx_session_component(Vmx_session_component const &);
Vmx_session_component &operator = (Vmx_session_component const &);
struct Detach : Region_map_detach
{
Vmx_session_component &_session;
Detach(Vmx_session_component &session) : _session(session)
{ }
void detach_at(addr_t at) override
{
_session._detach_at(at);
}
void reserve_and_flush(addr_t at) override
{
_session._reserve_and_flush(at);
}
void unmap_region(addr_t base, size_t size) override
{
Genode::error(__func__, " unimplemented ", base, " ", size);
}
} _detach { *this };
Registry<Registered<Vcpu>> _vcpus { };
Rpc_entrypoint &_ep;
Constrained_ram_allocator _constrained_ram_alloc;
Region_map &_region_map;
Heap _heap;
Phys_allocated<Vm_page_table> _table;
Phys_allocated<Vm_page_table_array> _table_array;
Guest_memory _memory;
Vmid_allocator &_vmid_alloc;
Kernel::Vm::Identity _id;
uint8_t _remaining_print_count { 10 };
void _detach_at(addr_t addr)
{
_memory.detach_at(addr,
[&](addr_t vm_addr, size_t size) {
_table.obj.remove_translation(vm_addr, size, _table_array.obj.alloc()); });
}
void _reserve_and_flush(addr_t addr)
{
_memory.reserve_and_flush(addr, [&](addr_t vm_addr, size_t size) {
_table.obj.remove_translation(vm_addr, size, _table_array.obj.alloc()); });
}
public:
Vmx_session_component(Vmid_allocator & vmid_alloc,
Rpc_entrypoint &ds_ep,
Resources resources,
Label const &label,
Diag diag,
Ram_allocator &ram_alloc,
Region_map &region_map,
Trace::Source_registry &)
:
Session_object(ds_ep, resources, label, diag),
_ep(ds_ep),
_constrained_ram_alloc(ram_alloc, _ram_quota_guard(), _cap_quota_guard()),
_region_map(region_map),
_heap(_constrained_ram_alloc, region_map),
_table(_ep, _constrained_ram_alloc, _region_map),
_table_array(_ep, _constrained_ram_alloc, _region_map,
[] (Phys_allocated<Vm_page_table_array> &table_array, auto *obj_ptr) {
construct_at<Vm_page_table_array>(obj_ptr, [&] (void *virt) {
return table_array.phys_addr() + ((addr_t) obj_ptr - (addr_t)virt);
});
}),
_memory(_constrained_ram_alloc, region_map),
_vmid_alloc(vmid_alloc),
_id({(unsigned)_vmid_alloc.alloc(), (void *)_table.phys_addr()})
{ }
~Vmx_session_component()
{
_vcpus.for_each([&] (Registered<Vcpu> &vcpu) {
destroy(_heap, &vcpu); });
_vmid_alloc.free(_id.id);
}
/**************************
** Vm session interface **
**************************/
void attach(Dataspace_capability cap, addr_t guest_phys, Attach_attr attr) override
{
bool out_of_tables = false;
bool invalid_mapping = false;
auto const &map_fn = [&](addr_t vm_addr, addr_t phys_addr, size_t size) {
Page_flags const pflags { RW, EXEC, USER, NO_GLOBAL, RAM, CACHED };
try {
_table.obj.insert_translation(vm_addr, phys_addr, size, pflags, _table_array.obj.alloc());
} catch(Hw::Out_of_tables &) {
if (_remaining_print_count) {
Genode::error("Translation table needs too much RAM");
_remaining_print_count--;
}
out_of_tables = true;
} catch(...) {
if (_remaining_print_count) {
Genode::error("Invalid mapping ", Genode::Hex(phys_addr), " -> ",
Genode::Hex(vm_addr), " (", size, ")");
}
invalid_mapping = true;
}
};
if (!cap.valid())
throw Invalid_dataspace();
/* check dataspace validity */
_ep.apply(cap, [&] (Dataspace_component *ptr) {
if (!ptr)
throw Invalid_dataspace();
Dataspace_component &dsc = *ptr;
Guest_memory::Attach_result result =
_memory.attach(_detach, dsc, guest_phys, attr, map_fn);
if (out_of_tables)
throw Out_of_ram();
if (invalid_mapping)
throw Invalid_dataspace();
switch (result) {
case Guest_memory::Attach_result::OK : break;
case Guest_memory::Attach_result::INVALID_DS : throw Invalid_dataspace(); break;
case Guest_memory::Attach_result::OUT_OF_RAM : throw Out_of_ram(); break;
case Guest_memory::Attach_result::OUT_OF_CAPS : throw Out_of_caps(); break;
case Guest_memory::Attach_result::REGION_CONFLICT: throw Region_conflict(); break;
}
});
}
void attach_pic(addr_t) override
{ }
void detach(addr_t guest_phys, size_t size) override
{
_memory.detach(guest_phys, size, [&](addr_t vm_addr, size_t size) {
_table.obj.remove_translation(vm_addr, size, _table_array.obj.alloc()); });
}
Capability<Native_vcpu> create_vcpu(Thread_capability tcap) override
{
Affinity::Location vcpu_location;
_ep.apply(tcap, [&] (Cpu_thread_component *ptr) {
if (!ptr) return;
vcpu_location = ptr->platform_thread().affinity();
});
Vcpu &vcpu = *new (_heap)
Registered<Vcpu>(_vcpus,
_id,
_ep,
_constrained_ram_alloc,
_region_map,
vcpu_location);
return vcpu.cap();
}
};
#endif /* _CORE__VMX_VM_SESSION_COMPONENT_H_ */


@@ -0,0 +1,99 @@
/*
* \brief x86_64 specific Vm root interface
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2012-10-08
*/
/*
* Copyright (C) 2012-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__INCLUDE__VM_ROOT_H_
#define _CORE__INCLUDE__VM_ROOT_H_
/* Genode includes */
#include <root/component.h>
/* Hw includes */
#include <hw/spec/x86_64/x86_64.h>
/* core includes */
#include <virtualization/vmx_session_component.h>
#include <virtualization/svm_session_component.h>
#include <vmid_allocator.h>
namespace Core { class Vm_root; }
class Core::Vm_root : public Root_component<Session_object<Vm_session>>
{
private:
Ram_allocator &_ram_allocator;
Region_map &_local_rm;
Trace::Source_registry &_trace_sources;
Vmid_allocator _vmid_alloc { };
protected:
Session_object<Vm_session> *_create_session(const char *args) override
{
Session::Resources resources = session_resources_from_args(args);
if (Hw::Virtualization_support::has_svm())
return new (md_alloc())
Svm_session_component(_vmid_alloc,
*ep(),
resources,
session_label_from_args(args),
session_diag_from_args(args),
_ram_allocator, _local_rm,
_trace_sources);
if (Hw::Virtualization_support::has_vmx())
return new (md_alloc())
Vmx_session_component(_vmid_alloc,
*ep(),
session_resources_from_args(args),
session_label_from_args(args),
session_diag_from_args(args),
_ram_allocator, _local_rm,
_trace_sources);
Genode::error( "No virtualization support detected.");
throw Core::Service_denied();
}
void _upgrade_session(Session_object<Vm_session> *vm, const char *args) override
{
vm->upgrade(ram_quota_from_args(args));
vm->upgrade(cap_quota_from_args(args));
}
public:
/**
* Constructor
*
* \param session_ep entrypoint managing vm_session components
* \param md_alloc meta-data allocator to be used by root component
*/
Vm_root(Rpc_entrypoint &session_ep,
Allocator &md_alloc,
Ram_allocator &ram_alloc,
Region_map &local_rm,
Trace::Source_registry &trace_sources)
:
Root_component<Session_object<Vm_session>>(&session_ep, &md_alloc),
_ram_allocator(ram_alloc),
_local_rm(local_rm),
_trace_sources(trace_sources)
{ }
};
#endif /* _CORE__INCLUDE__VM_ROOT_H_ */


@@ -0,0 +1,87 @@
/*
* \brief base-hw specific Vm root interface
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2012-10-08
*/
/*
* Copyright (C) 2012-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__INCLUDE__VM_ROOT_H_
#define _CORE__INCLUDE__VM_ROOT_H_
/* Genode includes */
#include <root/component.h>
/* core includes */
#include <vm_session_component.h>
#include <vmid_allocator.h>
namespace Core { class Vm_root; }
class Core::Vm_root : public Root_component<Vm_session_component>
{
private:
Ram_allocator &_ram_allocator;
Region_map &_local_rm;
Trace::Source_registry &_trace_sources;
Vmid_allocator _vmid_alloc { };
protected:
Vm_session_component *_create_session(const char *args) override
{
unsigned priority = 0;
Arg a = Arg_string::find_arg(args, "priority");
if (a.valid()) {
priority = (unsigned)a.ulong_value(0);
/* clamp priority value to valid range */
priority = min((unsigned)Cpu_session::PRIORITY_LIMIT - 1, priority);
}
return new (md_alloc())
Vm_session_component(_vmid_alloc,
*ep(),
session_resources_from_args(args),
session_label_from_args(args),
session_diag_from_args(args),
_ram_allocator, _local_rm, priority,
_trace_sources);
}
void _upgrade_session(Vm_session_component *vm, const char *args) override
{
vm->upgrade(ram_quota_from_args(args));
vm->upgrade(cap_quota_from_args(args));
}
public:
/**
* Constructor
*
* \param session_ep entrypoint managing vm_session components
* \param md_alloc meta-data allocator to be used by root component
*/
Vm_root(Rpc_entrypoint &session_ep,
Allocator &md_alloc,
Ram_allocator &ram_alloc,
Region_map &local_rm,
Trace::Source_registry &trace_sources)
:
Root_component<Vm_session_component>(&session_ep, &md_alloc),
_ram_allocator(ram_alloc),
_local_rm(local_rm),
_trace_sources(trace_sources)
{ }
};
#endif /* _CORE__INCLUDE__VM_ROOT_H_ */


@@ -19,7 +19,6 @@
#include <vm_session_component.h>
#include <platform.h>
#include <cpu_thread_component.h>
#include <core_env.h>
using namespace Core;


@@ -30,6 +30,9 @@
#include <kernel/vm.h>
#include <trace/source_registry.h>
#include <vmid_allocator.h>
namespace Core { class Vm_session_component; }
@@ -88,6 +91,7 @@ class Core::Vm_session_component
Region_map &_region_map;
Board::Vm_page_table &_table;
Board::Vm_page_table_array &_table_array;
Vmid_allocator &_vmid_alloc;
Kernel::Vm::Identity _id;
unsigned _vcpu_id_alloc { 0 };
@@ -113,8 +117,9 @@ class Core::Vm_session_component
using Cap_quota_guard::upgrade;
using Rpc_object<Vm_session, Vm_session_component>::cap;
Vm_session_component(Rpc_entrypoint &, Resources, Label const &,
Diag, Ram_allocator &ram, Region_map &, unsigned,
Vm_session_component(Vmid_allocator &, Rpc_entrypoint &,
Resources, Label const &, Diag,
Ram_allocator &ram, Region_map &, unsigned,
Trace::Source_registry &);
~Vm_session_component();
@@ -136,7 +141,7 @@ class Core::Vm_session_component
void attach_pic(addr_t) override;
void detach(addr_t, size_t) override;
Capability<Native_vcpu> create_vcpu(Thread_capability);
Capability<Native_vcpu> create_vcpu(Thread_capability) override;
};
#endif /* _CORE__VM_SESSION_COMPONENT_H_ */


@@ -0,0 +1,33 @@
/*
* \brief VM ID allocator
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2024-11-21
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__VMID_ALLOCATOR_H_
#define _CORE__VMID_ALLOCATOR_H_
#include <util/bit_allocator.h>
namespace Core { struct Vmid_allocator; }
struct Core::Vmid_allocator
: Genode::Bit_allocator<256>
{
Vmid_allocator()
{
/* reserve VM ID 0 for the hypervisor */
addr_t id = alloc();
assert (id == 0);
}
};
#endif /* _CORE__VMID_ALLOCATOR_H_ */


@@ -94,6 +94,8 @@ struct Hw::Acpi_fadt : Genode::Mmio<276>
struct Smi_cmd : Register<0x30, 32> { };
struct Acpi_enable : Register<0x34, 8> { };
struct Pm_tmr_len : Register< 91, 8> { };
struct Pm1a_cnt_blk : Register < 64, 32> {
struct Slp_typ : Bitfield < 10, 3> { };
struct Slp_ena : Bitfield < 13, 1> { };
@@ -123,6 +125,13 @@ struct Hw::Acpi_fadt : Genode::Mmio<276>
};
struct Pm1b_cnt_blk_ext_addr : Register < 184 + 4, 64> { };
struct X_pm_tmr_blk : Register < 208, 32> {
struct Addressspace : Bitfield < 0, 8> { };
struct Width : Bitfield < 8, 8> { };
};
struct X_pm_tmr_blk_addr : Register < 208 + 4, 64> { };
struct Gpe0_blk_ext : Register < 220, 32> {
struct Addressspace : Bitfield < 0, 8> { };
struct Width : Bitfield < 8, 8> { };
@@ -232,6 +241,45 @@ struct Hw::Acpi_fadt : Genode::Mmio<276>
return pm1_a | pm1_b;
}
/* see ACPI spec version 6.5 4.8.3.3. Power Management Timer (PM_TMR) */
uint32_t read_pm_tmr()
{
if (read<Pm_tmr_len>() != 4)
return 0;
addr_t const tmr_addr = read<X_pm_tmr_blk_addr>();
if (!tmr_addr)
return 0;
uint8_t const tmr_addr_type =
read<Hw::Acpi_fadt::X_pm_tmr_blk::Addressspace>();
/* I/O port address, most likely */
if (tmr_addr_type == 1) return inl((uint16_t)tmr_addr);
/* System Memory space address */
if (tmr_addr_type == 0) return *(uint32_t *)tmr_addr;
return 0;
}
uint32_t calibrate_freq_khz(uint32_t sleep_ms, auto get_value_fn, bool reverse = false)
{
unsigned const acpi_timer_freq = 3'579'545;
uint32_t const initial = read_pm_tmr();
if (!initial) return 0;
uint64_t const t1 = get_value_fn();
while ((read_pm_tmr() - initial) < (acpi_timer_freq * sleep_ms / 1000))
asm volatile ("pause":::"memory");
uint64_t const t2 = get_value_fn();
return (uint32_t)((reverse ? (t1 - t2) : (t2 - t1)) / sleep_ms);
}
void write_cnt_blk(unsigned value_a, unsigned value_b)
{
_write<Pm1_cnt_len, Pm1a_cnt_blk, Pm1a_cnt_blk_ext::Width,


@@ -18,6 +18,9 @@ namespace Hw { class Local_apic; }
#include <hw/spec/x86_64/x86_64.h>
/* Genode includes */
#include <drivers/timer/util.h>
struct Hw::Local_apic : Genode::Mmio<Hw::Cpu_memory_map::LAPIC_SIZE>
{
struct Id : Register<0x020, 32> { };
@@ -58,6 +61,57 @@ struct Hw::Local_apic : Genode::Mmio<Hw::Cpu_memory_map::LAPIC_SIZE>
struct Destination : Bitfield<24, 8> { };
};
/* Timer registers */
struct Tmr_lvt : Register<0x320, 32>
{
struct Vector : Bitfield<0, 8> { };
struct Delivery : Bitfield<8, 3> { };
struct Mask : Bitfield<16, 1> { };
struct Timer_mode : Bitfield<17, 2> { };
};
struct Tmr_initial : Register <0x380, 32> { };
struct Tmr_current : Register <0x390, 32> { };
struct Divide_configuration : Register <0x03e0, 32>
{
struct Divide_value_0_2 : Bitfield<0, 2> { };
struct Divide_value_2_1 : Bitfield<3, 1> { };
struct Divide_value :
Genode::Bitset_2<Divide_value_0_2, Divide_value_2_1>
{
enum { MAX = 6 };
};
};
struct Calibration { uint32_t freq_khz; uint32_t div; };
Calibration calibrate_divider(auto calibration_fn)
{
Calibration result { };
/* calibrate LAPIC frequency to fulfill our requirements */
for (Divide_configuration::access_t div = Divide_configuration::Divide_value::MAX;
div && result.freq_khz < TIMER_MIN_TICKS_PER_MS; div--) {
if (!div) {
raw("Failed to calibrate Local APIC frequency");
return { 0, 1 };
}
write<Divide_configuration::Divide_value>((uint8_t)div);
write<Tmr_initial>(~0U);
/* Calculate timer frequency */
result.freq_khz = calibration_fn();
result.div = div;
write<Tmr_initial>(0);
}
return result;
}
Local_apic(addr_t const addr) : Mmio({(char*)addr, Mmio::SIZE}) {}
};


@@ -118,6 +118,12 @@ struct Hw::X86_64_cpu
/* AMD host save physical address */
X86_64_MSR_REGISTER(Amd_vm_hsavepa, 0xC0010117);
/* Non-architectural MSR used to make lfence serializing */
X86_64_MSR_REGISTER(Amd_lfence, 0xC0011029,
struct Enable_dispatch_serializing : Bitfield<1, 1> { }; /* Enable lfence dispatch serializing */
)
X86_64_MSR_REGISTER(Platform_id, 0x17,
struct Bus_ratio : Bitfield<8, 5> { }; /* Bus ratio on Core 2, see SDM 19.7.3 */
);


@@ -40,10 +40,13 @@ struct Hw::Pc_board::Serial : Genode::X86_uart
struct Hw::Pc_board::Boot_info
{
Acpi_rsdp acpi_rsdp { };
Framebuffer framebuffer { };
Genode::addr_t efi_system_table { 0 };
Genode::addr_t acpi_fadt { 0 };
Acpi_rsdp acpi_rsdp { };
Framebuffer framebuffer { };
Genode::addr_t efi_system_table { 0 };
Genode::addr_t acpi_fadt { 0 };
Genode::uint32_t tsc_freq_khz { 0 };
Genode::uint32_t lapic_freq_khz { 0 };
Genode::uint32_t lapic_div { 0 };
Boot_info() {}
Boot_info(Acpi_rsdp const &acpi_rsdp,


@@ -22,8 +22,8 @@
namespace Hw {
struct Cpu_memory_map;
struct Virtualization_support;
class Vendor;
class Lapic;
class Vendor;
struct Tsc;
}
@@ -107,172 +107,34 @@ public:
};
class Hw::Lapic
struct Hw::Tsc
{
private:
static bool _has_tsc_dl()
{
using Cpu = Hw::X86_64_cpu;
Cpu::Cpuid_1_ecx::access_t ecx = Cpu::Cpuid_1_ecx::read();
return (bool)Cpu::Cpuid_1_ecx::Tsc_deadline::get(ecx);
}
/*
* Adapted from Christian Prochaska's and Alexander Boettcher's
* implementation for Nova.
* Provide serialized access to the Timestamp Counter
*
* For details, see Vol. 3B of the Intel SDM (September 2023):
* 20.7.3 Determining the Processor Base Frequency
* See #5430 for more information.
*/
static unsigned _read_tsc_freq()
static Genode::uint64_t rdtsc()
{
Genode::uint32_t low, high;
asm volatile(
"lfence;"
"rdtsc;"
"lfence;"
: "=a"(low), "=d"(high)
:
: "memory"
);
return (Genode::uint64_t)(high) << 32 | low;
}
static bool invariant_tsc()
{
using Cpu = Hw::X86_64_cpu;
if (Vendor::get_vendor_id() != Vendor::INTEL)
return 0;
unsigned const model = Vendor::get_model();
unsigned const family = Vendor::get_family();
enum
{
Cpu_id_clock = 0x15,
Cpu_id_base_freq = 0x16
};
Cpu::Cpuid_0_eax::access_t eax_0 = Cpu::Cpuid_0_eax::read();
/*
* If CPUID leaf 15 is available, return the frequency reported there.
*/
if (eax_0 >= Cpu_id_clock) {
Cpu::Cpuid_15_eax::access_t eax_15 = Cpu::Cpuid_15_eax::read();
Cpu::Cpuid_15_ebx::access_t ebx_15 = Cpu::Cpuid_15_ebx::read();
Cpu::Cpuid_15_ecx::access_t ecx_15 = Cpu::Cpuid_15_ecx::read();
if (eax_15 && ebx_15) {
if (ecx_15)
return static_cast<unsigned>(
((Genode::uint64_t)(ecx_15) * ebx_15) / eax_15 / 1000
);
if (family == 6) {
if (model == 0x5c) /* Goldmont */
return static_cast<unsigned>((19200ull * ebx_15) / eax_15);
if (model == 0x55) /* Xeon */
return static_cast<unsigned>((25000ull * ebx_15) / eax_15);
}
if (family >= 6)
return static_cast<unsigned>((24000ull * ebx_15) / eax_15);
}
}
/*
* Specific methods for family 6 models
*/
if (family == 6) {
unsigned freq_tsc = 0U;
if (model == 0x2a ||
model == 0x2d || /* Sandy Bridge */
model >= 0x3a) /* Ivy Bridge and later */
{
Cpu::Platform_info::access_t platform_info = Cpu::Platform_info::read();
Genode::uint64_t ratio = Cpu::Platform_info::Ratio::get(platform_info);
freq_tsc = static_cast<unsigned>(ratio * 100000);
} else if (model == 0x1a ||
model == 0x1e ||
model == 0x1f ||
model == 0x2e || /* Nehalem */
model == 0x25 ||
model == 0x2c ||
model == 0x2f) /* Xeon Westmere */
{
Cpu::Platform_info::access_t platform_info = Cpu::Platform_info::read();
Genode::uint64_t ratio = Cpu::Platform_info::Ratio::get(platform_info);
freq_tsc = static_cast<unsigned>(ratio * 133330);
} else if (model == 0x17 || model == 0xf) { /* Core 2 */
Cpu::Fsb_freq::access_t fsb_freq = Cpu::Fsb_freq::read();
Genode::uint64_t freq_bus = Cpu::Fsb_freq::Speed::get(fsb_freq);
switch (freq_bus) {
case 0b101: freq_bus = 100000; break;
case 0b001: freq_bus = 133330; break;
case 0b011: freq_bus = 166670; break;
case 0b010: freq_bus = 200000; break;
case 0b000: freq_bus = 266670; break;
case 0b100: freq_bus = 333330; break;
case 0b110: freq_bus = 400000; break;
default: freq_bus = 0; break;
}
Cpu::Platform_id::access_t platform_id = Cpu::Platform_id::read();
Genode::uint64_t ratio = Cpu::Platform_id::Bus_ratio::get(platform_id);
freq_tsc = static_cast<unsigned>(freq_bus * ratio);
}
if (!freq_tsc)
Genode::warning("TSC: family 6 Intel platform info reports bus frequency of 0");
else
return freq_tsc;
}
/*
* Finally, using Processor Frequency Information for a rough estimate
*/
if (eax_0 >= Cpu_id_base_freq) {
Cpu::Cpuid_16_eax::access_t base_mhz = Cpu::Cpuid_16_eax::read();
if (base_mhz) {
Genode::warning("TSC: using processor base frequency: ", base_mhz, " MHz");
return base_mhz * 1000;
} else {
Genode::warning("TSC: CPUID reported processor base frequency of 0");
}
}
return 0;
Cpu::Cpuid_80000007_eax::access_t eax = Cpu::Cpuid_80000007_eax::read();
return Cpu::Cpuid_80000007_eax::Invariant_tsc::get(eax);
}
static unsigned _measure_tsc_freq()
{
const unsigned Tsc_fixed_value = 2400;
Genode::warning("TSC: calibration not yet implemented, using fixed value of ", Tsc_fixed_value, " MHz");
/* TODO: implement TSC calibration on AMD */
return Tsc_fixed_value * 1000;
}
public:
static Genode::uint64_t rdtsc()
{
Genode::uint32_t low, high;
asm volatile("rdtsc" : "=a"(low), "=d"(high));
return (Genode::uint64_t)(high) << 32 | low;
}
static bool invariant_tsc()
{
using Cpu = Hw::X86_64_cpu;
Cpu::Cpuid_80000007_eax::access_t eax =
Cpu::Cpuid_80000007_eax::read();
return Cpu::Cpuid_80000007_eax::Invariant_tsc::get(eax);
}
static unsigned tsc_freq()
{
unsigned freq = _read_tsc_freq();
if (freq)
return freq;
else
return _measure_tsc_freq();
}
};
struct Hw::Virtualization_support


@@ -21,6 +21,7 @@ namespace Hw {
using Genode::addr_t;
using Genode::size_t;
using Genode::uint32_t;
using Genode::get_page_size;
using Genode::get_page_size_log2;


@@ -27,7 +27,6 @@
using namespace Genode;
using Exit_config = Vm_connection::Exit_config;
using Call_with_state = Vm_connection::Call_with_state;
/****************************
@@ -56,8 +55,7 @@ struct Hw_vcpu : Rpc_client<Vm_session::Native_vcpu>, Noncopyable
Hw_vcpu(Env &, Vm_connection &, Vcpu_handler_base &);
void with_state(Call_with_state &);
void with_state(auto const &);
};
@@ -72,7 +70,7 @@ Hw_vcpu::Hw_vcpu(Env &env, Vm_connection &vm, Vcpu_handler_base &handler)
}
void Hw_vcpu::with_state(Call_with_state &cw)
void Hw_vcpu::with_state(auto const &fn)
{
if (Thread::myself() != _ep_handler) {
error("vCPU state requested outside of vcpu_handler EP");
@@ -80,7 +78,7 @@ void Hw_vcpu::with_state(Call_with_state &cw)
}
Kernel::pause_vm(Capability_space::capid(_kernel_vcpu));
if (cw.call_with_state(_local_state()))
if (fn(_local_state()))
Kernel::run_vm(Capability_space::capid(_kernel_vcpu));
}
@@ -90,8 +88,7 @@ Capability<Vm_session::Native_vcpu> Hw_vcpu::_create_vcpu(Vm_connection &vm,
{
Thread &tep { *reinterpret_cast<Thread *>(&handler.rpc_ep()) };
return vm.with_upgrade([&] {
return vm.call<Vm_session::Rpc_create_vcpu>(tep.cap()); });
return vm.create_vcpu(tep.cap());
}
@@ -99,7 +96,10 @@ Capability<Vm_session::Native_vcpu> Hw_vcpu::_create_vcpu(Vm_connection &vm,
** vCPU API **
**************/
void Vm_connection::Vcpu::_with_state(Call_with_state &cw) { static_cast<Hw_vcpu &>(_native_vcpu).with_state(cw); }
void Vm_connection::Vcpu::_with_state(With_state::Ft const &fn)
{
static_cast<Hw_vcpu &>(_native_vcpu).with_state(fn);
}
Vm_connection::Vcpu::Vcpu(Vm_connection &vm, Allocator &alloc,

Some files were not shown because too many files have changed in this diff.