Merge branch 'staging' of github.com:mmueller41/genode into pc-ixgbe

This commit is contained in:
Michael Mueller
2025-01-28 13:55:39 +01:00
1308 changed files with 22385 additions and 15312 deletions

24.08 → 24.11
=======================
The Genode build system
=======================
Norman Feske
Abstract
########
The Genode OS Framework comes with a custom build system that is designed for
the creation of highly modular and portable systems software. Understanding
its basic concepts is pivotal for using the full potential of the framework.
This document introduces those concepts and the best practices of putting them
to good use. Besides building software components from source code, common
and repetitive development tasks are the testing of individual components
and the integration of those components into complex system scenarios. To
streamline such tasks, the build system is accompanied by special tooling
support. This document introduces those tools.
Build directories and repositories
##################################
The build system is designed to never touch the source tree. The procedure of
building components and integrating them into system scenarios takes place in
a distinct build directory. One build directory targets a specific platform,
i.e., a kernel and hardware architecture. Because the source tree is decoupled
from the build directory, one source tree can have many different build
directories associated with it, each targeted at another platform.
The recommended way of creating a build directory is the use of the
'create_builddir' tool located at '<genode-dir>/tool/'. When started without
arguments, the tool prints its usage information. For creating a new
build directory, one of the listed target platforms must be specified.
Furthermore, the location of the new build directory has to be specified via
the 'BUILD_DIR=' argument. For example:
! cd <genode-dir>
! ./tool/create_builddir linux_x86 BUILD_DIR=/tmp/build.linux_x86
This command will create a new build directory for the Linux/x86 platform
at _/tmp/build.linux_x86/_.
Build-directory configuration via 'build.conf'
==============================================
The fresh build directory will contain a 'Makefile', which is a symlink to
_tool/builddir/build.mk_. This makefile is the front end of the build system
and not supposed to be edited. Besides the makefile, there is an _etc/_
subdirectory that contains the build-directory configuration. For most
platforms, there is only a single _build.conf_ file, which defines the parts of
the Genode source tree incorporated in the build process. Those parts are
called _repositories_.
The repository concept allows for keeping the source code well separated for
different concerns. For example, the platform-specific code for each target
platform is located in a dedicated _base-<platform>_ repository. Also, different
abstraction levels and features of the system reside in different
repositories. The _etc/build.conf_ file defines the set of repositories to
consider in the build process. At build time, the build system overlays the
directory structures of all repositories specified via the 'REPOSITORIES'
declaration to form a single logical source tree. By changing the list of
'REPOSITORIES', the view of the build system on the source tree can be altered.
The _etc/build.conf_ as found in a freshly created build directory will list the
_base-<platform>_ repository of the platform selected at the 'create_builddir'
command line as well as the 'base', 'os', and 'demo' repositories needed for
compiling Genode's default demonstration scenario. Furthermore, there are a
number of commented-out lines that can be uncommented for enabling additional
repositories.
Note that the order of the repositories listed in the 'REPOSITORIES' declaration
is important. Front-most repositories shadow subsequent repositories. This
makes the repository mechanism a powerful tool for tweaking existing repositories:
By adding a custom repository in front of another one, customized versions of
single files (e.g., header files or target description files) can be supplied to
the build system without changing the original repository.
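To make the repository mechanism concrete, the following sketch prints what a
hypothetical _etc/build.conf_ for a Linux/x86 build directory might contain.
The repository set and the shadowing example are illustrative, not taken from
an actual Genode checkout.

```shell
# Sketch of a hypothetical etc/build.conf (illustrative paths and names)
conf=$(mktemp)
cat > "$conf" <<'EOF'
# parts of the source tree to incorporate; front-most entries shadow later ones
REPOSITORIES  = $(GENODE_DIR)/base-linux
REPOSITORIES += $(GENODE_DIR)/base
REPOSITORIES += $(GENODE_DIR)/os
REPOSITORIES += $(GENODE_DIR)/demo

# a custom overlay repository placed in front can override single files
# REPOSITORIES := $(GENODE_DIR)/my-tweaks $(REPOSITORIES)
EOF
cat "$conf"
```

Prepending a repository, as in the commented-out line, would supply customized
versions of single files without touching the original repositories.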
Building targets
================
To build all targets contained in the list of 'REPOSITORIES' as defined in
_etc/build.conf_, simply issue 'make'. This way, all components that are
compatible with the build directory's base platform will be built. In practice,
however, only some of those components may be of interest. Hence, the build
can be tailored to those components which are of actual interest by specifying
source-code subtrees. For example, using the following command
! make core server/nitpicker
the build system builds all targets found in the 'core' and 'server/nitpicker'
source directories. You may specify any number of subtrees to the build
system. As indicated by the build output, the build system revisits
each library that is used by each target found in the specified subtrees.
This is very handy for developing libraries because instead of re-building
your library and then your library-using program, you just build your program
and that's it. This concept even works recursively, which means that libraries
may depend on other libraries.
In practice, you won't ever need to build the _whole tree_ but only the
targets that you are interested in.
Cleaning the build directory
============================
To remove all but kernel-related generated files, use
! make clean
To remove all generated files, use
! make cleanall
Both 'clean' and 'cleanall' won't remove any files from the _bin/_
subdirectory. This makes _bin/_ a safe place for files that are
unrelated to the build process, yet required for the integration stage, e.g.,
binary data.
Controlling the verbosity of the build process
==============================================
To understand the inner workings of the build process in more detail, you can
tell the build system to display each directory change by specifying
! make VERBOSE_DIR=
If you are interested in the arguments that are passed to each invocation of
'make', you can make them visible via
! make VERBOSE_MK=
Furthermore, you can observe each single shell-command invocation by specifying
! make VERBOSE=
Of course, you can combine these verbosity toggles for maximizing the noise:
! make VERBOSE= VERBOSE_DIR= VERBOSE_MK=
Enabling parallel builds
========================
To utilize multiple CPU cores during the build process, you may invoke 'make'
with the '-j' argument. If manually specifying this argument becomes an
inconvenience, you may add the following line to your _etc/build.conf_ file:
! MAKE += -j<N>
This way, the build system will always use '<N>' CPUs for building.
Caching inter-library dependencies
==================================
The build system allows for repeating the last build without performing any
library-dependency checks by using:
! make again
The use of this feature can significantly improve the work flow during
development because, in contrast to source code, library dependencies rarely
change. So the time needed for re-creating inter-library dependencies at each
build can be saved.
Repository directory layout
###########################
Each Genode repository has the following layout:
Directory | Description
------------------------------------------------------------
'doc/' | Documentation, specific for the repository
------------------------------------------------------------
'etc/' | Default configuration of the build process
------------------------------------------------------------
'mk/' | The build system
------------------------------------------------------------
'include/' | Globally visible header files
------------------------------------------------------------
'src/' | Source codes and target build descriptions
------------------------------------------------------------
'lib/mk/' | Library build descriptions
Creating targets and libraries
##############################
Target descriptions
===================
A good starting point is to look at the init target. The source code of init is
located at _os/src/init/_. In this directory, you will find a target description
file named _target.mk_. This file contains the building instructions and it is
usually very simple. The build process is controlled by defining the following
variables.
Build variables to be defined by you
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:'TARGET': is the name of the binary to be created. This is the
only *mandatory variable* to be defined in a _target.mk_ file.
:'REQUIRES': expresses the requirements that must be satisfied in order to
build the target. You find more details about the underlying mechanism in
Section [Specializations].
:'LIBS': is the list of libraries that are used by the target.
:'SRC_CC': contains the list of '.cc' source files. The default search location
for source files is the directory where the _target.mk_ file resides.
:'SRC_C': contains the list of '.c' source files.
:'SRC_S': contains the list of assembly '.s' source files.
:'SRC_BIN': contains binary data files to be linked to the target.
:'INC_DIR': is the list of include search locations. Directories should
always be appended by using +=. Never use an assignment!
:'EXT_OBJECTS': is a list of Genode-external objects or libraries. This
variable is mostly used for interfacing Genode with legacy software
components.
Rarely used variables
---------------------
:'CC_OPT': contains additional compiler options to be used for '.c' as
well as for '.cc' files.
:'CC_CXX_OPT': contains additional compiler options to be used for the
C++ compiler only.
:'CC_C_OPT': contains additional compiler options to be used for the
C compiler only.
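Putting the variables above together, a hypothetical _target.mk_ for a small
server component might look as follows. All names in the sketch are invented
for illustration.

```shell
# Write and show a hypothetical target.mk (component and file names invented)
mk=$(mktemp)
cat > "$mk" <<'EOF'
TARGET   = my_server            # name of the binary (the only mandatory variable)
REQUIRES = linux                # build only if 'linux' is among the SPECS
LIBS     = base                 # libraries used by the target
SRC_CC   = main.cc service.cc   # C++ sources, searched in the target.mk directory
INC_DIR += $(PRG_DIR)/include   # always append include paths with +=
EOF
cat "$mk"
```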
Specifying search locations
~~~~~~~~~~~~~~~~~~~~~~~~~~~
When specifying search locations for header files via the 'INC_DIR' variable or
for source files via 'vpath', relative pathnames must not be used. Instead,
you can use the following variables to reference locations within the
source-code repository where your target lives:
:'REP_DIR': is the base directory of the current source-code repository.
Normally, specifying locations relative to the base of the repository is
never used by _target.mk_ files but needed by library descriptions.
:'PRG_DIR': is the directory where your _target.mk_ file resides. This
variable is always to be used when specifying a relative path.
Library descriptions
====================
In contrast to target descriptions that are scattered across the whole source
tree, library descriptions are located at the central place _lib/mk_. Each
library corresponds to a _<libname>.mk_ file. The base name of the description
file is the name of the library. Therefore, no 'TARGET' variable needs to be
set. The source-code locations are expressed as '$(REP_DIR)'-relative 'vpath'
commands.
Library-description files support the following additional declarations:
:'SHARED_LIB = yes': declares that the library should be built as a shared
object rather than a static library. The resulting object will be called
_<libname>.lib.so_.
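As an illustration, a hypothetical library description _lib/mk/misc_math.mk_
might read as sketched below. The library and path names are invented.

```shell
# Write and show a hypothetical library-description file lib/mk/misc_math.mk
lib=$(mktemp)
cat > "$lib" <<'EOF'
SRC_CC   = misc_math.cc
INC_DIR += $(REP_DIR)/include

# source locations are expressed relative to REP_DIR
vpath misc_math.cc $(REP_DIR)/src/lib/misc_math

# uncomment to build misc_math.lib.so instead of a static library
# SHARED_LIB = yes
EOF
cat "$lib"
```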
Specializations
===============
Building components for different platforms likely entails portions of code
that are tied to certain aspects of the target platform. For example, a target
platform may be characterized by
* A kernel API such as L4v2, Linux, L4.sec,
* A hardware architecture such as x86, ARM, Coldfire,
* A certain hardware facility such as a custom device, or
* Other properties such as software license requirements.
Each of these attributes expresses a specialization of the build process. The
build system provides a generic mechanism to handle such specializations.
The _programmer_ of a software component knows the properties on which his
software relies and thus specifies these requirements in his build description
file.
The _user/customer/builder_ decides to build software for a specific platform
and defines the platform specifics via the 'SPECS' variable per build
directory in _etc/specs.conf_. In addition to an (optional) _etc/specs.conf_
file within the build directory, the build system incorporates the first
_etc/specs.conf_ file found in the repositories as configured for the
build directory. For example, for a 'linux_x86' build directory, the
_base-linux/etc/specs.conf_ file is used by default. The build directory's
'specs.conf' file can still be used to extend the 'SPECS' declarations, for
example to enable special features.
Each '<specname>' in the 'SPECS' variable instructs the build system to
* Include the 'make'-rules of a corresponding _base/mk/spec-<specname>.mk_
file. This enables the customization of the build process for each platform.
* Search for _<libname>.mk_ files in the _lib/mk/<specname>/_ subdirectory.
This way, we can provide alternative implementations of one and the same
library interface for different platforms.
Before a target or library gets built, the build system checks if the 'REQUIRES'
entries of the build description file are satisfied by entries of the 'SPECS'
variable. The compilation is executed only if each entry in the 'REQUIRES'
variable is present in the 'SPECS' variable as supplied by the build directory
configuration.
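The matching of 'REQUIRES' against 'SPECS' boils down to a subset check, which
can be sketched as follows. The concrete spec values are made up for
illustration.

```shell
# Sketch: a target builds only if every REQUIRES entry occurs in SPECS
specs="genode linux x86"   # hypothetical SPECS from etc/specs.conf
requires="x86"             # hypothetical REQUIRES from a target.mk

buildable=yes
for r in $requires; do
    case " $specs " in
        *" $r "*) ;;                 # requirement satisfied
        *)        buildable=no ;;    # a missing spec prevents the build
    esac
done
echo "buildable: $buildable"   # prints: buildable: yes
```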
Building tools to be executed on the host platform
==================================================
Sometimes, software requires custom tools that are used to generate source
code or other ingredients for the build process, for example IDL compilers.
Such tools won't be executed on top of Genode but on the host platform
during the build process. Hence, they must be compiled with the tool chain
installed on the host, not the Genode tool chain.
The Genode build system accommodates the building of such host tools as a side
effect of building a library or a target. Even though it is possible to add
the tool compilation step to a regular build description file, it is
recommended to introduce a dedicated pseudo library for building such tools.
This way, the rules for building host tools are kept separate from rules that
refer to Genode programs. By convention, the pseudo library should be named
_<package>_host_tools_ and the host tools should be built at
_<build-dir>/tool/<package>/_. With _<package>_, we refer to the name of the
software package the tool belongs to, e.g., qt5 or mupdf. To build a tool
named _<tool>_, the pseudo library contains a custom make rule like the
following:
! $(BUILD_BASE_DIR)/tool/<package>/<tool>:
! $(MSG_BUILD)$(notdir $@)
! $(VERBOSE)mkdir -p $(dir $@)
! $(VERBOSE)...build commands...
To let the build system trigger the rule, add the custom target to the
'HOST_TOOLS' variable:
! HOST_TOOLS += $(BUILD_BASE_DIR)/tool/<package>/<tool>
Once the pseudo library for building the host tools is in place, it can be
referenced by each target or library that relies on the respective tools via
the 'LIBS' declaration. The tool can be invoked by referring to
'$(BUILD_BASE_DIR)/tool/<package>/<tool>'.
For an example of using custom host tools, please refer to the mupdf package
found within the libports repository. During the build of the mupdf library,
two custom tools fontdump and cmapdump are invoked. The tools are built via
the _lib/mk/mupdf_host_tools.mk_ library description file. The actual mupdf
library (_lib/mk/mupdf.mk_) has the pseudo library 'mupdf_host_tools' listed
in its 'LIBS' declaration and refers to the tools relative to
'$(BUILD_BASE_DIR)'.
Building additional custom targets accompanying a library or program
====================================================================
In some cases, additional targets should be built besides the standard files
of a library or program. Writing specific make rules for the commands that
generate those target files poses no problem, but the build system must know
about a proper dependency in order to trigger them. To achieve this, add the
additional targets to the 'CUSTOM_TARGET_DEPS' variable, as done, for example,
in the iwl_firmware library of the dde_linux repository:
! CUSTOM_TARGET_DEPS += $(addprefix $(BIN_DIR)/,$(IMAGES))
Automated integration and testing
#################################
Genode's cross-kernel portability is one of the prime features of the
framework. However, each kernel takes a different route when it comes to
configuring, integrating, and booting the system. Hence, for using a particular
kernel, profound knowledge about the boot concept and the kernel-specific tools
is required. To streamline the testing of Genode-based systems across the many
different supported kernels, the framework comes equipped with tools that
relieve you from these peculiarities.
Run scripts
===========
Using so-called run scripts, complete Genode systems can be described in a
concise and kernel-independent way. Once created, a run script can be used
to integrate and test-drive a system scenario directly from the build directory.
The best way to get acquainted with the concept is reviewing the run script
for the 'hello_tutorial' located at _hello_tutorial/run/hello.run_.
Let's revisit each step expressed in the _hello.run_ script:
* Building the components needed for the system using the 'build' command.
This command instructs the build system to compile the targets listed in
the brace block. It has the same effect as manually invoking 'make' with
the specified argument from within the build directory.
* Creating a new boot directory using the 'create_boot_directory' command.
The integration of the scenario is performed in a dedicated directory at
_<build-dir>/var/run/<run-script-name>/_. When the run script is finished,
this directory will contain all components of the final system. In the
following, we will refer to this directory as run directory.
* Installing the Genode 'config' file into the run directory using the
'install_config' command. The argument to this command will be written
to a file called 'config' in the run directory, which is picked up by
Genode's init process.
* Creating a bootable system image using the 'build_boot_image' command.
This command copies the specified list of files from the _<build-dir>/bin/_
directory to the run directory and executes the platform-specific steps
needed to transform the content of the run directory into a bootable
form. This form depends on the actual base platform and may be an ISO
image or a bootable ELF image.
* Executing the system image using the 'run_genode_until' command. Depending
on the base platform, the system image will be executed using an emulator.
For most platforms, Qemu is the tool of choice used by default. On Linux,
the scenario is executed by starting 'core' directly from the run
directory. The 'run_genode_until' command takes a regular expression
as argument. If the log output of the scenario matches the specified
pattern, the 'run_genode_until' command returns. If specifying 'forever'
as argument (as done in 'hello.run'), this command will never return.
If a regular expression is specified, an additional argument determines
a timeout in seconds. If the regular expression does not match until
the timeout is reached, the run script will abort.
Please note that the _hello.run_ script does not contain kernel-specific
information. Therefore, it can be executed from the build directory of any base
platform by using:
! make run/hello
When invoking 'make' with an argument of the form 'run/*', the build system
will look in all repositories for a run script with the specified name. The run
script must be located in one of the repositories' _run/_ subdirectories and
have the file extension '.run'.
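For reference, a minimal run script in the spirit of _hello.run_ could look
like the sketch below. The component name and the config content are invented
and abbreviated, so treat it as a structural outline rather than a working
scenario.

```shell
# Write and show a sketch of a run script (Tcl/expect syntax, names invented)
run=$(mktemp)
cat > "$run" <<'EOF'
build { core init hello }

create_boot_directory

install_config {
<config>
	<parent-provides> <service name="LOG"/> </parent-provides>
	<start name="hello"> <resource name="RAM" quantum="1M"/> </start>
</config>
}

build_boot_image { core init hello }

# wait at most 20 seconds for the expected log output
run_genode_until {Hello world} 20
EOF
cat "$run"
```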
For a more comprehensive run script, _os/run/demo.run_ serves as a good
example. This run script describes Genode's default demo scenario. As seen in
'demo.run', parts of init's configuration can be made dependent on the
platform's properties expressed as spec values. For example, the PCI driver
gets included in init's configuration only on platforms with a PCI bus. For
appending conditional snippets to the _config_ file, there exists the 'append_if'
command, which takes a condition as first and the snippet as second argument.
To test for a SPEC value, the command '[have_spec <spec-value>]' is used as
condition. Analogously to how 'append_if' appends strings, there exists
'lappend_if' to append list items. The latter command is used to conditionally
include binaries to the list of boot modules passed to the 'build_boot_image'
command.
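A conditional snippet in the style of _demo.run_ might be sketched as follows,
using the 'append_if', 'lappend_if', and 'have_spec' commands described above;
the driver and variable names are illustrative.

```shell
# Sketch of conditional run-script snippets (Tcl syntax, shown as text)
snippet=$(mktemp)
cat > "$snippet" <<'EOF'
# include the PCI driver only on platforms that feature a PCI bus
append_if [have_spec pci] config {
	<start name="pci_drv"> <resource name="RAM" quantum="1M"/> </start> }

# conditionally add the driver binary to the list of boot modules
lappend_if [have_spec pci] boot_modules pci_drv
EOF
cat "$snippet"
```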
The run mechanism explained
===========================
Under the hood, run scripts are executed by an expect interpreter. When the
user invokes a run script via _make run/<run-script>_, the build system invokes
the run tool at _<genode-dir>/tool/run_ with the run script as argument. The
run tool is an expect script that has no other purpose than defining several
commands used by run scripts, including a platform-specific script snippet
called run environment ('env'), and finally including the actual run script.
Whereas _tool/run_ provides the implementations of generic and largely
platform-independent commands, the _env_ snippet included from the platform's
respective _base-<platform>/run/env_ file contains all platform-specific
commands. For reference, the most simplistic run environment is the one at
_base-linux/run/env_, which implements the 'create_boot_directory',
'install_config', 'build_boot_image', and 'run_genode_until' commands for Linux
as base platform. For the other platforms, the run environments are far more
elaborate and document precisely how the integration and boot concept works
on each platform. Hence, the _base-<platform>/run/env_ files are not only
necessary parts of Genode's tooling support but also serve as a resource for
understanding the peculiarities of each kernel.
Using run scripts to implement test cases
=========================================
Because run scripts are actually expect scripts, the whole arsenal of
language features of the Tcl scripting language is available to them. This
turns run scripts into powerful tools for the automated execution of test
cases. A good example is the run script at _libports/run/lwip.run_, which tests
the lwIP stack by running a simple Genode-based HTTP server on Qemu. It fetches
and validates an HTML page from this server. The run script makes use of a
regular expression as argument to the 'run_genode_until' command to detect the
state when the web server becomes ready, subsequently executes the 'lynx' shell
command to fetch the web site, and employs Tcl's support for regular
expressions to validate the result. The run script works across base platforms
that use Qemu as execution environment.
To get the most out of the run mechanism, a basic understanding of the Tcl
scripting language is required. Furthermore, the functions provided by
_tool/run_ and _base-<platform>/run/env_ should be studied.
Automated testing across base platforms
=======================================
To execute one or multiple test cases on more than one base platform, there
exists a dedicated tool at _tool/autopilot_. Its primary purpose is the
nightly execution of test cases. The tool takes a list of platforms and run
scripts as arguments and executes each run script on each platform. The
build directory for each platform is created at
_/tmp/autopilot.<username>/<platform>_ and the output of each run script is
written to a file called _<platform>.<run-script>.log_. On stderr, autopilot
prints statistics about whether or not each run script executed
successfully on each platform. If at least one run script failed, autopilot
returns a non-zero exit code, which makes it straightforward to include
autopilot in an automated build-and-test environment.

Coding style guidelines for Genode
##################################
Things to avoid
===============
Please avoid using pre-processor macros. C++ provides language
features for almost any case, for which a C programmer uses
macros.
:Defining constants:
Use 'enum' instead of '#define'
! enum { MAX_COLORS = 3 };
! enum {
! COLOR_RED = 1,
! COLOR_BLUE = 2,
! COLOR_GREEN = 3
! };
:Meta programming:
Use templates instead of pre-processor macros. In contrast to macros,
templates are type-safe and fit well with the implementation syntax.
:Conditional-code inclusion:
Please avoid C-hacker style '#ifdef CONFIG_PLATFORM' - '#endif'
constructs. Instead, factor out the encapsulated code into a
separate file and introduce a proper function interface.
The build process should then be used to select the appropriate
platform-specific files at compile time. Keep platform-dependent
code as small as possible. Never pollute existing generic code
with platform-specific code.
Header of each file
===================
! /*
! * \brief Short description of the file
! * \author Original author
! * \date Creation date
! *
! * Some more detailed description. This is optional.
! */
Identifiers
===========
* The first character of a class name is uppercase, all other characters are
lowercase.
* Function and variable names are lower case.
* 'Multi_word_identifiers' use underscores to separate words.
* 'CONSTANTS' and template arguments are upper case.
* Private and protected members of a class begin with an '_'-character.
* Accessor methods are named after their corresponding attributes:
! /**
! * Request private member variable
! */
! int value() const { return _value; }
!
! /**
! * Set the private member variable
! */
! void value(int value) { _value = value; }
* Accessors that return a boolean value do not carry an 'is_' prefix. E.g.,
a method for requesting the validity of an object should be named
'valid()', not 'is_valid()'.
Indentation
===========
* Use one tab per indentation step. *Do not mix tabs and spaces!*
* Use no tabs except at the beginning of a line.
* Use spaces for the alignment of continuation lines such as function
arguments that span multiple lines. The alignment spaces of such lines
should start after the (tab-indented) indentation level. For example:
! {
! <tab>function_with_many_arguments(arg1,
! <tab><--- spaces for alignment --->arg2,
! ...
! }
* Remove trailing spaces at the end of lines
This way, each developer can set his preferred tab size in his editor
and the source code always looks good.
_Hint:_ In VIM, use the 'set list' and 'set listchars' commands to make tabs
and spaces visible.
* If class initializers span multiple lines, put the colon on a separate
line and indent the initializers using one tab. For example:
! Complicated_machinery(Material &material, Deadline deadline)
! :
! <tab>_material(material),
! <tab>_deadline(deadline),
! <tab>...
! {
! ...
! }
* Preferably place statements that alter the control flow - such as
'break', 'continue', or 'return' - at the beginning of a separate line,
followed by vertical space (a blank line or the closing brace of the
surrounding scope).
! if (early_return_possible)
! return;
Switch statements
~~~~~~~~~~~~~~~~~
Switch-statement blocks should be indented as follows:
! switch (color) {
!
! case BLUE:
! <tab>break;
!
! case GREEN:
! <tab>{
! <tab><tab>int declaration_required;
! <tab><tab>...
! <tab>}
!
! default:
! }
Please note that the case labels have the same indentation
level as the switch statement. This avoids a two-level
indentation-change at the end of the switch block that
would occur otherwise.
Vertical whitespaces
====================
In header files:
* Leave two empty lines between classes.
* Leave one empty line between member functions.
In implementation files:
* Leave two empty lines between functions.
Braces
======
* Braces after class, struct and function names are placed at a new line:
! class Foo
! {
! public:
!
! void method(void)
! {
! ...
! }
! };
except for one-line functions.
* All other occurrences of open braces (for 'if', 'while', 'do', 'for',
'namespace', 'enum' etc.) are at the end of a line:
! if (flag) {
! ..
! } else {
! ..
! }
* One-line functions should be written on a single line as long as the line
length does not exceed approximately 80 characters.
Typically, this applies for accessor functions.
If slightly more space than one line is needed, indent as follows:
! int heavy_computation(int a, int lot, int of, int args) {
! return a + lot + of + args; }
Comments
========
Function/method header
~~~~~~~~~~~~~~~~~~~~~~
Each public or protected (but not private) method in a header file should be
prepended by a header as follows:
! /**
! * Short description
! *
! * \param a meaning of parameter a
! * \param b meaning of parameter b
! * \param c,d meaning of parameters c and d
! *
! * \throw Exception_type meaning of the exception
! *
! * \return meaning of return value
! *
! * More detailed information about the function. This is optional.
! */
Descriptions of parameters and return values should be lower-case and brief.
More elaborate descriptions can be documented in the text area below.
In implementation files, only local and private functions should feature
function headers.
Single-line comments
~~~~~~~~~~~~~~~~~~~~
! /* use this syntax for single line comments */
A single-line comment should be prepended by an empty line.
Single-line comments should be short - no complete sentences. Use lower-case.
C++-style comments ('//') should only be used for temporarily commenting-out
code. Such commented-out garbage is easy to 'grep' and there are handy
'vim'-macros available for creating and removing such comments.
Variable descriptions
~~~~~~~~~~~~~~~~~~~~~
Use the same syntax as for single-line comments. Insert two or more
spaces before your comment starts.
! int size; /* in kilobytes */
Multi-line comments
~~~~~~~~~~~~~~~~~~~
Multi-line comments are more detailed descriptions in the form of
sentences.
A multi-line comment should be enclosed by empty lines.
! /*
! * This is some tricky
! * algorithm that works
! * as follows:
! * ...
! */
The first and last line of a multi-line comment contain no words.
Source-code blocks
~~~~~~~~~~~~~~~~~~
For structuring your source code, you can entitle the different
parts of a file like this:
! <- two empty lines
!
! /********************
! ** Event handlers **
! ********************/
! <- one empty line
Note the two stars at the left and right. There are two of them to
make the visible width of the border match its height (typically,
characters are ca. twice as high as wide).
A source-code block header represents a headline for the following
code. To couple this headline with the following code closer than
with previous code, leave two empty lines above and one empty line
below the source-code block header.
Order of public, protected, and private blocks
==============================================
For consistency reasons, use the following class layout:
! class Sandstein
! {
! private:
! ...
! protected:
! ...
! public:
! };
Typically, the private section contains member variables that are used
by public accessor functions below. In this common case, we only reference
symbols that are defined above as it is done when programming plain C.
Leave one empty line (or a line that contains only a brace) above and below
a 'private', 'protected', or 'public' label. This also applies when the
label is followed by a source-code block header.

==================================================
Conventions and coding-style guidelines for Genode
==================================================
Norman Feske
Documentation and naming of files
#################################
We use the GOSH syntax [https://github.com/nfeske/gosh] for documentation and
README files.
README files
============
Each directory should contain a file called 'README' that briefly explains
what the directory is about. In 'doc/Makefile', there is a rule for
generating a directory overview from the 'README' files automatically.
You can structure your 'README' file by using the GOSH style for subsections:
! Subsection
! ~~~~~~~~~~
Do not use chapters or sections in your 'README' files.
File names
==========
All normal file names are lowercase. File names should be chosen to be
expressive. Someone who explores your files for the first time might not
understand what 'mbi.cc' means but 'multiboot_info.cc' would ring a bell. If a
file name contains multiple words, use the '_' to separate them (instead of
'miscmath.h', use 'misc_math.h').
Coding style
############
A common coding style helps a lot to ease collaboration. The official coding
style of the Genode base components is described in 'doc/coding_style.txt'.
If you consider working closely together with the Genode main developers,
your adherence to this style is greatly appreciated.
Things to avoid
===============
Please avoid using pre-processor macros. C++ provides language
features for almost every case in which a C programmer would use
macros.
:Defining constants:
  Use 'enum' instead of '#define':
! enum { MAX_COLORS = 3 };
! enum {
! COLOR_RED = 1,
! COLOR_BLUE = 2,
! COLOR_GREEN = 3
! };
:Meta programming:
Use templates instead of pre-processor macros. In contrast to macros,
templates are type-safe and fit well with the implementation syntax.
:Conditional-code inclusion:
  Please avoid C-hacker-style '#ifdef CONFIG_PLATFORM' - '#endif'
  constructs. Instead, factor out the encapsulated code into a
  separate file and introduce a proper function interface.
The build process should then be used to select the appropriate
platform-specific files at compile time. Keep platform dependent
code as small as possible. Never pollute existing generic code
with platform-specific code.
Include files and RPC interfaces
################################

Never place include files directly into the '<repository>/include/' directory
but use a meaningful subdirectory that corresponds to the component that
provides the interfaces. Specialization-dependent include directories are
placed in 'include/<specname>/'.

Each RPC interface is represented by a separate include subdirectory. For
an example, see 'base/include/ram_session/'. The header file that defines
the RPC function interface has the same base name as the directory. The RPC
stubs are called 'client.h' and 'server.h'. If your interface uses a custom
capability type, it is defined in 'capability.h'. Furthermore, if your
interface is a session interface of a service, it is good practice to
provide a connection class in a 'connection.h' file for managing session-
construction arguments and the creation and destruction of sessions.

Header of each file
===================

Each source file is prepended by a header of the following form:

! /*
!  * \brief Short description of the file
!  * \author Original author
!  * \date Creation date
!  *
!  * Some more detailed description. This is optional.
!  */
Identifiers
===========
* The first character of a class name is uppercase, all other characters
  are lowercase.
* Function and variable names are lowercase.
* 'Multi_word_identifiers' use underscores to separate words.
* 'CONSTANTS' and template arguments are uppercase.
* Private and protected members of a class begin with an '_' character.
* Accessor methods are named after their corresponding attributes:
! /**
! * Request private member variable
! */
! int value() const { return _value; }
!
! /**
! * Set the private member variable
! */
! void value(int value) { _value = value; }
* Accessors that return a boolean value do not carry an 'is_' prefix. E.g.,
a method for requesting the validity of an object should be named
'valid()', not 'is_valid()'.
Indentation
===========
* Use one tab per indentation step. *Do not mix tabs and spaces!*
* Use no tabs except at the beginning of a line.
* Use spaces for the alignment of continuation lines such as function
arguments that span multiple lines. The alignment spaces of such lines
should start after the (tab-indented) indentation level. For example:
! {
! <tab>function_with_many_arguments(arg1,
! <tab><--- spaces for alignment --->arg2,
! ...
! }
* Remove trailing spaces at the end of lines
This way, each developer can set their preferred tab size in their editor
and the source code always looks good.
_Hint:_ In VIM, use the 'set list' and 'set listchars' commands to make tabs
and spaces visible.
* If class initializers span multiple lines, put the colon on a separate
line and indent the initializers using one tab. For example:
! Complicated_machinery(Material &material, Deadline deadline)
! :
! <tab>_material(material),
! <tab>_deadline(deadline),
! <tab>...
! {
! ...
! }
* Preferably place statements that alter the control flow - such as
'break', 'continue', or 'return' - at the beginning of a separate line,
followed by vertical space (a blank line or the closing brace of the
surrounding scope).
! if (early_return_possible)
! return;
Switch statements
~~~~~~~~~~~~~~~~~
Switch-statement blocks should be indented as follows:
! switch (color) {
!
! case BLUE:
! <tab>break;
!
! case GREEN:
! <tab>{
! <tab><tab>int declaration_required;
! <tab><tab>...
! <tab>}
!
! default:
! }
Please note that the case labels have the same indentation
level as the switch statement. This avoids a two-level
indentation-change at the end of the switch block that
would occur otherwise.
Vertical whitespaces
====================
In header files:
* Leave two empty lines between classes.
* Leave one empty line between member functions.
In implementation files:
* Leave two empty lines between functions.
Braces
======
* Braces after class, struct and function names are placed at a new line:
! class Foo
! {
! public:
!
! void method(void)
! {
! ...
! }
! };
except for one-line functions.
* All other occurrences of open braces (for 'if', 'while', 'do', 'for',
'namespace', 'enum' etc.) are at the end of a line:
! if (flag) {
! ..
! } else {
! ..
! }
* One-line functions should be written on a single line as long as the line
length does not exceed approximately 80 characters.
Typically, this applies for accessor functions.
If slightly more space than one line is needed, indent as follows:
! int heavy_computation(int a, int lot, int of, int args) {
! return a + lot + of + args; }
Comments
========
Function/method header
~~~~~~~~~~~~~~~~~~~~~~
Each public or protected (but not private) method in a header file should be
prepended by a header as follows:
! /**
! * Short description
! *
! * \param a meaning of parameter a
! * \param b meaning of parameter b
! * \param c,d meaning of parameters c and d
! *
! * \throw Exception_type meaning of the exception
! *
! * \return meaning of return value
! *
! * More detailed information about the function. This is optional.
! */
Descriptions of parameters and return values should be lower-case and brief.
More elaborate descriptions can be documented in the text area below.
In implementation files, only local and private functions should feature
function headers.
Single-line comments
~~~~~~~~~~~~~~~~~~~~
! /* use this syntax for single line comments */
A single-line comment should be prepended by an empty line.
Single-line comments should be short - no complete sentences. Use lower-case.
C++-style comments ('//') should only be used for temporarily commenting-out
code. Such commented-out garbage is easy to 'grep' and there are handy
'vim'-macros available for creating and removing such comments.
Variable descriptions
~~~~~~~~~~~~~~~~~~~~~
Use the same syntax as for single-line comments. Insert two or more
spaces before your comment starts.
! int size; /* in kilobytes */
Multi-line comments
~~~~~~~~~~~~~~~~~~~
Multi-line comments are more detailed descriptions in the form of
sentences.
A multi-line comment should be enclosed by empty lines.
! /*
! * This is some tricky
! * algorithm that works
! * as follows:
! * ...
! */
The first and last line of a multi-line comment contain no words.
Source-code blocks
~~~~~~~~~~~~~~~~~~
For structuring your source code, you can entitle the different
parts of a file like this:
! <- two empty lines
!
! /********************
! ** Event handlers **
! ********************/
! <- one empty line
Note the two stars at the left and right. There are two of them to
make the visible width of the border match its height (typically,
characters are ca. twice as high as wide).
A source-code block header represents a headline for the following
code. To couple this headline with the following code closer than
with previous code, leave two empty lines above and one empty line
below the source-code block header.
Order of public, protected, and private blocks
==============================================
For consistency reasons, use the following class layout:
! class Sandstein
! {
! private:
! ...
! protected:
! ...
! public:
! };
Typically, the private section contains member variables that are used
by public accessor functions below. In this common case, we only reference
symbols that are defined above, as is done when programming plain C.
Leave one empty line (or a line that contains only a brace) above and below
a 'private', 'protected', or 'public' label. This also applies when the
label is followed by a source-code block header.
Naming of Genode services
=========================
Service names as announced via the 'parent()->announce()' function adhere
to the following convention:

View File

@@ -1,514 +0,0 @@
============================
Package management on Genode
============================
Norman Feske
Motivation and inspiration
##########################
The established system-integration work flow with Genode is based on
the 'run' tool, which automates the building, configuration, integration,
and testing of Genode-based systems. Whereas the run tool succeeds in
overcoming the challenges that come with Genode's diversity of kernels and
supported hardware platforms, its scalability is somewhat limited to
appliance-like system scenarios: The result of the integration process is
a system image with a certain feature set. Whenever requirements change,
the system image is replaced with a newly created image that takes those
requirements into account. In practice, there are two limitations of this
system-integration approach:
First, since the run tool implicitly builds all components required for a
system scenario, the system integrator has to compile all components from
source. E.g., if a system includes a component based on Qt5, one needs to
compile the entire Qt5 application framework, which induces significant
overhead to the actual system-integration tasks of composing and configuring
components.
Second, general-purpose systems tend to become too complex and diverse to be
treated as system images. When looking at commodity OSes, each installation
differs with respect to the installed set of applications, user preferences,
used device drivers and system preferences. A system based on the run tool's
work flow would require the user to customize the run script of the system for
each tweak. To stay up to date, the user would need to re-create the
system image from time to time while manually maintaining any customizations.
In practice, this is a burden that very few end users are willing to endure.
The primary goal of Genode's package management is to overcome these
scalability limitations, in particular:
* Alleviating the need to build everything that goes into system scenarios
from scratch,
* Facilitating modular system compositions while abstracting from technical
details,
* On-target system update and system development,
* Assuring the user that system updates are safe to apply by providing the
ability to easily roll back the system or parts thereof to previous versions,
* Securing the integrity of the deployed software,
* Fostering a federalistic evolution of Genode systems,
* Low friction for existing developers.
The design of Genode's package-management concept is largely influenced by Git
as well as the [https://nixos.org/nix/ - Nix] package manager. In particular
the latter opened our eyes to discover the potential that lies beyond the
package management employed in state-of-the-art commodity systems. Even though
we considered adapting Nix for Genode and actually conducted intensive
experiments in this direction (thanks to Emery Hemingway who pushed forward
this line of work), we settled on a custom solution that leverages Genode's
holistic view on all levels of the operating system including the build system
and tooling, source structure, ABI design, framework API, system
configuration, inter-component interaction, and the components themselves.
Whereas Nix is designed to be used on top of Linux, Genode's whole-system view
led us to simplifications that eliminated the need for Nix's powerful features
like its custom description language.
Nomenclature
############
When speaking about "package management", one has to clarify what a "package"
in the context of an operating system represents. Traditionally, a package
is the unit of delivery of a bunch of "dumb" files, usually wrapped up in
a compressed archive. A package may depend on the presence of other
packages. Thereby, a dependency graph is formed. To express how packages fit
with each other, a package is usually accompanied with meta data
(description). Depending on the package manager, package descriptions follow
certain formalisms (e.g., package-description language) and express
more-or-less complex concepts such as versioning schemes or the distinction
between hard and soft dependencies.
Genode's package management does not follow this notion of a "package".
Instead of subsuming all deliverable content under one term, we distinguish
different kinds of content, each in a tailored and simple form. To avoid the
clash of the notions of the common meaning of a "package", we speak of
"archives" as the basic unit of delivery. The following subsections introduce
the different categories.
Archives are named with their version as suffix, appended via a slash. The
suffix is maintained by the author of the archive. The recommended naming
scheme is the use of the release date as version suffix, e.g.,
'report_rom/2017-05-14'.
Raw-data archives
=================
A raw-data archive contains arbitrary data that is - in contrast to executable
binaries - independent from the processor architecture. Examples are
configuration data, game assets, images, or fonts. The content of raw-data
archives is expected to be consumed by components at runtime. It is not
relevant for the build process for executable binaries. Each raw-data
archive contains merely a collection of data files. There is no meta data.
API archive
===========
An API archive has the structure of a Genode source-code repository. It may
contain all the typical content of such a source-code repository such as header
files (in the _include/_ subdirectory), source codes (in the _src/_
subdirectory), library-description files (in the _lib/mk/_ subdirectory), or
ABI symbols (_lib/symbols/_ subdirectory). At the top level, a LICENSE file is
expected that clarifies the license of the contained source code. There is no
meta data contained in an API archive.
An API archive is meant to provide _ingredients_ for building components. The
canonical example is the public programming interface of a library (header
files) and the library's binary interface in the form of an ABI-symbols file.
One API archive may contain the interfaces of multiple libraries. For example,
the interfaces of libc and libm may be contained in a single "libc" API
archive because they are closely related to each other. Conversely, an API
archive may contain a single header file only. The granularity of those
archives may vary. But they have in common that they are used at build time
only, not at runtime.
Source archive
==============
Like an API archive, a source archive has the structure of a Genode
source-tree repository and is expected to contain all the typical content of
such a source repository along with a LICENSE file. But unlike an API archive,
it contains descriptions of actual build targets in the form of Genode's usual
'target.mk' files.
In addition to the source code, a source archive contains a file
called 'used_apis', which contains a list of API-archive names with each
name on a separate line. For example, the 'used_apis' file of the 'report_rom'
source archive looks as follows:
! base/2017-05-14
! os/2017-05-13
! report_session/2017-05-13
The 'used_apis' file declares the APIs needed to incorporate into the build
process when building the source archive. Hence, they represent _build-time_
_dependencies_ on the specific API versions.
A source archive may be equipped with a top-level file called 'api' containing
the name of exactly one API archive. If present, it declares that the source
archive _implements_ the specified API. For example, the 'libc/2017-05-14'
source archive contains the actual source code of the libc and libm as well as
an 'api' file with the content 'libc/2017-04-13'. The latter refers to the API
implemented by this version of the libc source archive (note the differing
versions of the API and source archives).
Binary archive
==============
A binary archive contains the build result of the equally-named source archive
when built for a particular architecture. That is, all files that would appear
at the _<build-dir>/bin/_ subdirectory when building all targets present in
the source archive. There is no meta data present in a binary archive.
A binary archive is created out of the content of its corresponding source
archive and all API archives listed in the source archive's 'used_apis' file.
Note that since a binary archive depends on only one source archive, which
has no further dependencies, all binary archives can be built independently
from each other.
For example, a libc-using application needs the source code of the
application as well as the libc's API archive (the libc's header file and
ABI) but it does not need the actual libc library to be present.
Package archive
===============
A package archive contains an 'archives' file with a list of archive names
that belong together at runtime. Each listed archive appears on a separate line.
For example, the 'archives' file of the package archive for the window
manager 'wm/2018-02-26' looks as follows:
! genodelabs/raw/wm/2018-02-14
! genodelabs/src/wm/2018-02-26
! genodelabs/src/report_rom/2018-02-26
! genodelabs/src/decorator/2018-02-26
! genodelabs/src/floating_window_layouter/2018-02-26
In contrast to the list of 'used_apis' of a source archive, the content of
the 'archives' file denotes the origin of the respective archives
("genodelabs"), the archive type, followed by the versioned name of the
archive.
An 'archives' file may specify raw archives, source archives, or package
archives (as type 'pkg'). It thereby allows the expression of _runtime
dependencies_. If a package archive lists another package archive, it inherits
the content of the listed archive. This way, a new package archive may easily
customize an existing package archive.
A package archive does not specify binary archives directly as they differ
between architectures and are already referenced by the listed source archives.
In addition to an 'archives' file, a package archive is expected to contain
a 'README' file explaining the purpose of the collection.
Depot structure
###############
Archives are stored within a directory tree called _depot/_. The depot
is structured as follows:
! <user>/pubkey
! <user>/download
! <user>/src/<name>/<version>/
! <user>/api/<name>/<version>/
! <user>/raw/<name>/<version>/
! <user>/pkg/<name>/<version>/
! <user>/bin/<arch>/<src-name>/<src-version>/
The <user> stands for the origin of the contained archives. For example, the
official archives provided by Genode Labs reside in a _genodelabs/_
subdirectory. Within this directory, there is a 'pubkey' file with the
user's public key that is used to verify the integrity of archives downloaded
from the user. The file 'download' specifies the download location as a URL.
Subsuming archives in a subdirectory that correspond to their origin
(user) serves two purposes. First, it provides a user-local name space for
versioning archives. E.g., there might be two versions of a
'nitpicker/2017-04-15' source archive, one by "genodelabs" and one by
"nfeske". However, since each version resides under its origin's subdirectory,
version-naming conflicts between different origins cannot happen. Second, by
allowing multiple archive origins in the depot side-by-side, package archives
may incorporate archives of different origins, which fosters the goal of a
federalistic development, where contributions of different origins can be
easily combined.
The actual archives are stored in the subdirectories named after the archive
types ('raw', 'api', 'src', 'bin', 'pkg'). Archives contained in the _bin/_
subdirectories are further subdivided in the various architectures (like
'x86_64', or 'arm_v7').
Depot management
################
The tools for managing the depot content reside under the _tool/depot/_
directory. When invoked without arguments, each tool prints a brief
description of the tool and its arguments.
Unless stated otherwise, the tools are able to consume any number of archives
as arguments. By default, they perform their work sequentially. This can be
changed by the '-j<N>' argument, where <N> denotes the desired level of
parallelization. For example, by specifying '-j4' to the _tool/depot/build_
tool, four concurrent jobs are executed during the creation of binary archives.
Downloading archives
====================
The depot can be populated with archives in two ways, either by creating
the content from locally available source codes as explained by Section
[Automated extraction of archives from the source tree], or by downloading
ready-to-use archives from a web server.
In order to download archives originating from a specific user, the depot's
corresponding user subdirectory must contain two files:
:_pubkey_: contains the public key of the GPG key pair used by the creator
(aka "user") of the to-be-downloaded archives for signing the archives. The
file contains the ASCII-armored version of the public key.
:_download_: contains the base URL of the web server where to fetch archives
from. The web server is expected to mirror the structure of the depot.
  That is, the base URL is followed by a subdirectory for the user,
which contains the archive-type-specific subdirectories.
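For illustration, the following sketch prepares such a user subdirectory. The
user name "alice" and the URL are made up, and the 'pubkey' content would in
practice come from a GPG export:

```shell
# create the depot subdirectory for the archive origin "alice"
mkdir -p depot/alice

# the ASCII-armored public key, normally exported via:
#   gpg --armor --export alice@example.org > depot/alice/pubkey
touch depot/alice/pubkey

# base URL of the web server that mirrors the depot structure
echo "https://example.org/genode/depot" > depot/alice/download
```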
If both the public key and the download locations are defined, the download
tool can be used as follows:
! ./tool/depot/download genodelabs/src/zlib/2018-01-10
The tool automatically downloads the specified archives and their
dependencies. For example, as the zlib depends on the libc API, the libc API
archive is downloaded as well. All archive types are accepted as arguments
including binary and package archives. Furthermore, it is possible to download
all binary archives referenced by a package archive. For example, the
following command downloads the window-manager (wm) package archive including
all binary archives for the 64-bit x86 architecture. Downloaded binary
archives are always accompanied with their corresponding source and used API
archives.
! ./tool/depot/download genodelabs/pkg/x86_64/wm/2018-02-26
Archive content is not downloaded directly to the depot. Instead, the
individual archives and signature files are downloaded to a quarantine area in
the form of a _public/_ directory located in the root of Genode's source tree.
As its name suggests, the _public/_ directory contains data that is imported
from or to-be exported to the public. The download tool populates it with the
downloaded archives in their compressed form accompanied with their
signatures.
The compressed archives are not extracted before their signature is checked
against the public key defined at _depot/<user>/pubkey_. Only if the signature
is valid is the archive content imported to its target destination within the
depot. This procedure ensures that depot content - whenever downloaded - is
blessed by a cryptographic signature of its creator.
Building binary archives from source archives
=============================================
With the depot populated with source and API archives, one can use the
_tool/depot/build_ tool to produce binary archives. The arguments have the
form '<user>/bin/<arch>/<src-name>' where '<arch>' stands for the targeted
CPU architecture. For example, the following command builds the 'zlib'
library for the 64-bit x86 architecture. It executes four concurrent jobs
during the build process.
! ./tool/depot/build genodelabs/bin/x86_64/zlib/2018-01-10 -j4
Note that the command expects a specific version of the source archive as
argument. The depot may contain several versions. So the user has to decide
which one to build.
After the tool is finished, the freshly built binary archive can be found in
the depot within the _genodelabs/bin/<arch>/<src>/<version>/_ subdirectory.
Only the final result of the build process is preserved. In the example above,
that would be the _zlib.lib.so_ library.
For debugging purposes, it might be interesting to inspect the intermediate
state of the build. This is possible by adding 'KEEP_BUILD_DIR=1' as argument
to the build command. The binary's intermediate build directory can then be
found beside the binary archive's location, named with a '.build' suffix.
By default, the build tool won't attempt to rebuild a binary archive that is
already present in the depot. However, it is possible to force a rebuild via
the 'REBUILD=1' argument.
Publishing archives
===================
Archives located in the depot can be conveniently made available to the public
using the _tool/depot/publish_ tool. Given an archive path, the tool takes
care of determining all archives that are implicitly needed by the specified
one, wrapping the archive's content into compressed tar archives, and signing
those.
As a precondition, the tool requires you to possess the private key that
matches the _depot/<you>/pubkey_ file within your depot. The key pair should
be present in the key ring of your GNU privacy guard.
To publish archives, one needs to specify the specific version to publish.
For example:
! ./tool/depot/publish <you>/pkg/x86_64/wm/2018-02-26
The command checks that the specified archive and all dependencies are present
in the depot. It then proceeds with the archiving and signing operations. For
the latter, the pass phrase for your private key will be requested. The
publish tool prints the information about the processed archives, e.g.:
! publish /.../public/<you>/api/base/2018-02-26.tar.xz
! publish /.../public/<you>/api/framebuffer_session/2017-05-31.tar.xz
! publish /.../public/<you>/api/gems/2018-01-28.tar.xz
! publish /.../public/<you>/api/input_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/nitpicker_gfx/2018-01-05.tar.xz
! publish /.../public/<you>/api/nitpicker_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/os/2018-02-13.tar.xz
! publish /.../public/<you>/api/report_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/scout_gfx/2018-01-05.tar.xz
! publish /.../public/<you>/bin/x86_64/decorator/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/report_rom/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/wm/2018-02-26.tar.xz
! publish /.../public/<you>/pkg/wm/2018-02-26.tar.xz
! publish /.../public/<you>/raw/wm/2018-02-14.tar.xz
! publish /.../public/<you>/src/decorator/2018-02-26.tar.xz
! publish /.../public/<you>/src/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<you>/src/report_rom/2018-02-26.tar.xz
! publish /.../public/<you>/src/wm/2018-02-26.tar.xz
According to the output, the tool populates a directory called _public/_
at the root of the Genode source tree with the to-be-published archives.
The content of the _public/_ directory is now ready to be copied to a
web server, e.g., by using rsync.
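A hypothetical upload step could look like the sketch below. The server path
is invented, and a local directory stands in for the web server's document
root so the sketch stays self-contained:

```shell
# populate a stand-in for the to-be-published content
mkdir -p public/example
echo demo > public/example/archive.tar.xz

# in practice: rsync -rpt public/ user@server:/var/www/depot/
# here, mirror into a local directory instead
mkdir -p /tmp/depot-mirror
cp -r public/. /tmp/depot-mirror/
```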
Automated extraction of archives from the source tree
#####################################################
Genode users are expected to populate their local depot with content obtained
via the _tool/depot/download_ tool. However, Genode developers need a way to
create depot archives locally in order to make them available to users. Thanks
to the _tool/depot/extract_ tool, the assembly of archives does not need to be
a manual process. Instead, archives can be conveniently generated out of the
source codes present in the Genode source tree and the _contrib/_ directory.
However, the granularity of splitting source code into archives, the
definition of what a particular API entails, and the relationship between
archives must be augmented by the archive creator as this kind of information
is not present in the source tree as is. This is where so-called "archive
recipes" enter the picture. An archive recipe defines the content of an
archive. Such recipes can be located in a _recipes/_ subdirectory of any
source-code repository, similar to how port descriptions and run scripts
are organized. Each _recipes/_ directory contains subdirectories for the
archive types, which, in turn, contain a directory for each archive. The
latter is called a _recipe directory_.
Recipe directory
----------------
The recipe directory is named after the archive _omitting the archive version_
and contains at least one file named _hash_. This file defines the version
of the archive along with a hash value of the archive's content
separated by a space character. By tying the version name to a particular hash
value, the _extract_ tool is able to detect the appropriate points in time
whenever the version should be increased due to a change of the archive's
content.
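For example, a recipe's _hash_ file contains a single line with the version
name and content hash (both values below are made up for illustration):

! 2018-02-26 5bf3cd4d4253b2e0a6a8b17f8846a2bd3b2f24c1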
API, source, and raw-data archive recipes
-----------------------------------------
Recipe directories for API, source, or raw-data archives contain a
_content.mk_ file that defines the archive content in the form of make
rules. The content.mk file is executed from the archive's location within
the depot. Hence, the contained rules can refer to archive-relative files
as targets.
The first (default) rule of the content.mk file is executed with a customized
make environment:
:GENODE_DIR: A variable that holds the path to the root of the Genode
  source tree.

:REP_DIR: A variable with the path to the source-code repository where
  the recipe is located.
:port_dir: A make function that returns the directory of a port within the
_contrib/_ directory. The function expects the location of the
corresponding port file as argument, for example, the 'zlib' recipe
residing in the _libports/_ repository may specify '$(REP_DIR)/ports/zlib'
to access the 3rd-party zlib source code.
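Putting these pieces together, a content.mk for a hypothetical zlib source
archive might look as follows (all paths and rules are illustrative, not
taken from the actual recipe):

! content: src/lib/zlib lib/mk/zlib.mk LICENSE
!
! PORT_DIR := $(call port_dir,$(REP_DIR)/ports/zlib)
!
! src/lib/zlib:
! 	mkdir -p $@
! 	cp -r $(PORT_DIR)/src/lib/zlib/* $@
!
! lib/mk/zlib.mk:
! 	mkdir -p lib/mk
! 	cp $(REP_DIR)/lib/mk/zlib.mk $@
!
! LICENSE:
! 	cp $(PORT_DIR)/src/lib/zlib/README $@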
Source archive recipes contain simplified versions of the 'used_apis' and
(for libraries) 'api' files as found in the archives. In contrast to the
depot's counterparts of these files, which contain version-suffixed names,
the files contained in recipe directories omit the version suffix. This
is possible because the extract tool always extracts the _current_ version
of a given archive from the source tree. This current version is already
defined in the corresponding recipe directory.
Package-archive recipes
-----------------------
The recipe directory for a package archive contains the verbatim content of
the to-be-created package archive except for the _archives_ file. All other
files are copied verbatim to the archive. The content of the recipe's
_archives_ file may omit the version information from the listed ingredients.
Furthermore, the user part of each entry can be left blank by using '_' as a
wildcard. When generating the package archive from the recipe, the extract
tool will replace this wildcard with the user that creates the archive.
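For example, a recipe's _archives_ file for the window-manager package would
list its ingredients without versions and with '_' as the user wildcard:

! _/raw/wm
! _/src/wm
! _/src/report_rom
! _/src/decorator
! _/src/floating_window_layouter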
Convenience front-end to the extract, build tools
#################################################
For developers, the work flow of interacting with the depot is most often the
combination of the _extract_ and _build_ tools whereas the latter expects
concrete version names as arguments. The _create_ tool accelerates this common
usage pattern by allowing the user to omit the version names. Operations
implicitly refer to the _current_ version of the archives as defined in
the recipes.
Furthermore, the _create_ tool is able to manage version updates for the
developer. If invoked with the argument 'UPDATE_VERSIONS=1', it automatically
updates hash files of the involved recipes by taking the current date as
version name. This is a valuable assistance in situations where a commonly
used API changes. In this case, the versions of the API and all dependent
archives must be increased, which would be a labour-intensive task otherwise.
If the depot already contains an archive of the current version, the create
tool won't re-create the depot archive by default. Local modifications of
the source code in the repository do not automatically result in a new archive.
To ensure that the depot archive is current, one can specify 'FORCE=1' to
the create tool. With this argument, existing depot archives are replaced by
freshly extracted ones and version updates are detected. When specified for
creating binary archives, 'FORCE=1' normally implies 'REBUILD=1'. To prevent
the superfluous rebuild of binary archives whose source versions remain
unchanged, 'FORCE=1' can be combined with the argument 'REBUILD='.
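The invocations described above might look as follows. These command lines are
illustrative only: they assume a Genode source tree, the depot user
'genodelabs', and an 'x86_64' build spec.

! # extract and build, with current versions implied by the recipes
! ./tool/depot/create genodelabs/bin/x86_64/zlib
!
! # bump the versions of the involved recipes using the current date
! ./tool/depot/create UPDATE_VERSIONS=1 genodelabs/bin/x86_64/zlib
!
! # re-extract sources but skip rebuilding unchanged binary archives
! ./tool/depot/create FORCE=1 REBUILD= genodelabs/bin/x86_64/zlib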
Accessing depot content from run scripts
########################################
The depot tools are not meant to replace the run tool but rather to complement
it. When both tools are combined, the run tool implicitly refers to "current"
archive versions as defined for the archive's corresponding recipes. This way,
the regular run-tool work flow can be maintained while attaining a
productivity boost by fetching content from the depot instead of building it.
Run scripts can use the 'import_from_depot' function to incorporate archive
content from the depot into a scenario. The function must be called after the
'create_boot_directory' function and takes any number of pkg, src, or raw
archives as arguments. An archive is specified as depot-relative path of the
form '<user>/<type>/<name>'. Run scripts may call 'import_from_depot'
repeatedly. Each argument can refer to a specific version of an archive or
just the version-less archive name. In the latter case, the current version
(as defined by a corresponding archive recipe in the source tree) is used.
If a 'src' archive is specified, the run tool integrates the content of
the corresponding binary archive into the scenario. The binary archives
are selected according to the spec values as defined for the build directory.
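An illustrative run-script fragment may look as follows. The '[depot_user]'
and '[base_src]' helpers are procedures commonly provided by the run tool, and
the listed archives are placeholders for this sketch:

! create_boot_directory
!
! # versions are resolved from the current recipes in the source tree
! import_from_depot [depot_user]/src/[base_src] \
!                   [depot_user]/src/init \
!                   [depot_user]/src/nitpicker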


@@ -1,154 +0,0 @@
=============================
How to start exploring Genode
=============================
Norman Feske
Abstract
########
This guide is meant to provide you with a painless start into using the Genode OS
Framework. It explains the steps needed to get a simple demo system running
on Linux first, followed by the instructions on how to run the same scenario
on a microkernel.
Quick start to build Genode for Linux
#####################################
The best starting point for exploring Genode is to run it on Linux. Make sure
that your system satisfies the following requirements:
* GNU Make version 3.81 or newer
* 'libsdl2-dev', 'libdrm-dev', and 'libgbm-dev' (needed to run interactive
system scenarios directly on Linux)
* 'tclsh' and 'expect'
* 'byacc' (only needed for the L4/Fiasco kernel)
* 'qemu' and 'xorriso' (for testing non-Linux platforms via Qemu)
For using the entire collection of ported 3rd-party software, the following
packages should be installed additionally: 'autoconf2.64', 'autogen', 'bison',
'flex', 'g++', 'git', 'gperf', 'libxml2-utils', 'subversion', and 'xsltproc'.
Your exploration of Genode starts with obtaining the source code of the
[https://sourceforge.net/projects/genode/files/latest/download - latest version]
of the framework. For detailed instructions and alternatives to the
download from Sourceforge please refer to [https://genode.org/download].
Furthermore, you will need to install the official Genode tool chain, which
you can download at [https://genode.org/download/tool-chain].
The Genode build system never touches the source tree but generates object
files, libraries, and programs in a dedicated build directory. We do not have a
build directory yet. For a quick start, let us create one for the Linux base
platform:
! cd <genode-dir>
! ./tool/create_builddir x86_64
This creates a new build directory for building x86_64 binaries in './build/x86_64'.
The build system creates unified binaries that work on the given
architecture independent from the underlying base platform, in this case Linux.
Now change into the fresh build directory:
! cd build/x86_64
Please uncomment the following line in 'etc/build.conf' to make the
build process as smooth as possible.
! RUN_OPT += --depot-auto-update
To give Genode a try, build and execute a simple demo scenario via:
! make KERNEL=linux BOARD=linux run/demo
By invoking 'make' with the 'run/demo' argument, all components needed by the
demo scenario are built and the demo is executed. This includes all components
which are implicitly needed by the base platform. The base platform that the
components will be executed on is selected via the 'KERNEL' and 'BOARD'
variables. If you are interested in looking behind the scenes of the demo
scenario, please refer to 'doc/build_system.txt' and the run script at
'os/run/demo.run'.
Using platforms other than Linux
================================
Running Genode on Linux is the most convenient way to get acquainted with the
framework. However, the point where Genode starts to shine is when used as the
user land executed on a microkernel. The framework supports a variety of
different kernels such as L4/Fiasco, L4ka::Pistachio, OKL4, and NOVA. Those
kernels largely differ in terms of feature sets, build systems, tools, and boot
concepts. To relieve you from dealing with those peculiarities, Genode provides
you with a unified way of using them. For each kernel platform, there exists
a dedicated description file that enables the 'prepare_port' tool to fetch and
prepare the designated 3rd-party sources. Just issue the following command
within the toplevel directory of the Genode source tree:
! ./tool/ports/prepare_port <platform>
Note that each 'base-<platform>' directory comes with a 'README' file, which
you should revisit first when exploring the base platform. Additionally, most
'base-<platform>' directories provide more in-depth information within their
respective 'doc/' subdirectories.
For the VESA driver on x86, the x86emu library is required and can be
downloaded and prepared by again invoking the 3rd-party sources preparation
tool:
! ./tool/ports/prepare_port x86emu
On x86 base platforms the GRUB2 boot loader is required and can be
downloaded and prepared by invoking:
! ./tool/ports/prepare_port grub2
Now that the base platform is prepared, the 'create_builddir' tool can be used
to create a build directory for your architecture of choice by giving the
architecture as argument. To see the list of available architectures, execute
'create_builddir' with no arguments. Note that not all kernels support all
architectures.
For example, to give the demo scenario a spin on the OKL4 kernel, the following
steps are required:
# Download the kernel:
! cd <genode-dir>
! ./tool/ports/prepare_port okl4
# Create a build directory
! ./tool/create_builddir x86_32
# Uncomment the following line in 'x86_32/etc/build.conf'
! REPOSITORIES += $(GENODE_DIR)/repos/libports
# Build and execute the demo using Qemu
! make -C build/x86_32 KERNEL=okl4 BOARD=pc run/demo
The procedure works analogously for the other base platforms. You can, however,
reuse the already created build directory and skip its creation step if the
architecture matches.
How to proceed with exploring Genode
####################################
Now that you have taken the first steps into using Genode, you may seek to
get more in-depth knowledge and practical experience. The foundation for doing
so is a basic understanding of the build system. The documentation at
'build_system.txt' provides you with the information about the layout of the
source tree, how new components are integrated, and how complete system
scenarios can be expressed. Equipped with this knowledge, it is time to get
hands-on experience with creating custom Genode components. A good start is the
'hello_tutorial', which shows you how to implement a simple client-server
scenario. To compose complex scenarios out of many small components, the
documentation of the Genode's configuration concept at 'os/doc/init.txt' is an
essential reference.
Certainly, you will have further questions on your way with exploring Genode.
The best place to get these questions answered is the Genode mailing list.
Please feel welcome to ask your questions and to join the discussions:
:Genode Mailing Lists:
[https://genode.org/community/mailing-lists]


@@ -1,236 +0,0 @@
==========================
Google Summer of Code 2012
==========================
Genode Labs has applied as mentoring organization for the Google Summer of Code
program in 2012. This document summarizes all information important to Genode's
participation in the program.
:[http://www.google-melange.com/gsoc/homepage/google/gsoc2012]:
Visit the official homepage of the Google Summer of Code program.
*Update* Genode Labs was not accepted as mentoring organization for GSoC 2012.
Application of Genode Labs as mentoring organization
####################################################
:Organization ID: genodelabs
:Organization name: Genode Labs
:Organization description:
Genode Labs is a self-funded company founded by the original creators of the
Genode OS project. Its primary mission is to bring the Genode operating-system
technology, which started off as an academic research project, to the real
world. At present, Genode Labs is the driving force behind the Genode OS
project.
:Organization home page url:
http://www.genode-labs.com
:Main organization license:
GNU General Public License version 2
:Admins:
nfeske, chelmuth
:What is the URL for your Ideas page?:
[http://genode.org/community/gsoc_2012]
:What is the main IRC channel for your organization?:
#genode
:What is the main development mailing list for your organization?:
genode-main@lists.sourceforge.net
:Why is your organization applying to participate? What do you hope to gain?:
During the past three months, our project underwent the transition from a
formerly company-internal development to a completely open and transparent
endeavour. By inviting a broad community for participation in shaping the
project, we hope to advance Genode to become a broadly used and recognised
technology. GSoC would help us to build our community.
The project has its roots at the University of Technology Dresden where the
Genode founders were former members of the academic research staff. We have
a long and successful track record with regard to supervising students. GSoC
would provide us with the opportunity to establish and cultivate
relationships to new students and to spawn excitement about Genode OS
technology.
:Does your organization have an application template?:
GSoC student projects follow the same procedure as regular community
contributions, in particular the student is expected to sign the Genode
Contributor's Agreement. (see [http://genode.org/community/contributions])
:What criteria did you use to select your mentors?:
We selected the mentors on the basis of their long-time involvement with the
project and their time-tested communication skills. For each proposed working
topic, there is at least one stakeholder with a profound technical background within
Genode Labs. This person will be the primary contact for the student
working on the topic. However, we will encourage the student to make his/her
development transparent to all community members (i.e., via GitHub), so that
any community member interested in the topic is able to bring in his/her
ideas at any stage of development. Consequently, in practice, there will be
multiple persons mentoring each student.
:What is your plan for dealing with disappearing students?:
Actively contact them using all channels of communication available to us,
find out the reason for the disappearance, and try to resolve the problems
(if they are related to GSoC or our project, for that matter).
:What is your plan for dealing with disappearing mentors?:
All designated mentors are local to Genode Labs, so the chance for them to
disappear is very low. However, if a mentor disappears for a serious reason
(e.g., serious illness), our organization will provide a back-up mentor.
:What steps will you take to encourage students to interact with your community?:
First, we discussed GSoC on our mailing list where we received an overwhelmingly
positive response. We checked back with other Open-Source projects related to
our topics, exchanged ideas, and tried to find synergies between our
respective projects. For most project ideas, we have created issues in our
issue tracker to collect technical information and discuss the topic.
For several topics, we already observed interests of students to participate.
During the work on the topics, the mentors will try to encourage the
students to play an active role in discussions on our mailing list, also on
topics that are not strictly related to the student project. We regard an
active participation as key to enabling new community members to develop a
holistic view onto our project and gather a profound understanding of our
methodologies.
Student projects will be carried out in a transparent fashion at GitHub.
This makes it easy for each community member to get involved, discuss
the rationale behind design decisions, and audit solutions.
Topics
######
While discussing GSoC participation on our mailing list, we identified the
following topics as being well suited for GSoC projects. However, if none of
those topics resonates with students, there is a more comprehensive list
of topics available at our road map and our collection of future challenges:
:[http://genode.org/about/road-map]: Road-map
:[http://genode.org/about/challenges]: Challenges
Combining Genode with the HelenOS/SPARTAN kernel
================================================
[http://www.helenos.org - HelenOS] is a microkernel-based multi-server OS
developed at Charles University in Prague. It is based on the SPARTAN microkernel,
which runs on a wide variety of CPU architectures including Sparc, MIPS, and
PowerPC. This broad platform support makes SPARTAN an interesting kernel to
look at alone. But a further motivation is the fact that SPARTAN does not
follow the classical L4 road, providing a kernel API that comes with an own
terminology and different kernel primitives. This makes the mapping of
SPARTAN's kernel API to Genode a challenging endeavour and would provide us
with feedback regarding the universality of Genode's internal interfaces.
Finally, this project has the potential to ignite a further collaboration
between the HelenOS and Genode communities.
Block-level encryption
======================
Protecting privacy is one of the strongest motivational factors for developing
Genode. One pivotal element in that respect is the persistence of information
via block-level encryption. For example, to use Genode every day at Genode
Labs, it's crucial to protect the confidentiality of some information that's
not part of the Genode code base, e.g., emails and reports. There are several
expansion stages imaginable to reach the goal and the basic building blocks
(block-device interface, ATA/SATA driver for Qemu) are already in place.
:[https://github.com/genodelabs/genode/issues/55 - Discuss the issue...]:
Virtual NAT
===========
For sharing one physical network interface among multiple applications, Genode
comes with a component called nic_bridge, which implements proxy ARP. Through
this component, each application receives a distinct (virtual) network
interface that is visible to the real network. I.e., each application requests
an IP address via a DHCP request at the local network. An alternative approach
would be a component that implements NAT on Genode's NIC session interface.
This way, the whole Genode system would use only one IP address visible to the
local network. (by stacking multiple nat and nic_bridge components together, we
could even form complex virtual networks inside a single Genode system)
The implementation of the virtual NAT could follow the lines of the existing
nic_bridge component. For parsing network packets, there are already some handy
utilities available (at os/include/net/).
:[https://github.com/genodelabs/genode/issues/114 - Discuss the issue...]:
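To illustrate the core of such a NAT component, here is a hedged,
self-contained sketch of the translation table only. The types and names are
hypothetical; a real implementation would operate on Genode's NIC-session
packets using the net utilities mentioned above, and Genode components
would not use the STL.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <utility>

/* hypothetical NAT translation table: maps internal endpoints to
 * externally visible ports and back */
class Nat_table
{
	public:

		using Endpoint = std::pair<uint32_t, uint16_t>; /* (client IP, client port) */

	private:

		std::map<Endpoint, uint16_t> _out;  /* endpoint -> external port */
		std::map<uint16_t, Endpoint> _in;   /* external port -> endpoint */
		uint16_t                     _next_port;

	public:

		explicit Nat_table(uint16_t first_port) : _next_port(first_port) { }

		/* outbound packet: allocate (or reuse) an external source port */
		uint16_t outbound(uint32_t ip, uint16_t port)
		{
			Endpoint const ep { ip, port };
			auto const it = _out.find(ep);
			if (it != _out.end())
				return it->second;

			uint16_t const ext = _next_port++;
			_out[ep] = ext;
			_in[ext] = ep;
			return ext;
		}

		/* inbound packet: restore the original client endpoint */
		bool inbound(uint16_t ext, Endpoint &ep) const
		{
			auto const it = _in.find(ext);
			if (it == _in.end())
				return false;
			ep = it->second;
			return true;
		}
};
```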
Runtime for the Go or D programming language
============================================
Genode is implemented in C++. However, we are repeatedly receiving requests
for offering more safe alternatives for implementing OS-level functionality
such as device drivers, file systems, and other protocol stacks. The goals
for this project are to investigate the Go and D programming languages with
respect to their use within Genode, port the runtime of those languages
to Genode, and provide a useful level of integration with Genode.
Block cache
===========
Currently, there exists only the iso9660 server that is able to cache block
accesses. A generic solution for caching block-device accesses would be nice.
One suggestion is a component that requests a block session (routed to a block
device driver) as back end and also announces a block service (front end)
itself. Such a block-cache server waits for requests at the front end and
forwards them to the back end. But it uses its own memory to cache blocks.
The first version could support only read-only block devices (such as CDROM) by
caching the results of read accesses. In this version, we already need an
eviction strategy that kicks in once the block cache gets saturated. For a
start this could be FIFO or LRU (least recently used).
A more sophisticated version would support write accesses, too. Here we need a
way to sync blocks to the back end at regular intervals in order to guarantee
that all block-write accesses are becoming persistent after a certain time. We
would also need a way to explicitly flush the block cache (i.e., when the
front-end block session gets closed).
:[https://github.com/genodelabs/genode/issues/113 - Discuss the issue...]:
; _Since Genode Labs was not accepted as GSoC mentoring organization, the_
; _following section has become irrelevant. Hence, it is commented-out_
;
; Student applications
; ####################
;
; The formal steps for applying to the GSoC program will be posted once Genode
; Labs is accepted as mentoring organization. If you are a student interested
; in working on a Genode-related GSoC project, now is a good time to get
; involved with the Genode community. The best way is joining the discussions
; at our mailing list and the issue tracker. This way, you will learn about
; the currently relevant topics, our discussion culture, and the people behind
; the project.
;
; :[http://genode.org/community/mailing-lists]: Join our mailing list
; :[https://github.com/genodelabs/genode/issues]: Discuss issues around Genode


@@ -4,6 +4,78 @@
===========
Genode OS Framework release 24.11 | 2024-11-22
##############################################
| With mirrored and panoramic multi-monitor setups, pointer grabbing,
| atomic blitting and panning, and panel-self-refresh support, Genode's GUI
| stack gets ready for the next decade. Hardware-wise, version 24.11 brings
| a massive driver update for the i.MX SoC family. As a special highlight, the
| release is accompanied by the first edition of the free book "Genode
| Applications" as a gateway for application developers into Genode.
Closing up the Year of Sculpt OS usability as the theme of our road map
for 2024, we are excited to unveil the results of two intense lines of
usability-concerned work with the release of Genode 24.11.
For the usability of the Genode-based Sculpt OS as day-to-day operating
system, the support of multi-monitor setups has been an unmet desire
for a long time. Genode 24.11 does not only deliver a solution as a
singular feature but improves the entire GUI stack in a holistic way,
addressing panel self-refresh, mechanisms needed to overcome tearing
artifacts, rigid resource partitioning between GUI applications, up to
pointer-grabbing support.
The second line of work addresses the usability of application development for
Genode and Sculpt OS in particular. Over the course of the year, our Goa SDK
has seen a succession of improvements that make the development, porting,
debugging, and publishing of software a breeze. Still, given Genode's
novelties, the learning curve to get started has remained challenging. Our new
book "Genode Applications" is intended as a gateway into the world of Genode
for those of us who enjoy dwelling in architectural beauty but foremost want
to get things done. It features introductory material, explains fundamental
concepts and components, and invites the reader on to a ride through a series
of beginner-friendly as well as advanced tutorials. The book can be downloaded
for free at [https://genode.org].
Regarding hardware support, our work during the release cycle was hugely
motivated by the prospect of bringing Genode to the MNT Pocket Reform laptop,
which is based on the NXP i.MX8MP SoC. Along this way, we upgraded all
Linux-based i.MX drivers to kernel version 6.6 while consolidating a variety
of vendor kernels, equipped our platform driver with watchdog support, and
added board support for this platform to Sculpt OS.
You can find these and more topics covered in the detailed
[https://genode.org/documentation/release-notes/24.11 - release documentation of version 24.11...]
Sculpt OS release 24.10 | 2024-10-30
####################################
| Thanks to a largely revamped GUI stack, the Genode-based
| Sculpt OS 24.10 has gained profound support for multi-monitor setups.
Among the many usability-related topics on our road map, multi-monitor
support is certainly the most anticipated feature. It motivated a holistic
modernization of Genode's GUI stack over several months, encompassing drivers,
the GUI multiplexer, inter-component interfaces, up to widget toolkits. Sculpt
OS 24.10 combines these new foundations with a convenient
[https://genode.org/documentation/articles/sculpt-24-10#Multi-monitor_support - user interface]
for controlling monitor modes, making brightness adjustments, and setting up
mirrored and panoramic monitor configurations.
Besides this main theme, version 24.10 benefits from the advancements of the
Genode OS Framework over the past six months: compatibility with Qt6,
drivers ported from the Linux kernel version 6.6.47, and comprehensive
[https://genode.org/documentation/release-notes/24.08#Goa_SDK - debugging support]
for the Goa SDK.
Sculpt OS 24.10 is available as ready-to-use system image for PC hardware,
the PinePhone, and the MNT Reform laptop at the
[https://genode.org/download/sculpt - Sculpt download page] accompanied
by updated [https://genode.org/documentation/articles/sculpt-24-10 - documentation].
Genode OS Framework release 24.08 | 2024-08-29
##############################################


doc/release_notes/24-11.txt (new file, 579 lines)

@@ -0,0 +1,579 @@
===============================================
Release notes for the Genode OS Framework 24.11
===============================================
Genode Labs
During the discussion of this year's road-map roughly one year ago, the
usability concerns of Sculpt OS stood out.
Besides suspend/resume, which we addressed
[https://genode.org/documentation/release-notes/24.05#Suspend_resume_infrastructure - earlier this year],
multi-monitor support ranked highest on the list of desires. We are more than
happy to wrap up the year with the realization of this feature.
Section [Multi-monitor support] presents the many facets and outcomes of this
intensive line of work.
Over the course of 2024, our Goa SDK has received tremendous advances, which
make the development, porting, debugging, and publishing of software for
Genode - and Sculpt OS in particular - a breeze.
So far however, the learning curve for getting started remained rather steep
because the underlying concepts largely deviate from the beaten tracks known
from traditional operating systems. Even though there is plenty of
documentation, it is rather scattered and overwhelming.
All the more happy we are to announce that the current release is accompanied
by a new book "Genode Applications" that can be downloaded for free and
provides a smooth gateway for application developers into the world of Genode
(Section [New "Genode Applications" book]).
Regarding hardware-related technical topics, the release focuses on the
ARM-based i.MX SoC family, taking our ambition to run Sculpt OS on the MNT
Pocket Reform laptop as guiding theme. Section [Device drivers and platforms]
covers our driver and platform-related work in detail.
New "Genode Applications" book
##############################
Complementary to our _Genode Foundations_ and _Genode Platforms_ books, we have
been working on a new book that concentrates on application development.
_Genode Applications_ centers on the Goa SDK that we introduced with
[https://genode.org/documentation/release-notes/19.11#New_tooling_for_bridging_existing_build_systems_with_Genode - Genode 19.11]
and which has seen significant improvements over the past year
([https://genode.org/documentation/release-notes/23.08#Goa_tool_gets_usability_improvements_and_depot-index_publishing_support - 23.08],
[https://genode.org/documentation/release-notes/24.02#Sculpt_OS_as_remote_test_target_for_the_Goa_SDK - 24.02],
[https://genode.org/documentation/release-notes/24.08#Goa_SDK - 24.08]).
: <div class="visualClear"><!-- --></div>
: <p>
: <div style="clear: both; float: left; margin-right:20px;">
: <a class="internal-link" href="https://genode.org">
: <img class="image-inline" src="https://genode.org/documentation/genode-applications-title.png">
: </a>
: </div>
: </p>
The book intends to provide a beginner-friendly starting point for application
development and porting for Genode and Sculpt OS in particular. It starts off
with a getting-started tutorial for the Goa tool, and further recapitulates
Genode's architecture and a subset of its libraries, components, and
conventions such as the C runtime, VFS, NIC router, and package management.
With these essentials in place, the book is topped off with instructions for
application debugging and a collection of advanced tutorials.
Aligned with the release of Sculpt 24.10, we updated the Goa tool with the
corresponding depot archive versions. Furthermore, the Sculpt-integrated and
updated _Goa testbed_ preset is now prepared for remote debugging.
: <div class="visualClear"><!-- --></div>
:First revision of the Genode Applications document:
[https://genode.org/documentation/genode-applications-24-11.pdf]
Multi-monitor support
#####################
Among the users of the Genode-based Sculpt OS, the flexible use of multiple
monitors was certainly the most longed-after desire raised during our public
road-map discussion roughly one year ago. We quickly identified that a
profound solution cannot focus on piecemeal extensions of individual
components but must embrace an architectural step forward. The step turned
out to be quite a leap.
In fact, besides reconsidering the roles of display and input drivers in
[https://genode.org/documentation/release-notes/20.08#The_GUI_stack__restacked - version 20.08],
the GUI stack has remained largely unchanged since
[https://genode.org/documentation/release-notes/14.08#New_GUI_architecture - version 14.08].
So we took our multi-monitor ambitions as a welcome opportunity to incorporate
our experiences of the past ten years into a new design for the next ten
years.
Tickless GUI server and display drivers
=======================================
Up to now, the nitpicker GUI server as well as the display drivers used to
operate in a strictly periodic fashion. At a rate of 10 milliseconds, the GUI
server would route input events to the designated GUI clients and flush
graphical changes of the GUI clients to the display driver.
This simple mode of execution has benefits such as the natural ability of
batching input events and the robustness of the GUI server against overload
situations. However, in Sculpt OS, we observed that the fixed rate induces
little but constant load into an otherwise idle system, rendering
energy-saving regimes of modern CPUs less effective than they could be.
This problem would become amplified in the presence of multiple output channels
operating at independent frame rates. Moreover, with panel self-refresh
support of recent Intel graphics devices, the notion of a fixed continuous
frame rate has become antiquated.
Hence, it was time to move to a tickless GUI-server design where the GUI
server acts as a mere broker between events triggered by applications (e.g.,
pushing pixels) and drivers (e.g., occurrence of input, scanout to a display).
Depending on the behavior of its clients (GUI applications and drivers alike),
the GUI server notifies the affected parties about events of interest but
does not assert an active role.
For example, if a display driver does not observe any changed pixels for 50
ms, it goes to sleep. Once an application updates pixels affecting a display,
the GUI server wakes up the respective display driver, which then polls the
pixels at a driver-defined frame rate until observing that the pixels remain
static for 50 ms. Vice versa, the point in time when a display driver requests
updated pixels is reflected as a sync event to GUI applications visible on
that display, enabling such applications to synchronize their output to the
frame rate of the driver. The GUI server thereby asserts the role of steering
the sleep cycles of drivers and applications. Unless anything happens on
screen, neither the GUI server nor the display driver are active. When two
applications are visible on distinct monitors, the change of one application
does not induce any activity regarding the unrelated display. This allows for
scaling up the number of monitors without increasing the idle CPU load.
This change implies that the former practice of using sync signals as a
time source for application-side animation timing is no longer viable.
After all, sync signals occur only while a driver is active. GUI applications
may best use sync signals for redraw scheduling but need to use a real time
source as basis for calculating the progress of animations.
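A minimal sketch of this advice: derive animation progress from elapsed real
time and use sync signals merely as redraw triggers. The helper below is a
hypothetical illustration, not part of Genode's API.

```cpp
#include <cassert>
#include <cstdint>

/* hypothetical animation state: progress depends only on a real time
 * source, never on the number of sync events observed */
struct Animation
{
	uint64_t start_ms, duration_ms;

	/* called on each sync event with the current real time */
	double progress(uint64_t now_ms) const
	{
		if (now_ms <= start_ms)
			return 0.0;

		uint64_t const elapsed = now_ms - start_ms;
		return elapsed >= duration_ms
		     ? 1.0 : double(elapsed) / double(duration_ms);
	}
};
```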
Paving the ground for tearing-free motion
=========================================
Tearing artifacts during animations are rightfully frowned upon. It goes
without saying that we strive to attain tearing-free motion in Genode. Two
preconditions must be met. First, the GUI server must be able to get hold
of a _consistent_ picture at any time. Second, the flushing of the picture
to the display hardware must be timed with _vsync_ of the physical display.
Up to now, the GUI stack was unable to meet the first precondition by design.
If the picture is composed of multiple clients, the visual representation of
each client must be present in a consistent state.
The textures used as input for compositing the final picture are buffers
shared between server and client. Even though clients traditionally employ
double buffering to hide intermediate drawing states, the final back-to-front
copy into the shared buffer violates the consistency of the buffer - as seen
from the server side - for the duration of the client-side copy operation. To
overcome this deficiency, we have now equipped the GUI server with atomic
blitting and panning operations, which support atomic updates in two
fashions.
_Atomic back-to-front blitting_ allows GUI clients that partially update their
user interface - like regular application dialogs - to implement double
buffering by placing both the back buffer and front buffer within the GUI
session's shared buffer and configuring a view that shows only the front
buffer. The new blit operation ('Framebuffer::Session::blit') allows the client
to atomically flush pixels from the back buffer to the front buffer.
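As a plain C++ model of this arrangement (hypothetical names, not the real
'Framebuffer::Session' interface): the front and back buffer share one
allocation, and the server-side blit replaces the client-side copy that used
to leave the shared buffer inconsistent mid-operation.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

/* GUI-session buffer holding front and back buffer back-to-back */
struct Gui_buffer
{
    unsigned w, h;
    std::vector<uint32_t> pixels;  /* 2*w*h pixels: front first, then back */

    Gui_buffer(unsigned w, unsigned h) : w(w), h(h), pixels(2*w*h, 0) { }

    uint32_t *front() { return pixels.data(); }
    uint32_t *back()  { return pixels.data() + w*h; }

    /* model of the atomic back-to-front blit performed by the server */
    void blit_back_to_front()
    {
        std::memcpy(front(), back(), w*h*sizeof(uint32_t));
    }
};
```

The view is configured to show only the front half, so the client can freely
draw into the back half between blits.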
_Atomic buffer flipping_ allows GUI clients that always update all pixels -
like a media player or a game - to leverage panning
('Framebuffer::Session::panning') to atomically redirect the displayed pixels to
a different portion of the GUI session's shared buffer without any copy
operation needed. The buffer contains two frames, the displayed one and the
next one. Once the next frame is complete, the client changes the panning
position to the portion containing the next frame.
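The flipping scheme amounts to toggling a panning offset, sketched below as a
plain C++ model with hypothetical names, not the real interface:

```cpp
#include <cassert>

/* shared buffer holds two frames stacked vertically; the panning y-position
 * selects which frame is displayed - no pixel data is ever copied */
struct Flip_state
{
    unsigned frame_h;        /* height of one frame in lines */
    unsigned displayed = 0;  /* index of the currently displayed frame */

    /* index of the frame the client may draw into next */
    unsigned draw_target() const { return displayed ^ 1; }

    /* finish the next frame: returns the new panning y-position */
    unsigned flip()
    {
        displayed ^= 1;
        return displayed*frame_h;
    }
};
```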
Almost all GUI clients of the Genode OS framework have been updated to use
these new facilities.
The vsync timing as the second precondition for tearing-free motion lies in
the hands of the display driver, which can in principle capture pixel updates
from the GUI server driven by vsync interrupts. In the presence of multiple
monitors with different vsync rates, a GUI client may deliberately select
a synchronization source ('Framebuffer::Session::sync_source'). That said,
even though the interfaces are in place, vsync timing is not yet provided by
the current display drivers.
Mirrored and panoramic monitor setups
=====================================
A display driver interacts with the nitpicker GUI server as a capture client.
One can think of a display driver as a screen-capturing application.
Up until now, the nitpicker GUI server handed out the same picture to each
capture client. So each client obtained a mirror of the same picture. By
subjecting each client to a policy defining a window within a larger panorama,
a driver creating one capture session per monitor becomes able to display the
larger panorama spanning the connected displays. The assignment of capture
clients to different parts of the panorama follows Genode's established
label-based policy-selection approach as explained in the
[https://github.com/genodelabs/genode/blob/master/repos/os/src/server/nitpicker/README - documentation]
of the nitpicker GUI server.
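Such a policy could look like the following sketch. The attribute names here
are illustrative assumptions only - the README linked above defines the
authoritative syntax:

```xml
<!-- hypothetical sketch: one panorama window per capture-session label -->
<capture>
  <policy label="intel_fb -> eDP-1"    xpos="0"    ypos="0" width="1920" height="1080"/>
  <policy label="intel_fb -> HDMI-A-1" xpos="1920" ypos="0" width="1920" height="1080"/>
</capture>
```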
Special care has been taken to ensure that the pointer is always visible. It
cannot be moved to any area that is not captured. Should the only capture
client displaying the pointer disappear, the pointer is warped to the center
of (any) remaining capture client.
A mirrored monitor setup can in principle be attained by placing multiple
capture clients at the same part of nitpicker's panorama. However, there is
a better way: Our Intel display-driver component supports both discrete and
merged output channels. The driver's configuration subsumes all connectors
listed within a '<merge>' node as a single encompassing capture session at the
GUI server. The mirroring of the picture is done by the hardware. Each
connector declared outside the '<merge>' node is handled as a discrete capture
session labeled after the corresponding connector. The driver's
[https://github.com/genodelabs/genode/blob/master/repos/pc/src/driver/framebuffer/intel/pc/README - documentation]
describes the configuration in detail.
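A corresponding driver configuration might be sketched as follows - connector
names and attributes are illustrative, the linked documentation defines the
exact syntax:

```xml
<config>
  <!-- both connectors mirror one capture session, merged by the hardware -->
  <merge name="mirror">
    <connector name="eDP-1"/>
    <connector name="HDMI-A-1"/>
  </merge>
  <!-- discrete connector, separate capture session labeled after it -->
  <connector name="DP-1"/>
</config>
```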
Sculpt OS integration
=====================
All the changes described above are featured in the recently released
Sculpt OS version 24.10, which gives the user the ability to attain mirrored
or panoramic monitor setups - or a combination thereof - by means of manual
configuration or by using interactive controls.
[image sculpt_24_10_intel_fb]
You can find the multi-monitor use of Sculpt OS covered by the
[https://genode.org/documentation/articles/sculpt-24-10#Multi-monitor_support - documentation].
Revised inter-component interfaces
==================================
Strict resource partitioning between GUI clients
------------------------------------------------
Even though Genode gives server components the opportunity to operate
strictly on client-provided resources only, the two prominent GUI servers -
nitpicker and the window manager (wm) - did not leverage these mechanisms to
their full extent. In particular, the wm eschewed strict resource accounting
by paying out of its own pocket. This deficiency has been rectified by the
current release,
thereby making the GUI stack much more robust against potential resource
denial-of-service issues. Both the nitpicker GUI server and the window manager
now account all allocations to the resource budgets of the respective clients.
This change has the effect that GUI clients must now be equipped with the
actual cap and RAM quotas needed.
Note that not all central parts of the GUI stack operate on client-provided
resources. In particular, a window decorator is a mere client of the window
manager despite playing a role transcending multiple applications. As the
costs needed for the decorations depend on the number of applications present
on screen, the resources of the decorator must be dimensioned with a sensible
upper bound. Fortunately, however, as the decorator is a plain client of the
window manager, it can be restarted, replaced, and upgraded without affecting
any application.
Structured mode information for applications
--------------------------------------------
Up to now, GUI clients were able to request mode information via a plain
RPC call that returned the dimensions and color depth of the display.
Multi-monitor setups call for more flexibility, which prompted us to
replace the mode information by XML-structured information delivered as
an 'info' dataspace. This is in line with how meta information is handled
in other modern session interfaces like the platform or USB sessions.
The new representation gives us room to annotate information that could
previously not be exposed to GUI clients, in particular:
* The total panorama dimensions.
* Captured areas within the panorama, which can be used by multi-monitor
aware GUI clients as intelligence for placing GUI views.
* DPI information carried by 'width_mm' and 'height_mm' attributes.
This information is defined by the display driver and passed to the GUI
server as a 'Capture::Connection::buffer' argument.
* The closed state of a window interactively closed by the user.
Note that the window manager (wm) virtualizes the information of the nitpicker
GUI server. Instead of exposing nitpicker's panorama to its clients, the wm
reports the logical screen hosting the client's window as panorama and the
window size as a single captured rectangle within the panorama.
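For illustration, the content of such an 'info' dataspace might look like the
following sketch. Element and attribute names are assumptions for the sake of
the example, not the literal format:

```xml
<panorama width="3840" height="1080">
  <!-- one captured rectangle per monitor; *_mm attributes enable DPI math -->
  <capture name="eDP-1"    xpos="0"    ypos="0" width="1920" height="1080"
           width_mm="294" height_mm="165"/>
  <capture name="HDMI-A-1" xpos="1920" ypos="0" width="1920" height="1080"
           width_mm="527" height_mm="296"/>
</panorama>
```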
Mouse grabbing
--------------
Since the inception of the nitpicker GUI server, its clients observed absolute
pointer positions only. The GUI server unconditionally translated relative
mouse-motion events to absolute motion events.
To accommodate applications like games or a VM emulating a relative pointer
device, we have now extended the GUI server(s) with the ability to selectively
expose relative motion events while locking the absolute pointer position.
This is usually called pointer grabbing. It goes without saying that the user
must always retain a way to forcefully reassert control over the pointer
without the cooperation of the application.
The solution is the enhancement of the 'Input::Session' interface by a new RPC
function that allows a client to request exclusive input. The nitpicker GUI
server grants this request if the application owns the focus. In scenarios
using the window manager (wm), the focus is always defined by the wm, which
happens to intercept all input sessions of GUI applications. Hence, the wm is
in the natural position of arbitrating the grabbing/ungrabbing of the pointer.
For each GUI client, the wm records whether the client is interested in
exclusive input but does not forward this request to nitpicker. Only if a GUI
client is focused and has requested exclusive input does the wm enable
exclusive input for this client at nitpicker, namely when observing a mouse
click on the application window. Whenever the user presses the global wm key
(super), the wm forcefully releases the exclusive input at nitpicker until
the user clicks into the client window the next time.
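The arbitration can be condensed into a small state model (plain C++ sketch,
not actual wm code):

```cpp
#include <cassert>

/* per-client state recorded by the window manager */
struct Wm_client
{
    bool wants_exclusive = false;  /* request kept local, not forwarded */
    bool focused         = false;
};

/* window-manager model arbitrating exclusive (grabbed) input */
struct Wm
{
    bool exclusive_at_nitpicker = false;  /* state forwarded to nitpicker */

    /* click into a window: grant the recorded request of a focused client */
    void click(Wm_client const &c)
    {
        if (c.focused && c.wants_exclusive)
            exclusive_at_nitpicker = true;
    }

    /* global wm key (super): the user forcefully reasserts control */
    void super_key() { exclusive_at_nitpicker = false; }
};
```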
Furthermore, an application may enable exclusive input transiently during a
key sequence, e.g., when dragging the mouse while holding a mouse button.
Transient exclusive input is revoked as soon as the last button/key is
released. In principle, this allows GUI controls like knobs to lock the
pointer position while the user adjusts the value by moving the mouse with
the button held, so that the pointer retains its original position at the
knob.
While operating in exclusive input mode, there is no useful notion of an
absolute pointer position at the nitpicker GUI server. Hence, nitpicker hides
GUI domains that use the pointer position as coordinate origin. Thereby, the
mouse cursor automatically disappears while the pointer is grabbed.
Current state and ongoing work
==============================
All the advances described above are in full effect in the recently released
version 24.10 of [https://genode.org/download/sculpt - Sculpt OS]. All
components hosted in Genode's main and world repositories have been updated
accordingly, ranging from Genode-specific components - like the widget
toolkit used by the administrative user interface of Sculpt OS and the window
decorators - over Qt5 and Qt6, to SDL and SDL2.
[image multiple_monitors]
Current work is underway to implement multi-monitor window management and to
make multiple monitors seamlessly available to guest OSes hosted in VirtualBox.
Furthermore, the Intel display driver is currently getting equipped with the
ability to use vsync interrupts for driving the interaction with the GUI
server, taking the final step to attain tearing-free motion.
Device drivers and platforms
############################
Linux device-driver environment (DDE)
=====================================
With our
[https://genode.org/documentation/release-notes/24.08#Linux_device-driver_environment__DDE_ - recent]
update of the DDE Linux kernel to version 6.6 for PC platforms and as a
prerequisite to support the MNT Pocket Reform, we have adapted all drivers for
the i.MX5/6/7/8 platforms to Linux kernel version 6.6.47. The list of drivers
includes Wifi, NIC, display, GPU, USB and SD-card.
MNT Pocket Reform
~~~~~~~~~~~~~~~~~
The [https://shop.mntre.com/products/mnt-pocket-reform - MNT Pocket Reform] is
a mini laptop by MNT that aims to be modular, upgradable, and repairable
while being assembled completely from open-source hardware. Being modular
implies that a range of CPU modules is available for the MNT Pocket. Some of
these modules, like the Rockchip-based ones, are not officially supported by
Genode yet. But there is an i.MX8MP-based module available, which fits nicely
into Genode's i.MX infrastructure.
Genode already supports the MNT Reform 2, an i.MX8MQ-based
[https://genodians.org/skalk/2020-06-29-mnt-reform - laptop]. So an update from
MQ to MP doesn't sound like a big issue because only one letter changed,
right? It turns out that there are more changes to the platform than mere
adjustments of I/O resources and interrupt numbers. Additionally, the MNT
Reform team offers quite a large patch set for each supported Linux kernel
version. Luckily, there is
[https://source.mnt.re/reform/reform-debian-packages/-/tree/main/linux/patches6.6?ref_type=heads - one]
for our freshly updated Linux 6.6 kernel. With this patch set, we were able to
produce a Linux source tree (imx_linux) that we now take as the basis for
driver development on Genode. Note that these Linux kernel sources are shared
by all supported i.MX platforms. Of course, additional patch series were
necessary to include device-tree sources from other vendor kernels, for
instance from Compulab.
With the development environment in place and after putting in a lot of
effort, we ultimately achieved initial Genode support for the MNT Pocket
Reform with Genode 24.11.
On the device-driver side of things, we did not have to port lots of new
drivers but were able to extend drivers already available for the i.MX8MQ
platform - in particular, the drivers for the wired network card, USB host
controller, display, and SD card.
For the wireless network device found on the i.MX8MP SoM in the MNT Pocket
Reform, we needed to port a new driver. The device features a Qualcomm QCA9377
chipset attached via SDIO. Unfortunately, the _ath10k_ driver available in the
vanilla kernel does not work properly with this device and is therefore also
not used in the regular Linux kernel for the MNT Pocket Reform. A slightly
adapted external QCACLD2 reference driver is used instead. So we followed suit
by incorporating this particular driver into our _imx_linux_ source tree as
well.
[image sculpt_mnt_pocket]
Sculpt OS running on the MNT Pocket Reform
Being the initial enablement, there are still some limitations. For example,
the display of the MNT Pocket is physically
[https://mntre.com/documentation/pocket-reform-handbook.pdf - rotated] by 90
degrees, so we had to find a way to accommodate that. Unfortunately, there
seems to be no hardware support other than using the GPU to perform a fast
rotation. With GPU support still missing on this system, we had to resort to
performing the rotation in software on the CPU, which is obviously far from
optimal.
Those early inefficiencies notwithstanding, Sculpt OS is now able to run on
the MNT Pocket Reform. We will soon provide a preview image that exercises
the available features.
Platform driver for i.MX 8M Plus
================================
While enabling support for the MNT Pocket Reform (Section [MNT Pocket Reform]),
it was necessary to adjust the i.MX8MP-specific platform driver, which was
originally introduced in the previous
[https://genode.org/documentation/release-notes/24.08#Improvements_for_NXP_s_i.MX_family - release 24.08]
to drive the Compulab i.MX 8M Plus IOT Gateway.
Some of the I/O pin configurations necessary to set up the SoC properly are
statically compiled into this driver because they do not change at runtime.
However, the pin configuration is specific to the actual board. Therefore, the
i.MX8MP platform driver now needs to distinguish between different boards (IOT
Gateway and MNT Pocket) by evaluating the 'platform_info' ROM provided by
core.
Moreover, while working on different drivers, we detected a few missing clocks
that were added to the platform driver. It turned out that some clocks we
initially turned off to save energy have to be enabled to ensure the
liveliness of the ARM Trusted Firmware (ATF) and thereby the platform. We
also had to adapt the communication between ATF and our platform driver for
controlling power domains. The first version of the i.MX8MP platform driver
shared the ATF power-domain protocol with the i.MX8MQ version. However, the
power-domain enumerations of the two firmware versions differ as well, which
we accounted for.
Finally, the watchdog hardware is now served by the platform driver in a
recurrent fashion. Originally, our driver used the watchdog only to implement
reset functionality. But in the case of the MNT Pocket Reform, the watchdog
is already armed by the bootloader. Therefore, it needs to be served in time
to prevent the system from rebooting. As a consequence, the platform driver
is mandatory on this platform whenever the system needs to run for longer
than a minute.
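The recurrent-serving requirement boils down to simple deadline bookkeeping,
as sketched in the following self-contained model (illustrative, not the
driver's actual code):

```cpp
#include <cassert>
#include <cstdint>

/* model of a watchdog that reboots the system unless fed within 'timeout' */
struct Watchdog
{
    uint64_t timeout_ms;       /* e.g., armed by the bootloader */
    uint64_t deadline_ms = 0;

    /* recurrent feed by the platform driver pushes the deadline ahead */
    void feed(uint64_t now_ms) { deadline_ms = now_ms + timeout_ms; }

    /* the hardware would reset the system once the deadline passes */
    bool expired(uint64_t now_ms) const { return now_ms >= deadline_ms; }
};
```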
Wifi management rework
======================
Our management interface in the wifi driver has served us well over the years
and concealed the underlying complexity of the wireless stack. At the same
time, it gained some complexity itself to satisfy a variety of use cases.
Thus, we took the past release cycle as an opportunity to rework the
management layer, reducing its complexity by streamlining the interaction
between its various parts - the manager layer itself, the 'wpa_supplicant',
as well as the device driver - to provide a sound foundation for future
adaptations.
Also included is an update of the 'wpa_supplicant' to version 2.11.
The following paragraphs detail the changes to the configuration options,
which were altered quite a bit so as to no longer mix different tasks (e.g.,
joining a network and scanning for hidden networks) and to remove obsolete
options.
At the top-level '<wifi_config>' node, the following alterations were made:
* The 'log_level' attribute was added and configures the supplicant's
verbosity. Valid values correspond to levels used by the supplicant
and are as follows: 'excessive', 'msgdump', 'debug', 'info', 'warning',
and 'error'. The default value is 'error' and configures the least
amount of verbosity. This option was introduced to ease the investigation
of connectivity issues.
* The 'bgscan' attribute may be used to configure the way the
supplicant performs background scanning to steer, or rather optimize,
roaming decisions within the same network. The default value is set
to 'simple:30:-70:600'. The attribute is forwarded unmodified to the WPA
supplicant and thus follows the syntax supported by the supplicant
implementation. It can be disabled by specifying an empty value, i.e.,
'bgscan=""'.
* The 'connected_scan_interval' attribute was removed as this functionality
is now covered by background scanning.
* The 'verbose_state' attribute was removed altogether and similar
functionality is now covered by the 'verbose' attribute.
The network management received the following changes:
* Every configured network, denoted by a '<network>' node, is now implicitly
considered an option for joining. The 'auto_connect' attribute was
removed and a '<network>' node must be renamed or removed to deactivate
automatic connection establishment.
* The intent to scan for a hidden network is now managed by the newly
introduced '<explicit_scan>' node that like the '<network>' node has
an 'ssid' attribute. If the specified SSID is valid, it is incorporated
into the scan request to actively probe for this network. As the node
requests explicit scanning only, a corresponding '<network>' node is
required to actually connect to the hidden network.
The 'explicit_scan' attribute of the '<network>' node has been removed.
The following exemplary configuration shows how to configure the driver to
attempt joining two different networks, one of which is hidden. The initial
scan interval is set to 10 seconds, and the signal quality is updated every
30 seconds while connected to a network.
!<wifi_config scan_interval="10" update_quality_interval="30">
! <explicit_scan ssid="Skynet"/>
! <network ssid="Zero" protection="WPA2" passphrase="allyourbase"/>
! <network ssid="Skynet" protection="WPA3" passphrase="illbeback"/>
!</wifi_config>
For more information, please consult the driver's
[https://github.com/genodelabs/genode/blob/master/repos/dde_linux/src/driver/wifi/README - documentation]
that now features a best-practices section explaining how the driver is best
operated, and highlights the difference between a managed configuration (as
used in Sculpt OS) and a user-generated one.
Audio driver updated to OpenBSD 7.6
===================================
With this release, we updated our OpenBSD-based audio driver to a more recent
revision that correlates to version 7.6. It supports newer devices, e.g. Alder
Lake-N, and includes a fix for using message-signaled interrupts (MSI) with
HDA devices as found in AMD-based systems.
AVX and hardware-based AES in virtual machines
==============================================
The current release adds support for requesting and transferring the AVX FPU
state via Genode's VM-session interface. With this prerequisite fulfilled, we
enabled the announcement of the AVX feature to guest VMs in our port of
VirtualBox 6.
Additionally, we enabled the announcement of AES and RDRAND CPU features to
guest VMs to further improve the utilization of the hardware.
Build system and tools
######################
Extended depot-tool safeguards
==============================
When using the run tool's '--depot-auto-update' feature while switching
between different git topic branches with committed recipe hashes, a binary
archive present in the depot may accidentally not match its ingredients
because the depot/build tool's 'REBUILD=' mode - as used by the depot
auto-update mechanism - merely looks at the archive versions. This situation
is arguably rare. But when it occurs, its reach and effects are hard to
predict. To rule out this corner case early, the depot/build tool has now been
extended to record the hashes of the ingredients of binary archives. When
skipping a rebuild because the desired version presumably already exists as a
binary archive, the recorded hashes are compared against the current state of
the ingredients (src and api archives). Thereby, inconsistencies are promptly
reported to the user.
Users of the depot tool will notice .hash files appearing alongside src and
api archives. Those files contain the hash value of the content of the
respective archive. Each binary archive built is now also accompanied by
a .hash file, which contains a list of hash values of the ingredients that went
into the binary archive. Thanks to these .hash files, the consistency between
binaries and their ingredients can be checked quickly.
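The kind of consistency check these .hash files enable can be pictured with a
small shell sketch. This is purely illustrative - file layout and hashing
scheme are assumptions, not the depot tool's actual implementation:

```shell
# hash the content of an archive directory in a stable order
hash_of() {
    (cd "$1" && find . -type f | sort | xargs cat | sha256sum | cut -d' ' -f1)
}

# compare the current content against a previously recorded .hash file
consistent() {  # usage: consistent <archive-dir> <hash-file>
    [ "$(hash_of "$1")" = "$(cat "$2")" ]
}
```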
_As a note of caution, when switching to Genode 24.11 with an existing depot,_
_one will possibly need to remove existing depot archives (as listed by the_
_diagnostic messages) because the existing archives are not accompanied by_
_.hash files yet._
@@ -1 +1 @@
2024-08-28 8f1db0e604a283f5d3aafea61d38d6852ee91911
2024-12-10 408b474f632eefaaa19db35812a9aa94a48e6bdb
@@ -61,8 +61,9 @@ class Core::Platform_thread : Interface
/**
* Constructor
*/
Platform_thread(Platform_pd &pd, size_t, const char *name,
unsigned, Affinity::Location, addr_t)
Platform_thread(Platform_pd &pd, Rpc_entrypoint &, Ram_allocator &,
Region_map &, size_t, const char *name, unsigned,
Affinity::Location, addr_t)
: _name(name), _pd(pd) { }
/**
@@ -38,8 +38,11 @@ static inline bool can_use_super_page(addr_t, size_t)
}
addr_t Io_mem_session_component::_map_local(addr_t phys_base, size_t size)
Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const phys_base,
size_t const size_in)
{
size_t const size = size_in;
auto map_io_region = [] (addr_t phys_base, addr_t local_base, size_t size)
{
using namespace Fiasco;
@@ -91,14 +94,16 @@ addr_t Io_mem_session_component::_map_local(addr_t phys_base, size_t size)
size_t align = (size >= get_super_page_size()) ? get_super_page_size_log2()
: get_page_size_log2();
return platform().region_alloc().alloc_aligned(size, align).convert<addr_t>(
return platform().region_alloc().alloc_aligned(size, align).convert<Map_local_result>(
[&] (void *ptr) {
addr_t const core_local_base = (addr_t)ptr;
map_io_region(phys_base, core_local_base, size);
return core_local_base; },
return Map_local_result { .core_local_addr = core_local_base, .success = true };
},
[&] (Range_allocator::Alloc_error) -> addr_t {
[&] (Range_allocator::Alloc_error) {
error("core-local mapping of memory-mapped I/O range failed");
return 0; });
return Map_local_result();
});
}
@@ -103,3 +103,6 @@ Untyped_capability Pager_entrypoint::_pager_object_cap(unsigned long badge)
{
return Capability_space::import(native_thread().l4id, Rpc_obj_key(badge));
}
void Core::init_page_fault_handling(Rpc_entrypoint &) { }
@@ -20,7 +20,6 @@
/* core includes */
#include <platform.h>
#include <core_env.h>
using namespace Core;
@@ -26,7 +26,7 @@ namespace Genode { struct Foc_thread_state; }
struct Genode::Foc_thread_state : Thread_state
{
Foc::l4_cap_idx_t kcap { Foc::L4_INVALID_CAP }; /* thread's gate cap in its PD */
uint16_t id { }; /* ID of gate capability */
uint32_t id { }; /* ID of gate capability */
addr_t utcb { }; /* thread's UTCB in its PD */
};
@@ -1 +1 @@
2024-08-28 deb70ebec813a19ba26a28cd94fa7d25bbe52e78
2024-12-10 4247239f4d3ce9a840be368ac9e054e8064c01c6
@@ -1 +1 @@
2024-08-28 a4ae12d703c38248ac22905163479000020e0bb0
2024-12-10 39609d3553422b8c7c6acff2db845c67c5f8912b
@@ -1 +1 @@
2024-08-28 4c4d4d5d96bc345947e90c42559e45fec4dcc4c0
2024-12-10 7867db59531dc9086e76b74800125ee61ccc310e
@@ -1 +1 @@
2024-08-28 b0160be55c422f860753dbd375f04ff8f7ffc7e9
2024-12-10 3fc7c1b2cae2b9af835c97bf384b10411ec9c511
@@ -1 +1 @@
2024-08-28 3e92e9cf1ec41d5de0bfa754ff48c63476e60d67
2024-12-10 68ee5bc5640e1d32c33f46072256d5b1c71bef9b
@@ -30,17 +30,15 @@ class Core::Cap_id_allocator
{
public:
using id_t = uint16_t;
enum { ID_MASK = 0xffff };
using id_t = unsigned;
private:
enum {
CAP_ID_RANGE = ~0UL,
CAP_ID_MASK = ~3UL,
CAP_ID_NUM_MAX = CAP_ID_MASK >> 2,
CAP_ID_OFFSET = 1 << 2
CAP_ID_OFFSET = 1 << 2,
CAP_ID_MASK = CAP_ID_OFFSET - 1,
CAP_ID_RANGE = 1u << 28,
ID_MASK = CAP_ID_RANGE - 1,
};
Synced_range_allocator<Allocator_avl> _id_alloc;
@@ -75,8 +75,8 @@ class Core::Platform_thread : Interface
/**
* Constructor for non-core threads
*/
Platform_thread(Platform_pd &, size_t, const char *name, unsigned priority,
Affinity::Location, addr_t);
Platform_thread(Platform_pd &, Rpc_entrypoint &, Ram_allocator &, Region_map &,
size_t, const char *name, unsigned priority, Affinity::Location, addr_t);
/**
* Constructor for core main-thread
@@ -125,7 +125,7 @@ class Core::Vm_session_component
** Vm session interface **
**************************/
Capability<Native_vcpu> create_vcpu(Thread_capability);
Capability<Native_vcpu> create_vcpu(Thread_capability) override;
void attach_pic(addr_t) override { /* unused on Fiasco.OC */ }
void attach(Dataspace_capability, addr_t, Attach_attr) override; /* vm_session_common.cc */
@@ -6,7 +6,7 @@
*/
/*
* Copyright (C) 2006-2017 Genode Labs GmbH
* Copyright (C) 2006-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
@@ -21,31 +21,37 @@
using namespace Core;
void Io_mem_session_component::_unmap_local(addr_t base, size_t, addr_t)
void Io_mem_session_component::_unmap_local(addr_t base, size_t size, addr_t)
{
if (!base)
return;
unmap_local(base, size >> 12);
platform().region_alloc().free(reinterpret_cast<void *>(base));
}
addr_t Io_mem_session_component::_map_local(addr_t base, size_t size)
Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const base,
size_t const size)
{
/* align large I/O dataspaces on a super-page boundary within core */
size_t alignment = (size >= get_super_page_size()) ? get_super_page_size_log2()
: get_page_size_log2();
/* find appropriate region for mapping */
return platform().region_alloc().alloc_aligned(size, (unsigned)alignment).convert<addr_t>(
/* find appropriate region and map it locally */
return platform().region_alloc().alloc_aligned(size, (unsigned)alignment).convert<Map_local_result>(
[&] (void *local_base) {
if (!map_local_io(base, (addr_t)local_base, size >> get_page_size_log2())) {
error("map_local_io failed");
error("map_local_io failed ", Hex_range(base, size));
platform().region_alloc().free(local_base, base);
return 0UL;
return Map_local_result();
}
return (addr_t)local_base;
return Map_local_result { .core_local_addr = addr_t(local_base),
.success = true };
},
[&] (Range_allocator::Alloc_error) {
error("allocation of virtual memory for local I/O mapping failed");
return 0UL; });
return Map_local_result(); });
}
@@ -153,3 +153,6 @@ Pager_capability Pager_entrypoint::manage(Pager_object &obj)
},
[&] (Cpu_session::Create_thread_error) { return Pager_capability(); });
}
void Core::init_page_fault_handling(Rpc_entrypoint &) { }
@@ -18,6 +18,7 @@
#include <dataspace/capability.h>
#include <trace/source_registry.h>
#include <util/misc_math.h>
#include <util/mmio.h>
#include <util/xml_generator.h>
/* base-internal includes */
@@ -342,6 +343,76 @@ void Core::Platform::_setup_irq_alloc()
}
struct Acpi_rsdp : public Genode::Mmio<32>
{
using Mmio<32>::Mmio;
struct Signature : Register< 0, 64> { };
struct Revision : Register<15, 8> { };
struct Rsdt : Register<16, 32> { };
struct Length : Register<20, 32> { };
struct Xsdt : Register<24, 64> { };
bool valid() const
{
const char sign[] = "RSD PTR ";
return read<Signature>() == *(Genode::uint64_t *)sign;
}
} __attribute__((packed));
static void add_acpi_rsdp(auto &region_alloc, auto &xml)
{
using namespace Foc;
using Foc::L4::Kip::Mem_desc;
l4_kernel_info_t const &kip = sigma0_map_kip();
Mem_desc const * const desc = Mem_desc::first(&kip);
if (!desc)
return;
for (unsigned i = 0; i < Mem_desc::count(&kip); ++i) {
if (desc[i].type() != Mem_desc::Mem_type::Info ||
desc[i].sub_type() != Mem_desc::Info_sub_type::Info_acpi_rsdp)
continue;
auto offset = desc[i].start() & 0xffful;
auto pages = align_addr(offset + desc[i].size(), 12) >> 12;
region_alloc.alloc_aligned(pages * 4096, 12).with_result([&] (void *core_local_ptr) {
if (!map_local_io(desc[i].start(), (addr_t)core_local_ptr, pages))
return;
Byte_range_ptr const ptr((char *)(addr_t(core_local_ptr) + offset),
pages * 4096 - offset);
auto const rsdp = Acpi_rsdp(ptr);
if (!rsdp.valid())
return;
xml.node("acpi", [&] {
xml.attribute("revision", rsdp.read<Acpi_rsdp::Revision>());
if (rsdp.read<Acpi_rsdp::Rsdt>())
xml.attribute("rsdt", String<32>(Hex(rsdp.read<Acpi_rsdp::Rsdt>())));
if (rsdp.read<Acpi_rsdp::Xsdt>())
xml.attribute("xsdt", String<32>(Hex(rsdp.read<Acpi_rsdp::Xsdt>())));
});
unmap_local(addr_t(core_local_ptr), pages);
region_alloc.free(core_local_ptr);
pages = 0;
}, [&] (Range_allocator::Alloc_error) { });
if (!pages)
return;
}
}
void Core::Platform::_setup_basics()
{
using namespace Foc;
@@ -412,6 +483,10 @@ void Core::Platform::_setup_basics()
/* image is accessible by core */
add_region(Region(img_start, img_end), _core_address_ranges());
/* requested as I/O memory by the VESA driver and ACPI (rsdp search) */
_io_mem_alloc.add_range (0, 0x2000);
ram_alloc() .remove_range(0, 0x2000);
}
@@ -517,7 +592,10 @@ Core::Platform::Platform()
xml.node("affinity-space", [&] {
xml.attribute("width", affinity_space().width());
xml.attribute("height", affinity_space().height()); });
xml.attribute("height", affinity_space().height());
});
add_acpi_rsdp(region_alloc(), xml);
});
}
);
@@ -18,7 +18,6 @@
/* core includes */
#include <platform_thread.h>
#include <platform.h>
#include <core_env.h>
/* Fiasco.OC includes */
#include <foc/syscall.h>
@@ -210,7 +209,7 @@ Foc_thread_state Platform_thread::state()
s = _pager_obj->state.state;
s.kcap = _gate.remote;
s.id = (uint16_t)_gate.local.local_name();
s.id = Cap_index::id_t(_gate.local.local_name());
s.utcb = _utcb;
return s;
@@ -278,7 +277,8 @@ void Platform_thread::_finalize_construction()
}
Platform_thread::Platform_thread(Platform_pd &pd, size_t, const char *name, unsigned prio,
Platform_thread::Platform_thread(Platform_pd &pd, Rpc_entrypoint &, Ram_allocator &,
Region_map &, size_t, const char *name, unsigned prio,
Affinity::Location location, addr_t)
:
_name(name),
@@ -38,7 +38,7 @@ using namespace Core;
Cap_index_allocator &Genode::cap_idx_alloc()
{
static Cap_index_allocator_tpl<Core_cap_index,10*1024> alloc;
static Cap_index_allocator_tpl<Core_cap_index, 128 * 1024> alloc;
return alloc;
}
@@ -190,7 +190,7 @@ Cap_id_allocator::Cap_id_allocator(Allocator &alloc)
:
_id_alloc(&alloc)
{
_id_alloc.add_range(CAP_ID_OFFSET, CAP_ID_RANGE);
_id_alloc.add_range(CAP_ID_OFFSET, unsigned(CAP_ID_RANGE) - unsigned(CAP_ID_OFFSET));
}
@@ -213,7 +213,7 @@ void Cap_id_allocator::free(id_t id)
Mutex::Guard lock_guard(_mutex);
if (id < CAP_ID_RANGE)
_id_alloc.free((void*)(id & CAP_ID_MASK), CAP_ID_OFFSET);
_id_alloc.free((void*)(addr_t(id & CAP_ID_MASK)), CAP_ID_OFFSET);
}

View File

@@ -12,7 +12,6 @@
*/
/* core includes */
#include <core_env.h>
#include <platform_services.h>
#include <vm_root.h>
#include <io_port_root.h>
@@ -23,15 +22,16 @@
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &heap,
Registry<Service> &services,
Trace::Source_registry &trace_sources)
Trace::Source_registry &trace_sources,
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &io_port_ranges)
{
static Vm_root vm_root(ep, heap, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, heap, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm(services, vm_root);
static Io_port_root io_root(*core_env().pd_session(),
platform().io_port_alloc(), heap);
static Io_port_root io_root(io_port_ranges, heap);
static Core_service<Io_port_session_component> io_port(services, io_root);
}

View File

@@ -22,7 +22,6 @@
/* core includes */
#include <platform.h>
#include <core_env.h>
/* Fiasco.OC includes */
#include <foc/syscall.h>

View File

@@ -30,12 +30,13 @@ class Genode::Native_capability::Data : public Avl_node<Data>
{
public:
using id_t = uint16_t;
using id_t = unsigned;
constexpr static id_t INVALID_ID = ~0u;
private:
constexpr static uint16_t INVALID_ID = ~0;
constexpr static uint16_t UNUSED = 0;
constexpr static id_t UNUSED = 0;
uint8_t _ref_cnt; /* reference counter */
id_t _id; /* global capability id */
@@ -46,8 +47,8 @@ class Genode::Native_capability::Data : public Avl_node<Data>
bool valid() const { return _id != INVALID_ID; }
bool used() const { return _id != UNUSED; }
uint16_t id() const { return _id; }
void id(uint16_t id) { _id = id; }
id_t id() const { return _id; }
void id(id_t id) { _id = id; }
uint8_t inc();
uint8_t dec();
addr_t kcap() const;

View File

@@ -3,11 +3,11 @@
* \author Stefan Kalkowski
* \date 2010-12-06
*
* This is a Fiasco.OC-specific addition to the process enviroment.
* This is a Fiasco.OC-specific addition to the process environment.
*/
/*
* Copyright (C) 2010-2017 Genode Labs GmbH
* Copyright (C) 2010-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
@@ -59,7 +59,7 @@ static volatile int _cap_index_spinlock = SPINLOCK_UNLOCKED;
bool Cap_index::higher(Cap_index *n) { return n->_id > _id; }
Cap_index* Cap_index::find_by_id(uint16_t id)
Cap_index* Cap_index::find_by_id(id_t id)
{
if (_id == id) return this;
@@ -116,8 +116,8 @@ Cap_index* Capability_map::insert(Cap_index::id_t id)
{
Spin_lock::Guard guard(_lock);
ASSERT(!_tree.first() || !_tree.first()->find_by_id(id),
"Double insertion in cap_map()!");
if (_tree.first() && _tree.first()->find_by_id(id))
return { };
Cap_index * const i = cap_idx_alloc().alloc_range(1);
if (i) {
@@ -184,9 +184,16 @@ Cap_index* Capability_map::insert_map(Cap_index::id_t id, addr_t kcap)
_tree.insert(i);
/* map the given cap to our registry entry */
l4_task_map(L4_BASE_TASK_CAP, L4_BASE_TASK_CAP,
l4_obj_fpage(kcap, 0, L4_FPAGE_RWX),
i->kcap() | L4_ITEM_MAP | L4_MAP_ITEM_GRANT);
auto const msg = l4_task_map(L4_BASE_TASK_CAP, L4_BASE_TASK_CAP,
l4_obj_fpage(kcap, 0, L4_FPAGE_RWX),
i->kcap() | L4_ITEM_MAP | L4_MAP_ITEM_GRANT);
if (l4_error(msg)) {
_tree.remove(i);
cap_idx_alloc().free(i, 1);
return 0;
}
return i;
}

View File

@@ -55,9 +55,6 @@ static inline bool ipc_error(l4_msgtag_t tag, bool print)
}
static constexpr Cap_index::id_t INVALID_BADGE = 0xffff;
/**
* Representation of a capability during UTCB marshalling/unmarshalling
*/
@@ -116,7 +113,7 @@ static int extract_msg_from_utcb(l4_msgtag_t tag,
Cap_index::id_t const badge = (Cap_index::id_t)(*msg_words++);
if (badge == INVALID_BADGE)
if (badge == Cap_index::INVALID_ID)
continue;
/* received a delegated capability */
@@ -227,7 +224,7 @@ static l4_msgtag_t copy_msgbuf_to_utcb(Msgbuf_base &snd_msg,
for (unsigned i = 0; i < num_caps; i++) {
/* store badge as normal message word */
*msg_words++ = caps[i].valid ? caps[i].badge : INVALID_BADGE;
*msg_words++ = caps[i].valid ? caps[i].badge : Cap_index::INVALID_ID;
/* setup flexpage for valid capability to delegate */
if (caps[i].valid) {

View File

@@ -42,7 +42,6 @@ namespace Foc {
using namespace Genode;
using Exit_config = Vm_connection::Exit_config;
using Call_with_state = Vm_connection::Call_with_state;
enum Virt { VMX, SVM, UNKNOWN };
@@ -72,8 +71,7 @@ struct Foc_native_vcpu_rpc : Rpc_client<Vm_session::Native_vcpu>, Noncopyable
Capability<Vm_session::Native_vcpu> _create_vcpu(Vm_connection &vm,
Thread_capability &cap)
{
return vm.with_upgrade([&] {
return vm.call<Vm_session::Rpc_create_vcpu>(cap); });
return vm.create_vcpu(cap);
}
public:
@@ -400,6 +398,7 @@ struct Foc_vcpu : Thread, Noncopyable
if (state.fpu.charged()) {
state.fpu.charge([&] (Vcpu_state::Fpu::State &fpu) {
asm volatile ("fxrstor %0" : : "m" (fpu) : "memory");
return 512;
});
} else
asm volatile ("fxrstor %0" : : "m" (_fpu_vcpu) : "memory");
@@ -412,6 +411,7 @@ struct Foc_vcpu : Thread, Noncopyable
state.fpu.charge([&] (Vcpu_state::Fpu::State &fpu) {
asm volatile ("fxsave %0" : "=m" (fpu) :: "memory");
asm volatile ("fxsave %0" : "=m" (_fpu_vcpu) :: "memory");
return 512;
});
asm volatile ("fxrstor %0" : : "m" (_fpu_ep) : "memory");
@@ -1340,7 +1340,7 @@ struct Foc_vcpu : Thread, Noncopyable
_wake_up.up();
}
void with_state(Call_with_state &cw)
void with_state(auto const &fn)
{
if (!_dispatching) {
if (Thread::myself() != _ep_handler) {
@@ -1373,7 +1373,7 @@ struct Foc_vcpu : Thread, Noncopyable
_state_ready.down();
}
if (cw.call_with_state(_vcpu_state)
if (fn(_vcpu_state)
|| _extra_dispatch_up)
resume();
@@ -1415,7 +1415,10 @@ static enum Virt virt_type(Env &env)
** vCPU API **
**************/
void Vm_connection::Vcpu::_with_state(Call_with_state &cw) { static_cast<Foc_native_vcpu_rpc &>(_native_vcpu).vcpu.with_state(cw); }
void Vm_connection::Vcpu::_with_state(With_state::Ft const &fn)
{
static_cast<Foc_native_vcpu_rpc &>(_native_vcpu).vcpu.with_state(fn);
}
Vm_connection::Vcpu::Vcpu(Vm_connection &vm, Allocator &alloc,

View File

@@ -382,13 +382,10 @@ namespace Kernel {
* Halt processing of a signal context synchronously
*
* \param context capability ID of the targeted signal context
*
* \retval 0 suceeded
* \retval -1 failed
*/
inline int kill_signal_context(capid_t const context)
inline void kill_signal_context(capid_t const context)
{
return (int)call(call_id_kill_signal_context(), context);
call(call_id_kill_signal_context(), context);
}
/**

View File

@@ -11,13 +11,15 @@
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__SPEC__X86_64__PORT_IO_H_
#define _CORE__SPEC__X86_64__PORT_IO_H_
#ifndef _INCLUDE__SPEC__X86_64__PORT_IO_H_
#define _INCLUDE__SPEC__X86_64__PORT_IO_H_
/* core includes */
#include <types.h>
#include <base/fixed_stdint.h>
namespace Core {
namespace Hw {
using Genode::uint8_t;
using Genode::uint16_t;
/**
* Read byte from I/O port
@@ -38,4 +40,4 @@ namespace Core {
}
}
#endif /* _CORE__SPEC__X86_64__PORT_IO_H_ */
#endif /* _INCLUDE__SPEC__X86_64__PORT_IO_H_ */

View File

@@ -46,7 +46,6 @@ SRC_CC += ram_dataspace_factory.cc
SRC_CC += signal_transmitter_noinit.cc
SRC_CC += thread_start.cc
SRC_CC += env.cc
SRC_CC += region_map_support.cc
SRC_CC += pager.cc
SRC_CC += _main.cc
SRC_CC += kernel/cpu.cc
@@ -55,13 +54,16 @@ SRC_CC += kernel/ipc_node.cc
SRC_CC += kernel/irq.cc
SRC_CC += kernel/main.cc
SRC_CC += kernel/object.cc
SRC_CC += kernel/signal_receiver.cc
SRC_CC += kernel/signal.cc
SRC_CC += kernel/thread.cc
SRC_CC += kernel/timer.cc
SRC_CC += capability.cc
SRC_CC += stack_area_addr.cc
SRC_CC += heartbeat.cc
BOARD ?= unknown
CC_OPT_platform += -DBOARD_NAME="\"$(BOARD)\""
# provide Genode version information
include $(BASE_DIR)/src/core/version.inc

View File

@@ -22,12 +22,9 @@ SRC_CC += kernel/vm_thread_on.cc
SRC_CC += spec/x86_64/virtualization/kernel/vm.cc
SRC_CC += spec/x86_64/virtualization/kernel/svm.cc
SRC_CC += spec/x86_64/virtualization/kernel/vmx.cc
SRC_CC += spec/x86_64/virtualization/vm_session_component.cc
SRC_CC += vm_session_common.cc
SRC_CC += vm_session_component.cc
SRC_CC += kernel/lock.cc
SRC_CC += spec/x86_64/pic.cc
SRC_CC += spec/x86_64/pit.cc
SRC_CC += spec/x86_64/timer.cc
SRC_CC += spec/x86_64/kernel/thread_exception.cc
SRC_CC += spec/x86_64/platform_support.cc
SRC_CC += spec/x86_64/virtualization/platform_services.cc

View File

@@ -1 +1 @@
2024-08-28 de31628804f8541b6c0cf5a43ed621432befd5cb
2024-12-10 ca4eabba0cf0313545712015ae6e9ebb4d968b2a

View File

@@ -1 +1 @@
2024-11-08-j 84d5a44cde007081915979748933030b05113be5
2024-12-10 dad50ef2ab70aa5a7bd316ad116bfb1d59c5df5c

View File

@@ -1 +1 @@
2024-08-28 73ea0cda27023fee8a56c5c104f85875e0ce2597
2024-12-10 58d8cb90d04a52f53a9797d964568dc0d1e7c45d

View File

@@ -1 +1 @@
2024-08-28 268365a21014538c4524a43c86f1e4b1b9709a96
2024-12-10 1a5d21d207bb12797d285e1c3173cdaec7559afe

View File

@@ -200,6 +200,7 @@ generalize_target_names: $(CONTENT)
# supplement BOARD definition that normally comes from the build dir
sed -i "s/\?= unknown/:= $(BOARD)/" src/core/hw/target.mk
sed -i "s/\?= unknown/:= $(BOARD)/" src/bootstrap/hw/target.mk
sed -i "s/\?= unknown/:= $(BOARD)/" lib/mk/core-hw.inc
# discharge targets when building for mismatching architecture
sed -i "1aREQUIRES := $(ARCH)" src/core/hw/target.mk
sed -i "1aREQUIRES := $(ARCH)" src/bootstrap/hw/target.mk

View File

@@ -16,7 +16,6 @@
/* base includes */
#include <base/internal/globals.h>
#include <base/internal/unmanaged_singleton.h>
using namespace Genode;
@@ -26,13 +25,23 @@ size_t bootstrap_stack_size = STACK_SIZE;
uint8_t bootstrap_stack[Board::NR_OF_CPUS][STACK_SIZE]
__attribute__((aligned(get_page_size())));
Bootstrap::Platform & Bootstrap::platform() {
return *unmanaged_singleton<Bootstrap::Platform>(); }
Bootstrap::Platform & Bootstrap::platform()
{
/*
* Don't use static local variable because cmpxchg cannot be executed
* w/o MMU on ARMv6.
*/
static long _obj[(sizeof(Bootstrap::Platform)+sizeof(long))/sizeof(long)];
static Bootstrap::Platform *ptr;
if (!ptr)
ptr = construct_at<Bootstrap::Platform>(_obj);
return *ptr;
}
extern "C" void init() __attribute__ ((noreturn));
extern "C" void init()
{
Bootstrap::Platform & p = Bootstrap::platform();

View File

@@ -20,7 +20,6 @@
#include <base/internal/globals.h>
#include <base/internal/output.h>
#include <base/internal/raw_write_string.h>
#include <base/internal/unmanaged_singleton.h>
#include <board.h>
@@ -55,7 +54,11 @@ struct Buffer
};
Genode::Log &Genode::Log::log() { return unmanaged_singleton<Buffer>()->log; }
Genode::Log &Genode::Log::log()
{
static Buffer buffer { };
return buffer.log;
}
void Genode::raw_write_string(char const *str) { log(str); }

View File

@@ -27,6 +27,7 @@ namespace Bootstrap {
using Genode::addr_t;
using Genode::size_t;
using Genode::uint32_t;
using Boot_info = Hw::Boot_info<::Board::Boot_info>;
using Hw::Mmio_space;
using Hw::Mapping;

View File

@@ -73,7 +73,8 @@ class Genode::Multiboot2_info : Mmio<0x8>
Multiboot2_info(addr_t mbi) : Mmio({(char *)mbi, Mmio::SIZE}) { }
void for_each_tag(auto const &mem_fn,
auto const &acpi_fn,
auto const &acpi_rsdp_v1_fn,
auto const &acpi_rsdp_v2_fn,
auto const &fb_fn,
auto const &systab64_fn)
{
@@ -103,6 +104,7 @@ class Genode::Multiboot2_info : Mmio<0x8>
if (tag.read<Tag::Type>() == Tag::Type::ACPI_RSDP_V1 ||
tag.read<Tag::Type>() == Tag::Type::ACPI_RSDP_V2) {
size_t const sizeof_tag = 1UL << Tag::LOG2_SIZE;
addr_t const rsdp_addr = tag_addr + sizeof_tag;
@@ -113,10 +115,12 @@ class Genode::Multiboot2_info : Mmio<0x8>
Hw::Acpi_rsdp rsdp_v1;
memset (&rsdp_v1, 0, sizeof(rsdp_v1));
memcpy (&rsdp_v1, rsdp, 20);
acpi_fn(rsdp_v1);
acpi_rsdp_v1_fn(rsdp_v1);
} else
if (sizeof(*rsdp) <= tag.read<Tag::Size>() - sizeof_tag) {
/* ACPI RSDP v2 */
acpi_rsdp_v2_fn(*rsdp);
}
if (sizeof(*rsdp) <= tag.read<Tag::Size>() - sizeof_tag)
acpi_fn(*rsdp);
}
if (tag.read<Tag::Type>() == Tag::Type::FRAMEBUFFER) {

View File

@@ -18,10 +18,12 @@
#include <platform.h>
#include <multiboot.h>
#include <multiboot2.h>
#include <port_io.h>
#include <hw/memory_consts.h>
#include <hw/spec/x86_64/acpi.h>
#include <hw/spec/x86_64/apic.h>
#include <hw/spec/x86_64/x86_64.h>
using namespace Genode;
@@ -61,11 +63,113 @@ static Hw::Acpi_rsdp search_rsdp(addr_t area, addr_t area_size)
}
}
Hw::Acpi_rsdp invalid;
Hw::Acpi_rsdp invalid { };
return invalid;
}
static uint32_t calibrate_tsc_frequency(addr_t fadt_addr)
{
uint32_t const default_freq = 2'400'000;
if (!fadt_addr) {
warning("FADT not found, returning fixed TSC frequency of ", default_freq, "kHz");
return default_freq;
}
uint32_t const sleep_ms = 10;
Hw::Acpi_fadt fadt(reinterpret_cast<Hw::Acpi_generic *>(fadt_addr));
uint32_t const freq = fadt.calibrate_freq_khz(sleep_ms, []() { return Hw::Tsc::rdtsc(); });
if (!freq) {
warning("Unable to calibrate TSC, returning fixed TSC frequency of ", default_freq, "kHz");
return default_freq;
}
return freq;
}
static Hw::Local_apic::Calibration calibrate_lapic_frequency(addr_t fadt_addr)
{
uint32_t const default_freq = TIMER_MIN_TICKS_PER_MS;
if (!fadt_addr) {
warning("FADT not found, setting minimum Local APIC frequency of ", default_freq, "kHz");
return { default_freq, 1 };
}
uint32_t const sleep_ms = 10;
Hw::Acpi_fadt fadt(reinterpret_cast<Hw::Acpi_generic *>(fadt_addr));
Hw::Local_apic lapic(Hw::Cpu_memory_map::lapic_phys_base());
auto const result =
lapic.calibrate_divider([&] {
return fadt.calibrate_freq_khz(sleep_ms, [&] {
return lapic.read<Hw::Local_apic::Tmr_current>(); }, true); });
if (!result.freq_khz) {
warning("Local APIC calibration failed, setting minimum Local APIC frequency of ", default_freq, "kHz");
return { default_freq, 1 };
}
return result;
}
static void disable_pit()
{
using Hw::outb;
enum {
/* PIT constants */
PIT_CH0_DATA = 0x40,
PIT_MODE = 0x43,
};
/*
* Disable PIT timer channel. This is necessary since BIOS sets up
* channel 0 to fire periodically.
*/
outb(PIT_MODE, 0x30);
outb(PIT_CH0_DATA, 0);
outb(PIT_CH0_DATA, 0);
}
/*
* Enable dispatch serializing lfence instruction on AMD processors
*
* See Software techniques for managing speculation on AMD processors
* Revision 5.09.23
* Mitigation G-2
*/
static void amd_enable_serializing_lfence()
{
using Cpu = Hw::X86_64_cpu;
if (Hw::Vendor::get_vendor_id() != Hw::Vendor::Vendor_id::AMD)
return;
unsigned const family = Hw::Vendor::get_family();
/*
* In family 0Fh and 11h, lfence is always dispatch serializing and
* "AMD plans support for this MSR and access to this bit for all future
* processors." from family 14h on.
*/
if ((family == 0x10) || (family == 0x12) || (family >= 0x14)) {
Cpu::Amd_lfence::access_t amd_lfence = Cpu::Amd_lfence::read();
Cpu::Amd_lfence::Enable_dispatch_serializing::set(amd_lfence);
Cpu::Amd_lfence::write(amd_lfence);
}
}
Bootstrap::Platform::Board::Board()
:
core_mmio(Memory_region { 0, 0x1000 },
@@ -143,10 +247,14 @@ Bootstrap::Platform::Board::Board()
lambda(base, size);
},
[&] (Hw::Acpi_rsdp const &rsdp) {
/* prefer higher acpi revisions */
if (!acpi_rsdp.valid() || acpi_rsdp.revision < rsdp.revision)
acpi_rsdp = rsdp;
[&] (Hw::Acpi_rsdp const &rsdp_v1) {
/* only use ACPI RSDP v1 if nothing valid is available yet */
if (!acpi_rsdp.valid())
acpi_rsdp = rsdp_v1;
},
[&] (Hw::Acpi_rsdp const &rsdp_v2) {
/* always prefer v2, potentially overriding a previously stored RSDP v1 */
acpi_rsdp = rsdp_v2;
},
[&] (Hw::Framebuffer const &fb) {
info.framebuffer = fb;
@@ -246,6 +354,21 @@ Bootstrap::Platform::Board::Board()
cpus = !cpus ? 1 : max_cpus;
}
/*
* Enable serializing lfence on supported AMD processors
*
* For APs this will be set up later, but we need it already to obtain
the most accurate results when calibrating the TSC frequency.
*/
amd_enable_serializing_lfence();
auto r = calibrate_lapic_frequency(info.acpi_fadt);
info.lapic_freq_khz = r.freq_khz;
info.lapic_div = r.div;
info.tsc_freq_khz = calibrate_tsc_frequency(info.acpi_fadt);
disable_pit();
/* copy 16 bit boot code for AP CPUs and for ACPI resume */
addr_t ap_code_size = (addr_t)&_start - (addr_t)&_ap;
memcpy((void *)AP_BOOT_CODE_PAGE, &_ap, ap_code_size);
@@ -315,9 +438,12 @@ unsigned Bootstrap::Platform::enable_mmu()
if (board.cpus <= 1)
return (unsigned)cpu_id;
if (!Cpu::IA32_apic_base::Bsp::get(lapic_msr))
if (!Cpu::IA32_apic_base::Bsp::get(lapic_msr)) {
/* AP - done */
/* enable serializing lfence on supported AMD processors. */
amd_enable_serializing_lfence();
return (unsigned)cpu_id;
}
/* BSP - we're primary CPU - wake now all other CPUs */

View File

@@ -21,7 +21,7 @@
/* base-hw core includes */
#include <spec/x86_64/pic.h>
#include <spec/x86_64/pit.h>
#include <spec/x86_64/timer.h>
#include <spec/x86_64/cpu.h>
namespace Board {

View File

@@ -82,4 +82,11 @@ Core_region_map::attach(Dataspace_capability ds_cap, Attr const &attr)
}
void Core_region_map::detach(addr_t) { }
void Core_region_map::detach(addr_t core_local_addr)
{
size_t size = platform_specific().region_alloc_size_at((void *)core_local_addr);
unmap_local(core_local_addr, size >> get_page_size_log2());
platform().region_alloc().free((void *)core_local_addr);
}

View File

@@ -0,0 +1,275 @@
/*
* \brief Guest memory abstraction
* \author Stefan Kalkowski
* \author Benjamin Lamowski
* \date 2024-11-25
*/
/*
* Copyright (C) 2015-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__GUEST_MEMORY_H_
#define _CORE__GUEST_MEMORY_H_
/* base includes */
#include <base/allocator.h>
#include <base/allocator_avl.h>
#include <vm_session/vm_session.h>
#include <dataspace/capability.h>
/* core includes */
#include <dataspace_component.h>
#include <region_map_component.h>
namespace Core { class Guest_memory; }
using namespace Core;
class Core::Guest_memory
{
private:
using Avl_region = Allocator_avl_tpl<Rm_region>;
using Attach_attr = Genode::Vm_session::Attach_attr;
Sliced_heap _sliced_heap;
Avl_region _map { &_sliced_heap };
uint8_t _remaining_print_count { 10 };
void _with_region(addr_t const addr, auto const &fn)
{
Rm_region *region = _map.metadata((void *)addr);
if (region)
fn(*region);
else
if (_remaining_print_count) {
error(__PRETTY_FUNCTION__, " unknown region");
_remaining_print_count--;
}
}
public:
enum class Attach_result {
OK,
INVALID_DS,
OUT_OF_RAM,
OUT_OF_CAPS,
REGION_CONFLICT,
};
Attach_result attach(Region_map_detach &rm_detach,
Dataspace_component &dsc,
addr_t const guest_phys,
Attach_attr attr,
auto const &map_fn)
{
/*
* unsupported - deny otherwise arbitrary physical
* memory can be mapped to a VM
*/
if (dsc.managed())
return Attach_result::INVALID_DS;
if (guest_phys & 0xffful || attr.offset & 0xffful ||
attr.size & 0xffful)
return Attach_result::INVALID_DS;
if (!attr.size) {
attr.size = dsc.size();
if (attr.offset < attr.size)
attr.size -= attr.offset;
}
if (attr.size > dsc.size())
attr.size = dsc.size();
if (attr.offset >= dsc.size() ||
attr.offset > dsc.size() - attr.size)
return Attach_result::INVALID_DS;
using Alloc_error = Range_allocator::Alloc_error;
Attach_result const retval = _map.alloc_addr(attr.size, guest_phys).convert<Attach_result>(
[&] (void *) {
Rm_region::Attr const region_attr
{
.base = guest_phys,
.size = attr.size,
.write = dsc.writeable() && attr.writeable,
.exec = attr.executable,
.off = attr.offset,
.dma = false,
};
/* store attachment info in meta data */
try {
_map.construct_metadata((void *)guest_phys,
dsc, rm_detach, region_attr);
} catch (Allocator_avl_tpl<Rm_region>::Assign_metadata_failed) {
if (_remaining_print_count) {
error("failed to store attachment info");
_remaining_print_count--;
}
return Attach_result::INVALID_DS;
}
Rm_region &region = *_map.metadata((void *)guest_phys);
/* inform dataspace about attachment */
dsc.attached_to(region);
return Attach_result::OK;
},
[&] (Alloc_error error) {
switch (error) {
case Alloc_error::OUT_OF_RAM:
return Attach_result::OUT_OF_RAM;
case Alloc_error::OUT_OF_CAPS:
return Attach_result::OUT_OF_CAPS;
case Alloc_error::DENIED:
{
/*
* Handle attach after partial detach
*/
Rm_region *region_ptr = _map.metadata((void *)guest_phys);
if (!region_ptr)
return Attach_result::REGION_CONFLICT;
Rm_region &region = *region_ptr;
bool conflict = false;
region.with_dataspace([&] (Dataspace_component &dataspace) {
(void)dataspace;
if (!(dsc.cap() == dataspace.cap()))
conflict = true;
});
if (conflict)
return Attach_result::REGION_CONFLICT;
if (guest_phys < region.base() ||
guest_phys > region.base() + region.size() - 1)
return Attach_result::REGION_CONFLICT;
}
};
return Attach_result::OK;
}
);
if (retval == Attach_result::OK) {
addr_t phys_addr = dsc.phys_addr() + attr.offset;
size_t size = attr.size;
map_fn(guest_phys, phys_addr, size);
}
return retval;
}
void detach(addr_t guest_phys,
size_t size,
auto const &unmap_fn)
{
if (!size || (guest_phys & 0xffful) || (size & 0xffful)) {
if (_remaining_print_count) {
warning("vm_session: skipping invalid memory detach addr=",
(void *)guest_phys, " size=", (void *)size);
_remaining_print_count--;
}
return;
}
addr_t const guest_phys_end = guest_phys + (size - 1);
addr_t addr = guest_phys;
do {
Rm_region *region = _map.metadata((void *)addr);
/* walk region holes page-by-page */
size_t iteration_size = 0x1000;
if (region) {
iteration_size = region->size();
detach_at(region->base(), unmap_fn);
}
if (addr >= guest_phys_end - (iteration_size - 1))
break;
addr += iteration_size;
} while (true);
}
Guest_memory(Constrained_ram_allocator &constrained_md_ram_alloc,
Region_map &region_map)
:
_sliced_heap(constrained_md_ram_alloc, region_map)
{
/* configure managed VM area */
_map.add_range(0UL, ~0UL);
}
~Guest_memory()
{
/* detach all regions */
while (true) {
addr_t out_addr = 0;
if (!_map.any_block_addr(&out_addr))
break;
detach_at(out_addr, [](addr_t, size_t) { });
}
}
void detach_at(addr_t addr,
auto const &unmap_fn)
{
_with_region(addr, [&] (Rm_region &region) {
if (!region.reserved())
reserve_and_flush(addr, unmap_fn);
/* free the reserved region */
_map.free(reinterpret_cast<void *>(region.base()));
});
}
void reserve_and_flush(addr_t addr,
auto const &unmap_fn)
{
_with_region(addr, [&] (Rm_region &region) {
/* inform dataspace */
region.with_dataspace([&] (Dataspace_component &dataspace) {
dataspace.detached_from(region);
});
region.mark_as_reserved();
unmap_fn(region.base(), region.size());
});
}
};
#endif /* _CORE__GUEST_MEMORY_H_ */

View File

@@ -21,5 +21,7 @@ using namespace Core;
void Io_mem_session_component::_unmap_local(addr_t, size_t, addr_t) { }
addr_t Io_mem_session_component::_map_local(addr_t base, size_t) { return base; }
Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const base, size_t)
{
return { .core_local_addr = base, .success = true };
}

View File

@@ -18,7 +18,7 @@
/* core includes */
#include <kernel/irq.h>
#include <irq_root.h>
#include <core_env.h>
#include <platform.h>
/* base-internal includes */
#include <base/internal/capability_space.h>

View File

@@ -66,6 +66,7 @@ namespace Kernel {
constexpr Call_arg call_id_set_cpu_state() { return 125; }
constexpr Call_arg call_id_exception_state() { return 126; }
constexpr Call_arg call_id_single_step() { return 127; }
constexpr Call_arg call_id_ack_pager_signal() { return 128; }
/**
* Invalidate TLB entries for the `pd` in region `addr`, `sz`
@@ -137,10 +138,9 @@ namespace Kernel {
* \retval 0 succeeded
* \retval !=0 failed
*/
inline int start_thread(Thread & thread, unsigned const cpu_id,
Pd & pd, Native_utcb & utcb)
inline int start_thread(Thread & thread, Pd & pd, Native_utcb & utcb)
{
return (int)call(call_id_start_thread(), (Call_arg)&thread, cpu_id,
return (int)call(call_id_start_thread(), (Call_arg)&thread,
(Call_arg)&pd, (Call_arg)&utcb);
}
@@ -148,13 +148,16 @@ namespace Kernel {
/**
* Set or unset the handler of an event that can be triggered by a thread
*
* \param thread pointer to thread kernel object
* \param thread reference to thread kernel object
* \param pager reference to pager kernel object
* \param signal_context_id capability id of the page-fault handler
*/
inline void thread_pager(Thread & thread,
inline void thread_pager(Thread &thread,
Thread &pager,
capid_t const signal_context_id)
{
call(call_id_thread_pager(), (Call_arg)&thread, signal_context_id);
call(call_id_thread_pager(), (Call_arg)&thread, (Call_arg)&pager,
signal_context_id);
}
@@ -203,6 +206,18 @@ namespace Kernel {
{
call(call_id_single_step(), (Call_arg)&thread, (Call_arg)&on);
}
/**
* Acknowledge a signal transmitted to a pager
*
* \param context signal context to acknowledge
* \param thread reference to faulting thread kernel object
* \param resolved whether fault got resolved
*/
inline void ack_pager_signal(capid_t const context, Thread &thread, bool resolved)
{
call(call_id_ack_pager_signal(), context, (Call_arg)&thread, resolved);
}
}
#endif /* _CORE__KERNEL__CORE_INTERFACE_H_ */

View File

@@ -27,35 +27,35 @@
using namespace Kernel;
/*************
** Cpu_job **
*************/
/*****************
** Cpu_context **
*****************/
void Cpu_job::_activate_own_share() { _cpu->schedule(this); }
void Cpu_context::_activate() { _cpu().schedule(*this); }
void Cpu_job::_deactivate_own_share()
void Cpu_context::_deactivate()
{
assert(_cpu->id() == Cpu::executing_id());
_cpu->scheduler().unready(*this);
assert(_cpu().id() == Cpu::executing_id());
_cpu().scheduler().unready(*this);
}
void Cpu_job::_yield()
void Cpu_context::_yield()
{
assert(_cpu->id() == Cpu::executing_id());
_cpu->scheduler().yield();
assert(_cpu().id() == Cpu::executing_id());
_cpu().scheduler().yield();
}
void Cpu_job::_interrupt(Irq::Pool &user_irq_pool, unsigned const /* cpu_id */)
void Cpu_context::_interrupt(Irq::Pool &user_irq_pool)
{
/* let the IRQ controller take a pending IRQ for handling, if any */
unsigned irq_id;
if (_cpu->pic().take_request(irq_id))
if (_cpu().pic().take_request(irq_id))
/* let the CPU of this job handle the IRQ if it is a CPU-local one */
if (!_cpu->handle_if_cpu_local_interrupt(irq_id)) {
/* let the CPU of this context handle the IRQ if it is a CPU-local one */
if (!_cpu().handle_if_cpu_local_interrupt(irq_id)) {
/* it isn't a CPU-local IRQ, so, it must be a user IRQ */
User_irq * irq = User_irq::object(user_irq_pool, irq_id);
@@ -64,38 +64,37 @@ void Cpu_job::_interrupt(Irq::Pool &user_irq_pool, unsigned const /* cpu_id */)
}
/* let the IRQ controller finish the currently taken IRQ */
_cpu->pic().finish_request();
_cpu().pic().finish_request();
}
void Cpu_job::affinity(Cpu &cpu)
void Cpu_context::affinity(Cpu &cpu)
{
_cpu = &cpu;
_cpu->scheduler().insert(*this);
_cpu().scheduler().remove(*this);
_cpu_ptr = &cpu;
_cpu().scheduler().insert(*this);
}
void Cpu_job::quota(unsigned const q)
void Cpu_context::quota(unsigned const q)
{
if (_cpu)
_cpu->scheduler().quota(*this, q);
else
Context::quota(q);
_cpu().scheduler().quota(*this, q);
}
Cpu_job::Cpu_job(Priority const p, unsigned const q)
Cpu_context::Cpu_context(Cpu &cpu,
Priority const priority,
unsigned const quota)
:
Context(p, q), _cpu(0)
{ }
Cpu_job::~Cpu_job()
Context(priority, quota), _cpu_ptr(&cpu)
{
if (!_cpu)
return;
_cpu().scheduler().insert(*this);
}
_cpu->scheduler().remove(*this);
Cpu_context::~Cpu_context()
{
_cpu().scheduler().remove(*this);
}
@@ -112,19 +111,17 @@ Cpu::Idle_thread::Idle_thread(Board::Address_space_id_allocator &addr_space_id_a
Cpu &cpu,
Pd &core_pd)
:
Thread { addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd,
Priority::min(), 0, "idle", Thread::IDLE }
Thread { addr_space_id_alloc, user_irq_pool, cpu_pool, cpu,
core_pd, Priority::min(), 0, "idle", Thread::IDLE }
{
regs->ip = (addr_t)&idle_thread_main;
affinity(cpu);
Thread::_pd = &core_pd;
}
void Cpu::schedule(Job * const job)
void Cpu::schedule(Context &context)
{
_scheduler.ready(job->context());
_scheduler.ready(static_cast<Scheduler::Context&>(context));
if (_id != executing_id() && _scheduler.need_to_schedule())
trigger_ip_interrupt();
}
@@ -142,33 +139,34 @@ bool Cpu::handle_if_cpu_local_interrupt(unsigned const irq_id)
}
Cpu_job & Cpu::schedule()
Cpu::Context & Cpu::handle_exception_and_schedule()
{
/* update scheduler */
Job & old_job = scheduled_job();
old_job.exception(*this);
Context &context = current_context();
context.exception();
if (_state == SUSPEND || _state == HALT)
return _halt_job;
/* update schedule if necessary */
if (_scheduler.need_to_schedule()) {
_timer.process_timeouts();
_scheduler.update(_timer.time());
time_t t = _scheduler.current_time_left();
_timer.set_timeout(&_timeout, t);
time_t duration = _timer.schedule_timeout();
old_job.update_execution_time(duration);
context.update_execution_time(duration);
}
/* return new job */
return scheduled_job();
/* return current context */
return current_context();
}
addr_t Cpu::stack_start()
{
return Abi::stack_align(Hw::Mm::cpu_local_memory().base +
(1024*1024*_id) + (64*1024));
(Hw::Mm::CPU_LOCAL_MEMORY_SLOT_SIZE*_id)
+ Hw::Mm::KERNEL_STACK_SIZE);
}

View File

@@ -39,9 +39,11 @@ namespace Kernel {
class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
public Genode::List<Cpu>::Element
{
private:
public:
using Job = Cpu_job;
using Context = Cpu_context;
private:
/**
* Inter-processor-interrupt object of the cpu
@@ -83,16 +85,14 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
Pd &core_pd);
};
struct Halt_job : Job
struct Halt_job : Cpu_context
{
Halt_job() : Job (0, 0) { }
Halt_job(Cpu &cpu)
: Cpu_context(cpu, 0, 0) { }
void exception(Kernel::Cpu &) override { }
void proceed(Kernel::Cpu &) override;
Kernel::Cpu_job* helping_destination() override { return this; }
} _halt_job { };
void exception() override { }
void proceed() override;
} _halt_job { *this };
enum State { RUN, HALT, SUSPEND };
@@ -143,14 +143,14 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
bool handle_if_cpu_local_interrupt(unsigned const irq_id);
/**
* Schedule 'job' at this CPU
* Schedule 'context' at this CPU
*/
void schedule(Job * const job);
void schedule(Context& context);
/**
* Return the job that should be executed at next
* Return the context that should be executed next
*/
Cpu_job& schedule();
Context& handle_exception_and_schedule();
Board::Pic & pic() { return _pic; }
Timer & timer() { return _timer; }
@@ -158,10 +158,10 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
addr_t stack_start();
/**
* Returns the currently active job
* Returns the currently scheduled context
*/
Job & scheduled_job() {
return *static_cast<Job *>(&_scheduler.current())->helping_destination(); }
Context & current_context() {
return static_cast<Context&>(_scheduler.current().helping_destination()); }
unsigned id() const { return _id; }
Scheduler &scheduler() { return _scheduler; }

View File

@@ -22,46 +22,39 @@
namespace Kernel {
class Cpu;
/**
* Context of a job (thread, VM, idle) that shall be executed by a CPU
*/
class Cpu_job;
class Cpu_context;
}
class Kernel::Cpu_job : private Scheduler::Context
/**
* Context (thread, vcpu) that shall be executed by a CPU
*/
class Kernel::Cpu_context : private Scheduler::Context
{
private:
friend class Cpu; /* static_cast from 'Scheduler::Context' to 'Cpu_job' */
friend class Cpu;
time_t _execution_time { 0 };
Cpu *_cpu_ptr;
/*
* Noncopyable
*/
Cpu_job(Cpu_job const &);
Cpu_job &operator = (Cpu_job const &);
Cpu_context(Cpu_context const &);
Cpu_context &operator = (Cpu_context const &);
protected:
Cpu * _cpu;
Cpu &_cpu() const { return *_cpu_ptr; }
/**
* Handle interrupt exception that occured during execution on CPU 'id'
* Handle interrupt exception
*/
void _interrupt(Irq::Pool &user_irq_pool, unsigned const id);
void _interrupt(Irq::Pool &user_irq_pool);
/**
* Activate our own CPU-share
*/
void _activate_own_share();
/**
* Deactivate our own CPU-share
*/
void _deactivate_own_share();
void _activate();
void _deactivate();
/**
* Yield the currently scheduled CPU share of this context
@@ -69,55 +62,37 @@ class Kernel::Cpu_job : private Scheduler::Context
void _yield();
/**
* Return whether we are allowed to help job 'j' with our CPU-share
* Return whether helping context 'j' is possible scheduling-wise
*/
bool _helping_possible(Cpu_job const &j) const { return j._cpu == _cpu; }
bool _helping_possible(Cpu_context const &j) const {
return j._cpu_ptr == _cpu_ptr; }
void _help(Cpu_context &context) { Context::help(context); }
using Context::ready;
using Context::helping_finished;
public:
using Context = Scheduler::Context;
using Priority = Scheduler::Priority;
/**
* Handle exception that occurred during execution on CPU 'id'
*/
virtual void exception(Cpu & cpu) = 0;
Cpu_context(Cpu &cpu,
Priority const priority,
unsigned const quota);
virtual ~Cpu_context();
/**
* Continue execution on CPU 'id'
*/
virtual void proceed(Cpu & cpu) = 0;
/**
* Return which job currently uses our CPU-share
*/
virtual Cpu_job * helping_destination() = 0;
/**
* Construct a job with scheduling priority 'p' and time quota 'q'
*/
Cpu_job(Priority const p, unsigned const q);
/**
* Destructor
*/
virtual ~Cpu_job();
/**
* Link job to CPU 'cpu'
* Link context to CPU 'cpu'
*/
void affinity(Cpu &cpu);
/**
* Set CPU quota of the job to 'q'
* Set CPU quota of the context to 'q'
*/
void quota(unsigned const q);
/**
* Return whether our CPU-share is currently active
*/
bool own_share_active() { return Context::ready(); }
/**
* Update total execution time
*/
@@ -128,14 +103,15 @@ class Kernel::Cpu_job : private Scheduler::Context
*/
time_t execution_time() const { return _execution_time; }
/**
* Handle exception that occured during execution of this context
*/
virtual void exception() = 0;
/***************
** Accessors **
***************/
void cpu(Cpu &cpu) { _cpu = &cpu; }
Context &context() { return *this; }
/**
* Continue execution of this context
*/
virtual void proceed() = 0;
};
#endif /* _CORE__KERNEL__CPU_CONTEXT_H_ */

View File

@@ -11,8 +11,8 @@
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__KERNEL__SMP_H_
#define _CORE__KERNEL__SMP_H_
#ifndef _CORE__KERNEL__INTER_PROCESSOR_WORK_H_
#define _CORE__KERNEL__INTER_PROCESSOR_WORK_H_
#include <util/interface.h>
@@ -32,11 +32,11 @@ class Kernel::Inter_processor_work : Genode::Interface
{
public:
virtual void execute(Cpu &) = 0;
virtual void execute(Cpu & cpu) = 0;
protected:
Genode::List_element<Inter_processor_work> _le { this };
};
#endif /* _CORE__KERNEL__SMP_H_ */
#endif /* _CORE__KERNEL__INTER_PROCESSOR_WORK_H_ */

View File

@@ -57,19 +57,13 @@ void Ipc_node::_cancel_send()
}
bool Ipc_node::_helping() const
{
return _out.state == Out::SEND_HELPING && _out.node;
}
bool Ipc_node::ready_to_send() const
{
return _out.state == Out::READY && !_in.waiting();
}
void Ipc_node::send(Ipc_node &node, bool help)
void Ipc_node::send(Ipc_node &node)
{
node._in.queue.enqueue(_queue_item);
@@ -78,13 +72,7 @@ void Ipc_node::send(Ipc_node &node, bool help)
node._thread.ipc_await_request_succeeded();
}
_out.node = &node;
_out.state = help ? Out::SEND_HELPING : Out::SEND;
}
Thread &Ipc_node::helping_destination()
{
return _helping() ? _out.node->helping_destination() : _thread;
_out.state = Out::SEND;
}

View File

@@ -50,14 +50,14 @@ class Kernel::Ipc_node
struct Out
{
enum State { READY, SEND, SEND_HELPING, DESTRUCT };
enum State { READY, SEND, DESTRUCT };
State state { READY };
Ipc_node *node { nullptr };
bool sending() const
{
return state == SEND_HELPING || state == SEND;
return state == SEND;
}
};
@@ -76,11 +76,6 @@ class Kernel::Ipc_node
*/
void _cancel_send();
/**
* Return whether this IPC node is helping another one
*/
bool _helping() const;
/**
* Noncopyable
*/
@@ -102,28 +97,8 @@ class Kernel::Ipc_node
* Send a message and wait for the according reply
*
* \param node targeted IPC node
* \param help whether the request implies a helping relationship
*/
void send(Ipc_node &node, bool help);
/**
* Return final destination of the helping-chain
* this IPC node is part of, or its own thread otherwise
*/
Thread &helping_destination();
/**
* Call 'fn' of type 'void (Ipc_node *)' for each helper
*/
void for_each_helper(auto const &fn)
{
_in.queue.for_each([fn] (Queue_item &item) {
Ipc_node &node { item.object() };
if (node._helping())
fn(node._thread);
});
}
void send(Ipc_node &node);
/**
* Return whether this IPC node is ready to wait for messages

View File

@@ -20,7 +20,7 @@
#include <util/avl_tree.h>
/* core includes */
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
namespace Board {
@@ -161,9 +161,7 @@ class Kernel::User_irq : public Kernel::Irq
*/
void occurred() override
{
if (_context.can_submit(1)) {
_context.submit(1);
}
_context.submit(1);
disable();
}

View File

@@ -63,16 +63,16 @@ Kernel::Main *Kernel::Main::_instance;
void Kernel::Main::_handle_kernel_entry()
{
Cpu &cpu = _cpu_pool.cpu(Cpu::executing_id());
Cpu_job * new_job;
Cpu::Context * context;
{
Lock::Guard guard(_data_lock);
new_job = &cpu.schedule();
context =
&_cpu_pool.cpu(Cpu::executing_id()).handle_exception_and_schedule();
}
new_job->proceed(cpu);
context->proceed();
}

View File

@@ -19,6 +19,38 @@
using namespace Kernel;
void Scheduler::Context::help(Scheduler::Context &c)
{
_destination = &c;
c._helper_list.insert(&_helper_le);
}
void Scheduler::Context::helping_finished()
{
if (!_destination)
return;
_destination->_helper_list.remove(&_helper_le);
_destination = nullptr;
}
Scheduler::Context& Scheduler::Context::helping_destination()
{
return (_destination) ? _destination->helping_destination() : *this;
}
Scheduler::Context::~Context()
{
helping_finished();
for (Context::List_element *h = _helper_list.first(); h; h = h->next())
h->object()->helping_finished();
}
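The helping chain implemented above resolves recursively to the context at the end of the chain. A minimal stand-alone sketch of that idea, using hypothetical simplified types (a `std::vector` in place of the kernel's intrusive helper list, names chosen for illustration only):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Sketch of a scheduler helping chain: a context that helps another
// donates its CPU share to the final destination of the chain.
struct Context
{
    Context              *destination = nullptr; // context we currently help
    std::vector<Context*> helpers;               // contexts helping us

    void help(Context &c)
    {
        destination = &c;
        c.helpers.push_back(this);
    }

    void helping_finished()
    {
        if (!destination)
            return;
        auto &h = destination->helpers;
        h.erase(std::remove(h.begin(), h.end(), this), h.end());
        destination = nullptr;
    }

    // resolve recursively: the context that actually gets to run
    Context &helping_destination()
    {
        return destination ? destination->helping_destination() : *this;
    }

    ~Context()
    {
        helping_finished();
        for (Context *h : helpers)
            h->destination = nullptr; // detach remaining helpers on destruction
    }
};
```

With a chain a → b → c, `a.helping_destination()` yields c; once b finishes helping, the same call yields b again, mirroring how the kernel redirects CPU shares during IPC.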
void Scheduler::_consumed(unsigned const time)
{
if (_super_period_left > time) {
@@ -149,7 +181,10 @@ void Scheduler::update(time_t time)
void Scheduler::ready(Context &c)
{
assert(!c.ready() && &c != &_idle);
assert(&c != &_idle);
if (c.ready())
return;
c._ready = true;
@@ -170,23 +205,33 @@ void Scheduler::ready(Context &c)
_slack_list.insert_head(&c._slack_le);
if (!keep_current && _state == UP_TO_DATE) _state = OUT_OF_DATE;
for (Context::List_element *helper = c._helper_list.first();
helper; helper = helper->next())
if (!helper->object()->ready()) ready(*helper->object());
}
void Scheduler::unready(Context &c)
{
assert(c.ready() && &c != &_idle);
assert(&c != &_idle);
if (!c.ready())
return;
if (&c == _current && _state == UP_TO_DATE) _state = OUT_OF_DATE;
c._ready = false;
_slack_list.remove(&c._slack_le);
if (!c._quota)
return;
if (c._quota) {
_rpl[c._priority].remove(&c._priotized_le);
_upl[c._priority].insert_tail(&c._priotized_le);
}
_rpl[c._priority].remove(&c._priotized_le);
_upl[c._priority].insert_tail(&c._priotized_le);
for (Context::List_element *helper = c._helper_list.first();
helper; helper = helper->next())
if (helper->object()->ready()) unready(*helper->object());
}

View File

@@ -65,6 +65,7 @@ class Kernel::Scheduler
friend class Scheduler_test::Context;
using List_element = Genode::List_element<Context>;
using List = Genode::List<List_element>;
unsigned _priority;
unsigned _quota;
@@ -74,10 +75,20 @@ class Kernel::Scheduler
List_element _slack_le { this };
unsigned _slack_time_left { 0 };
List_element _helper_le { this };
List _helper_list {};
Context *_destination { nullptr };
bool _ready { false };
void _reset() { _priotized_time_left = _quota; }
/**
* Noncopyable
*/
Context(const Context&) = delete;
Context& operator=(const Context&) = delete;
public:
Context(Priority const priority,
@@ -85,9 +96,14 @@ class Kernel::Scheduler
:
_priority(priority.value),
_quota(quota) { }
~Context();
bool ready() const { return _ready; }
void quota(unsigned const q) { _quota = q; }
void help(Context &c);
void helping_finished();
Context& helping_destination();
};
private:

View File

@@ -1,18 +1,19 @@
/*
* \brief Kernel backend for asynchronous inter-process communication
* \author Martin Stein
* \author Stefan Kalkowski
* \date 2012-11-30
*/
/*
* Copyright (C) 2012-2019 Genode Labs GmbH
* Copyright (C) 2012-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
/* core includes */
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <kernel/thread.h>
using namespace Kernel;
@@ -26,7 +27,7 @@ void Signal_handler::cancel_waiting()
{
if (_receiver) {
_receiver->_handler_cancelled(*this);
_receiver = 0;
_receiver = nullptr;
}
}
@@ -71,28 +72,20 @@ void Signal_context::_deliverable()
void Signal_context::_delivered()
{
_submits = 0;
_ack = 0;
_ack = false;
}
void Signal_context::_killer_cancelled() { _killer = 0; }
bool Signal_context::can_submit(unsigned const n) const
{
if (_killed || _submits >= (unsigned)~0 - n)
return false;
return true;
}
void Signal_context::_killer_cancelled() { _killer = nullptr; }
void Signal_context::submit(unsigned const n)
{
if (_killed || _submits >= (unsigned)~0 - n)
if (_killed)
return;
_submits += n;
if (_submits < ((unsigned)~0 - n))
_submits += n;
if (_ack)
_deliverable();
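The guard `if (_submits < ((unsigned)~0 - n)) _submits += n;` above saturates the submit counter instead of letting it wrap around. The same idea as a hypothetical free function (a sketch, not the kernel's interface):

```cpp
#include <cassert>
#include <limits>

// Saturating accumulation of signal submits: only add 'n' while the
// counter stays safely below the unsigned maximum, otherwise keep it.
unsigned submit_saturating(unsigned submits, unsigned n)
{
    if (submits < std::numeric_limits<unsigned>::max() - n)
        submits += n;
    return submits;
}
```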
@@ -105,32 +98,19 @@ void Signal_context::ack()
return;
if (!_killed) {
_ack = 1;
_ack = true;
_deliverable();
return;
}
if (_killer) {
_killer->_context = 0;
_killer->_context = nullptr;
_killer->_thread.signal_context_kill_done();
_killer = 0;
_killer = nullptr;
}
}
bool Signal_context::can_kill() const
{
/* check if in a kill operation or already killed */
if (_killed) {
if (_ack)
return true;
return false;
}
return true;
}
void Signal_context::kill(Signal_context_killer &k)
{
/* check if in a kill operation or already killed */
@@ -139,13 +119,13 @@ void Signal_context::kill(Signal_context_killer &k)
/* kill directly if there is no unacknowledged delivery */
if (_ack) {
_killed = 1;
_killed = true;
return;
}
/* wait for delivery acknowledgement */
_killer = &k;
_killed = 1;
_killed = true;
_killer->_context = this;
_killer->_thread.signal_context_kill_pending();
}
@@ -231,24 +211,17 @@ void Signal_receiver::_add_context(Signal_context &c) {
_contexts.enqueue(c._contexts_fe); }
bool Signal_receiver::can_add_handler(Signal_handler const &h) const
bool Signal_receiver::add_handler(Signal_handler &h)
{
if (h._receiver)
return false;
return true;
}
void Signal_receiver::add_handler(Signal_handler &h)
{
if (h._receiver)
return;
_handlers.enqueue(h._handlers_fe);
h._receiver = this;
h._thread.signal_wait_for_signal();
_listen();
return true;
}

View File

@@ -1,18 +1,19 @@
/*
* \brief Kernel backend for asynchronous inter-process communication
* \author Martin Stein
* \author Stefan Kalkowski
* \date 2012-11-30
*/
/*
* Copyright (C) 2012-2017 Genode Labs GmbH
* Copyright (C) 2012-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__KERNEL__SIGNAL_RECEIVER_H_
#define _CORE__KERNEL__SIGNAL_RECEIVER_H_
#ifndef _CORE__KERNEL__SIGNAL_H_
#define _CORE__KERNEL__SIGNAL_H_
/* Genode includes */
#include <base/signal.h>
@@ -158,20 +159,14 @@ class Kernel::Signal_context
*
* \param r receiver that the context shall be assigned to
* \param imprint userland identification of the context
*
* \throw Assign_to_receiver_failed
*/
Signal_context(Signal_receiver & r, addr_t const imprint);
Signal_context(Signal_receiver &, addr_t const imprint);
/**
* Submit the signal
*
* \param n number of submits
*
* \retval 0 succeeded
* \retval -1 failed
*/
bool can_submit(unsigned const n) const;
void submit(unsigned const n);
/**
@@ -182,12 +177,8 @@ class Kernel::Signal_context
/**
* Destruct context or prepare to do it as soon as delivery is done
*
* \param killer object that shall receive progress reports
*
* \retval 0 succeeded
* \retval -1 failed
* \param k object that shall receive progress reports
*/
bool can_kill() const;
void kill(Signal_context_killer &k);
/**
@@ -272,8 +263,7 @@ class Kernel::Signal_receiver
* \retval 0 succeeded
* \retval -1 failed
*/
bool can_add_handler(Signal_handler const &h) const;
void add_handler(Signal_handler &h);
bool add_handler(Signal_handler &h);
/**
* Syscall to create a signal receiver

View File

@@ -33,45 +33,42 @@ extern "C" void _core_start(void);
using namespace Kernel;
void Thread::_ipc_alloc_recv_caps(unsigned cap_count)
Thread::Ipc_alloc_result Thread::_ipc_alloc_recv_caps(unsigned cap_count)
{
using Allocator = Genode::Allocator;
using Result = Ipc_alloc_result;
Allocator &slab = pd().platform_pd().capability_slab();
for (unsigned i = 0; i < cap_count; i++) {
if (_obj_id_ref_ptr[i] != nullptr)
continue;
slab.try_alloc(sizeof(Object_identity_reference)).with_result(
Result const result =
slab.try_alloc(sizeof(Object_identity_reference)).convert<Result>(
[&] (void *ptr) {
_obj_id_ref_ptr[i] = ptr; },
_obj_id_ref_ptr[i] = ptr;
return Result::OK; },
[&] (Allocator::Alloc_error e) {
switch (e) {
case Allocator::Alloc_error::DENIED:
/*
* Slab is exhausted, reflect condition to the client.
*/
throw Genode::Out_of_ram();
case Allocator::Alloc_error::OUT_OF_CAPS:
case Allocator::Alloc_error::OUT_OF_RAM:
/*
* These conditions cannot happen because the slab
* does not try to grow automatically. It is
* explicitly expanded by the client in response to
* the 'Out_of_ram' condition above.
*/
/*
* Conditions other than DENIED cannot happen because the slab
* does not try to grow automatically. It is explicitly
* expanded by the client in response to the EXHAUSTED return
* value.
*/
if (e != Allocator::Alloc_error::DENIED)
Genode::raw("unexpected recv_caps allocation failure");
}
return Result::EXHAUSTED;
}
);
if (result == Result::EXHAUSTED)
return result;
}
_ipc_rcv_caps = cap_count;
return Result::OK;
}
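The rework above replaces the thrown `Out_of_ram` with an `OK`/`EXHAUSTED` return value that the caller must inspect. A sketch of that exception-free conversion pattern with hypothetical stand-ins for Genode's allocator types (`std::variant` in place of the framework's result type):

```cpp
#include <cassert>
#include <variant>

// Hypothetical simplified result types: an allocation yields either
// a pointer or an error, converted into an explicit status code.
enum class Ipc_alloc_result { OK, EXHAUSTED };
enum class Alloc_error      { DENIED, OUT_OF_RAM, OUT_OF_CAPS };

using Alloc_result = std::variant<void*, Alloc_error>;

Ipc_alloc_result convert(Alloc_result const &r, void *&slot)
{
    if (auto const *p = std::get_if<void*>(&r)) {
        slot = *p;                      // success: record the allocation
        return Ipc_alloc_result::OK;
    }
    return Ipc_alloc_result::EXHAUSTED; // reflect the condition to the caller
}
```

Marking the real `_ipc_alloc_recv_caps` as `[[nodiscard]]` then forces every call site to handle the `EXHAUSTED` case, as the syscall handlers in this commit do with `user_arg_0(-2)`.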
@@ -87,11 +84,20 @@ void Thread::_ipc_free_recv_caps()
}
void Thread::_ipc_init(Genode::Native_utcb &utcb, Thread &starter)
Thread::Ipc_alloc_result Thread::_ipc_init(Genode::Native_utcb &utcb, Thread &starter)
{
_utcb = &utcb;
_ipc_alloc_recv_caps((unsigned)(starter._utcb->cap_cnt()));
ipc_copy_msg(starter);
switch (_ipc_alloc_recv_caps((unsigned)(starter._utcb->cap_cnt()))) {
case Ipc_alloc_result::OK:
ipc_copy_msg(starter);
break;
case Ipc_alloc_result::EXHAUSTED:
return Ipc_alloc_result::EXHAUSTED;
}
return Ipc_alloc_result::OK;
}
@@ -163,7 +169,7 @@ Thread::Destroy::Destroy(Thread & caller, Core::Kernel_object<Thread> & to_delet
:
caller(caller), thread_to_destroy(to_delete)
{
thread_to_destroy->_cpu->work_list().insert(&_le);
thread_to_destroy->_cpu().work_list().insert(&_le);
caller._become_inactive(AWAITS_RESTART);
}
@@ -171,7 +177,7 @@ Thread::Destroy::Destroy(Thread & caller, Core::Kernel_object<Thread> & to_delet
void
Thread::Destroy::execute(Cpu &)
{
thread_to_destroy->_cpu->work_list().remove(&_le);
thread_to_destroy->_cpu().work_list().remove(&_le);
thread_to_destroy.destruct();
caller._restart();
}
@@ -233,7 +239,8 @@ void Thread::ipc_send_request_succeeded()
assert(_state == AWAITS_IPC);
user_arg_0(0);
_state = ACTIVE;
if (!Cpu_job::own_share_active()) { _activate_used_shares(); }
_activate();
helping_finished();
}
@@ -242,7 +249,8 @@ void Thread::ipc_send_request_failed()
assert(_state == AWAITS_IPC);
user_arg_0(-1);
_state = ACTIVE;
if (!Cpu_job::own_share_active()) { _activate_used_shares(); }
_activate();
helping_finished();
}
@@ -262,32 +270,16 @@ void Thread::ipc_await_request_failed()
}
void Thread::_deactivate_used_shares()
{
Cpu_job::_deactivate_own_share();
_ipc_node.for_each_helper([&] (Thread &thread) {
thread._deactivate_used_shares(); });
}
void Thread::_activate_used_shares()
{
Cpu_job::_activate_own_share();
_ipc_node.for_each_helper([&] (Thread &thread) {
thread._activate_used_shares(); });
}
void Thread::_become_active()
{
if (_state != ACTIVE && !_paused) { _activate_used_shares(); }
if (_state != ACTIVE && !_paused) Cpu_context::_activate();
_state = ACTIVE;
}
void Thread::_become_inactive(State const s)
{
if (_state == ACTIVE && !_paused) { _deactivate_used_shares(); }
if (_state == ACTIVE && !_paused) Cpu_context::_deactivate();
_state = s;
}
@@ -295,17 +287,13 @@ void Thread::_become_inactive(State const s)
void Thread::_die() { _become_inactive(DEAD); }
Cpu_job * Thread::helping_destination() {
return &_ipc_node.helping_destination(); }
size_t Thread::_core_to_kernel_quota(size_t const quota) const
{
using Genode::Cpu_session;
/* we assert at timer construction that cpu_quota_us in ticks fits size_t */
size_t const ticks = (size_t)
_cpu->timer().us_to_ticks(Kernel::cpu_quota_us);
_cpu().timer().us_to_ticks(Kernel::cpu_quota_us);
return Cpu_session::quota_lim_downscale(quota, ticks);
}
@@ -313,24 +301,26 @@ size_t Thread::_core_to_kernel_quota(size_t const quota) const
void Thread::_call_thread_quota()
{
Thread * const thread = (Thread *)user_arg_1();
thread->Cpu_job::quota((unsigned)(_core_to_kernel_quota(user_arg_2())));
thread->Cpu_context::quota((unsigned)(_core_to_kernel_quota(user_arg_2())));
}
void Thread::_call_start_thread()
{
/* lookup CPU */
Cpu & cpu = _cpu_pool.cpu((unsigned)user_arg_2());
user_arg_0(0);
Thread &thread = *(Thread*)user_arg_1();
assert(thread._state == AWAITS_START);
thread.affinity(cpu);
/* join protection domain */
thread._pd = (Pd *) user_arg_3();
thread._ipc_init(*(Native_utcb *)user_arg_4(), *this);
thread._pd = (Pd *) user_arg_2();
switch (thread._ipc_init(*(Native_utcb *)user_arg_3(), *this)) {
case Ipc_alloc_result::OK:
break;
case Ipc_alloc_result::EXHAUSTED:
user_arg_0(-2);
return;
}
/*
* Sanity check core threads!
@@ -344,7 +334,8 @@ void Thread::_call_start_thread()
* semantic changes, and additional core threads are started
* across cpu cores.
*/
if (thread._pd == &_core_pd && cpu.id() != _cpu_pool.primary_cpu().id())
if (thread._pd == &_core_pd &&
thread._cpu().id() != _cpu_pool.primary_cpu().id())
Genode::raw("Error: do not start core threads"
" on CPU cores different than boot cpu");
@@ -355,8 +346,8 @@ void Thread::_call_start_thread()
void Thread::_call_pause_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
if (thread._state == ACTIVE && !thread._paused) {
thread._deactivate_used_shares(); }
if (thread._state == ACTIVE && !thread._paused)
thread._deactivate();
thread._paused = true;
}
@@ -365,8 +356,8 @@ void Thread::_call_pause_thread()
void Thread::_call_resume_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
if (thread._state == ACTIVE && thread._paused) {
thread._activate_used_shares(); }
if (thread._state == ACTIVE && thread._paused)
thread._activate();
thread._paused = false;
}
@@ -394,6 +385,7 @@ void Thread::_call_restart_thread()
_die();
return;
}
user_arg_0(thread._restart());
}
@@ -401,7 +393,10 @@ void Thread::_call_restart_thread()
bool Thread::_restart()
{
assert(_state == ACTIVE || _state == AWAITS_RESTART);
if (_state != AWAITS_RESTART) { return false; }
if (_state == ACTIVE && _exception_state == NO_EXCEPTION)
return false;
_exception_state = NO_EXCEPTION;
_become_active();
return true;
@@ -439,7 +434,7 @@ void Thread::_cancel_blocking()
void Thread::_call_yield_thread()
{
Cpu_job::_yield();
Cpu_context::_yield();
}
@@ -449,12 +444,11 @@ void Thread::_call_delete_thread()
*(Core::Kernel_object<Thread>*)user_arg_1();
/**
* Delete a thread immediately if it has no cpu assigned yet,
* or it is assigned to this cpu, or the assigned cpu did not schedule it.
* Delete a thread immediately if it is assigned to this cpu,
* or the assigned cpu did not schedule it.
*/
if (!to_delete->_cpu ||
(to_delete->_cpu->id() == Cpu::executing_id() ||
&to_delete->_cpu->scheduled_job() != &*to_delete)) {
if (to_delete->_cpu().id() == Cpu::executing_id() ||
&to_delete->_cpu().current_context() != &*to_delete) {
_call_delete<Thread>();
return;
}
@@ -463,7 +457,7 @@ void Thread::_call_delete_thread()
* Construct a cross-cpu work item and send an IPI
*/
_destroy.construct(*this, to_delete);
to_delete->_cpu->trigger_ip_interrupt();
to_delete->_cpu().trigger_ip_interrupt();
}
@@ -472,8 +466,8 @@ void Thread::_call_delete_pd()
Core::Kernel_object<Pd> & pd =
*(Core::Kernel_object<Pd>*)user_arg_1();
if (_cpu->active(pd->mmu_regs))
_cpu->switch_to(_core_pd.mmu_regs);
if (_cpu().active(pd->mmu_regs))
_cpu().switch_to(_core_pd.mmu_regs);
_call_delete<Pd>();
}
@@ -482,7 +476,14 @@ void Thread::_call_delete_pd()
void Thread::_call_await_request_msg()
{
if (_ipc_node.ready_to_wait()) {
_ipc_alloc_recv_caps((unsigned)user_arg_1());
switch (_ipc_alloc_recv_caps((unsigned)user_arg_1())) {
case Ipc_alloc_result::OK:
break;
case Ipc_alloc_result::EXHAUSTED:
user_arg_0(-2);
return;
}
_ipc_node.wait();
if (_ipc_node.waiting()) {
_become_inactive(AWAITS_IPC);
@@ -498,7 +499,7 @@ void Thread::_call_await_request_msg()
void Thread::_call_timeout()
{
Timer & t = _cpu->timer();
Timer & t = _cpu().timer();
_timeout_sigid = (Kernel::capid_t)user_arg_2();
t.set_timeout(this, t.us_to_ticks(user_arg_1()));
}
@@ -506,13 +507,13 @@ void Thread::_call_timeout()
void Thread::_call_timeout_max_us()
{
user_ret_time(_cpu->timer().timeout_max_us());
user_ret_time(_cpu().timer().timeout_max_us());
}
void Thread::_call_time()
{
Timer & t = _cpu->timer();
Timer & t = _cpu().timer();
user_ret_time(t.ticks_to_us(t.time()));
}
@@ -521,11 +522,8 @@ void Thread::timeout_triggered()
{
Signal_context * const c =
pd().cap_tree().find<Signal_context>(_timeout_sigid);
if (!c || !c->can_submit(1)) {
Genode::raw(*this, ": failed to submit timeout signal");
return;
}
c->submit(1);
if (c) c->submit(1);
else Genode::warning(*this, ": failed to submit timeout signal");
}
@@ -539,19 +537,26 @@ void Thread::_call_send_request_msg()
_become_inactive(DEAD);
return;
}
bool const help = Cpu_job::_helping_possible(*dst);
bool const help = Cpu_context::_helping_possible(*dst);
oir = oir->find(dst->pd());
if (!_ipc_node.ready_to_send()) {
Genode::raw("IPC send request: bad state");
} else {
_ipc_alloc_recv_caps((unsigned)user_arg_2());
_ipc_capid = oir ? oir->capid() : cap_id_invalid();
_ipc_node.send(dst->_ipc_node, help);
switch (_ipc_alloc_recv_caps((unsigned)user_arg_2())) {
case Ipc_alloc_result::OK:
break;
case Ipc_alloc_result::EXHAUSTED:
user_arg_0(-2);
return;
}
_ipc_capid = oir ? oir->capid() : cap_id_invalid();
_ipc_node.send(dst->_ipc_node);
}
_state = AWAITS_IPC;
if (!help || !dst->own_share_active()) { _deactivate_used_shares(); }
if (help) Cpu_context::_help(*dst);
if (!help || !dst->ready()) _deactivate();
}
@@ -568,7 +573,9 @@ void Thread::_call_pager()
{
/* override event route */
Thread &thread = *(Thread *)user_arg_1();
thread._pager = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_2());
Thread &pager = *(Thread *)user_arg_2();
Signal_context &sc = *pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_3());
thread._fault_context.construct(pager, sc);
}
@@ -592,12 +599,11 @@ void Thread::_call_await_signal()
return;
}
/* register handler at the receiver */
if (!r->can_add_handler(_signal_handler)) {
if (!r->add_handler(_signal_handler)) {
Genode::raw("failed to register handler at signal receiver");
user_arg_0(-1);
return;
}
r->add_handler(_signal_handler);
user_arg_0(0);
}
@@ -614,11 +620,10 @@ void Thread::_call_pending_signal()
}
/* register handler at the receiver */
if (!r->can_add_handler(_signal_handler)) {
if (!r->add_handler(_signal_handler)) {
user_arg_0(-1);
return;
}
r->add_handler(_signal_handler);
if (_state == AWAITS_SIGNAL) {
_cancel_blocking();
@@ -653,20 +658,7 @@ void Thread::_call_submit_signal()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if(!c) {
/* cannot submit unknown signal context */
user_arg_0(-1);
return;
}
/* trigger signal context */
if (!c->can_submit((unsigned)user_arg_2())) {
Genode::raw("failed to submit signal context");
user_arg_0(-1);
return;
}
c->submit((unsigned)user_arg_2());
user_arg_0(0);
if(c) c->submit((unsigned)user_arg_2());
}
@@ -674,13 +666,8 @@ void Thread::_call_ack_signal()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if (!c) {
Genode::raw(*this, ": cannot ack unknown signal context");
return;
}
/* acknowledge */
c->ack();
if (c) c->ack();
else Genode::warning(*this, ": cannot ack unknown signal context");
}
@@ -688,19 +675,8 @@ void Thread::_call_kill_signal_context()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if (!c) {
Genode::raw(*this, ": cannot kill unknown signal context");
user_arg_0(-1);
return;
}
/* kill signal context */
if (!c->can_kill()) {
Genode::raw("failed to kill signal context");
user_arg_0(-1);
return;
}
c->kill(_signal_context_killer);
if (c) c->kill(_signal_context_killer);
else Genode::warning(*this, ": cannot kill unknown signal context");
}
@@ -719,7 +695,7 @@ void Thread::_call_new_irq()
(Genode::Irq_session::Polarity) (user_arg_3() & 0b11);
_call_new<User_irq>((unsigned)user_arg_2(), trigger, polarity, *c,
_cpu->pic(), _user_irq_pool);
_cpu().pic(), _user_irq_pool);
}
@@ -820,10 +796,27 @@ void Thread::_call_single_step() {
}
void Thread::_call_ack_pager_signal()
{
Signal_context * const c = pd().cap_tree().find<Signal_context>((Kernel::capid_t)user_arg_1());
if (!c)
Genode::raw(*this, ": cannot ack unknown signal context");
else
c->ack();
Thread &thread = *(Thread*)user_arg_2();
thread.helping_finished();
bool resolved = user_arg_3() ||
thread._exception_state == NO_EXCEPTION;
if (resolved) thread._restart();
else thread._become_inactive(AWAITS_RESTART);
}
void Thread::_call()
{
try {
/* switch over unrestricted kernel calls */
unsigned const call_id = (unsigned)user_arg_0();
switch (call_id) {
@@ -863,13 +856,15 @@ void Thread::_call()
switch (call_id) {
case call_id_new_thread():
_call_new<Thread>(_addr_space_id_alloc, _user_irq_pool, _cpu_pool,
_core_pd, (unsigned) user_arg_2(),
(unsigned) _core_to_kernel_quota(user_arg_3()),
(char const *) user_arg_4(), USER);
_cpu_pool.cpu((unsigned)user_arg_2()),
_core_pd, (unsigned) user_arg_3(),
(unsigned) _core_to_kernel_quota(user_arg_4()),
(char const *) user_arg_5(), USER);
return;
case call_id_new_core_thread():
_call_new<Thread>(_addr_space_id_alloc, _user_irq_pool, _cpu_pool,
_core_pd, (char const *) user_arg_2());
_cpu_pool.cpu((unsigned)user_arg_2()),
_core_pd, (char const *) user_arg_3());
return;
case call_id_thread_quota(): _call_thread_quota(); return;
case call_id_delete_thread(): _call_delete_thread(); return;
@@ -902,40 +897,70 @@ void Thread::_call()
case call_id_set_cpu_state(): _call_set_cpu_state(); return;
case call_id_exception_state(): _call_exception_state(); return;
case call_id_single_step(): _call_single_step(); return;
case call_id_ack_pager_signal(): _call_ack_pager_signal(); return;
default:
Genode::raw(*this, ": unknown kernel call");
_die();
return;
}
} catch (Genode::Allocator::Out_of_memory &e) { user_arg_0(-2); }
}
void Thread::_signal_to_pager()
{
if (!_fault_context.constructed()) {
Genode::warning(*this, " could not send signal to pager");
_die();
return;
}
/* first signal to pager to wake it up */
_fault_context->sc.submit(1);
/* only help pager thread if runnable and scheduler allows it */
bool const help = Cpu_context::_helping_possible(_fault_context->pager)
&& (_fault_context->pager._state == ACTIVE);
if (help) Cpu_context::_help(_fault_context->pager);
else _become_inactive(AWAITS_RESTART);
}
void Thread::_mmu_exception()
{
_become_inactive(AWAITS_RESTART);
using namespace Genode;
using Genode::log;
_exception_state = MMU_FAULT;
Cpu::mmu_fault(*regs, _fault);
_fault.ip = regs->ip;
if (_fault.type == Thread_fault::UNKNOWN) {
Genode::raw(*this, " raised unhandled MMU fault ", _fault);
Genode::warning(*this, " raised unhandled MMU fault ", _fault);
_die();
return;
}
if (_type != USER)
Genode::raw(*this, " raised a fault, which should never happen ",
_fault);
if (_type != USER) {
error(*this, " raised a fault, which should never happen ",
_fault);
log("Register dump: ", *regs);
log("Backtrace:");
if (_pager && _pager->can_submit(1)) {
_pager->submit(1);
Const_byte_range_ptr const stack {
(char const*)Hw::Mm::core_stack_area().base,
Hw::Mm::core_stack_area().size };
regs->for_each_return_address(stack, [&] (void **p) {
log(*p); });
_die();
return;
}
_signal_to_pager();
}
void Thread::_exception()
{
_become_inactive(AWAITS_RESTART);
_exception_state = EXCEPTION;
if (_type != USER) {
@@ -943,18 +968,14 @@ void Thread::_exception()
_die();
}
if (_pager && _pager->can_submit(1)) {
_pager->submit(1);
} else {
Genode::raw(*this, " could not send signal to pager on exception");
_die();
}
_signal_to_pager();
}
Thread::Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
Cpu &cpu,
Pd &core_pd,
unsigned const priority,
unsigned const quota,
@@ -962,7 +983,7 @@ Thread::Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Type type)
:
Kernel::Object { *this },
Cpu_job { priority, quota },
Cpu_context { cpu, priority, quota },
_addr_space_id_alloc { addr_space_id_alloc },
_user_irq_pool { user_irq_pool },
_cpu_pool { cpu_pool },
@@ -999,8 +1020,8 @@ Core_main_thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Cpu_pool &cpu_pool,
Pd &core_pd)
:
Core_object<Thread>(
core_pd, addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd, "core")
Core_object<Thread>(core_pd, addr_space_id_alloc, user_irq_pool, cpu_pool,
cpu_pool.primary_cpu(), core_pd, "core")
{
using namespace Core;
@@ -1016,7 +1037,6 @@ Core_main_thread(Board::Address_space_id_allocator &addr_space_id_alloc,
regs->sp = (addr_t)&__initial_stack_base[0] + DEFAULT_STACK_SIZE;
regs->ip = (addr_t)&_core_start;
affinity(_cpu_pool.primary_cpu());
_utcb = &_utcb_instance;
Thread::_pd = &core_pd;
_become_active();

View File

@@ -20,7 +20,7 @@
/* base-hw core includes */
#include <kernel/cpu_context.h>
#include <kernel/inter_processor_work.h>
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <kernel/ipc_node.h>
#include <object.h>
#include <kernel/interface.h>
@@ -53,7 +53,7 @@ struct Kernel::Thread_fault
/**
* Kernel back-end for userland execution-contexts
*/
class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
class Kernel::Thread : private Kernel::Object, public Cpu_context, private Timeout
{
public:
@@ -173,7 +173,15 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
size_t _ipc_rcv_caps { 0 };
Genode::Native_utcb *_utcb { nullptr };
Pd *_pd { nullptr };
Signal_context *_pager { nullptr };
struct Fault_context
{
Thread &pager;
Signal_context &sc;
};
Genode::Constructible<Fault_context> _fault_context {};
Thread_fault _fault { };
State _state;
Signal_handler _signal_handler { *this };
@@ -216,21 +224,16 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
*/
void _become_inactive(State const s);
/**
* Activate our CPU-share and those of our helpers
*/
void _activate_used_shares();
/**
* Deactivate our CPU-share and those of our helpers
*/
void _deactivate_used_shares();
/**
* Suspend unrecoverably from execution
*/
void _die();
/**
* In case of fault, signal to pager, and help or block
*/
void _signal_to_pager();
/**
* Handle an exception thrown by the memory management unit
*/
@@ -306,6 +309,7 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
void _call_set_cpu_state();
void _call_exception_state();
void _call_single_step();
void _call_ack_pager_signal();
template <typename T>
void _call_new(auto &&... args)
@@ -322,9 +326,13 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
kobj.destruct();
}
void _ipc_alloc_recv_caps(unsigned rcv_cap_count);
enum Ipc_alloc_result { OK, EXHAUSTED };
[[nodiscard]] Ipc_alloc_result _ipc_alloc_recv_caps(unsigned rcv_cap_count);
void _ipc_free_recv_caps();
void _ipc_init(Genode::Native_utcb &utcb, Thread &callee);
[[nodiscard]] Ipc_alloc_result _ipc_init(Genode::Native_utcb &utcb, Thread &callee);
public:
@@ -341,6 +349,7 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
Cpu &cpu,
Pd &core_pd,
unsigned const priority,
unsigned const quota,
@@ -355,11 +364,12 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
Cpu &cpu,
Pd &core_pd,
char const *const label)
:
Thread(addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd,
Scheduler::Priority::min(), 0, label, CORE)
Thread(addr_space_id_alloc, user_irq_pool, cpu_pool, cpu,
core_pd, Scheduler::Priority::min(), 0, label, CORE)
{ }
~Thread();
@@ -396,13 +406,14 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
* \retval capability id of the new kernel object
*/
static capid_t syscall_create(Core::Kernel_object<Thread> &t,
unsigned const cpu_id,
unsigned const priority,
size_t const quota,
char const * const label)
{
return (capid_t)call(call_id_new_thread(), (Call_arg)&t,
(Call_arg)priority, (Call_arg)quota,
(Call_arg)label);
(Call_arg)cpu_id, (Call_arg)priority,
(Call_arg)quota, (Call_arg)label);
}
/**
@@ -414,10 +425,11 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
* \retval capability id of the new kernel object
*/
static capid_t syscall_create(Core::Kernel_object<Thread> &t,
unsigned const cpu_id,
char const * const label)
{
return (capid_t)call(call_id_new_core_thread(), (Call_arg)&t,
(Call_arg)label);
(Call_arg)cpu_id, (Call_arg)label);
}
/**
@@ -454,13 +466,12 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
void signal_receive_signal(void * const base, size_t const size);
/*************
** Cpu_job **
*************/
/*****************
** Cpu_context **
*****************/
void exception(Cpu & cpu) override;
void proceed(Cpu & cpu) override;
Cpu_job * helping_destination() override;
void exception() override;
void proceed() override;
/*************


@@ -18,7 +18,7 @@
/* core includes */
#include <kernel/cpu_context.h>
#include <kernel/pd.h>
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <board.h>
@@ -31,7 +31,7 @@ namespace Kernel {
}
class Kernel::Vm : private Kernel::Object, public Cpu_job
class Kernel::Vm : private Kernel::Object, public Cpu_context
{
public:
@@ -66,7 +66,7 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
void _pause_vcpu()
{
if (_scheduled != INACTIVE)
Cpu_job::_deactivate_own_share();
Cpu_context::_deactivate();
_scheduled = INACTIVE;
}
@@ -135,7 +135,7 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
void run()
{
_sync_from_vmm();
if (_scheduled != ACTIVE) Cpu_job::_activate_own_share();
if (_scheduled != ACTIVE) Cpu_context::_activate();
_scheduled = ACTIVE;
}
@@ -146,13 +146,12 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
}
/*************
** Cpu_job **
*************/
/*****************
** Cpu_context **
*****************/
void exception(Cpu & cpu) override;
void proceed(Cpu & cpu) override;
Cpu_job * helping_destination() override { return this; }
void exception() override;
void proceed() override;
};
#endif /* _CORE__KERNEL__VM_H_ */


@@ -19,9 +19,30 @@
/* base-internal includes */
#include <base/internal/capability_space.h>
#include <base/internal/native_thread.h>
using namespace Core;
static unsigned _nr_of_cpus = 0;
static void *_pager_thread_memory = nullptr;
void Core::init_pager_thread_per_cpu_memory(unsigned const cpus, void * mem)
{
_nr_of_cpus = cpus;
_pager_thread_memory = mem;
}
void Core::init_page_fault_handling(Rpc_entrypoint &) { }
/*************
** Mapping **
*************/
void Mapping::prepare_map_operation() const { }
/***************
** Ipc_pager **
@@ -51,13 +72,11 @@ void Pager_object::wake_up()
}
void Pager_object::start_paging(Kernel_object<Kernel::Signal_receiver> & receiver)
void Pager_object::start_paging(Kernel_object<Kernel::Signal_receiver> &receiver,
Platform_thread &pager_thread)
{
using Object = Kernel_object<Kernel::Signal_context>;
using Entry = Object_pool<Pager_object>::Entry;
create(*receiver, (unsigned long)this);
Entry::cap(Object::_cap);
_pager_thread = &pager_thread;
}
@@ -75,11 +94,11 @@ void Pager_object::print(Output &out) const
Pager_object::Pager_object(Cpu_session_capability cpu_session_cap,
Thread_capability thread_cap, addr_t const badge,
Affinity::Location, Session_label const &,
Affinity::Location location, Session_label const &,
Cpu_session::Name const &)
:
Object_pool<Pager_object>::Entry(Kernel_object<Kernel::Signal_context>::_cap),
_badge(badge), _cpu_session_cap(cpu_session_cap), _thread_cap(thread_cap)
_badge(badge), _location(location),
_cpu_session_cap(cpu_session_cap), _thread_cap(thread_cap)
{ }
@@ -87,27 +106,115 @@ Pager_object::Pager_object(Cpu_session_capability cpu_session_cap,
** Pager_entrypoint **
**********************/
void Pager_entrypoint::dissolve(Pager_object &o)
void Pager_entrypoint::Thread::entry()
{
Kernel::kill_signal_context(Capability_space::capid(o.cap()));
remove(&o);
while (1) {
/* receive fault */
if (Kernel::await_signal(Capability_space::capid(_kobj.cap())))
continue;
Pager_object *po = *(Pager_object**)Thread::myself()->utcb()->data();
if (!po)
continue;
Untyped_capability cap = po->cap();
/* fetch fault data */
Platform_thread * const pt = (Platform_thread *)po->badge();
if (!pt) {
warning("failed to get platform thread of faulter");
Kernel::ack_signal(Capability_space::capid(cap));
continue;
}
if (pt->exception_state() ==
Kernel::Thread::Exception_state::EXCEPTION) {
if (!po->submit_exception_signal())
warning("unresolvable exception: "
"pd='", pt->pd().label(), "', "
"thread='", pt->label(), "', "
"ip=", Hex(pt->state().cpu.ip));
pt->fault_resolved(cap, false);
continue;
}
_fault = pt->fault_info();
/* try to resolve fault directly via local region managers */
if (po->pager(*this) == Pager_object::Pager_result::STOP) {
pt->fault_resolved(cap, false);
continue;
}
/* apply mapping that was determined by the local region managers */
{
Locked_ptr<Address_space> locked_ptr(pt->address_space());
if (!locked_ptr.valid()) {
pt->fault_resolved(cap, false);
continue;
}
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
Cache cacheable = Genode::CACHED;
if (!_mapping.cached)
cacheable = Genode::UNCACHED;
if (_mapping.write_combined)
cacheable = Genode::WRITE_COMBINED;
Hw::Page_flags const flags {
.writeable = _mapping.writeable ? Hw::RW : Hw::RO,
.executable = _mapping.executable ? Hw::EXEC : Hw::NO_EXEC,
.privileged = Hw::USER,
.global = Hw::NO_GLOBAL,
.type = _mapping.io_mem ? Hw::DEVICE : Hw::RAM,
.cacheable = cacheable
};
as->insert_translation(_mapping.dst_addr, _mapping.src_addr,
1UL << _mapping.size_log2, flags);
}
pt->fault_resolved(cap, true);
}
}
Pager_entrypoint::Pager_entrypoint(Rpc_cap_factory &)
Pager_entrypoint::Thread::Thread(Affinity::Location cpu)
:
Thread(Weight::DEFAULT_WEIGHT, "pager_ep", PAGER_EP_STACK_SIZE,
Type::NORMAL),
Genode::Thread(Weight::DEFAULT_WEIGHT, "pager_ep", PAGER_EP_STACK_SIZE, cpu),
_kobj(_kobj.CALLED_FROM_CORE)
{
start();
}
void Pager_entrypoint::dissolve(Pager_object &o)
{
Kernel::kill_signal_context(Capability_space::capid(o.cap()));
}
Pager_capability Pager_entrypoint::manage(Pager_object &o)
{
o.start_paging(_kobj);
insert(&o);
unsigned const cpu = o.location().xpos();
if (cpu >= _cpus) {
error("Invalid location of pager object ", cpu);
} else {
o.start_paging(_threads[cpu]._kobj,
*_threads[cpu].native_thread().platform_thread);
}
return reinterpret_cap_cast<Pager_object>(o.cap());
}
Pager_entrypoint::Pager_entrypoint(Rpc_cap_factory &)
:
_cpus(_nr_of_cpus),
_threads((Thread*)_pager_thread_memory)
{
for (unsigned i = 0; i < _cpus; i++)
construct_at<Thread>((void*)&_threads[i], Affinity::Location(i, 0));
}


@@ -17,12 +17,11 @@
/* Genode includes */
#include <base/session_label.h>
#include <base/thread.h>
#include <base/object_pool.h>
#include <base/signal.h>
#include <pager/capability.h>
/* core includes */
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <hw/mapping.h>
#include <mapping.h>
#include <object.h>
@@ -30,6 +29,9 @@
namespace Core {
class Platform;
class Platform_thread;
/**
* Interface used by generic region_map code
*/
@@ -53,6 +55,10 @@ namespace Core {
using Pager_capability = Capability<Pager_object>;
enum { PAGER_EP_STACK_SIZE = sizeof(addr_t) * 2048 };
extern void init_page_fault_handling(Rpc_entrypoint &);
void init_pager_thread_per_cpu_memory(unsigned const cpus, void * mem);
}
@@ -93,17 +99,17 @@ class Core::Ipc_pager
};
class Core::Pager_object : private Object_pool<Pager_object>::Entry,
private Kernel_object<Kernel::Signal_context>
class Core::Pager_object : private Kernel_object<Kernel::Signal_context>
{
friend class Pager_entrypoint;
friend class Object_pool<Pager_object>;
private:
unsigned long const _badge;
Affinity::Location _location;
Cpu_session_capability _cpu_session_cap;
Thread_capability _thread_cap;
Platform_thread *_pager_thread { nullptr };
/**
* User-level signal handler registered for this pager object via
@@ -111,6 +117,12 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
*/
Signal_context_capability _exception_sigh { };
/*
* Noncopyable
*/
Pager_object(const Pager_object&) = delete;
Pager_object& operator=(const Pager_object&) = delete;
public:
/**
@@ -123,11 +135,15 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
Affinity::Location, Session_label const&,
Cpu_session::Name const&);
virtual ~Pager_object() {}
/**
* User identification of pager object
*/
unsigned long badge() const { return _badge; }
Affinity::Location location() { return _location; }
/**
* Resume faulter
*/
@@ -158,7 +174,8 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
*
* \param receiver signal receiver that receives the page faults
*/
void start_paging(Kernel_object<Kernel::Signal_receiver> & receiver);
void start_paging(Kernel_object<Kernel::Signal_receiver> &receiver,
Platform_thread &pager_thread);
/**
* Called when a page-fault finally could not be resolved
@@ -167,6 +184,11 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
void print(Output &out) const;
void with_pager(auto const &fn)
{
if (_pager_thread) fn(*_pager_thread);
}
/******************
** Pure virtual **
@@ -192,24 +214,44 @@ class Core::Pager_object : private Object_pool<Pager_object>::Entry,
Cpu_session_capability cpu_session_cap() const { return _cpu_session_cap; }
Thread_capability thread_cap() const { return _thread_cap; }
using Object_pool<Pager_object>::Entry::cap;
Untyped_capability cap() {
return Kernel_object<Kernel::Signal_context>::_cap; }
};
class Core::Pager_entrypoint : public Object_pool<Pager_object>,
public Thread,
private Ipc_pager
class Core::Pager_entrypoint
{
private:
Kernel_object<Kernel::Signal_receiver> _kobj;
friend class Platform;
class Thread : public Genode::Thread,
private Ipc_pager
{
private:
friend class Pager_entrypoint;
Kernel_object<Kernel::Signal_receiver> _kobj;
public:
explicit Thread(Affinity::Location);
/**********************
** Thread interface **
**********************/
void entry() override;
};
unsigned const _cpus;
Thread *_threads;
public:
/**
* Constructor
*/
Pager_entrypoint(Rpc_cap_factory &);
explicit Pager_entrypoint(Rpc_cap_factory &);
/**
* Associate pager object 'obj' with entry point
@@ -220,13 +262,6 @@ class Core::Pager_entrypoint : public Object_pool<Pager_object>,
* Dissolve pager object 'obj' from entry point
*/
void dissolve(Pager_object &obj);
/**********************
** Thread interface **
**********************/
void entry() override;
};
#endif /* _CORE__PAGER_H_ */


@@ -0,0 +1,79 @@
/*
* \brief Allocate an object with a physical address
* \author Norman Feske
* \author Benjamin Lamowski
* \date 2024-12-02
*/
/*
* Copyright (C) 2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
#ifndef _CORE__PHYS_ALLOCATED_H_
#define _CORE__PHYS_ALLOCATED_H_
/* base includes */
#include <base/allocator.h>
#include <base/attached_ram_dataspace.h>
#include <util/noncopyable.h>
/* core-local includes */
#include <types.h>
namespace Core {
template <typename T>
class Phys_allocated;
}
using namespace Core;
template <typename T>
class Core::Phys_allocated : Genode::Noncopyable
{
private:
Rpc_entrypoint &_ep;
Ram_allocator &_ram;
Region_map &_rm;
Attached_ram_dataspace _ds { _ram, _rm, sizeof(T) };
public:
T &obj = *_ds.local_addr<T>();
Phys_allocated(Rpc_entrypoint &ep,
Ram_allocator &ram,
Region_map &rm)
:
_ep(ep), _ram(ram), _rm(rm)
{
construct_at<T>(&obj);
}
Phys_allocated(Rpc_entrypoint &ep,
Ram_allocator &ram,
Region_map &rm,
auto const &construct_fn)
:
_ep(ep), _ram(ram), _rm(rm)
{
construct_fn(*this, &obj);
}
~Phys_allocated() { obj.~T(); }
addr_t phys_addr() {
addr_t phys_addr { };
_ep.apply(_ds.cap(), [&](Dataspace_component *dsc) {
phys_addr = dsc->phys_addr();
});
return phys_addr;
}
};
#endif /* _CORE__PHYS_ALLOCATED_H_ */


@@ -19,6 +19,7 @@
/* base-hw core includes */
#include <map_local.h>
#include <pager.h>
#include <platform.h>
#include <platform_pd.h>
#include <kernel/main.h>
@@ -31,7 +32,6 @@
/* base internal includes */
#include <base/internal/crt0.h>
#include <base/internal/stack_area.h>
#include <base/internal/unmanaged_singleton.h>
/* base includes */
#include <trace/source_registry.h>
@@ -60,8 +60,9 @@ Hw::Page_table::Allocator & Platform::core_page_table_allocator()
using Allocator = Hw::Page_table::Allocator;
using Array = Allocator::Array<Hw::Page_table::CORE_TRANS_TABLE_COUNT>;
addr_t virt_addr = Hw::Mm::core_page_tables().base + sizeof(Hw::Page_table);
return *unmanaged_singleton<Array::Allocator>(_boot_info().table_allocator,
virt_addr);
static Array::Allocator alloc { _boot_info().table_allocator, virt_addr };
return alloc;
}
@@ -70,6 +71,7 @@ addr_t Platform::core_main_thread_phys_utcb()
return core_phys_addr(_boot_info().core_main_thread_utcb);
}
void Platform::_init_io_mem_alloc()
{
/* add entire adress space minus the RAM memory regions */
@@ -81,8 +83,9 @@ void Platform::_init_io_mem_alloc()
Hw::Memory_region_array const & Platform::_core_virt_regions()
{
return *unmanaged_singleton<Hw::Memory_region_array>(
Hw::Memory_region(stack_area_virtual_base(), stack_area_virtual_size()));
static Hw::Memory_region_array array {
Hw::Memory_region(stack_area_virtual_base(), stack_area_virtual_size()) };
return array;
}
@@ -161,6 +164,9 @@ void Platform::_init_platform_info()
xml.attribute("acpi", true);
xml.attribute("msi", true);
});
xml.node("board", [&] {
xml.attribute("name", BOARD_NAME);
});
_init_additional_platform_info(xml);
xml.node("affinity-space", [&] {
xml.attribute("width", affinity_space().width());
@@ -248,6 +254,10 @@ Platform::Platform()
);
}
unsigned const cpus = _boot_info().cpus;
size_t size = cpus * sizeof(Pager_entrypoint::Thread);
init_pager_thread_per_cpu_memory(cpus, _core_mem_alloc.alloc(size));
class Idle_thread_trace_source : public Trace::Source::Info_accessor,
private Trace::Control,
private Trace::Source


@@ -119,6 +119,18 @@ class Core::Platform : public Platform_generic
static addr_t core_page_table();
static Hw::Page_table::Allocator & core_page_table_allocator();
/**
* Determine size of a core local mapping required for a
* Core_region_map::detach().
*/
size_t region_alloc_size_at(void * addr)
{
using Size_at_error = Allocator_avl::Size_at_error;
return (_core_mem_alloc.virt_alloc())()->size_at(addr).convert<size_t>(
[ ] (size_t s) { return s; },
[ ] (Size_at_error) { return 0U; });
}
/********************************
** Platform_generic interface **


@@ -60,6 +60,13 @@ bool Hw::Address_space::insert_translation(addr_t virt, addr_t phys,
_tt.insert_translation(virt, phys, size, flags, _tt_alloc);
return true;
} catch(Hw::Out_of_tables &) {
/* core/kernel's page-tables should never get flushed */
if (_tt_phys == Platform::core_page_table()) {
error("core's page-table allocator is empty!");
return false;
}
flush(platform().vm_start(), platform().vm_size());
}
}


@@ -15,7 +15,6 @@
/* core includes */
#include <platform_thread.h>
#include <platform_pd.h>
#include <core_env.h>
#include <rm_session_component.h>
#include <map_local.h>
@@ -30,6 +29,48 @@
using namespace Core;
addr_t Platform_thread::Utcb::_attach(Region_map &core_rm)
{
Region_map::Attr attr { };
attr.writeable = true;
return core_rm.attach(_ds, attr).convert<addr_t>(
[&] (Region_map::Range range) { return range.start; },
[&] (Region_map::Attach_error) {
error("failed to attach UTCB of new thread within core");
return 0ul; });
}
static addr_t _alloc_core_local_utcb(addr_t core_addr)
{
/*
* All non-core threads use the typical dataspace/rm_session
* mechanisms to allocate and attach its UTCB.
* But for the very first core threads, we need to use plain
* physical and virtual memory allocators to create/attach its
* UTCBs. Therefore, we've to allocate and map those here.
*/
return platform().ram_alloc().try_alloc(sizeof(Native_utcb)).convert<addr_t>(
[&] (void *utcb_phys) {
map_local((addr_t)utcb_phys, core_addr,
sizeof(Native_utcb) / get_page_size());
return addr_t(utcb_phys);
},
[&] (Range_allocator::Alloc_error) {
error("failed to allocate UTCB for core/kernel thread!");
return 0ul;
});
}
Platform_thread::Utcb::Utcb(addr_t core_addr)
:
core_addr(core_addr),
phys_addr(_alloc_core_local_utcb(core_addr))
{ }
void Platform_thread::_init() { }
@@ -37,21 +78,6 @@ Weak_ptr<Address_space>& Platform_thread::address_space() {
return _address_space; }
Platform_thread::~Platform_thread()
{
/* detach UTCB of main threads */
if (_main_thread) {
Locked_ptr<Address_space> locked_ptr(_address_space);
if (locked_ptr.valid())
locked_ptr->flush((addr_t)_utcb_pd_addr, sizeof(Native_utcb),
Address_space::Core_local_addr{0});
}
/* free UTCB */
core_env().pd_session()->free(_utcb);
}
void Platform_thread::quota(size_t const quota)
{
_quota = (unsigned)quota;
@@ -64,65 +90,57 @@ Platform_thread::Platform_thread(Label const &label, Native_utcb &utcb)
_label(label),
_pd(_kernel_main_get_core_platform_pd()),
_pager(nullptr),
_utcb_core_addr(&utcb),
_utcb_pd_addr(&utcb),
_utcb((addr_t)&utcb),
_main_thread(false),
_location(Affinity::Location()),
_kobj(_kobj.CALLED_FROM_CORE, _label.string())
{
/* create UTCB for a core thread */
platform().ram_alloc().try_alloc(sizeof(Native_utcb)).with_result(
[&] (void *utcb_phys) {
map_local((addr_t)utcb_phys, (addr_t)_utcb_core_addr,
sizeof(Native_utcb) / get_page_size());
},
[&] (Range_allocator::Alloc_error) {
error("failed to allocate UTCB");
/* XXX distinguish error conditions */
throw Out_of_ram();
}
);
}
_kobj(_kobj.CALLED_FROM_CORE, _location.xpos(), _label.string())
{ }
Platform_thread::Platform_thread(Platform_pd &pd,
Rpc_entrypoint &ep,
Ram_allocator &ram,
Region_map &core_rm,
size_t const quota,
Label const &label,
unsigned const virt_prio,
Affinity::Location const location,
addr_t const utcb)
addr_t /* utcb */)
:
_label(label),
_pd(pd),
_pager(nullptr),
_utcb_pd_addr((Native_utcb *)utcb),
_utcb(ep, ram, core_rm),
_priority(_scale_priority(virt_prio)),
_quota((unsigned)quota),
_main_thread(!pd.has_any_thread),
_location(location),
_kobj(_kobj.CALLED_FROM_CORE, _priority, _quota, _label.string())
_kobj(_kobj.CALLED_FROM_CORE, _location.xpos(),
_priority, _quota, _label.string())
{
try {
_utcb = core_env().pd_session()->alloc(sizeof(Native_utcb), CACHED);
} catch (...) {
error("failed to allocate UTCB");
throw Out_of_ram();
}
Region_map::Attr attr { };
attr.writeable = true;
core_env().rm_session()->attach(_utcb, attr).with_result(
[&] (Region_map::Range range) {
_utcb_core_addr = (Native_utcb *)range.start; },
[&] (Region_map::Attach_error) {
error("failed to attach UTCB of new thread within core"); });
_address_space = pd.weak_ptr();
pd.has_any_thread = true;
}
Platform_thread::~Platform_thread()
{
/* core/kernel threads have no dataspace, but plain memory as UTCB */
if (!_utcb._ds.valid()) {
error("UTCB of core/kernel thread gets destructed!");
return;
}
/* detach UTCB of main threads */
if (_main_thread) {
Locked_ptr<Address_space> locked_ptr(_address_space);
if (locked_ptr.valid())
locked_ptr->flush(user_utcb_main_thread(), sizeof(Native_utcb),
Address_space::Core_local_addr{0});
}
}
void Platform_thread::affinity(Affinity::Location const &)
{
/* yet no migration support, don't claim wrong location, e.g. for tracing */
@@ -137,36 +155,23 @@ void Platform_thread::start(void * const ip, void * const sp)
/* attach UTCB in case of a main thread */
if (_main_thread) {
/* lookup dataspace component for physical address */
auto lambda = [&] (Dataspace_component *dsc) {
if (!dsc) return -1;
/* lock the address space */
Locked_ptr<Address_space> locked_ptr(_address_space);
if (!locked_ptr.valid()) {
error("invalid RM client");
return -1;
};
_utcb_pd_addr = (Native_utcb *)user_utcb_main_thread();
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
if (!as->insert_translation((addr_t)_utcb_pd_addr, dsc->phys_addr(),
sizeof(Native_utcb), Hw::PAGE_FLAGS_UTCB)) {
error("failed to attach UTCB");
return -1;
}
return 0;
};
if (core_env().entrypoint().apply(_utcb, lambda))
Locked_ptr<Address_space> locked_ptr(_address_space);
if (!locked_ptr.valid()) {
error("unable to start thread in invalid address space");
return;
};
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
if (!as->insert_translation(user_utcb_main_thread(), _utcb.phys_addr,
sizeof(Native_utcb), Hw::PAGE_FLAGS_UTCB)) {
error("failed to attach UTCB");
return;
}
}
/* initialize thread registers */
_kobj->regs->ip = reinterpret_cast<addr_t>(ip);
_kobj->regs->sp = reinterpret_cast<addr_t>(sp);
/* start executing new thread */
unsigned const cpu = _location.xpos();
Native_utcb &utcb = *Thread::myself()->utcb();
/* reset capability counter */
@@ -174,18 +179,22 @@ void Platform_thread::start(void * const ip, void * const sp)
utcb.cap_add(Capability_space::capid(_kobj.cap()));
if (_main_thread) {
utcb.cap_add(Capability_space::capid(_pd.parent()));
utcb.cap_add(Capability_space::capid(_utcb));
utcb.cap_add(Capability_space::capid(_utcb._ds));
}
Kernel::start_thread(*_kobj, cpu, _pd.kernel_pd(), *_utcb_core_addr);
Kernel::start_thread(*_kobj, _pd.kernel_pd(),
*(Native_utcb*)_utcb.core_addr);
}
void Platform_thread::pager(Pager_object &pager)
void Platform_thread::pager(Pager_object &po)
{
using namespace Kernel;
thread_pager(*_kobj, Capability_space::capid(pager.cap()));
_pager = &pager;
po.with_pager([&] (Platform_thread &pt) {
thread_pager(*_kobj, *pt._kobj,
Capability_space::capid(po.cap())); });
_pager = &po;
}
@@ -231,3 +240,9 @@ void Platform_thread::restart()
{
Kernel::restart_thread(Capability_space::capid(_kobj.cap()));
}
void Platform_thread::fault_resolved(Untyped_capability cap, bool resolved)
{
Kernel::ack_pager_signal(Capability_space::capid(cap), *_kobj, resolved);
}


@@ -19,6 +19,7 @@
#include <base/ram_allocator.h>
#include <base/thread.h>
#include <base/trace/types.h>
#include <base/rpc_server.h>
/* base-internal includes */
#include <base/internal/native_utcb.h>
@@ -26,6 +27,7 @@
/* core includes */
#include <address_space.h>
#include <object.h>
#include <dataspace_component.h>
/* kernel includes */
#include <kernel/core_interface.h>
@@ -55,13 +57,66 @@ class Core::Platform_thread : Noncopyable
using Label = String<32>;
struct Utcb : Noncopyable
{
struct {
Ram_allocator *_ram_ptr = nullptr;
Region_map *_core_rm_ptr = nullptr;
};
Ram_dataspace_capability _ds { }; /* UTCB ds of non-core threads */
addr_t const core_addr; /* UTCB address within core/kernel */
addr_t const phys_addr;
/*
* \throw Out_of_ram
* \throw Out_of_caps
*/
Ram_dataspace_capability _allocate(Ram_allocator &ram)
{
return ram.alloc(sizeof(Native_utcb), CACHED);
}
addr_t _attach(Region_map &);
static addr_t _ds_phys(Rpc_entrypoint &ep, Dataspace_capability ds)
{
return ep.apply(ds, [&] (Dataspace_component *dsc) {
return dsc ? dsc->phys_addr() : 0; });
}
/**
* Constructor used for core-local threads
*/
Utcb(addr_t core_addr);
/**
* Constructor used for threads outside of core
*/
Utcb(Rpc_entrypoint &ep, Ram_allocator &ram, Region_map &core_rm)
:
_core_rm_ptr(&core_rm),
_ds(_allocate(ram)),
core_addr(_attach(core_rm)),
phys_addr(_ds_phys(ep, _ds))
{ }
~Utcb()
{
if (_core_rm_ptr)
_core_rm_ptr->detach(core_addr);
if (_ram_ptr && _ds.valid())
_ram_ptr->free(_ds);
}
};
Label const _label;
Platform_pd &_pd;
Weak_ptr<Address_space> _address_space { };
Pager_object * _pager;
Native_utcb * _utcb_core_addr { }; /* UTCB addr in core */
Native_utcb * _utcb_pd_addr; /* UTCB addr in pd */
Ram_dataspace_capability _utcb { }; /* UTCB dataspace */
Utcb _utcb;
unsigned _priority {0};
unsigned _quota {0};
@@ -115,7 +170,8 @@ class Core::Platform_thread : Noncopyable
* \param virt_prio unscaled processor-scheduling priority
* \param utcb core local pointer to userland stack
*/
Platform_thread(Platform_pd &, size_t const quota, Label const &label,
Platform_thread(Platform_pd &, Rpc_entrypoint &, Ram_allocator &,
Region_map &, size_t const quota, Label const &label,
unsigned const virt_prio, Affinity::Location,
addr_t const utcb);
@@ -160,6 +216,8 @@ class Core::Platform_thread : Noncopyable
void restart();
void fault_resolved(Untyped_capability, bool);
/**
* Pause this thread
*/
@@ -241,7 +299,7 @@ class Core::Platform_thread : Noncopyable
Platform_pd &pd() const { return _pd; }
Ram_dataspace_capability utcb() const { return _utcb; }
Ram_dataspace_capability utcb() const { return _utcb._ds; }
};
#endif /* _CORE__PLATFORM_THREAD_H_ */


@@ -1,94 +0,0 @@
/*
* \brief RM- and pager implementations specific for base-hw and core
* \author Martin Stein
* \author Stefan Kalkowski
* \date 2012-02-12
*/
/*
* Copyright (C) 2012-2017 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
/* base-hw core includes */
#include <pager.h>
#include <platform_pd.h>
#include <platform_thread.h>
using namespace Core;
void Pager_entrypoint::entry()
{
Untyped_capability cap;
while (1) {
if (cap.valid()) Kernel::ack_signal(Capability_space::capid(cap));
/* receive fault */
if (Kernel::await_signal(Capability_space::capid(_kobj.cap()))) continue;
Pager_object *po = *(Pager_object**)Thread::myself()->utcb()->data();
cap = po->cap();
if (!po) continue;
/* fetch fault data */
Platform_thread * const pt = (Platform_thread *)po->badge();
if (!pt) {
warning("failed to get platform thread of faulter");
continue;
}
if (pt->exception_state() ==
Kernel::Thread::Exception_state::EXCEPTION) {
if (!po->submit_exception_signal())
warning("unresolvable exception: "
"pd='", pt->pd().label(), "', "
"thread='", pt->label(), "', "
"ip=", Hex(pt->state().cpu.ip));
continue;
}
_fault = pt->fault_info();
/* try to resolve fault directly via local region managers */
if (po->pager(*this) == Pager_object::Pager_result::STOP)
continue;
/* apply mapping that was determined by the local region managers */
{
Locked_ptr<Address_space> locked_ptr(pt->address_space());
if (!locked_ptr.valid()) continue;
Hw::Address_space * as = static_cast<Hw::Address_space*>(&*locked_ptr);
Cache cacheable = Genode::CACHED;
if (!_mapping.cached)
cacheable = Genode::UNCACHED;
if (_mapping.write_combined)
cacheable = Genode::WRITE_COMBINED;
Hw::Page_flags const flags {
.writeable = _mapping.writeable ? Hw::RW : Hw::RO,
.executable = _mapping.executable ? Hw::EXEC : Hw::NO_EXEC,
.privileged = Hw::USER,
.global = Hw::NO_GLOBAL,
.type = _mapping.io_mem ? Hw::DEVICE : Hw::RAM,
.cacheable = cacheable
};
as->insert_translation(_mapping.dst_addr, _mapping.src_addr,
1UL << _mapping.size_log2, flags);
}
/* let pager object go back to no-fault state */
po->wake_up();
}
}
void Mapping::prepare_map_operation() const { }


@@ -19,7 +19,7 @@
/* core includes */
#include <object.h>
#include <kernel/signal_receiver.h>
#include <kernel/signal.h>
#include <assertion.h>
namespace Core {


@@ -22,6 +22,32 @@
using namespace Core;
void Arm_cpu::Context::print(Output &output) const
{
using namespace Genode;
using Genode::print;
print(output, "\n");
print(output, " r0 = ", Hex(r0), "\n");
print(output, " r1 = ", Hex(r1), "\n");
print(output, " r2 = ", Hex(r2), "\n");
print(output, " r3 = ", Hex(r3), "\n");
print(output, " r4 = ", Hex(r4), "\n");
print(output, " r5 = ", Hex(r5), "\n");
print(output, " r6 = ", Hex(r6), "\n");
print(output, " r7 = ", Hex(r7), "\n");
print(output, " r8 = ", Hex(r8), "\n");
print(output, " r9 = ", Hex(r9), "\n");
print(output, " r10 = ", Hex(r10), "\n");
print(output, " r11 = ", Hex(r11), "\n");
print(output, " r12 = ", Hex(r12), "\n");
print(output, " ip = ", Hex(ip), "\n");
print(output, " sp = ", Hex(sp), "\n");
print(output, " lr = ", Hex(lr), "\n");
print(output, " cpsr = ", Hex(cpsr));
}
Arm_cpu::Context::Context(bool privileged)
{
using Psr = Arm_cpu::Psr;


@@ -49,6 +49,18 @@ struct Core::Arm_cpu : public Hw::Arm_cpu
struct alignas(8) Context : Cpu_state, Fpu_context
{
Context(bool privileged);
void print(Output &output) const;
void for_each_return_address(Const_byte_range_ptr const &stack,
auto const &fn)
{
void **fp = (void**)r11;
while (stack.contains(fp-1) && stack.contains(fp) && fp[0]) {
fn(fp);
fp = (void **) fp[-1];
}
}
};
/**


@@ -23,32 +23,35 @@
using namespace Kernel;
extern "C" void kernel_to_user_context_switch(Cpu::Context*, Cpu::Fpu_context*);
extern "C" void kernel_to_user_context_switch(Core::Cpu::Context*,
Core::Cpu::Fpu_context*);
void Thread::_call_suspend() { }
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
using Ctx = Core::Cpu::Context;
switch (regs->cpu_exception) {
case Cpu::Context::SUPERVISOR_CALL:
case Ctx::SUPERVISOR_CALL:
_call();
return;
case Cpu::Context::PREFETCH_ABORT:
case Cpu::Context::DATA_ABORT:
case Ctx::PREFETCH_ABORT:
case Ctx::DATA_ABORT:
_mmu_exception();
return;
case Cpu::Context::INTERRUPT_REQUEST:
case Cpu::Context::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool, cpu.id());
case Ctx::INTERRUPT_REQUEST:
case Ctx::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool);
return;
case Cpu::Context::UNDEFINED_INSTRUCTION:
case Ctx::UNDEFINED_INSTRUCTION:
Genode::raw(*this, ": undefined instruction at ip=",
Genode::Hex(regs->ip));
_die();
return;
case Cpu::Context::RESET:
case Ctx::RESET:
return;
default:
Genode::raw(*this, ": triggered an unknown exception ",
@@ -71,17 +74,17 @@ void Kernel::Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
void Cpu::Halt_job::proceed() { }
void Thread::proceed(Cpu & cpu)
void Thread::proceed()
{
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(pd().mmu_regs);
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(pd().mmu_regs);
regs->cpu_exception = cpu.stack_start();
kernel_to_user_context_switch((static_cast<Cpu::Context*>(&*regs)),
(static_cast<Cpu::Fpu_context*>(&*regs)));
regs->cpu_exception = _cpu().stack_start();
kernel_to_user_context_switch((static_cast<Core::Cpu::Context*>(&*regs)),
(static_cast<Core::Cpu::Fpu_context*>(&*regs)));
}


@@ -16,12 +16,11 @@
/* core includes */
#include <platform.h>
#include <platform_pd.h>
#include <platform_services.h>
#include <core_env.h>
#include <core_service.h>
#include <map_local.h>
#include <vm_root.h>
#include <platform.h>
using namespace Core;
@@ -32,10 +31,13 @@ extern addr_t hypervisor_exception_vector;
/*
* Add ARM virtualization specific vm service
*/
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sh,
Registry<Service> &services,
Core::Trace::Source_registry &trace_sources)
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sh,
Registry<Service> &services,
Trace::Source_registry &trace_sources,
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &)
{
map_local(Platform::core_phys_addr((addr_t)&hypervisor_exception_vector),
Hw::Mm::hypervisor_exception_vector().base,
@@ -50,8 +52,7 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
Hw::Mm::hypervisor_stack().size / get_page_size(),
Hw::PAGE_FLAGS_KERN_DATA);
static Vm_root vm_root(ep, sh, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, sh, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm_service(services, vm_root);
},
[&] (Range_allocator::Alloc_error) {


@@ -14,15 +14,11 @@
/* Genode includes */
#include <util/construct_at.h>
/* base internal includes */
#include <base/internal/unmanaged_singleton.h>
/* core includes */
#include <kernel/core_interface.h>
#include <vm_session_component.h>
#include <platform.h>
#include <cpu_thread_component.h>
#include <core_env.h>
using namespace Core;
@@ -87,29 +83,14 @@ void * Vm_session_component::_alloc_table()
}
using Vmid_allocator = Bit_allocator<256>;
static Vmid_allocator &alloc()
{
static Vmid_allocator * allocator = nullptr;
if (!allocator) {
allocator = unmanaged_singleton<Vmid_allocator>();
/* reserve VM ID 0 for the hypervisor */
addr_t id = allocator->alloc();
assert (id == 0);
}
return *allocator;
}
Genode::addr_t Vm_session_component::_alloc_vcpu_data(Genode::addr_t ds_addr)
{
return ds_addr;
}
Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
Vm_session_component::Vm_session_component(Vmid_allocator & vmid_alloc,
Rpc_entrypoint &ds_ep,
Resources resources,
Label const &,
Diag,
@@ -127,7 +108,8 @@ Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
_table(*construct_at<Board::Vm_page_table>(_alloc_table())),
_table_array(*(new (cma()) Board::Vm_page_table_array([] (void * virt) {
return (addr_t)cma().phys_addr(virt);}))),
_id({(unsigned)alloc().alloc(), cma().phys_addr(&_table)})
_vmid_alloc(vmid_alloc),
_id({(unsigned)_vmid_alloc.alloc(), cma().phys_addr(&_table)})
{
/* configure managed VM area */
_map.add_range(0, 0UL - 0x1000);
@@ -162,5 +144,5 @@ Vm_session_component::~Vm_session_component()
/* free guest-to-host page tables */
destroy(platform().core_mem_alloc(), &_table);
destroy(platform().core_mem_alloc(), &_table_array);
alloc().free(_id.id);
_vmid_alloc.free(_id.id);
}


@@ -28,14 +28,13 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
affinity(cpu);
/* once constructed, exit with a startup exception */
pause();
_state.cpu_exception = Genode::VCPU_EXCEPTION_STARTUP;
@@ -46,12 +45,12 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Vm::~Vm() {}
void Vm::exception(Cpu & cpu)
void Vm::exception()
{
switch(_state.cpu_exception) {
case Genode::Cpu_state::INTERRUPT_REQUEST: [[fallthrough]];
case Genode::Cpu_state::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
return;
case Genode::Cpu_state::DATA_ABORT:
_state.dfar = Cpu::Dfar::read();
@@ -69,19 +68,19 @@ bool secure_irq(unsigned const i);
extern "C" void monitor_mode_enter_normal_world(Genode::Vcpu_state&, void*);
void Vm::proceed(Cpu & cpu)
void Vm::proceed()
{
unsigned const irq = _state.irq_injection;
if (irq) {
if (cpu.pic().secure(irq)) {
if (_cpu().pic().secure(irq)) {
Genode::raw("Refuse to inject secure IRQ into VM");
} else {
cpu.pic().trigger(irq);
_cpu().pic().trigger(irq);
_state.irq_injection = 0;
}
}
monitor_mode_enter_normal_world(_state, (void*) cpu.stack_start());
monitor_mode_enter_normal_world(_state, (void*) _cpu().stack_start());
}


@@ -17,7 +17,6 @@
/* core includes */
#include <platform.h>
#include <platform_services.h>
#include <core_env.h>
#include <core_service.h>
#include <vm_root.h>
#include <map_local.h>
@@ -29,10 +28,13 @@ extern int monitor_mode_exception_vector;
/*
* Add TrustZone specific vm service
*/
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sliced_heap,
Registry<Service> &local_services,
Core::Trace::Source_registry &trace_sources)
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &sliced_heap,
Registry<Service> &services,
Trace::Source_registry &trace_sources,
Ram_allocator &core_ram,
Region_map &core_rm,
Range_allocator &)
{
static addr_t const phys_base =
Platform::core_phys_addr((addr_t)&monitor_mode_exception_vector);
@@ -40,8 +42,7 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
map_local(phys_base, Hw::Mm::system_exception_vector().base, 1,
Hw::PAGE_FLAGS_KERN_TEXT);
static Vm_root vm_root(ep, sliced_heap, core_env().ram_allocator(),
core_env().local_rm(), trace_sources);
static Vm_root vm_root(ep, sliced_heap, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm_service(local_services, vm_root);
static Core_service<Vm_session_component> vm_service(services, vm_root);
}


@@ -58,7 +58,7 @@ Genode::addr_t Vm_session_component::_alloc_vcpu_data(Genode::addr_t ds_addr)
}
Vm_session_component::Vm_session_component(Rpc_entrypoint &ep,
Vm_session_component::Vm_session_component(Vmid_allocator &vmids, Rpc_entrypoint &ep,
Resources resources,
Label const &,
Diag,
@@ -74,6 +74,7 @@ Vm_session_component::Vm_session_component(Rpc_entrypoint &ep,
_region_map(region_map),
_table(*construct_at<Board::Vm_page_table>(_alloc_table())),
_table_array(dummy_array()),
_vmid_alloc(vmids),
_id({id_alloc++, nullptr})
{
if (_id.id) {


@@ -101,7 +101,7 @@ void Board::Vcpu_context::Vm_irq::handle(Vm & vm, unsigned irq) {
void Board::Vcpu_context::Vm_irq::occurred()
{
Vm *vm = dynamic_cast<Vm*>(&_cpu.scheduled_job());
Vm *vm = dynamic_cast<Vm*>(&_cpu.current_context());
if (!vm) Genode::raw("VM interrupt while VM is not runnning!");
else handle(*vm, _irq_nr);
}
@@ -140,14 +140,13 @@ Kernel::Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
affinity(cpu);
/* once constructed, exit with a startup exception */
pause();
_state.cpu_exception = Genode::VCPU_EXCEPTION_STARTUP;
@@ -164,29 +163,29 @@ Kernel::Vm::~Vm()
}
void Kernel::Vm::exception(Cpu & cpu)
void Kernel::Vm::exception()
{
switch(_state.cpu_exception) {
case Genode::Cpu_state::INTERRUPT_REQUEST:
case Genode::Cpu_state::FAST_INTERRUPT_REQUEST:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
break;
default:
pause();
_context.submit(1);
}
if (cpu.pic().ack_virtual_irq(_vcpu_context.pic))
if (_cpu().pic().ack_virtual_irq(_vcpu_context.pic))
inject_irq(Board::VT_MAINTAINANCE_IRQ);
_vcpu_context.vtimer_irq.disable();
}
void Kernel::Vm::proceed(Cpu & cpu)
void Kernel::Vm::proceed()
{
if (_state.timer.irq) _vcpu_context.vtimer_irq.enable();
cpu.pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
_cpu().pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
/*
* the following values have to be enforced by the hypervisor
@@ -202,7 +201,7 @@ void Kernel::Vm::proceed(Cpu & cpu)
_state.esr_el2 = Cpu::Hstr::init();
_state.hpfar_el2 = Cpu::Hcr::init();
Hypervisor::switch_world(_state, host_context(cpu));
Hypervisor::switch_world(_state, host_context(_cpu()));
}


@@ -22,6 +22,22 @@
using namespace Core;
void Cpu::Context::print(Output &output) const
{
using namespace Genode;
using Genode::print;
print(output, "\n");
for (unsigned i = 0; i < 31; i++)
print(output, " x", i, " = ", Hex(r[i]), "\n");
print(output, " ip = ", Hex(ip), "\n");
print(output, " sp = ", Hex(sp), "\n");
print(output, " esr = ", Hex(esr_el1), "\n");
print(output, " pstate = ", Hex(pstate), "\n");
print(output, " mdscr = ", Hex(mdscr_el1));
}
Cpu::Context::Context(bool privileged)
{
Spsr::El::set(pstate, privileged ? 1 : 0);


@@ -79,6 +79,18 @@ struct Core::Cpu : Hw::Arm_64_cpu
Fpu_state fpu_state { };
Context(bool privileged);
void print(Output &output) const;
void for_each_return_address(Const_byte_range_ptr const &stack,
auto const &fn)
{
void **fp = (void**)r[29];
while (stack.contains(fp) && stack.contains(fp + 1) && fp[1]) {
fn(fp + 1);
fp = (void **) fp[0];
}
}
};
class Mmu_context


@@ -27,7 +27,7 @@ using namespace Kernel;
void Thread::_call_suspend() { }
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
switch (regs->exception_type) {
case Cpu::RESET: return;
@@ -35,7 +35,7 @@ void Thread::exception(Cpu & cpu)
case Cpu::IRQ_LEVEL_EL1: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL0: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL1:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
return;
case Cpu::SYNC_LEVEL_EL0: [[fallthrough]];
case Cpu::SYNC_LEVEL_EL1:
@@ -94,51 +94,51 @@ void Kernel::Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
void Cpu::Halt_job::proceed() { }
bool Kernel::Pd::invalidate_tlb(Cpu & cpu, addr_t addr, size_t size)
{
using namespace Genode;
bool Kernel::Pd::invalidate_tlb(Cpu & cpu, addr_t addr, size_t size)
{
using namespace Genode;
/* only apply to the active cpu */
if (cpu.id() != Cpu::executing_id())
return false;
/* only apply to the active cpu */
if (cpu.id() != Cpu::executing_id())
return false;
/**
* The kernel part of the address space is mapped as global
* therefore we have to invalidate it differently
*/
if (addr >= Hw::Mm::supervisor_exception_vector().base) {
for (addr_t end = addr+size; addr < end; addr += get_page_size())
asm volatile ("tlbi vaae1is, %0" :: "r" (addr >> 12));
return false;
}
/**
* Too big mappings will result in long running invalidation loops,
* just invalidate the whole tlb for the ASID then.
*/
if (size > 8 * get_page_size()) {
asm volatile ("tlbi aside1is, %0"
:: "r" ((uint64_t)mmu_regs.id() << 48));
return false;
}
/**
* The kernel part of the address space is mapped as global
* therefore we have to invalidate it differently
*/
if (addr >= Hw::Mm::supervisor_exception_vector().base) {
for (addr_t end = addr+size; addr < end; addr += get_page_size())
asm volatile ("tlbi vaae1is, %0" :: "r" (addr >> 12));
asm volatile ("tlbi vae1is, %0"
:: "r" (addr >> 12 | (uint64_t)mmu_regs.id() << 48));
return false;
}
/**
* Too big mappings will result in long running invalidation loops,
* just invalidate the whole tlb for the ASID then.
*/
if (size > 8 * get_page_size()) {
asm volatile ("tlbi aside1is, %0"
:: "r" ((uint64_t)mmu_regs.id() << 48));
return false;
}
for (addr_t end = addr+size; addr < end; addr += get_page_size())
asm volatile ("tlbi vae1is, %0"
:: "r" (addr >> 12 | (uint64_t)mmu_regs.id() << 48));
return false;
}
void Thread::proceed()
{
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(pd().mmu_regs);
void Thread::proceed(Cpu & cpu)
{
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(pd().mmu_regs);
kernel_to_user_context_switch((static_cast<Cpu::Context*>(&*regs)),
(void*)cpu.stack_start());
kernel_to_user_context_switch((static_cast<Core::Cpu::Context*>(&*regs)),
(void*)_cpu().stack_start());
}


@@ -76,7 +76,7 @@ void Board::Vcpu_context::Vm_irq::handle(Vm & vm, unsigned irq) {
void Board::Vcpu_context::Vm_irq::occurred()
{
Vm *vm = dynamic_cast<Vm*>(&_cpu.scheduled_job());
Vm *vm = dynamic_cast<Vm*>(&_cpu.current_context());
if (!vm) Genode::raw("VM interrupt while VM is not runnning!");
else handle(*vm, _irq_nr);
}
@@ -115,15 +115,13 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
Cpu_job(Scheduler::Priority::min(), 0),
Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
affinity(cpu);
_state.id_aa64isar0_el1 = Cpu::Id_aa64isar0_el1::read();
_state.id_aa64isar1_el1 = Cpu::Id_aa64isar1_el1::read();
_state.id_aa64mmfr0_el1 = Cpu::Id_aa64mmfr0_el1::read();
@@ -167,14 +165,14 @@ Vm::~Vm()
}
void Vm::exception(Cpu & cpu)
void Vm::exception()
{
switch (_state.exception_type) {
case Cpu::IRQ_LEVEL_EL0: [[fallthrough]];
case Cpu::IRQ_LEVEL_EL1: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL0: [[fallthrough]];
case Cpu::FIQ_LEVEL_EL1:
_interrupt(_user_irq_pool, cpu.id());
_interrupt(_user_irq_pool);
break;
case Cpu::SYNC_LEVEL_EL0: [[fallthrough]];
case Cpu::SYNC_LEVEL_EL1: [[fallthrough]];
@@ -188,17 +186,17 @@ void Vm::exception(Cpu & cpu)
" not implemented!");
};
if (cpu.pic().ack_virtual_irq(_vcpu_context.pic))
if (_cpu().pic().ack_virtual_irq(_vcpu_context.pic))
inject_irq(Board::VT_MAINTAINANCE_IRQ);
_vcpu_context.vtimer_irq.disable();
}
void Vm::proceed(Cpu & cpu)
void Vm::proceed()
{
if (_state.timer.irq) _vcpu_context.vtimer_irq.enable();
cpu.pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
_cpu().pic().insert_virtual_irq(_vcpu_context.pic, _state.irqs.virtual_irq);
/*
* the following values have to be enforced by the hypervisor
@@ -208,7 +206,7 @@ void Vm::proceed(Cpu & cpu)
Cpu::Vttbr_el2::Asid::set(vttbr_el2, _id.id);
addr_t guest = Hw::Mm::el2_addr(&_state);
addr_t pic = Hw::Mm::el2_addr(&_vcpu_context.pic);
addr_t host = Hw::Mm::el2_addr(&host_context(cpu));
addr_t host = Hw::Mm::el2_addr(&host_context(_cpu()));
Hypervisor::switch_world(guest, host, pic, vttbr_el2);
}


@@ -25,6 +25,47 @@ using Mmu_context = Core::Cpu::Mmu_context;
using namespace Core;
void Cpu::Context::print(Output &output) const
{
using namespace Genode;
using Genode::print;
print(output, "\n");
print(output, " ip = ", Hex(ip), "\n");
print(output, " ra = ", Hex(ra), "\n");
print(output, " sp = ", Hex(sp), "\n");
print(output, " gp = ", Hex(gp), "\n");
print(output, " tp = ", Hex(tp), "\n");
print(output, " t0 = ", Hex(t0), "\n");
print(output, " t1 = ", Hex(t1), "\n");
print(output, " t2 = ", Hex(t2), "\n");
print(output, " s0 = ", Hex(s0), "\n");
print(output, " s1 = ", Hex(s1), "\n");
print(output, " a0 = ", Hex(a0), "\n");
print(output, " a1 = ", Hex(a1), "\n");
print(output, " a2 = ", Hex(a2), "\n");
print(output, " a3 = ", Hex(a3), "\n");
print(output, " a4 = ", Hex(a4), "\n");
print(output, " a5 = ", Hex(a5), "\n");
print(output, " a6 = ", Hex(a6), "\n");
print(output, " a7 = ", Hex(a7), "\n");
print(output, " s2 = ", Hex(s2), "\n");
print(output, " s3 = ", Hex(s3), "\n");
print(output, " s4 = ", Hex(s4), "\n");
print(output, " s5 = ", Hex(s5), "\n");
print(output, " s6 = ", Hex(s6), "\n");
print(output, " s7 = ", Hex(s7), "\n");
print(output, " s8 = ", Hex(s8), "\n");
print(output, " s9 = ", Hex(s9), "\n");
print(output, " s10 = ", Hex(s10), "\n");
print(output, " s11 = ", Hex(s11), "\n");
print(output, " t3 = ", Hex(t3), "\n");
print(output, " t4 = ", Hex(t4), "\n");
print(output, " t5 = ", Hex(t5), "\n");
print(output, " t6 = ", Hex(t6));
}
Cpu::Context::Context(bool)
{
/*


@@ -56,6 +56,11 @@ class Core::Cpu : public Hw::Riscv_cpu
struct alignas(8) Context : Genode::Cpu_state
{
Context(bool);
void print(Output &output) const;
void for_each_return_address(Const_byte_range_ptr const &,
auto const &) { }
};
class Mmu_context


@@ -49,6 +49,10 @@ using namespace Kernel;
CALL_4_FILL_ARG_REGS \
register Call_arg arg_4_reg asm("a4") = arg_4;
#define CALL_6_FILL_ARG_REGS \
CALL_5_FILL_ARG_REGS \
register Call_arg arg_5_reg asm("a5") = arg_5;
extern Genode::addr_t _kernel_entry;
/*
@@ -75,6 +79,7 @@ extern Genode::addr_t _kernel_entry;
#define CALL_3_SWI CALL_2_SWI, "r" (arg_2_reg)
#define CALL_4_SWI CALL_3_SWI, "r" (arg_3_reg)
#define CALL_5_SWI CALL_4_SWI, "r" (arg_4_reg)
#define CALL_6_SWI CALL_5_SWI, "r" (arg_5_reg)
/******************
@@ -137,3 +142,16 @@ Call_ret Kernel::call(Call_arg arg_0,
asm volatile(CALL_5_SWI : "ra");
return arg_0_reg;
}
Call_ret Kernel::call(Call_arg arg_0,
Call_arg arg_1,
Call_arg arg_2,
Call_arg arg_3,
Call_arg arg_4,
Call_arg arg_5)
{
CALL_6_FILL_ARG_REGS
asm volatile(CALL_6_SWI : "ra");
return arg_0_reg;
}


@@ -25,21 +25,21 @@ void Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
void Cpu::Halt_job::proceed() { }
void Thread::exception(Cpu & cpu)
void Thread::exception()
{
using Context = Core::Cpu::Context;
using Stval = Core::Cpu::Stval;
if (regs->is_irq()) {
/* cpu-local timer interrupt */
if (regs->irq() == cpu.timer().interrupt_id()) {
cpu.handle_if_cpu_local_interrupt(cpu.timer().interrupt_id());
if (regs->irq() == _cpu().timer().interrupt_id()) {
_cpu().handle_if_cpu_local_interrupt(_cpu().timer().interrupt_id());
} else {
/* interrupt controller */
_interrupt(_user_irq_pool, 0);
_interrupt(_user_irq_pool);
}
return;
}
@@ -113,7 +113,7 @@ void Kernel::Thread::_call_cache_line_size()
}
void Kernel::Thread::proceed(Cpu & cpu)
void Kernel::Thread::proceed()
{
/*
* The sstatus register defines to which privilege level
@@ -123,8 +123,8 @@ void Kernel::Thread::proceed(Cpu & cpu)
Cpu::Sstatus::Spp::set(v, (type() == USER) ? 0 : 1);
Cpu::Sstatus::write(v);
if (!cpu.active(pd().mmu_regs) && type() != CORE)
cpu.switch_to(_pd->mmu_regs);
if (!_cpu().active(pd().mmu_regs) && type() != CORE)
_cpu().switch_to(_pd->mmu_regs);
asm volatile("csrw sscratch, %1 \n"
"mv x31, %0 \n"


@@ -37,6 +37,27 @@ struct Pseudo_descriptor
} __attribute__((packed));
void Cpu::Context::print(Output &output) const
{
using namespace Genode;
using Genode::print;
print(output, "\n");
print(output, " ip = ", Hex(ip), "\n");
print(output, " sp = ", Hex(sp), "\n");
print(output, " cs = ", Hex(cs), "\n");
print(output, " ss = ", Hex(ss), "\n");
print(output, " eflags = ", Hex(eflags), "\n");
print(output, " rax = ", Hex(rax), "\n");
print(output, " rbx = ", Hex(rbx), "\n");
print(output, " rcx = ", Hex(rcx), "\n");
print(output, " rdx = ", Hex(rdx), "\n");
print(output, " rdi = ", Hex(rdi), "\n");
print(output, " rsi = ", Hex(rsi), "\n");
print(output, " rbp = ", Hex(rbp));
}
Cpu::Context::Context(bool core)
{
eflags = EFLAGS_IF_SET;


@@ -100,6 +100,18 @@ class Core::Cpu : public Hw::X86_64_cpu
};
Context(bool privileged);
void print(Output &output) const;
void for_each_return_address(Const_byte_range_ptr const &stack,
auto const &fn)
{
void **fp = (void**)rbp;
while (stack.contains(fp) && stack.contains(fp + 1) && fp[1]) {
fn(fp + 1);
fp = (void **) fp[0];
}
}
} __attribute__((packed));


@@ -60,6 +60,7 @@ class Genode::Fpu_context
}
addr_t fpu_context() const { return _fxsave_addr; }
addr_t fpu_size() const { return sizeof(_fxsave_area); }
};
#endif /* _CORE__SPEC__X86_64__FPU_H_ */
Some files were not shown because too many files have changed in this diff.