diff --git a/VERSION b/VERSION
index 3dce2e921c..f88da62e24 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-24.08
+24.11
diff --git a/doc/build_system.txt b/doc/build_system.txt
deleted file mode 100644
index de926c09f3..0000000000
--- a/doc/build_system.txt
+++ /dev/null
@@ -1,517 +0,0 @@
-
-
- =======================
- The Genode build system
- =======================
-
-
- Norman Feske
-
-Abstract
-########
-
-The Genode OS Framework comes with a custom build system that is designed for
-the creation of highly modular and portable systems software. Understanding
-its basic concepts is pivotal for using the full potential of the framework.
-This document introduces those concepts and the best practices of putting them
-to good use. Besides building software components from source code, common
-and repetitive development tasks are the testing of individual components
-and the integration of those components into complex system scenarios. To
-streamline such tasks, the build system is accompanied by special tooling
-support. This document introduces those tools.
-
-
-Build directories and repositories
-##################################
-
-The build system is designed to never touch the source tree. The procedure of
-building components and integrating them into system scenarios takes place in
-a distinct build directory. One build directory targets a specific platform,
-i.e., a kernel and hardware architecture. Because the source tree is decoupled
-from the build directory, one source tree can be associated with many different
-build directories, each targeting another platform.
-
-The recommended way for creating a build directory is the use of the
-'create_builddir' tool located at '<genode-dir>/tool/'. When started
-without arguments, the tool prints its usage information. For creating a new
-build directory, one of the listed target platforms must be specified.
-Furthermore, the location of the new build directory has to be specified via
-the 'BUILD_DIR=' argument. For example:
-
-! cd <genode-dir>
-! ./tool/create_builddir linux_x86 BUILD_DIR=/tmp/build.linux_x86
-
-This command will create a new build directory for the Linux/x86 platform
-at _/tmp/build.linux_x86/_.
-
-
-Build-directory configuration via 'build.conf'
-==============================================
-
-The fresh build directory will contain a 'Makefile', which is a symlink to
-_tool/builddir/build.mk_. This makefile is the front end of the build system
-and not supposed to be edited. Beside the makefile, there is a _etc/_
-subdirectory that contains the build-directory configuration. For most
-platforms, there is only a single _build.conf_ file, which defines the parts of
-the Genode source tree incorporated in the build process. Those parts are
-called _repositories_.
-
-The repository concept allows for keeping the source code well separated for
-different concerns. For example, the platform-specific code for each target
-platform is located in a dedicated _base-_ repository. Also, different
-abstraction levels and features of the system are residing in different
-repositories. The _etc/build.conf_ file defines the set of repositories to
-consider in the build process. At build time, the build system overlays the
-directory structures of all repositories specified via the 'REPOSITORIES'
-declaration to form a single logical source tree. By changing the list of
-'REPOSITORIES', the view of the build system on the source tree can be altered.
-The _etc/build.conf_ file as found in a freshly created build directory lists
-the _base-<platform>_ repository of the platform selected at the
-'create_builddir' command line as well as the 'base', 'os', and 'demo'
-repositories needed for compiling Genode's default demonstration scenario.
-Furthermore, there are a
-number of commented-out lines that can be uncommented for enabling additional
-repositories.
-
-Note that the order of the repositories listed in the 'REPOSITORIES' declaration
-is important. Front-most repositories shadow subsequent repositories. This
-makes the repository mechanism a powerful tool for tweaking existing repositories:
-By adding a custom repository in front of another one, customized versions of
-single files (e.g., header files or target description files) can be supplied to
-the build system without changing the original repository.
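-
-For illustration, a hypothetical 'REPOSITORIES' declaration that shadows
-parts of the 'os' repository with a custom repository could look as follows
-(the _/path/to/my_tweaks_ location is made up, and the 'GENODE_DIR' variable
-is assumed to be defined in _etc/build.conf_):
-
-! REPOSITORIES  = /path/to/my_tweaks
-! REPOSITORIES += $(GENODE_DIR)/base-linux
-! REPOSITORIES += $(GENODE_DIR)/base
-! REPOSITORIES += $(GENODE_DIR)/os
-! REPOSITORIES += $(GENODE_DIR)/demo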
-
-
-Building targets
-================
-
-To build all targets contained in the list of 'REPOSITORIES' as defined in
-_etc/build.conf_, simply issue 'make'. This way, all components that are
-compatible with the build directory's base platform will be built. In practice,
-however, only some of those components may be of interest. Hence, the build
-can be tailored to those components which are of actual interest by specifying
-source-code subtrees. For example, using the following command
-
-! make core server/nitpicker
-
-the build system builds all targets found in the 'core' and 'server/nitpicker'
-source directories. You may specify any number of subtrees to the build
-system. As indicated by the build output, the build system revisits
-each library that is used by each target found in the specified subtrees.
-This is very handy when developing libraries because instead of re-building
-your library and then the program that uses it, you can simply build the
-program alone. This concept even works recursively, which means that libraries
-may depend on other libraries.
-
-In practice, you won't ever need to build the _whole tree_ but only the
-targets that you are interested in.
-
-
-Cleaning the build directory
-============================
-
-To remove all but kernel-related generated files, use
-! make clean
-
-To remove all generated files, use
-! make cleanall
-
-Neither 'clean' nor 'cleanall' removes any files from the _bin/_
-subdirectory. This makes _bin/_ a safe place for files that are
-unrelated to the build process, yet required for the integration stage, e.g.,
-binary data.
-
-
-Controlling the verbosity of the build process
-==============================================
-
-To understand the inner workings of the build process in more detail, you can
-tell the build system to display each directory change by specifying
-
-! make VERBOSE_DIR=
-
-If you are interested in the arguments that are passed to each invocation of
-'make', you can make them visible via
-
-! make VERBOSE_MK=
-
-Furthermore, you can observe each single shell-command invocation by specifying
-
-! make VERBOSE=
-
-Of course, you can combine these verbosity toggles to maximize the noise.
-
-
-Enabling parallel builds
-========================
-
-To utilize multiple CPU cores during the build process, you may invoke 'make'
-with the '-j' argument. If manually specifying this argument becomes an
-inconvenience, you may add the following line to your _etc/build.conf_ file:
-
-! MAKE += -j<N>
-
-This way, the build system will always use '<N>' CPUs for building.
-
-
-Caching inter-library dependencies
-==================================
-
-The build system allows repeating the last build without performing any
-library-dependency checks by using:
-
-! make again
-
-The use of this feature can significantly improve the work flow during
-development because in contrast to source code, library dependencies rarely
-change. Hence, the time needed for re-creating inter-library dependencies at
-each build can be saved.
-
-
-Repository directory layout
-###########################
-
-Each Genode repository has the following layout:
-
- Directory | Description
- ------------------------------------------------------------
- 'doc/' | Documentation specific to the repository
- ------------------------------------------------------------
- 'etc/' | Default configuration of the build process
- ------------------------------------------------------------
- 'mk/' | The build system
- ------------------------------------------------------------
- 'include/' | Globally visible header files
- ------------------------------------------------------------
- 'src/' | Source codes and target build descriptions
- ------------------------------------------------------------
- 'lib/mk/' | Library build descriptions
-
-
-Creating targets and libraries
-##############################
-
-Target descriptions
-===================
-
-A good starting point is to look at the init target. The source code of init is
-located at _os/src/init/_. In this directory, you will find a target description
-file named _target.mk_. This file contains the building instructions and it is
-usually very simple. The build process is controlled by defining the following
-variables.
-
-
-Build variables to be defined by you
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-:'TARGET': is the name of the binary to be created. This is the
- only *mandatory variable* to be defined in a _target.mk_ file.
-
-:'REQUIRES': expresses the requirements that must be satisfied in order to
- build the target. You find more details about the underlying mechanism in
- Section [Specializations].
-
-:'LIBS': is the list of libraries that are used by the target.
-
-:'SRC_CC': contains the list of '.cc' source files. The default search location
- for source codes is the directory where the _target.mk_ file resides.
-
-:'SRC_C': contains the list of '.c' source files.
-
-:'SRC_S': contains the list of assembly '.s' source files.
-
-:'SRC_BIN': contains binary data files to be linked to the target.
-
-:'INC_DIR': is the list of include search locations. Directories should
- always be appended using '+='. Never use a plain assignment!
-
-:'EXT_OBJECTS': is a list of Genode-external objects or libraries. This
- variable is mostly used for interfacing Genode with legacy software
- components.
-
-
-Rarely used variables
----------------------
-
-:'CC_OPT': contains additional compiler options to be used for '.c' as
- well as for '.cc' files.
-
-:'CC_CXX_OPT': contains additional compiler options to be used for the
- C++ compiler only.
-
-:'CC_C_OPT': contains additional compiler options to be used for the
- C compiler only.
-
-
-Specifying search locations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When specifying search locations for header files via the 'INC_DIR' variable or
-for source files via 'vpath', relative pathnames must not be used. Instead,
-you can use the following variables to reference locations within the
-source-code repository where your target lives:
-
-:'REP_DIR': is the base directory of the current source-code repository.
- Locations relative to the base of the repository are rarely needed by
- _target.mk_ files but are commonly used by library descriptions.
-
-:'PRG_DIR': is the directory where your _target.mk_ file resides. This
- variable is always to be used when specifying a relative path.
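-
-As an illustrative sketch, a simple _target.mk_ file could combine the
-variables described above as follows (the target and file names are
-hypothetical, not taken from the source tree):
-
-! TARGET   = my_server
-! LIBS     = base
-! SRC_CC   = main.cc service.cc
-! INC_DIR += $(PRG_DIR)/include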
-
-
-Library descriptions
-====================
-
-In contrast to target descriptions that are scattered across the whole source
-tree, library descriptions are located at the central place _lib/mk_. Each
-library corresponds to a _<libname>.mk_ file whose base name is the name of
-the library. Therefore, no 'TARGET' variable needs to be set. The source-code
-locations are expressed as '$(REP_DIR)'-relative 'vpath' commands.
-
-Library-description files support the following additional declarations:
-
-:'SHARED_LIB = yes': declares that the library should be built as a shared
- object rather than a static library. The resulting object will be called
- _<libname>.lib.so_.
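-
-For illustration, a hypothetical library-description file
-_lib/mk/misc_math.mk_ could look as follows, referring to its source codes
-relative to the repository base:
-
-! SRC_CC = misc_math.cc
-! vpath misc_math.cc $(REP_DIR)/src/lib/misc_math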
-
-
-Specializations
-===============
-
-Building components for different platforms typically involves portions of code
-that are tied to certain aspects of the target platform. For example, a target
-platform may be characterized by
-
-* A kernel API such as L4v2, Linux, L4.sec,
-* A hardware architecture such as x86, ARM, Coldfire,
-* A certain hardware facility such as a custom device, or
-* Other properties such as software license requirements.
-
-Each of these attributes expresses a specialization of the build process. The
-build system provides a generic mechanism to handle such specializations.
-
-The _programmer_ of a software component knows the properties on which the
-software relies and thus specifies these requirements in the build-description
-file.
-
-The _user/customer/builder_ decides to build software for a specific platform
-and defines the platform specifics via the 'SPECS' variable per build
-directory in _etc/specs.conf_. In addition to an (optional) _etc/specs.conf_
-file within the build directory, the build system incorporates the first
-_etc/specs.conf_ file found in the repositories as configured for the
-build directory. For example, for a 'linux_x86' build directory, the
-_base-linux/etc/specs.conf_ file is used by default. The build directory's
-'specs.conf' file can still be used to extend the 'SPECS' declarations, for
-example to enable special features.
-
-Each '<spec>' in the 'SPECS' variable instructs the build system to
-
-* Include the 'make'-rules of a corresponding _base/mk/spec-<spec>.mk_
- file. This enables the customization of the build process for each platform.
-
-* Search for _<libname>.mk_ files in the _lib/mk/<spec>/_ subdirectory.
- This way, we can provide alternative implementations of one and the same
- library interface for different platforms.
-
-Before a target or library gets built, the build system checks if the 'REQUIRES'
-entries of the build description file are satisfied by entries of the 'SPECS'
-variable. The compilation is executed only if each entry in the 'REQUIRES'
-variable is present in the 'SPECS' variable as supplied by the build directory
-configuration.
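-
-For example, a hypothetical target that can only be built for x86 platforms
-would state this requirement in its _target.mk_ file:
-
-! REQUIRES = x86
-
-The target gets built only if 'x86' is among the 'SPECS' declarations of the
-build directory, e.g., added by the following line in _etc/specs.conf_:
-
-! SPECS += x86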
-
-
-Building tools to be executed on the host platform
-===================================================
-
-Sometimes, software requires custom tools that are used to generate source
-code or other ingredients for the build process, for example IDL compilers.
-Such tools won't be executed on top of Genode but on the host platform
-during the build process. Hence, they must be compiled with the tool chain
-installed on the host, not the Genode tool chain.
-
-The Genode build system accommodates the building of such host tools as a side
-effect of building a library or a target. Even though it is possible to add
-the tool compilation step to a regular build description file, it is
-recommended to introduce a dedicated pseudo library for building such tools.
-This way, the rules for building host tools are kept separate from rules that
-refer to Genode programs. By convention, the pseudo library should be named
-_<package>_host_tools_ and the host tools should be built at
-_<build-dir>/tool/<package>/_. With _<package>_, we refer to the name of the
-software package the tool belongs to, e.g., qt5 or mupdf. To build a tool
-named _<tool>_, the pseudo library contains a custom make rule like the
-following:
-
-! $(BUILD_BASE_DIR)/tool/<package>/<tool>:
-! $(MSG_BUILD)$(notdir $@)
-! $(VERBOSE)mkdir -p $(dir $@)
-! $(VERBOSE)...build commands...
-
-To let the build system trigger the rule, add the custom target to the
-'HOST_TOOLS' variable:
-
-! HOST_TOOLS += $(BUILD_BASE_DIR)/tool/<package>/<tool>
-
-Once the pseudo library for building the host tools is in place, it can be
-referenced by each target or library that relies on the respective tools via
-the 'LIBS' declaration. The tool can be invoked by referring to
-'$(BUILD_BASE_DIR)/tool/<package>/<tool>'.
-
-For an example of using custom host tools, please refer to the mupdf package
-found within the libports repository. During the build of the mupdf library,
-two custom tools fontdump and cmapdump are invoked. The tools are built via
-the _lib/mk/mupdf_host_tools.mk_ library description file. The actual mupdf
-library (_lib/mk/mupdf.mk_) has the pseudo library 'mupdf_host_tools' listed
-in its 'LIBS' declaration and refers to the tools relative to
-'$(BUILD_BASE_DIR)'.
-
-
-Building additional custom targets accompanying a library or program
-====================================================================
-
-There are cases where additional targets must be built besides the standard
-files of a library or program. Specific make rules for the commands that
-generate those files are easily written, but for the files to actually be
-built, a proper dependency must be declared. To achieve this, add the
-additional targets to the 'CUSTOM_TARGET_DEPS' variable, as done, e.g., in
-the iwl_firmware library of the dde_linux repository:
-
-! CUSTOM_TARGET_DEPS += $(addprefix $(BIN_DIR)/,$(IMAGES))
-
-
-Automated integration and testing
-#################################
-
-Genode's cross-kernel portability is one of the prime features of the
-framework. However, each kernel takes a different route when it comes to
-configuring, integrating, and booting the system. Hence, for using a particular
-kernel, profound knowledge about the boot concept and the kernel-specific tools
-is required. To streamline the testing of Genode-based systems across the many
-different supported kernels, the framework comes equipped with tools that
-relieve you from these peculiarities.
-
-Run scripts
-===========
-
-Using so-called run scripts, complete Genode systems can be described in a
-concise and kernel-independent way. Once created, a run script can be used
-to integrate and test-drive a system scenario directly from the build directory.
-The best way to get acquainted with the concept is reviewing the run script
-for the 'hello_tutorial' located at _hello_tutorial/run/hello.run_.
-Let's revisit each step expressed in the _hello.run_ script:
-
-* Building the components needed for the system using the 'build' command.
- This command instructs the build system to compile the targets listed in
- the brace block. It has the same effect as manually invoking 'make' with
- the specified argument from within the build directory.
-
-* Creating a new boot directory using the 'create_boot_directory' command.
- The integration of the scenario is performed in a dedicated directory at
- _<build-dir>/var/run/<run-script-name>/_. When the run script is finished,
- this directory will contain all components of the final system. In the
- following, we will refer to this directory as run directory.
-
-* Installing the Genode 'config' file into the run directory using the
- 'install_config' command. The argument to this command will be written
- to a file called 'config' in the run directory, which is picked up by
- Genode's init process.
-
-* Creating a bootable system image using the 'build_boot_image' command.
- This command copies the specified list of files from the _<build-dir>/bin/_
- directory to the run directory and executes the platform-specific steps
- needed to transform the content of the run directory into a bootable
- form. This form depends on the actual base platform and may be an ISO
- image or a bootable ELF image.
-
-* Executing the system image using the 'run_genode_until' command. Depending
- on the base platform, the system image will be executed using an emulator.
- For most platforms, Qemu is the tool of choice used by default. On Linux,
- the scenario is executed by starting 'core' directly from the run
- directory. The 'run_genode_until' command takes a regular expression
- as argument. If the log output of the scenario matches the specified
- pattern, the 'run_genode_until' command returns. If specifying 'forever'
- as argument (as done in 'hello.run'), this command will never return.
- If a regular expression is specified, an additional argument determines
- a timeout in seconds. If the regular expression does not match until
- the timeout is reached, the run script will abort.
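-
-Condensed to its essence, a run script following the steps above has the
-following shape, modeled after _hello.run_ (the config content is
-abbreviated):
-
-! build { core init hello }
-! create_boot_directory
-! install_config { <config> ... </config> }
-! build_boot_image { core init hello }
-! run_genode_until forever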
-
-Please note that the _hello.run_ script does not contain kernel-specific
-information. Therefore it can be executed from the build directory of any base
-platform by using:
-
-! make run/hello
-
-When invoking 'make' with an argument of the form 'run/*', the build system
-will look in all repositories for a run script with the specified name. The run
-script must be located in one of the repositories' 'run/' subdirectories and
-have the file extension '.run'.
-
-For a more comprehensive run script, _os/run/demo.run_ serves as a good
-example. This run script describes Genode's default demo scenario. As seen in
-'demo.run', parts of init's configuration can be made dependent on the
-platform's properties expressed as spec values. For example, the PCI driver
-gets included in init's configuration only on platforms with a PCI bus. For
-appending conditional snippets to the _config_ file, there exists the 'append_if'
-command, which takes a condition as first and the snippet as second argument.
-To test for a SPEC value, the command '[have_spec <spec>]' is used as
-condition. Analogously to how 'append_if' appends strings, there exists
-'lappend_if' to append list items. The latter command is used to conditionally
-include binaries to the list of boot modules passed to the 'build_boot_image'
-command.
-
-
-The run mechanism explained
-===========================
-
-Under the hood, run scripts are executed by an expect interpreter. When the
-user invokes a run script via _make run/<run-script-name>_, the build
-system invokes the run tool at _<genode-dir>/tool/run_ with the run script
-as argument. The
-run tool is an expect script that has no other purpose than defining several
-commands used by run scripts, including a platform-specific script snippet
-called run environment ('env'), and finally including the actual run script.
-Whereas _tool/run_ provides the implementations of generic and largely
-platform-independent commands, the _env_ snippet included from the platform's
-respective _base-<platform>/run/env_ file contains all platform-specific
-commands. For reference, the most simplistic run environment is the one at
-_base-linux/run/env_, which implements the 'create_boot_directory',
-'install_config', 'build_boot_image', and 'run_genode_until' commands for Linux
-as base platform. For the other platforms, the run environments are far more
-elaborate and document precisely how the integration and boot concept works
-on each platform. Hence, the _base-<platform>/run/env_ files are not only
-necessary parts of Genode's tooling support but also serve as a resource for
-understanding the peculiarities of each kernel.
-
-
-Using run scripts to implement test cases
-=========================================
-
-Because run scripts are actually expect scripts, the whole arsenal of
-language features of the Tcl scripting language is available to them. This
-turns run scripts into powerful tools for the automated execution of test
-cases. A good example is the run script at _libports/run/lwip.run_, which tests
-the lwIP stack by running a simple Genode-based HTTP server on Qemu. It fetches
-and validates an HTML page from this server. The run script makes use of a
-regular expression as argument to the 'run_genode_until' command to detect the
-state when the web server becomes ready, subsequently executes the 'lynx' shell
-command to fetch the web site, and employs Tcl's support for regular
-expressions to validate the result. The run script works across base platforms
-that use Qemu as execution environment.
-
-To get the most out of the run mechanism, a basic understanding of the Tcl
-scripting language is required. Furthermore, the functions provided by
-_tool/run_ and _base-<platform>/run/env_ should be studied.
-
-
-Automated testing across base platforms
-=======================================
-
-To execute one or multiple test cases on more than one base platform, there
-exists a dedicated tool at _tool/autopilot_. Its primary purpose is the
-nightly execution of test cases. The tool takes a list of platforms and run
-scripts as arguments and executes each run script on each platform. The
-build directory for each platform is created at
-_/tmp/autopilot.<username>/<platform>/_ and the output of each run script is
-written to a file called _<platform>.<run-script>.log_. On stderr, autopilot
-prints statistics about whether or not each run script executed
-successfully on each platform. If at least one run script failed, autopilot
-returns a non-zero exit code, which makes it straightforward to include
-autopilot in an automated build-and-test environment.
-
-
diff --git a/doc/coding_style.txt b/doc/coding_style.txt
deleted file mode 100644
index 007b77b15c..0000000000
--- a/doc/coding_style.txt
+++ /dev/null
@@ -1,299 +0,0 @@
-Coding style guidelines for Genode
-##################################
-
-Things to avoid
-===============
-
-Please avoid using pre-processor macros. C++ provides language
-features for almost every case in which a C programmer would use
-macros.
-
-:Defining constants:
-
- Use 'enum' instead of '#define'
- ! enum { MAX_COLORS = 3 };
- ! enum {
- ! COLOR_RED = 1,
- ! COLOR_BLUE = 2,
- ! COLOR_GREEN = 3
- ! };
-
-:Meta programming:
-
- Use templates instead of pre-processor macros. In contrast to macros,
- templates are type-safe and fit well with the implementation syntax.
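-
- For example, a type-safe alternative to a C-style 'MAX' macro can be
- expressed as a function template:
-
- ! template <typename T>
- ! T max(T a, T b) { return a > b ? a : b; }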
-
-:Conditional-code inclusion:
-
- Please avoid C-hacker-style '#ifdef CONFIG_PLATFORM' - '#endif'
- constructs. Instead, factor out the encapsulated code into a
- separate file and introduce a proper function interface.
- The build process should then be used to select the appropriate
- platform-specific files at compile time. Keep platform dependent
- code as small as possible. Never pollute existing generic code
- with platform-specific code.
-
-
-Header of each file
-===================
-
-! /*
-! * \brief Short description of the file
-! * \author Original author
-! * \date Creation date
-! *
-! * Some more detailed description. This is optional.
-! */
-
-
-Identifiers
-===========
-
-* The first character of a class name is uppercase, all other characters are
- lowercase.
-* Function and variable names are lowercase.
-* 'Multi_word_identifiers' use underscores to separate words.
-* 'CONSTANTS' and template arguments are uppercase.
-* Private and protected members of a class begin with an '_'-character.
-* Accessor methods are named after their corresponding attributes:
-
- ! /**
- ! * Request private member variable
- ! */
- ! int value() const { return _value; }
- !
- ! /**
- ! * Set the private member variable
- ! */
- ! void value(int value) { _value = value; }
-
-* Accessors that return a boolean value do not carry an 'is_' prefix. E.g.,
- a method for requesting the validity of an object should be named
- 'valid()', not 'is_valid()'.
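-
- For example:
-
- ! bool valid() const { return _valid; }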
-
-
-Indentation
-===========
-
-* Use one tab per indentation step. *Do not mix tabs and spaces!*
-* Use no tabs except at the beginning of a line.
-* Use spaces for the alignment of continuation lines such as function
- arguments that span multiple lines. The alignment spaces of such lines
- should start after the (tab-indented) indentation level. For example:
- ! {
- ! function_with_many_arguments(arg1,
- ! <--- spaces for alignment --->arg2,
- ! ...
- ! }
-* Remove trailing spaces at the end of lines
-
-This way, each developer can set their preferred tab size in their editor
-and the source code always looks good.
-
-_Hint:_ In VIM, use the 'set list' and 'set listchars' commands to make tabs
-and spaces visible.
-
-* If class initializers span multiple lines, put the colon on a separate
- line and indent the initializers using one tab. For example:
- ! Complicated_machinery(Material &material, Deadline deadline)
- ! :
- ! _material(material),
- ! _deadline(deadline),
- ! ...
- ! {
- ! ...
- ! }
-
-* Preferably place statements that alter the control flow - such as
- 'break', 'continue', or 'return' - at the beginning of a separate line,
- followed by vertical space (a blank line or the closing brace of the
- surrounding scope).
- ! if (early_return_possible)
- ! return;
-
-
-Switch statements
-~~~~~~~~~~~~~~~~~
-
-Switch-statement blocks should be indented as follows:
-
-! switch (color) {
-!
-! case BLUE:
-! break;
-!
-! case GREEN:
-! {
-! int declaration_required;
-! ...
-! }
-!
-! default:
-! }
-
-Please note that the case labels have the same indentation
-level as the switch statement. This avoids a two-level
-indentation-change at the end of the switch block that
-would occur otherwise.
-
-
-Vertical whitespaces
-====================
-
-In header files:
-
-* Leave two empty lines between classes.
-* Leave one empty line between member functions.
-
-In implementation files:
-
-* Leave two empty lines between functions.
-
-
-Braces
-======
-
-* Braces after class, struct and function names are placed at a new line:
- ! class Foo
- ! {
- ! public:
- !
- ! void method(void)
- ! {
- ! ...
- ! }
- ! };
-
- except for one-line functions.
-
-* All other occurrences of open braces (for 'if', 'while', 'do', 'for',
- 'namespace', 'enum' etc.) are at the end of a line:
-
- ! if (flag) {
- ! ..
- ! } else {
- ! ..
- ! }
-
-* One-line functions should be written on a single line as long as the line
- length does not exceed approximately 80 characters.
- Typically, this applies for accessor functions.
- If slightly more space than one line is needed, indent as follows:
-
- ! int heavy_computation(int a, int lot, int of, int args) {
- ! return a + lot + of + args; }
-
-
-Comments
-========
-
-Function/method header
-~~~~~~~~~~~~~~~~~~~~~~
-
-Each public or protected (but not private) method in a header file should be
-preceded by a header as follows:
-
-! /**
-! * Short description
-! *
-! * \param a meaning of parameter a
-! * \param b meaning of parameter b
-! * \param c,d meaning of parameters c and d
-! *
-! * \throw Exception_type meaning of the exception
-! *
-! * \return meaning of return value
-! *
-! * More detailed information about the function. This is optional.
-! */
-
-Descriptions of parameters and return values should be lower-case and brief.
-More elaborate descriptions can be documented in the text area below.
-
-In implementation files, only local and private functions should feature
-function headers.
-
-
-Single-line comments
-~~~~~~~~~~~~~~~~~~~~
-
-! /* use this syntax for single line comments */
-
-A single-line comment should be preceded by an empty line.
-Single-line comments should be short - no complete sentences. Use lower-case.
-
-C++-style comments ('//') should only be used for temporarily commenting-out
-code. Such commented-out garbage is easy to 'grep' and there are handy
-'vim'-macros available for creating and removing such comments.
-
-
-Variable descriptions
-~~~~~~~~~~~~~~~~~~~~~
-
-Use the same syntax as for single-line comments. Insert two or more
-spaces before your comment starts.
-
-! int size; /* in kilobytes */
-
-
-Multi-line comments
-~~~~~~~~~~~~~~~~~~~
-
-Multi-line comments are more detailed descriptions in the form of
-sentences.
-A multi-line comment should be enclosed by empty lines.
-
-! /*
-! * This is some tricky
-! * algorithm that works
-! * as follows:
-! * ...
-! */
-
-The first and last line of a multi-line comment contain no words.
-
-
-Source-code blocks
-~~~~~~~~~~~~~~~~~~
-
-For structuring your source code, you can entitle the different
-parts of a file like this:
-
-! <- two empty lines
-!
-! /********************
-! ** Event handlers **
-! ********************/
-! <- one empty line
-
-Note the two stars at the left and right. There are two of them to
-make the visible width of the border match its height (typically,
-characters are approximately twice as high as wide).
-
-A source-code block header represents a headline for the following
-code. To couple this headline with the following code closer than
-with previous code, leave two empty lines above and one empty line
-below the source-code block header.
-
-
-Order of public, protected, and private blocks
-==============================================
-
-For consistency reasons, use the following class layout:
-
-! class Sandstein
-! {
-! private:
-! ...
-! protected:
-! ...
-! public:
-! };
-
-Typically, the private section contains member variables that are used
-by public accessor functions below. In this common case, we only reference
-symbols that are defined above, as is done when programming plain C.
-
-Leave one empty line (or a line that contains only a brace) above and below
-a 'private', 'protected', or 'public' label. This also applies when the
-label is followed by a source-code block header.
diff --git a/doc/conventions.txt b/doc/conventions.txt
index 50b3130ed0..124211dc3d 100644
--- a/doc/conventions.txt
+++ b/doc/conventions.txt
@@ -1,70 +1,333 @@
- Conventions for the Genode development
-
- Norman Feske
+ ==================================================
+ Conventions and coding-style guidelines for Genode
+ ==================================================
-Documentation
-#############
+
+Documentation and naming of files
+#################################
We use the GOSH syntax [https://github.com/nfeske/gosh] for documentation and
README files.
+We encourage that each directory contain a file called 'README' that briefly
+explains what the directory is about.
-README files
-############
+File names
+----------
-Each directory should contain a file called 'README' that briefly explains
-what the directory is about. In 'doc/Makefile' is a rule for
-generating a directory overview from the 'README' files automatically.
-
-You can structure your 'README' file by using the GOSH style for subsections:
-! Subsection
-! ~~~~~~~~~~
-Do not use chapters or sections in your 'README' files.
-
-
-Filenames
-#########
-
-All normal filenames are lowercase. Filenames should be chosen to be
-expressive. Someone who explores your files for the first time might not
+All normal file names are lowercase. File names should be chosen to be
+expressive. Someone who explores your files for the first time might not
understand what 'mbi.cc' means but 'multiboot_info.cc' would ring a bell. If a
-filename contains multiple words, use the '_' to separate them (instead of
+file name contains multiple words, use the '_' to separate them (instead of
'miscmath.h', use 'misc_math.h').
Coding style
############
-A common coding style helps a lot to ease collaboration. The official coding
-style of the Genode base components is described in 'doc/coding_style.txt'.
-If you consider working closely together with the Genode main developers,
-your adherence to this style is greatly appreciated.
+Things to avoid
+===============
+
+Please avoid using preprocessor macros. C++ provides language
+features for almost every case in which a C programmer would
+use macros.
+
+:Defining constants:
+
+  Use 'enum' instead of '#define':
+ ! enum { MAX_COLORS = 3 };
+ ! enum {
+ ! COLOR_RED = 1,
+ ! COLOR_BLUE = 2,
+ ! COLOR_GREEN = 3
+ ! };
+
+:Meta programming:
+
+ Use templates instead of pre-processor macros. In contrast to macros,
+ templates are type-safe and fit well with the implementation syntax.
+
+:Conditional-code inclusion:
+
+  Please avoid C-hacker-style '#ifdef CONFIG_PLATFORM' ... '#endif'
+  constructs. Instead, factor out the enclosed code into a
+  separate file and introduce a proper function interface.
+  The build process should then be used to select the appropriate
+  platform-specific files at compile time. Keep platform-dependent
+  code as small as possible. Never pollute existing generic code
+  with platform-specific code.
-Include files and RPC interfaces
-################################
+Header of each file
+===================
-Never place include files directly into the '/include/' directory
-but use a meaningful subdirectory that corresponds to the component that
-provides the interfaces.
-
-Each RPC interface is represented by a separate include subdirectory. For
-an example, see 'base/include/ram_session/'. The header file that defines
-the RPC function interface has the same base name as the directory. The RPC
-stubs are called 'client.h' and 'server.h'. If your interface uses a custom
-capability type, it is defined in 'capability.h'. Furthermore, if your
-interface is a session interface of a service, it is good practice to
-provide a connection class in a 'connection.h' file for managing session-
-construction arguments and the creation and destruction of sessions.
-
-Specialization-dependent include directories are placed in 'include//'.
+! /*
+! * \brief Short description of the file
+! * \author Original author
+! * \date Creation date
+! *
+! * Some more detailed description. This is optional.
+! */
-Service Names
-#############
+Identifiers
+===========
+
+* The first character of a class name is uppercase, all other characters are
+  lowercase.
+* Function and variable names are lowercase.
+* 'Multi_word_identifiers' use underscores to separate words.
+* 'CONSTANTS' and template arguments are uppercase.
+* Private and protected members of a class begin with an '_' character.
+* Accessor methods are named after their corresponding attributes:
+
+ ! /**
+ ! * Request private member variable
+ ! */
+ ! int value() const { return _value; }
+ !
+ ! /**
+ ! * Set the private member variable
+ ! */
+ ! void value(int value) { _value = value; }
+
+* Accessors that return a boolean value do not carry an 'is_' prefix. E.g.,
+ a method for requesting the validity of an object should be named
+ 'valid()', not 'is_valid()'.
+
+
+Indentation
+===========
+
+* Use one tab per indentation step. *Do not mix tabs and spaces!*
+* Use no tabs except at the beginning of a line.
+* Use spaces for the alignment of continuation lines such as function
+ arguments that span multiple lines. The alignment spaces of such lines
+ should start after the (tab-indented) indentation level. For example:
+ ! {
+ ! function_with_many_arguments(arg1,
+  ! <--- spaces for alignment --->arg2,
+ ! ...
+ ! }
+* Remove trailing spaces at the end of lines.
+
+This way, each developer can set the preferred tab size in the editor
+and the source code always looks good.
+
+_Hint:_ In VIM, use the 'set list' and 'set listchars' commands to make tabs
+and spaces visible.
+
+* If class initializers span multiple lines, put the colon on a separate
+ line and indent the initializers using one tab. For example:
+ ! Complicated_machinery(Material &material, Deadline deadline)
+ ! :
+ ! _material(material),
+ ! _deadline(deadline),
+ ! ...
+ ! {
+ ! ...
+ ! }
+
+* Preferably place statements that alter the control flow - such as
+ 'break', 'continue', or 'return' - at the beginning of a separate line,
+ followed by vertical space (a blank line or the closing brace of the
+ surrounding scope).
+ ! if (early_return_possible)
+ ! return;
+
+
+Switch statements
+~~~~~~~~~~~~~~~~~
+
+Switch-statement blocks should be indented as follows:
+
+! switch (color) {
+!
+! case BLUE:
+! break;
+!
+! case GREEN:
+! {
+! int declaration_required;
+! ...
+! }
+!
+! default:
+! }
+
+Please note that the case labels have the same indentation
+level as the switch statement. This avoids a two-level
+indentation change at the end of the switch block that
+would occur otherwise.
+
+
+Vertical whitespaces
+====================
+
+In header files:
+
+* Leave two empty lines between classes.
+* Leave one empty line between member functions.
+
+In implementation files:
+
+* Leave two empty lines between functions.
+
+
+Braces
+======
+
+* Braces after class, struct, and function names are placed on a new line:
+ ! class Foo
+ ! {
+ ! public:
+ !
+ ! void method(void)
+ ! {
+ ! ...
+ ! }
+ ! };
+
+ except for one-line functions.
+
+* All other occurrences of opening braces (for 'if', 'while', 'do', 'for',
+ 'namespace', 'enum' etc.) are at the end of a line:
+
+ ! if (flag) {
+ ! ..
+ ! } else {
+ ! ..
+ ! }
+
+* One-line functions should be written on a single line as long as the line
+ length does not exceed approximately 80 characters.
+  Typically, this applies to accessor functions.
+ If slightly more space than one line is needed, indent as follows:
+
+ ! int heavy_computation(int a, int lot, int of, int args) {
+ ! return a + lot + of + args; }
+
+
+Comments
+========
+
+Function/method header
+~~~~~~~~~~~~~~~~~~~~~~
+
+Each public or protected (but not private) method in a header file should be
+preceded by a header as follows:
+
+! /**
+! * Short description
+! *
+! * \param a meaning of parameter a
+! * \param b meaning of parameter b
+! * \param c,d meaning of parameters c and d
+! *
+! * \throw Exception_type meaning of the exception
+! *
+! * \return meaning of return value
+! *
+! * More detailed information about the function. This is optional.
+! */
+
+Descriptions of parameters and return values should be lower-case and brief.
+More elaborate descriptions can be given in the text area below.
+
+In implementation files, only local and private functions should feature
+function headers.
+
+
+Single-line comments
+~~~~~~~~~~~~~~~~~~~~
+
+! /* use this syntax for single line comments */
+
+A single-line comment should be preceded by an empty line.
+Single-line comments should be short - no complete sentences. Use lower-case.
+
+C++-style comments ('//') should only be used for temporarily commenting-out
+code. Such commented-out garbage is easy to 'grep' and there are handy
+'vim'-macros available for creating and removing such comments.
+
+
+Variable descriptions
+~~~~~~~~~~~~~~~~~~~~~
+
+Use the same syntax as for single-line comments. Insert two or more
+spaces before your comment starts.
+
+! int size; /* in kilobytes */
+
+
+Multi-line comments
+~~~~~~~~~~~~~~~~~~~
+
+Multi-line comments are more detailed descriptions in the form of
+sentences.
+A multi-line comment should be enclosed by empty lines.
+
+! /*
+! * This is some tricky
+! * algorithm that works
+! * as follows:
+! * ...
+! */
+
+The first and last line of a multi-line comment contain no words.
+
+
+Source-code blocks
+~~~~~~~~~~~~~~~~~~
+
+For structuring your source code, you can give the different
+parts of a file a headline like this:
+
+! <- two empty lines
+!
+! /********************
+! ** Event handlers **
+! ********************/
+! <- one empty line
+
+Note the two stars at the left and right. There are two of them to
+make the visible width of the border match its height (typically,
+characters are about twice as high as they are wide).
+
+A source-code block header represents a headline for the code that
+follows. To couple this headline more closely with the following code
+than with the preceding code, leave two empty lines above and one
+empty line below the source-code block header.
+
+
+Order of public, protected, and private blocks
+==============================================
+
+For consistency reasons, use the following class layout:
+
+! class Sandstein
+! {
+! private:
+! ...
+! protected:
+! ...
+! public:
+! };
+
+Typically, the private section contains member variables that are used
+by public accessor functions below. In this common case, we reference
+only symbols that are defined above, as is done when programming plain C.
+
+Leave one empty line (or a line that contains only a brace) above and below
+a 'private', 'protected', or 'public' label. This also applies when the
+label is followed by a source-code block header.
+
+
+Naming of Genode services
+=========================
 Service names as announced via the 'parent()->announce()' function adhere to
 the following convention:
diff --git a/doc/depot.txt b/doc/depot.txt
deleted file mode 100644
index abd9196a13..0000000000
--- a/doc/depot.txt
+++ /dev/null
@@ -1,514 +0,0 @@
-
-
- ============================
- Package management on Genode
- ============================
-
-
- Norman Feske
-
-
-
-Motivation and inspiration
-##########################
-
-The established system-integration work flow with Genode is based on
-the 'run' tool, which automates the building, configuration, integration,
-and testing of Genode-based systems. Whereas the run tool succeeds in
-overcoming the challenges that come with Genode's diversity of kernels and
-supported hardware platforms, its scalability is somewhat limited to
-appliance-like system scenarios: The result of the integration process is
-a system image with a certain feature set. Whenever requirements change,
-the system image is replaced with a newly created image that takes those
-requirements into account. In practice, there are two limitations of this
-system-integration approach:
-
-First, since the run tool implicitly builds all components required for a
-system scenario, the system integrator has to compile all components from
-source. E.g., if a system includes a component based on Qt5, one needs to
-compile the entire Qt5 application framework, which induces significant
-overhead to the actual system-integration tasks of composing and configuring
-components.
-
-Second, general-purpose systems tend to become too complex and diverse to be
-treated as system images. When looking at commodity OSes, each installation
-differs with respect to the installed set of applications, user preferences,
-used device drivers and system preferences. A system based on the run tool's
-work flow would require the user to customize the run script of the system for
-each tweak. To stay up to date, the user would need to re-create the
-system image from time to time while manually maintaining any customizations.
-In practice, this is a burden that very few end users are willing to endure.
-
-The primary goal of Genode's package management is to overcome these
-scalability limitations, in particular:
-
-* Alleviating the need to build everything that goes into system scenarios
- from scratch,
-* Facilitating modular system compositions while abstracting from technical
- details,
-* On-target system update and system development,
-* Assuring the user that system updates are safe to apply by providing the
- ability to easily roll back the system or parts thereof to previous versions,
-* Securing the integrity of the deployed software,
-* Fostering a federalistic evolution of Genode systems,
-* Low friction for existing developers.
-
-The design of Genode's package-management concept is largely influenced by Git
-as well as the [https://nixos.org/nix/ - Nix] package manager. In particular
-the latter opened our eyes to discover the potential that lies beyond the
-package management employed in state-of-the-art commodity systems. Even though
-we considered adapting Nix for Genode and actually conducted intensive
-experiments in this direction (thanks to Emery Hemingway who pushed forward
-this line of work), we settled on a custom solution that leverages Genode's
-holistic view on all levels of the operating system including the build system
-and tooling, source structure, ABI design, framework API, system
-configuration, inter-component interaction, and the components themselves.
-Whereas Nix is designed for being used on top of Linux, Genode's whole-system
-view led us to simplifications that eliminated the need for Nix's powerful
-features like its custom description language.
-
-
-Nomenclature
-############
-
-When speaking about "package management", one has to clarify what a "package"
-in the context of an operating system represents. Traditionally, a package
-is the unit of delivery of a bunch of "dumb" files, usually wrapped up in
-a compressed archive. A package may depend on the presence of other
-packages. Thereby, a dependency graph is formed. To express how packages fit
-with each other, a package is usually accompanied with meta data
-(description). Depending on the package manager, package descriptions follow
-certain formalisms (e.g., package-description language) and express
-more-or-less complex concepts such as versioning schemes or the distinction
-between hard and soft dependencies.
-
-Genode's package management does not follow this notion of a "package".
-Instead of subsuming all deliverable content under one term, we distinguish
-different kinds of content, each in a tailored and simple form. To avoid the
-clash of the notions of the common meaning of a "package", we speak of
-"archives" as the basic unit of delivery. The following subsections introduce
-the different categories.
-Archives are named with their version as suffix, appended via a slash. The
-suffix is maintained by the author of the archive. The recommended naming
-scheme is the use of the release date as version suffix, e.g.,
-'report_rom/2017-05-14'.
-
-
-Raw-data archives
-=================
-
-A raw-data archive contains arbitrary data that is - in contrast to executable
-binaries - independent from the processor architecture. Examples are
-configuration data, game assets, images, or fonts. The content of raw-data
-archives is expected to be consumed by components at runtime. It is not
-relevant for the build process for executable binaries. Each raw-data
-archive contains merely a collection of data files. There is no meta data.
-
-
-API archive
-===========
-
-An API archive has the structure of a Genode source-code repository. It may
-contain all the typical content of such a source-code repository such as header
-files (in the _include/_ subdirectory), source codes (in the _src/_
-subdirectory), library-description files (in the _lib/mk/_ subdirectory), or
-ABI symbols (_lib/symbols/_ subdirectory). At the top level, a LICENSE file is
-expected that clarifies the license of the contained source code. There is no
-meta data contained in an API archive.
-
-An API archive is meant to provide _ingredients_ for building components. The
-canonical example is the public programming interface of a library (header
-files) and the library's binary interface in the form of an ABI-symbols file.
-One API archive may contain the interfaces of multiple libraries. For example,
-the interfaces of libc and libm may be contained in a single "libc" API
-archive because they are closely related to each other. Conversely, an API
-archive may contain a single header file only. The granularity of those
-archives may vary. But they have in common that they are used at build time
-only, not at runtime.
-
-
-Source archive
-==============
-
-Like an API archive, a source archive has the structure of a Genode
-source-tree repository and is expected to contain all the typical content of
-such a source repository along with a LICENSE file. But unlike an API archive,
-it contains descriptions of actual build targets in the form of Genode's usual
-'target.mk' files.
-
-In addition to the source code, a source archive contains a file
-called 'used_apis', which contains a list of API-archive names with each
-name on a separate line. For example, the 'used_apis' file of the 'report_rom'
-source archive looks as follows:
-
-! base/2017-05-14
-! os/2017-05-13
-! report_session/2017-05-13
-
-The 'used_apis' file declares the APIs needed to incorporate into the build
-process when building the source archive. Hence, they represent _build-time_
-_dependencies_ on the specific API versions.
-
-A source archive may be equipped with a top-level file called 'api' containing
-the name of exactly one API archive. If present, it declares that the source
-archive _implements_ the specified API. For example, the 'libc/2017-05-14'
-source archive contains the actual source code of the libc and libm as well as
-an 'api' file with the content 'libc/2017-04-13'. The latter refers to the API
-implemented by this version of the libc source archive (note the differing
-versions of the API and source archives).
-
-
-Binary archive
-==============
-
-A binary archive contains the build result of the equally-named source archive
-when built for a particular architecture. That is, all files that would appear
-at the _<build-dir>/bin/_ subdirectory when building all targets present in
-the source archive. There is no meta data present in a binary archive.
-
-A binary archive is created out of the content of its corresponding source
-archive and all API archives listed in the source archive's 'used_apis' file.
-Note that since a binary archive depends on only one source archive, which
-has no further dependencies, all binary archives can be built independently
-from each other.
-For example, a libc-using application needs the source code of the
-application as well as the libc's API archive (the libc's header file and
-ABI) but it does not need the actual libc library to be present.
-
-
-Package archive
-===============
-
-A package archive contains an 'archives' file with a list of archive names
-that belong together at runtime. Each listed archive appears on a separate line.
-For example, the 'archives' file of the package archive for the window
-manager 'wm/2018-02-26' looks as follows:
-
-! genodelabs/raw/wm/2018-02-14
-! genodelabs/src/wm/2018-02-26
-! genodelabs/src/report_rom/2018-02-26
-! genodelabs/src/decorator/2018-02-26
-! genodelabs/src/floating_window_layouter/2018-02-26
-
-In contrast to the list of 'used_apis' of a source archive, the content of
-the 'archives' file denotes the origin of the respective archives
-("genodelabs"), the archive type, followed by the versioned name of the
-archive.
-
-An 'archives' file may specify raw archives, source archives, or package
-archives (as type 'pkg'). It thereby allows the expression of _runtime
-dependencies_. If a package archive lists another package archive, it inherits
-the content of the listed archive. This way, a new package archive may easily
-customize an existing package archive.
-
-A package archive does not specify binary archives directly as they differ
-between architectures and are already referenced by the source archives.
-
-In addition to an 'archives' file, a package archive is expected to contain
-a 'README' file explaining the purpose of the collection.
-
-
-Depot structure
-###############
-
-Archives are stored within a directory tree called _depot/_. The depot
-is structured as follows:
-
-! <user>/pubkey
-! <user>/download
-! <user>/src/<name>/<version>/
-! <user>/api/<name>/<version>/
-! <user>/raw/<name>/<version>/
-! <user>/pkg/<name>/<version>/
-! <user>/bin/<arch>/<name>/<version>/
-
-The <user> stands for the origin of the contained archives. For example, the
-official archives provided by Genode Labs reside in a _genodelabs/_
-subdirectory. Within this directory, there is a 'pubkey' file with the
-user's public key that is used to verify the integrity of archives downloaded
-from this user. The file 'download' specifies the download location as a URL.
-
-Subsuming archives in a subdirectory that corresponds to their origin
-(user) serves two purposes. First, it provides a user-local name space for
-versioning archives. E.g., there might be two versions of a
-'nitpicker/2017-04-15' source archive, one by "genodelabs" and one by
-"nfeske". However, since each version resides under its origin's subdirectory,
-version-naming conflicts between different origins cannot happen. Second, by
-allowing multiple archive origins in the depot side-by-side, package archives
-may incorporate archives of different origins, which fosters the goal of a
-federalistic development, where contributions of different origins can be
-easily combined.
-
-The actual archives are stored in the subdirectories named after the archive
-types ('raw', 'api', 'src', 'bin', 'pkg'). Archives contained in the _bin/_
-subdirectories are further subdivided in the various architectures (like
-'x86_64', or 'arm_v7').
-
-
-Depot management
-################
-
-The tools for managing the depot content reside under the _tool/depot/_
-directory. When invoked without arguments, each tool prints a brief
-description of the tool and its arguments.
-
-Unless stated otherwise, the tools are able to consume any number of archives
-as arguments. By default, they perform their work sequentially. This can be
-changed by the '-j<N>' argument, where '<N>' denotes the desired level of
-parallelization. For example, by specifying '-j4' to the _tool/depot/build_
-tool, four concurrent jobs are executed during the creation of binary archives.
-
-
-Downloading archives
-====================
-
-The depot can be populated with archives in two ways, either by creating
-the content from locally available source codes as explained by Section
-[Automated extraction of archives from the source tree], or by downloading
-ready-to-use archives from a web server.
-
-In order to download archives originating from a specific user, the depot's
-corresponding user subdirectory must contain two files:
-
-:_pubkey_: contains the public key of the GPG key pair used by the creator
- (aka "user") of the to-be-downloaded archives for signing the archives. The
- file contains the ASCII-armored version of the public key.
-
-:_download_: contains the base URL of the web server from which archives are
- fetched. The web server is expected to mirror the structure of the depot.
- That is, the base URL is followed by a subdirectory for the user,
- which contains the archive-type-specific subdirectories.
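For illustration, preparing such a user subdirectory could look as follows.
The user name "alice" and the URL are placeholders, not the official Genode
Labs depot:

```shell
# create the depot subdirectory for the archive origin "alice"
mkdir -p depot/alice

# 'download' holds the base URL of the server mirroring the depot layout
echo "https://depot.example.org" > depot/alice/download

# 'pubkey' would hold the ASCII-armored GPG public key of the archive
# creator, e.g., obtained out-of-band and exported via:
#   gpg --armor --export <key-id> > depot/alice/pubkey

cat depot/alice/download   # prints https://depot.example.org
```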
-
-If both the public key and the download locations are defined, the download
-tool can be used as follows:
-
-! ./tool/depot/download genodelabs/src/zlib/2018-01-10
-
-The tool automatically downloads the specified archives and their
-dependencies. For example, as the zlib depends on the libc API, the libc API
-archive is downloaded as well. All archive types are accepted as arguments
-including binary and package archives. Furthermore, it is possible to download
-all binary archives referenced by a package archive. For example, the
-following command downloads the window-manager (wm) package archive including
-all binary archives for the 64-bit x86 architecture. Downloaded binary
-archives are always accompanied with their corresponding source and used API
-archives.
-
-! ./tool/depot/download genodelabs/pkg/x86_64/wm/2018-02-26
-
-Archive content is not downloaded directly to the depot. Instead, the
-individual archives and signature files are downloaded to a quarantine area in
-the form of a _public/_ directory located in the root of Genode's source tree.
-As its name suggests, the _public/_ directory contains data that is imported
-from or to-be exported to the public. The download tool populates it with the
-downloaded archives in their compressed form accompanied with their
-signatures.
-
-The compressed archives are not extracted before their signature is checked
-against the public key defined at _depot/<user>/pubkey_. If the
-signature is valid, the archive content is imported to the target destination
-within the depot. This procedure ensures that depot content - whenever
-downloaded - is blessed by a cryptographic signature of its creator.
-
-
-Building binary archives from source archives
-=============================================
-
-With the depot populated with source and API archives, one can use the
-_tool/depot/build_ tool to produce binary archives. The arguments have the
-form '<user>/bin/<arch>/<name>/<version>' where '<arch>' stands for the targeted
-CPU architecture. For example, the following command builds the 'zlib'
-library for the 64-bit x86 architecture. It executes four concurrent jobs
-during the build process.
-
-! ./tool/depot/build genodelabs/bin/x86_64/zlib/2018-01-10 -j4
-
-Note that the command expects a specific version of the source archive as
-argument. Since the depot may contain several versions, the user has to
-decide which one to build.
-
-After the tool is finished, the freshly built binary archive can be found in
-the depot within the _genodelabs/bin/<arch>/<name>/<version>/_ subdirectory.
-Only the final result of the build process is preserved. In the example above,
-that would be the _zlib.lib.so_ library.
-
-For debugging purposes, it might be interesting to inspect the intermediate
-state of the build. This is possible by adding 'KEEP_BUILD_DIR=1' as argument
-to the build command. The binary's intermediate build directory can be
-found beside the binary archive's location, named with a '.build' suffix.
-
-By default, the build tool won't attempt to rebuild a binary archive that is
-already present in the depot. However, it is possible to force a rebuild via
-the 'REBUILD=1' argument.
-
-
-Publishing archives
-===================
-
-Archives located in the depot can be conveniently made available to the public
-using the _tool/depot/publish_ tool. Given an archive path, the tool takes
-care of determining all archives that are implicitly needed by the specified
-one, wrapping the archive's content into compressed tar archives, and signing
-those.
-
-As a precondition, the tool requires you to possess the private key that
-matches the _depot/<user>/pubkey_ file within your depot. The key pair should
-be present in the key ring of your GNU privacy guard.
-
-To publish archives, one needs to specify the specific version to publish.
-For example:
-
-! ./tool/depot/publish <user>/pkg/x86_64/wm/2018-02-26
-
-The command checks that the specified archive and all dependencies are present
-in the depot. It then proceeds with the archiving and signing operations. For
-the latter, the pass phrase for your private key will be requested. The
-publish tool prints the information about the processed archives, e.g.:
-
-! publish /.../public/<user>/api/base/2018-02-26.tar.xz
-! publish /.../public/<user>/api/framebuffer_session/2017-05-31.tar.xz
-! publish /.../public/<user>/api/gems/2018-01-28.tar.xz
-! publish /.../public/<user>/api/input_session/2018-01-05.tar.xz
-! publish /.../public/<user>/api/nitpicker_gfx/2018-01-05.tar.xz
-! publish /.../public/<user>/api/nitpicker_session/2018-01-05.tar.xz
-! publish /.../public/<user>/api/os/2018-02-13.tar.xz
-! publish /.../public/<user>/api/report_session/2018-01-05.tar.xz
-! publish /.../public/<user>/api/scout_gfx/2018-01-05.tar.xz
-! publish /.../public/<user>/bin/x86_64/decorator/2018-02-26.tar.xz
-! publish /.../public/<user>/bin/x86_64/floating_window_layouter/2018-02-26.tar.xz
-! publish /.../public/<user>/bin/x86_64/report_rom/2018-02-26.tar.xz
-! publish /.../public/<user>/bin/x86_64/wm/2018-02-26.tar.xz
-! publish /.../public/<user>/pkg/wm/2018-02-26.tar.xz
-! publish /.../public/<user>/raw/wm/2018-02-14.tar.xz
-! publish /.../public/<user>/src/decorator/2018-02-26.tar.xz
-! publish /.../public/<user>/src/floating_window_layouter/2018-02-26.tar.xz
-! publish /.../public/<user>/src/report_rom/2018-02-26.tar.xz
-! publish /.../public/<user>/src/wm/2018-02-26.tar.xz
-
-
-According to the output, the tool populates a directory called _public/_
-at the root of the Genode source tree with the to-be-published archives.
-The content of the _public/_ directory is now ready to be copied to a
-web server, e.g., by using rsync.
-
-
-Automated extraction of archives from the source tree
-#####################################################
-
-Genode users are expected to populate their local depot with content obtained
-via the _tool/depot/download_ tool. However, Genode developers need a way to
-create depot archives locally in order to make them available to users. Thanks
-to the _tool/depot/extract_ tool, the assembly of archives does not need to be
-a manual process. Instead, archives can be conveniently generated out of the
-source codes present in the Genode source tree and the _contrib/_ directory.
-
-However, the granularity of splitting source code into archives, the
-definition of what a particular API entails, and the relationship between
-archives must be augmented by the archive creator as this kind of information
-is not present in the source tree as is. This is where so-called "archive
-recipes" enter the picture. An archive recipe defines the content of an
-archive. Such recipes can be located at a _recipes/_ subdirectory of any
-source-code repository, similar to how port descriptions and run scripts
-are organized. Each _recipes/_ directory contains subdirectories for the
-archive types, which, in turn, contain a directory for each archive. The
-latter is called a _recipe directory_.
-
-Recipe directory
-----------------
-
-The recipe directory is named after the archive _omitting the archive version_
-and contains at least one file named _hash_. This file defines the version
-of the archive along with a hash value of the archive's content
-separated by a space character. By tying the version name to a particular hash
-value, the _extract_ tool is able to detect the appropriate points in time
-whenever the version should be increased due to a change of the archive's
-content.
-
-API, source, and raw-data archive recipes
------------------------------------------
-
-Recipe directories for API, source, or raw-data archives contain a
-_content.mk_ file that defines the archive content in the form of make
-rules. The content.mk file is executed from the archive's location within
-the depot. Hence, the contained rules can refer to archive-relative files
-as targets.
-The first (default) rule of the content.mk file is executed with a customized
-make environment:
-
-:GENODE_DIR: A variable that holds the path to the root of the Genode source
- tree
-:REP_DIR: A variable with the path to the source-code repository where the
- recipe is located
-:port_dir: A make function that returns the directory of a port within the
- _contrib/_ directory. The function expects the location of the
- corresponding port file as argument, for example, the 'zlib' recipe
- residing in the _libports/_ repository may specify '$(REP_DIR)/ports/zlib'
- to access the 3rd-party zlib source code.
-
-Source archive recipes contain simplified versions of the 'used_apis' and
-(for libraries) 'api' files as found in the archives. In contrast to the
-depot's counterparts of these files, which contain version-suffixed names,
-the files contained in recipe directories omit the version suffix. This
-is possible because the extract tool always extracts the _current_ version
-of a given archive from the source tree. This current version is already
-defined in the corresponding recipe directory.
-
-Package-archive recipes
------------------------
-
-The recipe directory for a package archive contains the verbatim content of
-the to-be-created package archive except for the _archives_ file. All other
-files are copied verbatim to the archive. The content of the recipe's
-_archives_ file may omit the version information from the listed ingredients.
-Furthermore, the user part of each entry can be left blank by using '_' as a
-wildcard. When generating the package archive from the recipe, the extract
-tool will replace this wildcard with the user that creates the archive.
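-
-For instance, the _archives_ file of a hypothetical package recipe might list
-the following ingredients, leaving both the user part and the versions to be
-filled in by the extract tool (the entries are illustrative only):
-
-! _/src/init
-! _/src/nitpicker
-! _/raw/drivers_interactive-pc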
-
-
-Convenience front-end to the extract, build tools
-#################################################
-
-For developers, the work flow of interacting with the depot is most often the
-combination of the _extract_ and _build_ tools, whereby the latter expects
-concrete version names as arguments. The _create_ tool accelerates this common
-usage pattern by allowing the user to omit the version names. Operations
-implicitly refer to the _current_ version of the archives as defined in
-the recipes.
-
-Furthermore, the _create_ tool is able to manage version updates for the
-developer. If invoked with the argument 'UPDATE_VERSIONS=1', it automatically
-updates hash files of the involved recipes by taking the current date as
-version name. This is a valuable assistance in situations where a commonly
-used API changes. In this case, the versions of the API and all dependent
-archives must be increased, which would be a labour-intensive task otherwise.
-If the depot already contains an archive of the current version, the create
-tool won't re-create the depot archive by default. Local modifications of
-the source code in the repository do not automatically result in a new archive.
-To ensure that the depot archive is current, one can specify 'FORCE=1' to
-the create tool. With this argument, existing depot archives are replaced by
-freshly extracted ones and version updates are detected. When specified for
-creating binary archives, 'FORCE=1' normally implies 'REBUILD=1'. To prevent
-the superfluous rebuild of binary archives whose source versions remain
-unchanged, 'FORCE=1' can be combined with the argument 'REBUILD='.
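-
-For example, re-creating the archives needed for a hypothetical package of
-user 'alice' while updating the versions of changed recipes, without
-rebuilding unchanged binary archives, might be invoked as follows (the
-package name is made up):
-
-! ./tool/depot/create alice/pkg/window_manager UPDATE_VERSIONS=1 FORCE=1 REBUILD=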
-
-
-Accessing depot content from run scripts
-########################################
-
-The depot tools are not meant to replace the run tool but rather to complement
-it. When both tools are combined, the run tool implicitly refers to "current"
-archive versions as defined for the archive's corresponding recipes. This way,
-the regular run-tool work flow can be maintained while attaining a
-productivity boost by fetching content from the depot instead of building it.
-
-Run scripts can use the 'import_from_depot' function to incorporate archive
-content from the depot into a scenario. The function must be called after the
-'create_boot_directory' function and takes any number of pkg, src, or raw
-archives as arguments. An archive is specified as a depot-relative path of
-the form '<user>/<type>/name'. Run scripts may call 'import_from_depot'
-repeatedly. Each argument can refer to a specific version of an archive or
-just the version-less archive name. In the latter case, the current version
-(as defined by a corresponding archive recipe in the source tree) is used.
-
-If a 'src' archive is specified, the run tool integrates the content of
-the corresponding binary archive into the scenario. The binary archives
-are selected according to the spec values as defined for the build directory.
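-
-Put together, a minimal run-script fragment using the depot might look as
-follows (the selected archives are examples only):
-
-! create_boot_directory
-!
-! import_from_depot genodelabs/src/base-linux \
-!                   genodelabs/src/init \
-!                   genodelabs/raw/drivers_interactive-pc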
-
diff --git a/doc/getting_started.txt b/doc/getting_started.txt
deleted file mode 100644
index e9679cd351..0000000000
--- a/doc/getting_started.txt
+++ /dev/null
@@ -1,154 +0,0 @@
-
- =============================
- How to start exploring Genode
- =============================
-
- Norman Feske
-
-
-Abstract
-########
-
-This guide is meant to provide you with a painless start using the Genode OS
-Framework. It explains the steps needed to get a simple demo system running
-on Linux first, followed by the instructions on how to run the same scenario
-on a microkernel.
-
-
-Quick start to build Genode for Linux
-#####################################
-
-The best starting point for exploring Genode is to run it on Linux. Make sure
-that your system satisfies the following requirements:
-
-* GNU Make version 3.81 or newer
-* 'libsdl2-dev', 'libdrm-dev', and 'libgbm-dev' (needed to run interactive
- system scenarios directly on Linux)
-* 'tclsh' and 'expect'
-* 'byacc' (only needed for the L4/Fiasco kernel)
-* 'qemu' and 'xorriso' (for testing non-Linux platforms via Qemu)
-
-For using the entire collection of ported 3rd-party software, the following
-packages should be installed additionally: 'autoconf2.64', 'autogen', 'bison',
-'flex', 'g++', 'git', 'gperf', 'libxml2-utils', 'subversion', and 'xsltproc'.
-
-Your exploration of Genode starts with obtaining the source code of the
-[https://sourceforge.net/projects/genode/files/latest/download - latest version]
-of the framework. For detailed instructions and alternatives to the
-download from Sourceforge please refer to [https://genode.org/download].
-Furthermore, you will need to install the official Genode tool chain, which
-you can download at [https://genode.org/download/tool-chain].
-
-The Genode build system never touches the source tree but generates object
-files, libraries, and programs in a dedicated build directory. We do not have a
-build directory yet. For a quick start, let us create one for the Linux base
-platform:
-
-! cd <genode-dir>
-! ./tool/create_builddir x86_64
-
-This creates a new build directory for building x86_64 binaries in
-'./build/x86_64'. The build system creates unified binaries that work on the
-given architecture independent of the underlying base platform, in this case
-Linux.
-
-Now change into the fresh build directory:
-
-! cd build/x86_64
-
-Please uncomment the following line in 'etc/build.conf' to make the
-build process as smooth as possible.
-
-! RUN_OPT += --depot-auto-update
-
-To give Genode a try, build and execute a simple demo scenario via:
-
-! make KERNEL=linux BOARD=linux run/demo
-
-By invoking 'make' with the 'run/demo' argument, all components needed by the
-demo scenario are built and the demo is executed. This includes all components
-which are implicitly needed by the base platform. The base platform that the
-components will be executed upon is selected via the 'KERNEL' and 'BOARD'
-variables. If you are interested in looking behind the scenes of the demo
-scenario, please refer to 'doc/build_system.txt' and the run script at
-'os/run/demo.run'.
-
-
-Using platforms other than Linux
-================================
-
-Running Genode on Linux is the most convenient way to get acquainted with the
-framework. However, the point where Genode starts to shine is when used as the
-user land executed on a microkernel. The framework supports a variety of
-different kernels such as L4/Fiasco, L4ka::Pistachio, OKL4, and NOVA. Those
-kernels largely differ in terms of feature sets, build systems, tools, and boot
-concepts. To relieve you from dealing with those peculiarities, Genode provides
-you with a unified way of using them. For each kernel platform, there exists
-a dedicated description file that enables the 'prepare_port' tool to fetch and
-prepare the designated 3rd-party sources. Just issue the following command
-within the toplevel directory of the Genode source tree:
-
-! ./tool/ports/prepare_port <kernel>
-
-Note that each 'base-<platform>' directory comes with a 'README' file, which
-you should revisit first when exploring the base platform. Additionally, most
-'base-<platform>' directories provide more in-depth information within their
-respective 'doc/' subdirectories.
-
-For the VESA driver on x86, the x86emu library is required and can be
-downloaded and prepared by again invoking the 3rd-party sources preparation
-tool:
-
-! ./tool/ports/prepare_port x86emu
-
-On x86 base platforms the GRUB2 boot loader is required and can be
-downloaded and prepared by invoking:
-
-! ./tool/ports/prepare_port grub2
-
-Now that the base platform is prepared, the 'create_builddir' tool can be used
-to create a build directory for your architecture of choice by giving the
-architecture as argument. To see the list of available architectures, execute
-'create_builddir' with no arguments. Note that not all kernels support all
-architectures.
-
-For example, to give the demo scenario a spin on the OKL4 kernel, the following
-steps are required:
-
-# Download the kernel:
- ! cd <genode-dir>
- ! ./tool/ports/prepare_port okl4
-# Create a build directory
- ! ./tool/create_builddir x86_32
-# Uncomment the following line in 'x86_32/etc/build.conf'
- ! REPOSITORIES += $(GENODE_DIR)/repos/libports
-# Build and execute the demo using Qemu
- ! make -C build/x86_32 KERNEL=okl4 BOARD=pc run/demo
-
-The procedure works analogously for the other base platforms. You can, however,
-reuse the already created build directory and skip its creation step if the
-architecture matches.
-
-
-How to proceed with exploring Genode
-####################################
-
-Now that you have taken the first steps into using Genode, you may seek to
-get more in-depth knowledge and practical experience. The foundation for doing
-so is a basic understanding of the build system. The documentation at
-'build_system.txt' provides you with the information about the layout of the
-source tree, how new components are integrated, and how complete system
-scenarios can be expressed. Equipped with this knowledge, it is time to get
-hands-on experience with creating custom Genode components. A good start is the
-'hello_tutorial', which shows you how to implement a simple client-server
-scenario. To compose complex scenarios out of many small components, the
-documentation of Genode's configuration concept at 'os/doc/init.txt' is an
-essential reference.
-
-Certainly, you will have further questions on your way exploring Genode.
-The best place to get these questions answered is the Genode mailing list.
-Please feel welcome to ask your questions and to join the discussions:
-
-:Genode Mailing Lists:
-
- [https://genode.org/community/mailing-lists]
-
diff --git a/doc/gsoc_2012.txt b/doc/gsoc_2012.txt
deleted file mode 100644
index 0f11fa4229..0000000000
--- a/doc/gsoc_2012.txt
+++ /dev/null
@@ -1,236 +0,0 @@
-
-
- ==========================
- Google Summer of Code 2012
- ==========================
-
-
-Genode Labs has applied as mentoring organization for the Google Summer of Code
-program in 2012. This document summarizes all information important to Genode's
-participation in the program.
-
-:[http://www.google-melange.com/gsoc/homepage/google/gsoc2012]:
- Visit the official homepage of the Google Summer of Code program.
-
-*Update* Genode Labs was not accepted as mentoring organization for GSoC 2012.
-
-
-Application of Genode Labs as mentoring organization
-####################################################
-
-:Organization ID: genodelabs
-
-:Organization name: Genode Labs
-
-:Organization description:
-
- Genode Labs is a self-funded company founded by the original creators of the
- Genode OS project. Its primary mission is to bring the Genode operating-system
- technology, which started off as an academic research project, to the real
- world. At present, Genode Labs is the driving force behind the Genode OS
- project.
-
-:Organization home page url:
-
- http://www.genode-labs.com
-
-:Main organization license:
-
- GNU General Public License version 2
-
-:Admins:
-
- nfeske, chelmuth
-
-:What is the URL for your Ideas page?:
-
- [http://genode.org/community/gsoc_2012]
-
-:What is the main IRC channel for your organization?:
-
- #genode
-
-:What is the main development mailing list for your organization?:
-
- genode-main@lists.sourceforge.net
-
-:Why is your organization applying to participate? What do you hope to gain?:
-
- During the past three months, our project underwent the transition from a
- formerly company-internal development to a completely open and transparent
- endeavour. By inviting a broad community for participation in shaping the
- project, we hope to advance Genode to become a broadly used and recognised
- technology. GSoC would help us to build our community.
-
- The project has its roots at the University of Technology Dresden where the
- Genode founders were former members of the academic research staff. We have
- a long and successful track record with regard to supervising students. GSoC
- would provide us with the opportunity to establish and cultivate
- relationships with new students and to spark excitement about Genode OS
- technology.
-
-:Does your organization have an application template?:
-
- GSoC student projects follow the same procedure as regular community
- contributions, in particular the student is expected to sign the Genode
- Contributor's Agreement. (see [http://genode.org/community/contributions])
-
-:What criteria did you use to select your mentors?:
-
- We selected the mentors on the basis of their long-time involvement with the
- project and their time-tested communication skills. For each proposed working
- topic, there is at least one stakeholder with profound technical background within
- Genode Labs. This person will be the primary contact person for the student
- working on the topic. However, we will encourage the student to make his/her
- development transparent to all community members (i.e., via GitHub). So
- any community member interested in the topic is able to bring in his/her
- ideas at any stage of development. Consequently, in practice, there will be
- multiple persons mentoring each student.
-
-:What is your plan for dealing with disappearing students?:
-
- Actively contact them using all channels of communication available to us,
- find out the reason for the disappearance, and try to resolve the problems
- (if they are related to GSoC or our project, for that matter).
-
-:What is your plan for dealing with disappearing mentors?:
-
- All designated mentors are local to Genode Labs. So the chance for them to
- disappear is very low. However, if a mentor disappears for any serious reason
- (e.g., serious illness), our organization will provide a back-up mentor.
-
-:What steps will you take to encourage students to interact with your community?:
-
- First, we discussed GSoC on our mailing list, where we received a very
- positive response. We checked back with other open-source projects related to
- our topics, exchanged ideas, and tried to find synergies between our
- respective projects. For most project ideas, we have created issues in our
- issue tracker to collect technical information and discuss the topic.
- For several topics, we already observed interests of students to participate.
-
- During the work on the topics, the mentors will try to encourage the
- students to play an active role in discussions on our mailing list, also on
- topics that are not strictly related to the student project. We regard an
- active participation as key to enabling new community members to develop a
- holistic view onto our project and gather a profound understanding of our
- methodologies.
-
- Student projects will be carried out in a transparent fashion at GitHub.
- This makes it easy for each community member to get involved, discuss
- the rationale behind design decisions, and audit solutions.
-
-
-Topics
-######
-
-While discussing GSoC participation on our mailing list, we identified the
-following topics as being well suited for GSoC projects. However, if none of
-those topics resonates with students, there is a more comprehensive list
-of topics available on our road map and in our collection of future challenges:
-
-:[http://genode.org/about/road-map]: Road-map
-:[http://genode.org/about/challenges]: Challenges
-
-
-Combining Genode with the HelenOS/SPARTAN kernel
-================================================
-
-[http://www.helenos.org - HelenOS] is a microkernel-based multi-server OS
-developed at Charles University in Prague. It is based on the SPARTAN microkernel,
-which runs on a wide variety of CPU architectures including SPARC, MIPS, and
-PowerPC. This broad platform support makes SPARTAN an interesting kernel to
-look at alone. But a further motivation is the fact that SPARTAN does not
-follow the classical L4 road, providing a kernel API that comes with its own
-terminology and different kernel primitives. This makes the mapping of
-SPARTAN's kernel API to Genode a challenging endeavour and would provide us
-with feedback regarding the universality of Genode's internal interfaces.
-Finally, this project has the potential to ignite a further collaboration
-between the HelenOS and Genode communities.
-
-
-Block-level encryption
-======================
-
-Protecting privacy is one of the strongest motivational factors for developing
-Genode. One pivotal element in that respect is the persistence of information
-via block-level encryption. For example, to use Genode every day at Genode
-Labs, it's crucial to protect the confidentiality of some information that's
-not part of the Genode code base, e.g., emails and reports. There are several
-expansion stages imaginable to reach the goal and the basic building blocks
-(block-device interface, ATA/SATA driver for Qemu) are already in place.
-
-:[https://github.com/genodelabs/genode/issues/55 - Discuss the issue...]:
-
-
-Virtual NAT
-===========
-
-For sharing one physical network interface among multiple applications, Genode
-comes with a component called nic_bridge, which implements proxy ARP. Through
-this component, each application receives a distinct (virtual) network
-interface that is visible to the real network. I.e., each application requests
-an IP address via a DHCP request at the local network. An alternative approach
-would be a component that implements NAT on Genode's NIC session interface.
-This way, the whole Genode system would use only one IP address visible to the
-local network. (by stacking multiple nat and nic_bridge components together, we
-could even form complex virtual networks inside a single Genode system)
-
-The implementation of the virtual NAT could follow the lines of the existing
-nic_bridge component. For parsing network packets, there are already some handy
-utilities available (at os/include/net/).
-
-:[https://github.com/genodelabs/genode/issues/114 - Discuss the issue...]:
-
-
-Runtime for the Go or D programming language
-============================================
-
-Genode is implemented in C++. However, we are repeatedly receiving requests
-for offering more safe alternatives for implementing OS-level functionality
-such as device drivers, file systems, and other protocol stacks. The goals
-for this project are to investigate the Go and D programming languages with
-respect to their use within Genode, port the runtime of those languages
-to Genode, and provide a useful level of integration with Genode.
-
-
-Block cache
-===========
-
-Currently, there exists only the iso9660 server that is able to cache block
-accesses. A generic solution for caching block-device accesses would be nice.
-One suggestion is a component that requests a block session (routed to a block
-device driver) as back end and also announces a block service (front end)
-itself. Such a block-cache server waits for requests at the front end and
-forwards them to the back end. But it uses its own memory to cache blocks.
-
-The first version could support only read-only block devices (such as CDROM) by
-caching the results of read accesses. In this version, we already need an
-eviction strategy that kicks in once the block cache gets saturated. For a
-start this could be FIFO or LRU (least recently used).
-
-A more sophisticated version would support write accesses, too. Here we need a
-way to sync blocks to the back end at regular intervals in order to guarantee
-that all block-write accesses are becoming persistent after a certain time. We
-would also need a way to explicitly flush the block cache (i.e., when the
-front-end block session gets closed).
-
-:[https://github.com/genodelabs/genode/issues/113 - Discuss the issue...]:
-
-
-; _Since Genode Labs was not accepted as GSoC mentoring organization, the_
-; _following section has become irrelevant. Hence, it is commented-out_
-;
-; Student applications
-; ####################
-;
-; The formal steps for applying to the GSoC program will be posted once Genode
-; Labs is accepted as mentoring organization. If you are a student interested
-; in working on a Genode-related GSoC project, now is a good time to get
-; involved with the Genode community. The best way is joining the discussions
-; at our mailing list and the issue tracker. This way, you will learn about
-; the currently relevant topics, our discussion culture, and the people behind
-; the project.
-;
-; :[http://genode.org/community/mailing-lists]: Join our mailing list
-; :[https://github.com/genodelabs/genode/issues]: Discuss issues around Genode
-
diff --git a/doc/news.txt b/doc/news.txt
index 69ca452c64..b18513a440 100644
--- a/doc/news.txt
+++ b/doc/news.txt
@@ -4,6 +4,78 @@
===========
+Genode OS Framework release 24.11 | 2024-11-22
+##############################################
+
+| With mirrored and panoramic multi-monitor setups, pointer grabbing,
+| atomic blitting and panning, and panel-self-refresh support, Genode's GUI
+| stack gets ready for the next decade. Hardware-wise, version 24.11 brings
+| a massive driver update for the i.MX SoC family. As a special highlight, the
+| release is accompanied by the first edition of the free book "Genode
+| Applications" as a gateway for application developers into Genode.
+
+Closing up the Year of Sculpt OS usability as the theme of our road map
+for 2024, we are excited to unveil the results of two intense lines of
+usability-concerned work with the release of Genode 24.11.
+
+For the usability of the Genode-based Sculpt OS as day-to-day operating
+system, the support of multi-monitor setups has been an unmet desire
+for a long time. Genode 24.11 does not only deliver a solution as a
+singular feature but improves the entire GUI stack in a holistic way,
+addressing panel self-refresh, mechanisms needed to overcome tearing
+artifacts, rigid resource partitioning between GUI applications, up to
+pointer-grabbing support.
+
+The second line of work addresses the usability of application development for
+Genode and Sculpt OS in particular. Over the course of the year, our Goa SDK
+has seen a succession of improvements that make the development, porting,
+debugging, and publishing of software a breeze. Still, given Genode's
+novelties, the learning curve to get started has remained challenging. Our new
+book "Genode Applications" is intended as a gateway into the world of Genode
+for those of us who enjoy dwelling in architectural beauty but foremost want
+to get things done. It features introductory material, explains fundamental
+concepts and components, and invites the reader on to a ride through a series
+of beginner-friendly as well as advanced tutorials. The book can be downloaded
+for free at [https://genode.org].
+
+Regarding hardware support, our work during the release cycle was hugely
+motivated by the prospect of bringing Genode to the MNT Pocket Reform laptop,
+which is based on the NXP i.MX8MP SoC. Along this way, we upgraded all
+Linux-based i.MX drivers to kernel version 6.6 while consolidating a variety
+of vendor kernels, equipped our platform driver with watchdog support, and
+added board support for this platform to Sculpt OS.
+
+You can find these and more topics covered in the detailed
+[https://genode.org/documentation/release-notes/24.11 - release documentation of version 24.11...]
+
+
+Sculpt OS release 24.10 | 2024-10-30
+####################################
+
+| Thanks to a largely revamped GUI stack, the Genode-based
+| Sculpt OS 24.10 has gained profound support for multi-monitor setups.
+
+Among the many usability-related topics on our road map, multi-monitor
+support is certainly the most anticipated feature. It motivated a holistic
+modernization of Genode's GUI stack over several months, encompassing drivers,
+the GUI multiplexer, inter-component interfaces, up to widget toolkits. Sculpt
+OS 24.10 combines these new foundations with a convenient
+[https://genode.org/documentation/articles/sculpt-24-10#Multi-monitor_support - user interface]
+for controlling monitor modes, making brightness adjustments, and setting up
+mirrored and panoramic monitor configurations.
+
+Besides this main theme, version 24.10 benefits from the advancements of the
+Genode OS Framework over the past six months: compatibility with Qt6,
+drivers ported from the Linux kernel version 6.6.47, and comprehensive
+[https://genode.org/documentation/release-notes/24.08#Goa_SDK - debugging support]
+for the Goa SDK.
+
+Sculpt OS 24.10 is available as a ready-to-use system image for PC hardware,
+the PinePhone, and the MNT Reform laptop at the
+[https://genode.org/download/sculpt - Sculpt download page] accompanied
+by updated [https://genode.org/documentation/articles/sculpt-24-10 - documentation].
+
+
Genode OS Framework release 24.08 | 2024-08-29
##############################################
diff --git a/doc/porting_guide.txt b/doc/porting_guide.txt
deleted file mode 100644
index dd2fe85597..0000000000
--- a/doc/porting_guide.txt
+++ /dev/null
@@ -1,1451 +0,0 @@
- ====================
- Genode Porting Guide
- ====================
-
- Genode Labs GmbH
-
-
-Overview
-########
-
-This document describes the basic workflows for porting applications, libraries,
-and device drivers to the Genode framework. It consists of the following
-sections:
-
-:[http:porting_applications - Porting third-party code to Genode]:
- Overview of the general steps needed to use 3rd-party code on Genode.
-
-:[http:porting_dosbox - Porting a program to natively run on Genode]:
- Step-by-step description of applying the steps described in the first
- section to port an application, using DosBox as an example.
-
-:[http:porting_libraries - Native Genode port of a library]:
- Many 3rd-party applications have library dependencies. This section shows
- how to port a library using SDL_net (needed by DosBox) as an example.
-
-:[http:porting_noux_packages - Porting an application to Genode's Noux runtime]:
- On Genode, there exists an environment specially tailored to execute
- command-line based Unix software, the so-called Noux runtime. This section
- demonstrates how to port and execute the tar program within Noux.
-
-:[http:porting_device_drivers - Porting device drivers]:
- This chapter describes the concepts of how to port a device driver to the
- Genode framework. It requires the basic knowledge introduced in the previous
- chapters and should be read last.
-
-Before reading this guide, it is strongly advised to read the "The Genode
-Build System" documentation:
-
-:Build-system manual:
-
- [http://genode.org/documentation/developer-resources/build_system]
-
-
-Porting third-party code to Genode
-##################################
-
-Porting an existing program or library to Genode is for the most part a
-straight-forward task and depends mainly on the complexity of the program
-itself. Genode provides a fairly complete libc based on FreeBSD's libc whose
-functionality can be extended by so-called libc plugins. If the program one
-wants to port solely uses standard libc functions, porting becomes easy. Every
-porting task usually involves the same steps, which are outlined below.
-
-
-Steps in porting applications to Genode
-=======================================
-
-# Check requirements/dependencies (e.g. on Linux)
-
- The first step is gathering information about the application,
- e.g. what functionality needs to be provided by the target system and
- which libraries does it use.
-
-# Create a port file
-
- Prepare the source code of the application for the use within Genode. The
- Genode build-system infrastructure uses fetch rules, so-called port files,
- which declare where the source is obtained from, what patches are applied
- to the source code, and where the source code will be stored and
- configured.
-
-# Check platform dependent code and create stub code
-
- This step may require changes to the original source code
- of the application to be compilable for Genode. At this point, it
- is not necessary to provide a working implementation for required
- functions. Just creating stubs of the various functions is fine.
-
-# Create build-description file
-
- To compile the application we need build rules. Within these rules
- we also declare all dependencies (e.g. libraries) that are needed
- by it. The location of these rules depends on the type
- of the application. Normal programs on one hand use a _target.mk_ file,
- which is located in the program directory (e.g. _src/app/foobar_)
- within a given Genode repository. Libraries on the other hand use
- one or more _<libname>.mk_ files that are placed in the _lib/mk_
- directory of a Genode repository. In addition, libraries have to
- provide _import-<libname>.mk_ files. Amongst other things, these
- files are used by applications to find the associated header files
- of a library. The import files are placed in the _lib/import_
- directory.
-
-# Create a run script to ease testing
-
- To ease the testing of applications, it is reasonable to write a run script
- that creates a test scenario for the application. This run script is used
- to automatically build all components of the Genode OS framework that are
- needed to run the application as well as the application itself. Testing
- the application on any of the kernels supported by Genode becomes just a
- matter of executing the run script.
-
-# Compile the application
-
- The ported application is compiled from within the respective build
- directory like any other application or component of Genode. The build
- system of Genode uses the build rules created in the fourth step.
-
-# Run the application
-
- While porting an application, easy testing is crucial. By using the run script
- that was written in the fifth step we reduce the effort.
-
-# Debug the application
-
- In most cases, a ported application does not work right away. We have to
- debug misbehaviour and implement certain functionality in the platform-dependent
- parts of the application so that it can run on Genode. There are
- several facilities available on Genode that help in the process. These are
- different on each Genode platform but basically break down to using either a
- kernel debugger (e.g., JDB on Fiasco.OC) or 'gdb(1)'. The reader of this guide
- is advised to take a look at the "User-level debugging on Genode via GDB"
- documentation.
-
-_The order of steps 1-4 is not mandatory but is somewhat natural._
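-
-To make the port-file step more tangible, the fetch rules for a hypothetical
-program 'foobar' could be declared along the following lines (URL, version,
-and hash value are invented for illustration):
-
-! LICENSE   := GPLv2
-! VERSION   := 1.0
-! DOWNLOADS := foobar.archive
-!
-! URL(foobar) := http://example.org/foobar-$(VERSION).tar.gz
-! SHA(foobar) := 0123456789abcdef0123456789abcdef01234567
-! DIR(foobar) := src/app/foobar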
-
-
-Porting a program to natively run on Genode
-###########################################
-
-As an example on how to create a native port of a program for Genode, we will
-describe the porting of DosBox more closely. Hereby, each of the steps
-outlined in the previous section will be discussed in detail.
-
-
-Check requirements/dependencies
-===============================
-
-In the first step, we build DosBox for Linux/x86 to obtain needed information.
-Nowadays, most applications use a build-tool like Autotools or something
-similar that will generate certain files (e.g., _config.h_). These files are
-needed to successfully compile the program. Naturally they are required on
-Genode as well. Since Genode does not use the original build tool of the
-program for native ports, it is appropriate to copy those generated files
-and adjust them later on to match Genode's settings.
-
-We start by checking out the source code of DosBox from its subversion repository:
-
-! $ svn export http://svn.code.sf.net/p/dosbox/code-0/dosbox/trunk@3837 dosbox-svn-3837
-! $ cd dosbox-svn-3837
-
-At this point, it is helpful to disable certain options that are not
-available or used on Genode just to keep the noise down:
-
-! $ ./configure --disable-opengl
-! $ make > build.log 2>&1
-
-After the DosBox binary is successfully built, we have a log file
-(build.log) of the whole build process at our disposal. This log file will
-be helpful later on when the _target.mk_ file needs to be created. In
-addition, we will inspect the DosBox binary:
-
-! $ readelf -d -t src/dosbox|grep NEEDED
-! 0x0000000000000001 (NEEDED) Shared library: [libasound.so.2]
-! 0x0000000000000001 (NEEDED) Shared library: [libdl.so.2]
-! 0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0]
-! 0x0000000000000001 (NEEDED) Shared library: [libSDL-1.2.so.0]
-! 0x0000000000000001 (NEEDED) Shared library: [libpng12.so.0]
-! 0x0000000000000001 (NEEDED) Shared library: [libz.so.1]
-! 0x0000000000000001 (NEEDED) Shared library: [libSDL_net-1.2.so.0]
-! 0x0000000000000001 (NEEDED) Shared library: [libX11.so.6]
-! 0x0000000000000001 (NEEDED) Shared library: [libstdc++.so.6]
-! 0x0000000000000001 (NEEDED) Shared library: [libm.so.6]
-! 0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1]
-! 0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
-
-Using _readelf_ on the binary shows all direct dependencies. We now know
-that at least libSDL, libSDL_net, libstdc++, libpng, libz, and
-libm are required by DosBox. The remaining libraries are mostly
-mandatory on Linux and do not matter on Genode. Luckily all of these
-libraries are already available on Genode. For now all we have to do is to
-keep them in mind.
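-To repeat this dependency check quickly on other binaries, the NEEDED
-entries can be extracted with a small shell filter. The following sketch is
-not part of Genode's tooling; the sample 'readelf' output is inlined so that
-the snippet is self-contained.

```shell
#!/bin/sh
# Print the library names of all NEEDED entries in 'readelf -d' output.
# On a real binary: readelf -d src/dosbox | extract_needed
extract_needed() {
    sed -n 's/.*(NEEDED).*\[\(.*\)\].*/\1/p'
}

sample='0x0000000000000001 (NEEDED) Shared library: [libSDL-1.2.so.0]
0x0000000000000001 (NEEDED) Shared library: [libm.so.6]
0x000000000000000c (INIT)               0x400e88'

# prints libSDL-1.2.so.0 and libm.so.6
printf '%s\n' "$sample" | extract_needed
```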
-
-
-Creating the port file
-======================
-
-Since DosBox is an application that depends on several ported
-libraries (e.g., libSDL), the _ports_ repository within the Genode
-source tree is a natural fit. On that account, the port file
-_ports/ports/dosbox.port_ is created.
-
-For DosBox the _dosbox.port_ looks as follows:
-
-! LICENSE := GPLv2
-! VERSION := svn
-! DOWNLOADS := dosbox.svn
-!
-! URL(dosbox) := http://svn.code.sf.net/p/dosbox/code-0/dosbox/trunk
-! DIR(dosbox) := src/app/dosbox
-! REV(dosbox) := 3837
-
-First, we define the license, the version and the type of the source code
-origin. In the case of DosBox, we check out the source code from a Subversion
-repository. This is denoted by the '.svn' suffix of the item specified in
-the 'DOWNLOADS' declaration. Other valid types are 'file' (a plain file),
-'archive' (an archive of the types tar.gz, tar.xz, tgz, tar.bz2, or zip)
-or 'git' (a Git repository).
-To check out the source code from the Subversion repository, we also need
-its URL, the revision we want to check out and the destination directory
-that will contain the sources afterwards. These declarations are mandatory and
-must always be specified. Otherwise the preparation of the port will fail.
-
-! PATCHES := $(addprefix src/app/dosbox/patches/,\
-! $(notdir $(wildcard $(REP_DIR)/src/app/dosbox/patches/*.patch)))
-!
-! PATCH_OPT := -p2 -d src/app/dosbox
-
-As the next step, we declare all patches that are needed for the DosBox port.
-Since, in this case, the patches use a different path format, we have
-to override the default patch settings by defining the _PATCH_OPT_ variable.
-
-Each port file comes along with a hash file. This hash is generated by taking
-several sources into account. For one, the port file, each patch and the
-port preparation tool (_tool/ports/prepare_port_) are the ingredients for
-the hash value. If any of these files is changed, a new hash will be generated.
-For now, we just write "dummy" in the _ports/ports/dosbox.hash_ file.
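-The idea behind such a fingerprint can be illustrated with a short shell
-sketch. This is not the actual algorithm used by _tool/ports/prepare_port_;
-it merely shows that hashing all ingredients together makes the fingerprint
-change whenever any one of them changes.

```shell
#!/bin/sh
# Illustrative only: derive one fingerprint from a set of ingredient
# files (port file, patches, tool). Changing any input changes the hash.
fingerprint() {
    cat "$@" | sha1sum | cut -d' ' -f1
}
```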
-
-The DosBox port can now be prepared by executing
-
-! $ /tool/ports/prepare_port dosbox
-
-However, we get the following error message:
-
-! Error: /ports/dosbox.port is out of date, expected
-
-We get this message because we had specified the "dummy" hash value in
-the _dosbox.hash_ file. The prepare_port tool computes a fingerprint
-of the actual version of the port and compares this fingerprint with the
-hash value specified in _dosbox.hash_. The computed fingerprint can
-be found at _/contrib/dosbox-dummy/dosbox.hash_. In the final
-step of the port, we will replace the dummy fingerprint with the actual
-fingerprint of the port. But before finalizing the porting work, it is
-practical to keep using the dummy hash and suppress the fingerprint check.
-This can be done by adding 'CHECK_HASH=no' as argument to the prepare_port tool:
-
-! $ /tool/ports/prepare_port dosbox CHECK_HASH=no
-
-
-Check platform-dependent code
-=============================
-
-At this point, it is important to spot platform-dependent source files or
-rather certain functions that are not yet available on Genode. These source
-files should be omitted. Of course, they may be used as guidance when
-implementing the functionality for Genode later on, when creating the
-_target.mk_ file. In particular the various 'cdrom_ioctl_*.cpp' files are such
-candidates in this example.
-
-
-Creating the build Makefile
-===========================
-
-Now it is time to write the build rules into the _target.mk_, which will be
-placed in _ports/src/app/dosbox_.
-
-Armed with the _build.log_ that we created while building DosBox on Linux,
-we assemble a list of needed source files. If an application just
-uses a simple Makefile and not a build tool, it might be easier to just
-reuse the contents of this Makefile instead.
-
-First of all, we create a shortcut for the source directory of DosBox by calling
-the 'select_from_ports' function:
-
-! DOSBOX_DIR := $(call select_from_ports,dosbox)/src/app/dosbox
-
-Under the hood, the 'select_from_ports' function looks up the
-fingerprint of the specified port by reading the corresponding
-.hash file. It then uses this hash value to construct the
-directory path within the _contrib/_ directory that belongs to
-the matching version of the port. If there is no hash file that matches the
-port name, or if the port directory does not exist, the build system
-will back out with an error message.
-
-Examining the log file leaves us with the following list of source files:
-
-! SRC_CC_cpu = $(notdir $(wildcard $(DOSBOX_DIR)/src/cpu/*.cpp))
-! SRC_CC_debug = $(notdir $(wildcard $(DOSBOX_DIR)/src/debug/*.cpp))
-! FILTER_OUT_dos = cdrom_aspi_win32.cpp cdrom_ioctl_linux.cpp cdrom_ioctl_os2.cpp \
-! cdrom_ioctl_win32.cpp
-! SRC_CC_dos = $(filter-out $(FILTER_OUT_dos), \
-! $(notdir $(wildcard $(DOSBOX_DIR)/src/dos/*.cpp)))
-! […]
-! SRC_CC = $(notdir $(DOSBOX_DIR)/src/dosbox.cpp)
-! SRC_CC += $(SRC_CC_cpu) $(SRC_CC_debug) $(SRC_CC_dos) $(SRC_CC_fpu) \
-! $(SRC_CC_gui) $(SRC_CC_hw) $(SRC_CC_hw_ser) $(SRC_CC_ints) \
-! $(SRC_CC_ints) $(SRC_CC_misc) $(SRC_CC_shell)
-!
-! vpath %.cpp $(DOSBOX_DIR)/src
-! vpath %.cpp $(DOSBOX_DIR)/src/cpu
-! […]
-
-_The only variable here that is actually evaluated by Genode's build-system is_
-'SRC_CC'. _The rest of the variables are little helpers that make our_
-_life more comfortable._
-
-In this case, it is mandatory to use GNU Make's 'notdir' file-name function
-because otherwise the compiled object files would be stored within
-the _contrib_ directories. Genode runs on multiple platforms with varying
-architectures, and mixing object files is considered harmful, which can happen
-easily if the application is built from the original source directory. That's
-why you have to use a build directory for each platform. The Genode build
-system will create the needed directory hierarchy within the build directory
-automatically. By combining GNU Make's 'notdir' and 'wildcard' functions, we
-can assemble a list of all needed source files without much effort. We then
-use 'vpath' to point GNU Make to the right source file within the dosbox
-directory.
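-Before committing such a list to _target.mk_, it can be useful to preview it
-in the shell. The following sketch mimics the combination of 'wildcard',
-'filter-out', and 'notdir' used above; the directory and blacklist names in
-the usage comment are only examples.

```shell
#!/bin/sh
# Shell analog of $(filter-out $(FILTER_OUT),$(notdir $(wildcard DIR/*.cpp))):
# print the basenames of all .cpp files in a directory except the
# blacklisted ones given as further arguments.
list_sources() {
    dir=$1; shift
    for f in "$dir"/*.cpp; do
        base=$(basename "$f")
        skip=no
        for b in "$@"; do
            if [ "$base" = "$b" ]; then skip=yes; fi
        done
        if [ "$skip" = no ]; then printf '%s\n' "$base"; fi
    done
}

# Example (paths hypothetical):
#   list_sources contrib/dosbox/src/dos cdrom_ioctl_linux.cpp
```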
-
-The remaining thing to do now is setting the right include directories and proper
-compiler flags:
-
-! INC_DIR += $(PRG_DIR)
-! INC_DIR += $(DOSBOX_DIR)/include
-! INC_DIR += $(addprefix $(DOSBOX_DIR)/src, cpu debug dos fpu gui hardware \
-! hardware/serialport ints misc shell)
-
-'PRG_DIR' _is a special variable of Genode's build-system_
-_and its value is always the absolute path to the directory containing_
-_the 'target.mk' file._
-
-We copy the _config.h_ file, which was generated in the first step, to this
-directory and change certain parts of it to better match Genode's
-environment. Below is a skimmed diff of these changes:
-
-! --- config.h.orig 2013-10-21 15:27:45.185719517 +0200
-! +++ config.h 2013-10-21 15:36:48.525727975 +0200
-! @@ -25,7 +25,8 @@
-! /* #undef AC_APPLE_UNIVERSAL_BUILD */
-!
-! /* Compiling on BSD */
-! -/* #undef BSD */
-! +/* Genode's libc is based on FreeBSD 8.2 */
-! +#define BSD 1
-!
-! […]
-!
-! /* The type of cpu this target has */
-! -#define C_TARGETCPU X86_64
-! +/* we define it ourself */
-! +/* #undef C_TARGETCPU */
-!
-! […]
-
-Thereafter, we specify the compiler flags:
-
-! CC_OPT = -DHAVE_CONFIG_H -D_GNU_SOURCE=1 -D_REENTRANT
-! ifeq ($(filter-out $(SPECS),x86_32),)
-! INC_DIR += $(PRG_DIR)/x86_32
-! CC_OPT += -DC_TARGETCPU=X86
-! else ifeq ($(filter-out $(SPECS),x86_64),)
-! INC_DIR += $(PRG_DIR)/x86_64
-! CC_OPT += -DC_TARGETCPU=X86_64
-! endif
-!
-! CC_WARN = -Wall
-! #CC_WARN += -Wno-unused-variable -Wno-unused-function -Wno-switch \
-! -Wunused-value -Wno-unused-but-set-variable
-
-As noted in the comment in the diff, we define 'C_TARGETCPU'
-and adjust the include directories ourselves according to the target
-architecture.
-
-While debugging, compiler warnings for 3rd-party code are really helpful, but
-they tend to be annoying after the porting work is finished. Once it is, we
-can remove the hash mark to keep the compiler from complaining too
-much.
-
-Lastly, we need to add the required libraries, which we acquired in step 1:
-
-! LIBS += libc libm libpng sdl stdcxx zlib
-! LIBS += libc_lwip_nic_dhcp config_args
-
-In addition to the required libraries, a few Genode-specific
-libraries are also needed. These libraries implement certain
-functions in the libc via the libc's plugin mechanism.
-libc_lwip_nic_dhcp, for example, is used to connect the BSD socket interface
-to a NIC service such as a network device driver.
-
-
-Creating the run script
-=======================
-
-To ease compiling, running and debugging DosBox, we create a run script
-at _ports/run/dosbox.run_.
-
-First, we specify the components that need to be built
-
-! set build_components {
-! core init drivers/audio drivers/framebuffer drivers/input
-! drivers/pci drivers/timer app/dosbox
-! }
-! build $build_components
-
-and instruct _tool/run_ to create the boot directory that hosts
-all binaries and files which belong to the DosBox scenario.
-
-As the name 'build_components' suggests, you only have to declare
-the Genode components that are needed in this scenario. All
-dependencies of DosBox (e.g. libSDL) will be built before DosBox
-itself.
-
-Next, we provide the scenario's configuration 'config':
-
-! append config {
-!   […]
-! }
-! install_config $config
-
-The _config_ file will be used by the init program to start all
-components and applications of the scenario, including DosBox.
-
-Thereafter we declare all boot modules:
-
-! set boot_modules {
-! core init timer audio_drv fb_drv ps2_drv ld.lib.so
-! libc.lib.so libm.lib.so
-! lwip.lib.so libpng.lib.so stdcxx.lib.so sdl.lib.so
-! pthread.lib.so zlib.lib.so dosbox dosbox.tar
-! }
-! build_boot_image $boot_modules
-
-The boot modules comprise all binaries and other files, like the tar
-archive containing DosBox's configuration file _dosbox.conf_, that are
-needed for this scenario to run successfully.
-
-Finally, we set certain options, which are used when Genode is executed
-in Qemu and instruct _tool/run_ to keep the scenario executing as long
-as it is not manually stopped:
-
-! append qemu_args " -m 256 -soundhw ac97 "
-! run_genode_until forever
-
-_It is reasonable to write the run script in a way that makes it possible_
-_to use it for multiple Genode platforms. Debugging is often done on_
-_Genode/Linux or on another Genode platform running in Qemu but testing_
-_is normally done using actual hardware._
-
-
-Compiling the program
-=====================
-
-To compile DosBox and all libraries it depends on, we execute
-
-! $ make app/dosbox
-
-from within Genode's build directory.
-
-_We could also use the run script that we created in the previous step but_
-_that would build all components that are needed to actually run_ DosBox
-_and at this point our goal is just to get_ DosBox _compiled._
-
-At the first attempt, the compilation stopped because g++ could not find
-the header file _sys/timeb.h_:
-
-! /src/genode/ports/contrib/dosbox-svn-3837/src/ints/bios.cpp:35:23: fatal error:
-! sys/timeb.h: No such file or directory
-
-This header is part of the libc, but until now there was no program that
-actually used this header, so nobody noticed that it was missing. This
-can happen all the time when porting a new application to Genode because most
-functionality is enabled or rather added on demand. Someone who is
-porting applications to Genode has to be aware of the fact that it might be
-necessary to extend Genode's functionality by enabling so-far-disabled
-bits or by implementing certain functionality needed by the
-application that is ported.
-
-Since 'ftime(3)' is a deprecated function anyway we change the code of
-DosBox to use 'gettimeofday(2)'.
-
-After this was fixed, we face another problem:
-
-! /src/genode/ports/contrib/dosbox-svn-3837/src/ints/int10_vesa.cpp:48:33: error:
-! unable to find string literal operator ‘operator"" VERSION’
-
-The fix is quite simple. The compile error was due to the fact
-that Genode uses C++11 by now. It often happens that 3rd-party code
-is not well tested with a C++11-enabled compiler. In any case, a patch file
-should be created, which will be applied when preparing the port.
-
-Furthermore it would be reasonable to report the bug to the DosBox
-developers so it can be fixed upstream. We can then get rid of our
-local patch.
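-One workable way to produce such a patch is to diff a pristine copy of the
-contrib sources against the edited tree. The directory names below are
-illustrative; note that the strip level used when applying the patch has to
-match the number of leading path components in the generated patch.

```shell
#!/bin/sh
# Create a unified diff between a pristine and an edited source tree.
# 'diff' exits with status 1 when the trees differ, which is expected
# here, so treat that status as success.
make_patch() {
    diff -ur "$1" "$2" || [ $? -eq 1 ]
}

# Example (paths hypothetical):
#   make_patch dosbox.orig dosbox.edited > int10_vesa.patch
```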
-
-The next show stoppers are missing symbols in Genode's SDL library port.
-As it turns out, we never actually compiled and linked in the cdrom dummy
-code that is provided by SDL.
-
-
-Running the application
-=======================
-
-DosBox was compiled successfully. Now it is time to execute the binary
-on Genode. Hence we use the run script we created in step 5:
-
-! $ make run/dosbox
-
-This may take some time because all other components of the Genode OS
-Framework that are needed for this scenario have to be built.
-
-
-Debugging the application
-=========================
-
-DosBox was successfully compiled but unfortunately it did not run.
-To be honest that was expected and here the fun begins.
-
-At this point, there are several options to choose from. By running
-Genode/Fiasco.OC within Qemu, we can use the kernel debugger (JDB)
-to take a deeper look at what went wrong (e.g., backtraces of the
-running processes, memory dumps of the faulted DosBox process etc.).
-Doing this can be quite taxing but fortunately Genode runs on multiple
-kernels and often problems on one kernel can be reproduced on another
-kernel. For this reason, we choose Genode/Linux where we can use all
-the normal debugging tools like 'gdb(1)', 'valgrind(1)' and so on. Luckily
-for us, DosBox also fails to run on Genode/Linux. The debugging steps
-are naturally dependent on the ported software. In the case of DosBox,
-the remaining stumbling blocks were a few places where DosBox assumed
-Linux as a host platform.
-
-For the sake of completeness here is a list of all files that were created by
-porting DosBox to Genode:
-
-! ports/ports/dosbox.hash
-! ports/ports/dosbox.port
-! ports/run/dosbox.run
-! ports/src/app/dosbox/config.h
-! ports/src/app/dosbox/patches/bios.patch
-! ports/src/app/dosbox/patches/int10_vesa.patch
-! ports/src/app/dosbox/target.mk
-! ports/src/app/dosbox/x86_32/size_defs.h
-! ports/src/app/dosbox/x86_64/size_defs.h
-
-[image dosbox]
- DosBox ported to Genode
-
-Finally, after having tested that both the preparation-step and the
-build of DosBox work as expected, it is time to
-finalize the fingerprint stored in the _/ports/ports/dosbox.hash_
-file. This can be done by copying the content of the
-_/contrib/dosbox-dummy/dosbox.hash_ file.
-Alternatively, you may invoke the _tool/ports/update_hash_ tool with the
-port name "dosbox" as argument. The next time you
-invoke the prepare_port tool, do not specify the 'CHECK_HASH=no' argument,
-so that the fingerprint check validates that the _dosbox.hash_ file
-corresponds to your _dosbox.port_ file. From now on, the
-_/contrib/dosbox-dummy_ directory will no longer be used because
-the _dosbox.hash_ file points to the port directory named after the real
-fingerprint.
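-Mechanically, promoting the dummy fingerprint amounts to copying the computed
-hash over the dummy hash file. The helper below is only a sketch with
-illustrative parameter names; the update_hash tool automates this step.

```shell
#!/bin/sh
# Replace the dummy fingerprint with the computed one. The two paths are
# illustrative: the hash file from the contrib directory and the hash
# file next to the port file.
finalize_hash() {
    computed=$1   # e.g. contrib/dosbox-dummy/dosbox.hash
    portfile=$2   # e.g. ports/ports/dosbox.hash
    cp "$computed" "$portfile"
}
```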
-
-
-Native Genode port of a library
-###############################
-
-Porting a library to be used natively on Genode is similar to porting
-an application to run natively on Genode. The source code has to be
-obtained and, if needed, patched to run on Genode.
-As an example of how to port a library to natively run on Genode, we
-will describe the porting of SDL_net in more detail. Ported libraries
-are placed in the _libports_ repository of Genode, but this is just a
-convention. Feel free to host your library port in a custom repository
-of yours.
-
-
-Checking requirements/dependencies
-==================================
-
-We will proceed as we did when we ported DosBox to run natively on Genode.
-First we build SDL_net on Linux to obtain a log file of the whole build
-process:
-
-! $ wget http://www.libsdl.org/projects/SDL_net/release/SDL_net-1.2.8.tar.gz
-! $ tar xvzf SDL_net-1.2.8.tar.gz
-! $ cd SDL_net-1.2.8
-! $ ./configure
-! $ make > build.log 2>&1
-
-
-Creating the port file
-======================
-
-We start by creating _/libports/ports/sdl_net.port_:
-
-! LICENSE := BSD
-! VERSION := 1.2.8
-! DOWNLOADS := sdl_net.archive
-!
-! URL(sdl_net) := http://www.libsdl.org/projects/SDL_net/release/SDL_net-$(VERSION).tar.gz
-! SHA(sdl_net) := fd393059fef8d9925dc20662baa3b25e02b8405d
-! DIR(sdl_net) := src/lib/sdl_net
-!
-! PATCHES := src/lib/sdl_net/SDLnet.patch src/lib/sdl_net/SDL_net.h.patch
-
-In addition to the URL, the SHA1 checksum of the SDL_net archive needs to be
-specified because _tool/ports/prepare_port_ validates the downloaded archive
-by using this hash.
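-The same validation can be reproduced manually with 'sha1sum'. The helper
-below is only a sketch of the idea, not part of Genode's tooling.

```shell
#!/bin/sh
# Compare the SHA1 checksum of a downloaded archive against the value
# declared in the port file; fail if they do not match.
verify_sha1() {
    archive=$1 expected=$2
    actual=$(sha1sum "$archive" | cut -d' ' -f1)
    [ "$actual" = "$expected" ]
}

# Example (checksum taken from sdl_net.port):
#   verify_sha1 SDL_net-1.2.8.tar.gz fd393059fef8d9925dc20662baa3b25e02b8405d
```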
-
-Applications that want to use SDL_net have to include the 'SDL_net.h' header
-file. Hence it is necessary to make this file visible to applications. This is
-done by populating the _/contrib/sdl-/include_ directory:
-
-! DIRS := include/SDL
-! DIR_CONTENT(include/SDL) := src/lib/sdl_net/SDL_net.h
-
-For now, we also use a dummy hash in the _sdl_net.hash_ file like it was done
-while porting DosBox. We will replace the dummy hash with the proper one at
-the end.
-
-
-Creating the build Makefile
-===========================
-
-We create the build rules in _libports/lib/mk/sdl_net.mk_:
-
-! SDL_NET_DIR := $(call select_from_ports,sdl_net)/src/lib/sdl_net
-!
-! SRC_C = $(notdir $(wildcard $(SDL_NET_DIR)/SDLnet*.c))
-!
-! vpath %.c $(SDL_NET_DIR)
-!
-! INC_DIR += $(SDL_NET_DIR)
-!
-! LIBS += libc sdl
-
-'SDL_net' should be used as a shared library. To achieve this, we
-have to add the following statement to the 'mk' file:
-
-! SHARED_LIB = yes
-
-_If we omit this statement, Genode's build system will automatically_
-_build SDL_net as a static library called_ 'sdl_net.lib.a' _that_
-_is linked directly into the application._
-
-It is reasonable to create a dummy application that uses the
-library because it is only possible to build libraries automatically
-as a dependency of an application.
-
-Therefore we create
-_libports/src/test/libports/sdl_net/target.mk_ with the following content:
-
-! TARGET = test-sdl_net
-! LIBS = libc sdl_net
-! SRC_CC = main.cc
-
-! vpath main.cc $(PRG_DIR)/..
-
-At this point we also create _lib/import/import-sdl_net.mk_
-with the following content:
-
-! SDL_NET_PORT_DIR := $(call select_from_ports,sdl_net)
-! INC_DIR += $(SDL_NET_PORT_DIR)/include $(SDL_NET_PORT_DIR)/include/SDL
-
-Each port that depends on SDL_net and has added it to its LIBS variable
-will automatically include the _import-sdl_net.mk_ file and therefore
-will use the specified include directory to find the _SDL_net.h_ header.
-
-
-Compiling the library
-=====================
-
-We compile the SDL_net library as a side effect of building our dummy test
-program by executing
-
-! $ make test/libports/sdl_net
-
-All source files are compiled fine but unfortunately the linking of the
-library does not succeed:
-
-! /src/genodebuild/foc_x86_32/var/libcache/sdl_net/sdl_net.lib.so:
-! undefined reference to `gethostbyaddr'
-
-The symbol 'gethostbyaddr' is missing, which is often a clear sign
-of a missing dependency. In this case however 'gethostbyaddr(3)' is
-missing because this function does not exist in Genode's libc _(*)_.
-But 'getaddrinfo(3)' exists. We are now facing the choice of implementing
-'gethostbyaddr(3)' or changing the code of SDL_net to use 'getaddrinfo(3)'.
-Porting applications or libraries to Genode may always involve this kind of
-choice. Which way is best has to be decided by closely examining the
-matter at hand. Sometimes it is better to implement the missing functions
-and sometimes it is more beneficial to change the contributed code.
-In this case, we opt for changing SDL_net because the former function is
-obsolete anyway and implementing 'gethostbyaddr(3)' involves changes to
-several libraries in Genode, namely libc and the network related
-libc plugin. We have to keep in mind, though, that we are likely to encounter
-another application or library that also uses this function in the future.
-
-With this change in place, SDL_net compiles fine.
-
-_(*) Actually, this function is implemented in Genode's_ libc _but is_
-_only available by using libc_resolv which we did not do for the sake of_
-_this example._
-
-
-Testing the library
-===================
-
-The freshly ported library is best tested with the application, which was the
-reason the library was ported in the first place, since it is unlikely that
-we port a library just for fun and no profit. Therefore, it is not necessary to
-write a run script for a library alone.
-
-For the records, here is a list of all files that were created by
-porting SDL_net to Genode:
-
-! libports/lib/mk/sdl_net.mk
-! libports/lib/import/import-sdl_net.mk
-! libports/ports/sdl_net.hash
-! libports/ports/sdl_net.port
-! libports/src/lib/sdl_net/SDLnet.patch
-! libports/src/lib/sdl_net/SDL_net.h.patch
-! libports/src/test/libports/sdl_net/target.mk
-
-
-Porting an application to Genode's Noux runtime
-###############################################
-
-Porting an application to Genode's Noux runtime is basically the same as
-porting a program to natively run on Genode. The source code has to be
-prepared and, if needed, patched to run in Noux. In contrast to a native
-port, however, there are Noux build rules (_ports/mk/noux.mk_) that enable us
-to use the original build tool if it is based upon Autotools. Building the
-application is done within a cross-compile environment. In this environment
-all needed variables like 'CC', 'LD', 'CFLAGS' and so on are set to their
-proper values. In addition to these precautions, using _noux.mk_ simplifies
-certain things. The system-call handling is implemented in the libc plugin
-_libc_noux_ (the source code is found in _ports/src/lib/libc_noux_). All
-applications running in Noux have to be linked against this library which is
-done implicitly by using the build rules of Noux.
-
-As an example on how to port an application to Genode's Noux runtime, we
-will describe the porting of GNU's 'tar' tool in more detail. A ported
-application is normally referred to as a Noux package.
-
-Checking requirements/dependencies
-==================================
-
-As usual, we first build GNU tar on Linux/x86 and capture the build
-process:
-
-! $ wget http://ftp.gnu.org/gnu/tar/tar-1.27.tar.xz
-! $ tar xJf tar-1.27.tar.xz
-! $ cd tar-1.27
-! $ ./configure
-! $ make > build.log 2>&1
-
-
-Creating the port file
-======================
-
-We start by creating the port file _ports/ports/tar.port_:
-
-! LICENSE := GPLv3
-! VERSION := 1.27
-! DOWNLOADS := tar.archive
-!
-! URL(tar) := http://ftp.gnu.org/gnu/tar/tar-$(VERSION).tar.xz
-! SHA(tar) := 790cf784589a9fcc1ced33517e71051e3642642f
-! SIG(tar) := ${URL(tar)}.sig
-! KEY(tar) := GNU
-! DIR(tar) := src/noux-pkg/tar
-
-_As of version 14.05, Genode does not check the signature specified via_
-_the SIG and KEY declarations but relies on the SHA checksum only. However,_
-_as signature checks are planned for the future, it is good practice to_
-_include the respective declarations if signature files are available._
-
-While porting GNU tar we will use a dummy hash as well.
-
-
-Creating the build rule
-=======================
-
-Build rules for Noux packages are located in _/ports/src/noux-pkg_.
-
-The _tar/target.mk_ corresponding to GNU tar looks like this:
-
-! CONFIGURE_ARGS = --bindir=/bin \
-! --libexecdir=/libexec
-!
-! include $(REP_DIR)/mk/noux.mk
-
-The variable 'CONFIGURE_ARGS' contains the options that are
-passed on to Autoconf's configure script. The Noux specific build
-rules in _noux.mk_ always have to be included last.
-
-The build rules for GNU tar are quite short and therefore at the end
-of this chapter we take a look at a much more extensive example.
-
-
-Creating a run script
-=====================
-
-Creating a run script to test Noux packages works the same as for
-natively ported applications. Therefore, we will only focus
-on the Noux-specific parts of the run script and omit the rest.
-
-First, we add the desired Noux packages to 'build_components':
-
-! set noux_pkgs "bash coreutils tar"
-!
-! foreach pkg $noux_pkgs {
-! lappend_if [expr ![file exists bin/$pkg]] build_components noux-pkg/$pkg }
-!
-! build $build_components
-
-Since each Noux package is, like every other Genode binary, installed to the
-_/bin_ directory, we create a tar archive of each package from its
-directory:
-
-! foreach pkg $noux_pkgs {
-! exec tar cfv bin/$pkg.tar -h -C bin/$pkg . }
-
-_Using noux.mk makes sure that each package is always installed to_
-_/bin/._
-
-Later on, we will use these tar archives to assemble the file system
-hierarchy within Noux.
-
-Most applications ported to Noux want to read and write files. On that
-matter, it is reasonable to provide a file-system service and the easiest
-way to do this is to use the ram_fs server. This server provides a RAM-backed
-file system, which is perfect for testing Noux applications. With
-the help of the session label we can route multiple directories to the
-file system in Noux:
-
-! append config {
-!   […]
-
-The file system Noux presents to the running applications is constructed
-out of several stacked file systems. These file systems have to be
-registered in the 'fstab' node in the configuration node of Noux:
-
-!   […]
-! }
-
-Each Noux package is added
-
-! foreach pkg $noux_pkgs {
-!   append config { […] } }
-
-and the routes to the ram_fs file system are configured:
-
-! append config {
-!   […]
-! }
-
-In this example we save the run script as _ports/run/noux_tar.run_.
-
-
-Compiling the Noux package
-==========================
-
-Now we can trigger the compilation of tar by executing
-
-! $ make VERBOSE= noux-pkg/tar
-
-_At least on the first compilation attempt, it is wise to unset_ 'VERBOSE'
-_because it enables us to see the whole output of the_ 'configure' _process._
-
-By now, Genode provides almost all libc header files that are used by
-typical POSIX programs. In most cases, it is rather a matter of enabling
-the right definitions and compilation flags. It might be worth taking a
-look at FreeBSD's ports tree because Genode's libc is based upon the one
-of FreeBSD 8.2.0, and if certain changes to the contributed code are needed,
-they are normally already done in the ports tree.
-
-The script _noux_env.sh_ that is used to create the cross-compile
-environment as well as the famous _config.log_ are found
-in _/noux-pkg/_.
-
-
-Running the Noux package
-========================
-
-We use the previously written run script to start the scenario, in which we
-can execute and test the Noux package by issuing:
-
-! $ make run/noux_tar
-
-After the system has booted and Noux is running, we first create some test
-files from within the running bash process:
-
-! bash-4.1$ mkdir /tmp/foo
-! bash-4.1$ echo 'foobar' > /tmp/foo/bar
-
-Following this we try to create a ".tar" archive of the directory _/tmp/foo_
-
-! bash-4.1$ cd /tmp
-! bash-4.1$ tar cvf foo.tar foo/
-! tar: /tmp/foo: Cannot stat: Function not implemented
-! tar: Exiting with failure status due to previous errors
-! bash-4.1$
-
-Well, this does not look too good, but at least we have a useful error message
-that will hopefully lead us in the right direction.
-
-
-Debugging an application that uses the Noux runtime
-===================================================
-
-Since the Noux service is basically the kernel part of our POSIX runtime
-environment, we can ask Noux to show us the system calls executed by tar.
-We change its configuration in the run script to trace all system calls:
-
-! […]
-!   […]
-! […]
-
-We start the run script again, create the test files, and try to create a
-".tar" archive. It still fails, but now we have a trace of all system calls
-and at least know what is going on in Noux itself:
-
-! […]
-! [init -> noux] PID 0 -> SYSCALL FORK
-! [init -> noux] PID 0 -> SYSCALL WAIT4
-! [init -> noux] PID 5 -> SYSCALL STAT
-! [init -> noux] PID 5 -> SYSCALL EXECVE
-! [init -> noux] PID 5 -> SYSCALL STAT
-! [init -> noux] PID 5 -> SYSCALL GETTIMEOFDAY
-! [init -> noux] PID 5 -> SYSCALL STAT
-! [init -> noux] PID 5 -> SYSCALL OPEN
-! [init -> noux] PID 5 -> SYSCALL FTRUNCATE
-! [init -> noux] PID 5 -> SYSCALL FSTAT
-! [init -> noux] PID 5 -> SYSCALL GETTIMEOFDAY
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux -> /bin/tar] DUMMY fstatat(): fstatat called, not implemented
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux] PID 5 -> SYSCALL GETTIMEOFDAY
-! [init -> noux] PID 5 -> SYSCALL CLOSE
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux] PID 5 -> SYSCALL CLOSE
-! [init -> noux] child /bin/tar exited with exit value 2
-! […]
-
-_The trace log was shortened to only contain the important information._
-
-We now see at which point something went wrong. To be honest, we see the
-'DUMMY' message even without enabling the tracing of system calls. But
-there are situations where an application is actually stuck in a (blocking)
-system call and it is difficult to see in which one.
-
-Anyhow, 'fstatat' is not properly implemented. At this point, we either have
-to add this function to Genode's libc or to libc_noux.
-If we add it to the libc, not only applications running in Noux will
-benefit but all applications using the libc. Implementing it in
-libc_noux is preferable only under special circumstances, namely if the
-function must be treated differently when used in Noux (e.g., 'fork').
-
-For the sake of completeness here is a list of all files that were created by
-porting GNU tar to Genode's Noux runtime:
-
-! ports/ports/tar.hash
-! ports/ports/tar.port
-! ports/run/noux_tar.run
-! ports/src/noux-pkg/tar/target.mk
-
-
-Extensive build rules example
-=============================
-
-The build rules for OpenSSH are much more extensive than the ones in
-the previous example. Let us take a quick look at those build rules to
-get a better understanding of possible challenges one may encounter while
-porting a program to Noux:
-
-! # This prefix 'magic' is needed because OpenSSH uses $exec_prefix
-! # while compiling (e.g. -DSSH_PATH) and in the end the $prefix and
-! # $exec_prefix path differ.
-!
-! CONFIGURE_ARGS += --disable-ip6 \
-! […]
-! --exec-prefix= \
-! --bindir=/bin \
-! --sbindir=/bin \
-! --libexecdir=/bin
-
-In addition to the normal configure options, we have to also define the
-path prefixes. The OpenSSH build system embeds certain paths in the
-ssh binary, which need to be changed for Noux.
-
-! INSTALL_TARGET = install
-
-Normally the Noux build rules (_noux.mk_) execute 'make install-strip' to
-explicitly install binaries that are stripped of their debug symbols. The
-generated Makefile of OpenSSH does not use this target. It automatically
-strips the binaries when executing 'make install'. Therefore, we set the
-variable 'INSTALL_TARGET' to override the default behaviour of the
-Noux build rules.
-
-! LIBS += libcrypto libssl zlib libc_resolv
-
-As OpenSSH depends on several libraries, we need to include these in the
-build Makefile. These libraries are runtime dependencies and need to be
-present when running OpenSSH in Noux.
-
-Sometimes it is necessary to patch the original build system. One way to do
-this is by applying a patch while preparing the source code. The other
-way is to do it before building the Noux package:
-
-! noux_built.tag: Makefile Makefile_patch
-!
-! Makefile_patch: Makefile
-! @#
-! @# Our $(LDFLAGS) contain options which are usable by gcc(1)
-! @# only. So instead of using ld(1) to link the binary, we have
-! @# to use gcc(1).
-! @#
-! $(VERBOSE)sed -i 's|^LD=.*|LD=$(CC)|' Makefile
-! @#
-! @# We do not want to generate host keys because we are cross-compiling
-! @# and we cannot run Genode binaries on the build system.
-! @#
-! $(VERBOSE)sed -i 's|^install:.*||' Makefile
-! $(VERBOSE)sed -i 's|^install-nokeys:|install:|' Makefile
-! @#
-! @# The path of ssh(1) is hardcoded to $(bindir)/ssh which in our
-! @# case is insufficient.
-! @#
-! $(VERBOSE)sed -i 's|^SSH_PROGRAM=.*|SSH_PROGRAM=/bin/ssh|' Makefile
-
-The target _noux_built.tag_ is a special target defined by the Noux build
-rules. It will be used by the build rules when building the Noux package.
-We add the 'Makefile_patch' target as a dependency to it. So after configure
-is executed, the generated Makefile will be patched.
-
-Autoconf's configure script checks if all requirements are fulfilled and
-therefore, tests if all required libraries are installed on the host system.
-This is done by linking a small test program against the particular library.
-Since these libraries are only build-time dependencies, we fool the configure
-script by providing dummy libraries:
-
-! #
-! # Make the zlib linking test succeed
-! #
-! Makefile: dummy_libs
-!
-! LDFLAGS += -L$(PWD)
-!
-! dummy_libs: libz.a libcrypto.a libssl.a
-!
-! libcrypto.a:
-! $(VERBOSE)$(AR) -rc $@
-! libssl.a:
-! $(VERBOSE)$(AR) -rc $@
-! libz.a:
-! $(VERBOSE)$(AR) -rc $@
-
-
-Porting devices drivers
-#######################
-
-Even though Genode encourages writing native device drivers, this task sometimes
-becomes infeasible, especially if there is no documentation available for a
-certain device or if there are not enough programming resources at hand to
-implement a fully fledged driver. Examples of ported drivers can be found in
-the 'dde_linux', 'dde_bsd', and 'dde_ipxe' repositories.
-
-In this chapter, we discuss by example how to port a Linux driver for an
-ARM-based SoC to Genode. The goal is to execute driver code in user land
-directly on Genode while making the driver believe it is running within the
-Linux kernel.
-Traditionally, there have been two approaches to reaching this goal on Genode.
-In the past, Genode provided a Linux environment, called 'dde_linux26', with
-the purpose of offering just enough infrastructure to easily port drivers.
-However, as more drivers were added, it became clear that this repository grew
-extensively, making it hard to maintain. Updating the environment to newer
-Linux-kernel versions also became a huge effort, which led to the repository
-being neglected over time.
-
-Therefore, we chose the path of writing a customized environment for each
-driver, which provides a specially tailored infrastructure. We found that the
-support code is usually no larger than a couple of thousand lines of code,
-while upgrading to newer driver versions, as we did with the USB drivers, is
-feasible.
-
-
-Basic driver structure
-======================
-
-The first step in porting a driver is to identify the driver code that has to be
-ported. Once the code is located, we usually create a new Genode repository and
-write a port file to download and extract the code. It is good practice to name
-the port and the hash file like the new repository, e.g. _dde_linux.port_ if
-the repository directory is called _/repos/dde_linux_.
-Having the source code ready, there are three main tasks the environment must
-cover: first, the driver back end, which is responsible for raw device
-access using Genode primitives; second, the emulation environment proper,
-which provides the Linux functions called by the driver code; and third, the
-front end, which exposes, for example, some Genode-session interface (like a
-NIC or block session) that client applications can connect to.
-
-
-Further preparations
-====================
-
-Having the code ready, the next step is to create an _*.mk_ file that actually
-compiles the code. For a driver library, _lib/mk/<driver>.mk_ has to be
-created, and for a stand-alone program, _src/<driver>/target.mk_ is created
-within the repository. With the _*.mk_ file in place, we can now start the
-actual compilation. Of course this will cause a whole lot of errors and
-warnings. Most of the messages will deal with implicit declarations of functions
-and unknown data types. What we have to do now is to go through each warning and
-error message and either add the header file containing the desired function or
-data type to the list of files that will be extracted to the _contrib_ directory
-or create our own prototype or data definition.
-
-When creating our own prototypes, we put them in a file called _lx_emul.h_. To
-actually get this file included in all driver files we use the following code in
-the _*.mk_ file:
-
-! CC_C_OPT += -include $(INC_DIR)/lx_emul.h
-
-where 'INC_DIR' points to the include path of _lx_emul.h_.
-
-The hard part is to decide which of the two ways to go for a specific function
-or data type, since adding header files also adds more dependencies and often
-more errors and warnings. As a rule of thumb, try adding as few headers as
-possible.
-
-The compiler will also complain about a lot of missing header files. Since we do
-not want to create all these header files, we use a trick in our _*.mk_ file that
-extracts all header-file includes from the driver code and creates symbolic
-links, each named after the included file and pointing to _lx_emul.h_. You can
-put the following code snippet in your _*.mk_ file, which does the trick:
-
-!#
-!# Determine the header files included by the contrib code. For each
-!# of these header files we create a symlink to _lx_emul.h_.
-!#
-!GEN_INCLUDES := $(shell grep -rh "^\#include .*\/" $(DRIVER_CONTRIB_DIR) |\
-! sed "s/^\#include [^<\"]*[<\"]\([^>\"]*\)[>\"].*/\1/" | \
-! sort | uniq)
-!
-!#
-!# Filter out original Linux headers that exist in the contrib directory
-!#
-!NO_GEN_INCLUDES := $(shell cd $(DRIVER_CONTRIB_DIR); find -name "*.h" | sed "s/.\///" | \
-! sed "s/.*include\///")
-!GEN_INCLUDES := $(filter-out $(NO_GEN_INCLUDES),$(GEN_INCLUDES))
-!
-!#
-!# Put the generated Linux headers into the 'GEN_INC' dir. Since some
-!# includes use "../../" paths, a three-level include hierarchy is used.
-!#
-!GEN_INC := $(shell pwd)/include/include/include
-!
-!$(shell mkdir -p $(GEN_INC))
-!
-!GEN_INCLUDES := $(addprefix $(GEN_INC)/,$(GEN_INCLUDES))
-!INC_DIR += $(GEN_INC)
-!
-!#
-!# Make sure to create the header symlinks prior building
-!#
-!$(SRC_C:.c=.o) $(SRC_CC:.cc=.o): $(GEN_INCLUDES)
-!
-!$(GEN_INCLUDES):
-! $(VERBOSE)mkdir -p $(dir $@)
-! $(VERBOSE)ln -s $(LX_INC_DIR)/lx_emul.h $@
-
-Make sure 'LX_INC_DIR' is the directory containing the _lx_emul.h_ file. Note
-that 'GEN_INC' is added to your 'INC_DIR' variable.
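To see what the extraction pipeline above actually produces, it can be exercised on a small sample file. The file names _sample.c_ and _includes.txt_ are made up for this demonstration:

```shell
# feed two typical include lines through the extraction pipeline
printf '#include <linux/usb.h>\n#include "net/mac80211.h"\n' > sample.c

grep -h "^#include .*/" sample.c | \
	sed "s/^#include [^<\"]*[<\"]\([^>\"]*\)[>\"].*/\1/" | \
	sort | uniq > includes.txt

# includes.txt now lists the header paths to be symlinked to lx_emul.h
cat includes.txt
```

Each resulting path is later created as a symlink below 'GEN_INC', all pointing to _lx_emul.h_.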
-
-The 'DRIVER_CONTRIB_DIR' variable is defined by calling the _select_from_ports_
-function at the beginning of a Makefile or an include file, which is used by
-all other Makefiles:
-
-! DRIVER_CONTRIB_DIR := $(call select_from_ports,driver_repo)/src/lib/driver_repo
-
-The process of defining functions and declaring types continues until the code
-compiles. This process can be quite tiresome. When the driver code finally
-compiles, the next stage is linking. This will of course lead to another whole
-set of errors that complain about undefined references. To actually obtain a
-linked binary, we create a _dummies.cc_ file. To ease things up, we suggest
-creating a macro called 'DUMMY' and implementing functions as in the example below:
-
-! /*
-! * Do not include 'lx_emul.h', since the implementation will most likely clash
-! * with the prototype
-! */
-!
-!#define DUMMY(retval, name) \
-! int name(void) { \
-! PDBG( #name " called (from %p) not implemented", __builtin_return_address(0)); \
-! return retval; \
-!}
-!
-! DUMMY(-1, kmalloc)
-! DUMMY(-1, memcpy)
-! ...
-
-Create a 'DUMMY' for each undefined reference until the binary links. We now
-have a linked binary with a dummy environment.
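The macro cannot be compiled verbatim outside of Genode because 'PDBG' belongs to Genode's debug macros, but the technique is easy to try in isolation. Here, 'fprintf' stands in for 'PDBG', and the dummy names are mere examples:

```c
#include <stdio.h>

/* same technique as in dummies.cc, with fprintf standing in for PDBG */
#define DUMMY(retval, name) \
int name(void) { \
	fprintf(stderr, #name " called (from %p), not implemented\n", \
	        __builtin_return_address(0)); \
	return retval; \
}

DUMMY(-1, kmalloc)
DUMMY(-1, scsi_register_driver)
```

Each 'DUMMY' line both satisfies the linker and logs the caller's address when the dummy is hit at run time.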
-
-
-Debugging
-=========
-
-From here on, we will actually start executing code. But before we do that, let
-us have a look at the debugging options for device drivers. Since drivers have
-to be tested on the target platform, there are not as many debugging options
-available as for higher-level applications, like running applications on the
-Linux version of Genode while using GDB for debugging. Given these
-restrictions, debugging is almost completely performed over the serial line and
-on rare occasions with a hardware debugger using JTAG.
-
-For basic Linux driver debugging, it is useful to implement the 'printk'
-function first (using 'dde_kit_printf' or something similar). This way, the
-driver code can output something, and additions for debugging can be made. The
-'__builtin_return_address' function is also useful in order to determine where a
-specific function was called from. 'printk' may become a problem with devices
-that have certain timing constraints, because serial-line output is very slow.
-This is why we port most drivers by running them on top of the Fiasco.OC version
-of Genode. There you can take advantage of Fiasco's debugger (JDB) and
-trace-buffer facility.
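A minimal 'printk' of this kind is just a variadic wrapper. In the sketch below, 'vprintf' stands in for 'dde_kit_printf' or whatever output routine the back end provides:

```c
#include <stdarg.h>
#include <stdio.h>

/* forward the driver's printk to an available output routine */
int printk(const char *fmt, ...)
{
	va_list args;
	int n;

	va_start(args, fmt);
	n = vprintf(fmt, args);   /* stand-in for dde_kit_printf */
	va_end(args);
	return n;
}
```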
-
-The trace buffer can be used to log data and is much faster than 'printk' over
-serial line. Please inspect the 'ktrace.h' file (at
-_base-foc/contrib/l4/pkg/l4sys/include/ARCH-*/ktrace.h_)
-that describes the complete interface. A very handy function there is
-
-!fiasco_tbuf_log_3val("My message", variable1, variable2, variable3);
-
-which stores a message and three variables in the trace buffer. The trace buffer
-can be inspected from within JDB by pressing 'T'.
-
-JDB can be accessed at any time by pressing the 'ESC' key. It can be used to
-inspect the state of all running threads and address spaces on the system. There
-is no recent JDB documentation available, but
-
-:Fiasco kernel debugger manual:
-
- [http://os.inf.tu-dresden.de/fiasco/doc/jdb.pdf]
-
-should be a good starting point. It is also possible to enter the debugger at
-any time from your program calling the 'enter_kdebug("My breakpoint")' function
-from within your code. The complete JDB interface can be found in
-_base-foc/contrib/l4/pkg/l4sys/include/ARCH-*/kdebug.h_.
-
-Note that the backtrace ('bt') command does not work out of the box on ARM
-platforms. We have a small patch for that in our Fiasco.OC development branch
-located at GitHub: [http://github.com/ssumpf/foc/tree/dev]
-
-
-The back end
-============
-
-To ease the porting of drivers and the interfacing of Genode from C code,
-Genode offers a library called DDE kit. DDE kit provides access to common
-functions required by drivers, like device memory, virtual memory with
-physical-address lookup, interrupt handling, timers, etc. Please inspect
-_os/include/dde_kit_ to see the complete interface description. You can also
-use 'grep -r dde_kit_ *' to see how the interface is used in Genode.
-
-As an example for using DDE kit we implement the 'kmalloc' call:
-
-!void *kmalloc(size_t size, gfp_t flags)
-!{
-! return dde_kit_simple_malloc(size);
-!}
-
-It is also possible to use Genode primitives directly from C++ files; the
-functions only have to be declared 'extern "C"' so that they can be called
-from C code.
-
-
-The environment
-===============
-
-Having a dummy environment we may now begin to actually execute driver code.
-
-Driver initialization
-~~~~~~~~~~~~~~~~~~~~~
-
-Most Linux drivers have an initialization routine to register themselves within
-the Linux kernel and do other initializations if necessary. For this purpose,
-the driver registers a function using the 'module_init' call. This registered
-function must be called before the driver is actually used. To be able to call
-the registered function from Genode, we define the 'module_init' macro in
-_lx_emul.h_ as follows:
-
-! #define module_init(fn) void module_##fn(void) { fn(); }
-
-When a driver now registers a function like
-
-! module_init(ehci_hcd_init);
-
-we would have to call
-
-! module_ehci_hcd_init();
-
-during driver startup. Having implemented the above, it is now time to start our
-ported driver on the target platform and check whether the initialization
-function succeeds. Any important dummy functions that are called must be
-implemented now. A dummy function that does not do device-related work, like
-Linux bookkeeping, may be left unimplemented. Sometimes Linux checks the return
-value of a function we might not want to implement; in this case, it is
-sufficient to simply adjust the return value of the affected function.
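The mechanism can be tried in isolation. The init function below is a stand-in for a real driver's registration routine:

```c
/* the module_init emulation from lx_emul.h */
#define module_init(fn) void module_##fn(void) { fn(); }

static int ehci_registered = 0;

/* stand-in for the driver's real initialization routine */
static void ehci_hcd_init(void) { ehci_registered = 1; }

/* expands to: void module_ehci_hcd_init(void) { ehci_hcd_init(); } */
module_init(ehci_hcd_init)
```

Calling 'module_ehci_hcd_init()' at startup runs the registered routine exactly as described above.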
-
-Device probing
-~~~~~~~~~~~~~~
-Having the driver initialized, we will give the driver access to the device
-resources. This is performed in two steps. In the case of ARM SoCs, we have to
-check in which state the boot loader (usually U-Boot) left the device. Sometimes
-devices are already set up by the boot loader, and only a simple device reset is
-necessary to proceed. If the boot loader did not touch the device, we most
-likely have to check and set up all the necessary clocks on the platform and may
-have to perform other low-level initializations like PHY setup.
-
-If the device is successfully (low level) initialized, we can hand it over to
-the driver by calling the 'probe' function of the driver. For ARM platforms the
-'probe' function takes a 'struct platform_device' as an argument and all
-important fields, like device resources and interrupt numbers, should be set to
-the correct values before calling 'probe'. During 'probe' the driver will most
-likely map and access device memory, request interrupts, and reset the device.
-All dummy functions that are related to these tasks should be implemented or
-ported at this point.
-
-When 'probe' returns successfully, you may either test other driver functions by
-hand or start building the front end.
-
-
-The front end
-=============
-
-An important design question is how the front end is attached to the driver. In
-some cases the front end may not use the driver directly, but other Linux
-subsystems that are ported or emulated by the environment. For example, the USB
-storage driver implements parts of the SCSI subsystem, which in turn is used
-by the front end. The whole decision depends on the kind of driver that is
-ported and on how much additional infrastructure is needed to actually make use
-of the data. Again a USB example: for USB HID, we needed to port the USB
-controller driver, the hub driver, the USB HID driver, and the generic HID
-driver in order to retrieve keyboard and mouse events from the HID driver.
-
-The last step in porting a device driver is to make it accessible to other
-Genode applications. Typically, this is achieved by implementing one of Genode's
-session interfaces, like a NIC session for network adapters or a block session
-for block devices. You may also define your own session interfaces. The
-implementation of the session interface will most likely trigger driver calls,
-so you have to keep an eye on the dummy functions. Also make sure that calls to
-the driver actually do what they are supposed to; for example, a wrong return
-value of a dummy function may cause a function to return without performing any
-work.
-
-
-Notes on synchronization
-========================
-
-After some experience with Linux drivers and multi-threading, we lately chose
-to have all Linux driver code executed by a single thread only. This way, no
-Linux synchronization primitives have to be implemented, and we simply do not
-have to worry about subtle pre- and postconditions of many functions (like
-"this function has to be called with lock 'x' being held").
-
-Unfortunately, we cannot get rid of all threads within a device-driver server:
-there is at least one thread waiting for interrupts and one for the entry point
-that waits for client session requests. In order to synchronize these threads,
-we use Genode's signalling framework. So when, for example, the IRQ thread
-receives an interrupt, it will send a signal. The Linux driver thread will at
-certain points wait for these signals (e.g., in functions like
-'schedule_timeout' or 'wait_for_completion') and execute the right code
-depending on the kind of signal delivered or, more precisely, the signal
-context. For this to work, we use a class called 'Signal_dispatcher'
-(_base/include/base/signal.h_), which inherits from 'Signal_context'. More than
-one dispatcher can be bound to a signal receiver, while each dispatcher might do
-different work, like calling the Linux interrupt handler in the IRQ example.
-
-
diff --git a/doc/release_notes/24-11.txt b/doc/release_notes/24-11.txt
new file mode 100644
index 0000000000..6a9b77d72b
--- /dev/null
+++ b/doc/release_notes/24-11.txt
@@ -0,0 +1,579 @@
+
+
+ ===============================================
+ Release notes for the Genode OS Framework 24.11
+ ===============================================
+
+ Genode Labs
+
+
+
+During the discussion of this year's road-map roughly one year ago, the
+usability concerns of Sculpt OS stood out.
+Besides suspend/resume, which we addressed
+[https://genode.org/documentation/release-notes/24.05#Suspend_resume_infrastructure - earlier this year],
+multi-monitor support ranked highest on the list of desires. We are more than
+happy to wrap up the year with the realization of this feature.
+Section [Multi-monitor support] presents the many facets and outcomes of this
+intensive line of work.
+
+Over the course of 2024, our Goa SDK has received tremendous advances, which
+make the development, porting, debugging, and publishing of software for
+Genode - and Sculpt OS in particular - a breeze.
+So far however, the learning curve for getting started remained rather steep
+because the underlying concepts largely deviate from the beaten tracks known
+from traditional operating systems. Even though there is plenty of
+documentation, it is rather scattered and overwhelming.
+All the more happy we are to announce that the current release is accompanied
+by a new book "Genode Applications" that can be downloaded for free and
+provides a smooth gateway for application developers into the world of Genode
+(Section [New "Genode Applications" book]).
+
+Regarding hardware-related technical topics, the release focuses on the
+ARM-based i.MX SoC family, taking our ambition to run Sculpt OS on the MNT
+Pocket Reform laptop as guiding theme. Section [Device drivers and platforms]
+covers our driver and platform-related work in detail.
+
+
+New "Genode Applications" book
+##############################
+
+Complementary to our _Genode Foundations_ and _Genode Platforms_ books, we have
+been working on a new book that concentrates on application development.
+_Genode Applications_ centers on the Goa SDK that we introduced with
+[https://genode.org/documentation/release-notes/19.11#New_tooling_for_bridging_existing_build_systems_with_Genode - Genode 19.11]
+and which has seen significant improvements over the past year
+([https://genode.org/documentation/release-notes/23.08#Goa_tool_gets_usability_improvements_and_depot-index_publishing_support - 23.08],
+[https://genode.org/documentation/release-notes/24.02#Sculpt_OS_as_remote_test_target_for_the_Goa_SDK - 24.02],
+[https://genode.org/documentation/release-notes/24.08#Goa_SDK - 24.08]).
+
+
+The book intends to provide a beginner-friendly starting point for application
+development and porting for Genode and Sculpt OS in particular. It starts off
+with a getting-started tutorial for the Goa tool, and further recapitulates
+Genode's architecture and a subset of its libraries, components, and
+conventions such as the C runtime, VFS, NIC router, and package management.
+With these essentials in place, the book is topped off with instructions for
+application debugging and a collection of advanced tutorials.
+
+Aligned with the release of Sculpt 24.10, we updated the Goa tool with the
+corresponding depot archive versions. Furthermore, the Sculpt-integrated and
+updated _Goa testbed_ preset is now prepared for remote debugging.
+
+
+:First revision of the Genode Applications document:
+
+ [https://genode.org/documentation/genode-applications-24-11.pdf]
+
+
+Multi-monitor support
+#####################
+
+Among the users of the Genode-based Sculpt OS, the flexible use of multiple
+monitors was certainly the most longed-after desire raised during our public
+road-map discussion roughly one year ago. We quickly identified that a
+profound solution cannot focus on piecemeal extensions of individual
+components but must embrace an architectural step forward. The step turned
+out to be quite a leap.
+In fact, besides reconsidering the roles of display and input drivers in
+[https://genode.org/documentation/release-notes/20.08#The_GUI_stack__restacked - version 20.08],
+the GUI stack has remained largely unchanged since
+[https://genode.org/documentation/release-notes/14.08#New_GUI_architecture - version 14.08].
+So we took our multi-monitor ambitions as welcome opportunity to incorporate
+our experiences of the past ten years into a new design for the next ten
+years.
+
+
+Tickless GUI server and display drivers
+=======================================
+
+Up to now, the nitpicker GUI server as well as the display drivers used to
+operate in a strictly periodic fashion. At a rate of 10 milliseconds, the GUI
+server would route input events to the designated GUI clients and flush
+graphical changes of the GUI clients to the display driver.
+This simple mode of execution has benefits such as the natural ability of
+batching input events and the robustness of the GUI server against overload
+situations. However, in Sculpt OS, we observed that the fixed rate induces
+little but constant load into an otherwise idle system, rendering
+energy-saving regimes of modern CPUs less effective than they could be.
+This problem would become amplified in the presence of multiple output channels
+operating at independent frame rates. Moreover, with panel self-refresh
+support of recent Intel graphics devices, the notion of a fixed continuous
+frame rate has become antiquated.
+
+Hence, it was time to move to a tickless GUI-server design where the GUI
+server acts as a mere broker between events triggered by applications (e.g.,
+pushing pixels) and drivers (e.g., occurrence of input, scanout to a display).
+Depending on the behavior of its clients (GUI applications and drivers alike),
+the GUI server notifies the affected parties about events of interest but
+does not assert an active role.
+
+For example, if a display driver does not observe any changed pixels for 50
+ms, it goes to sleep. Once an application updates pixels affecting a display,
+the GUI server wakes up the respective display driver, which then polls the
+pixels at a driver-defined frame rate until observing when the pixels remain
+static for 50 ms. Vice versa, the point in time when a display driver requests
+updated pixels is reflected as a sync event to GUI applications visible on
+that display, enabling such applications to synchronize their output to the
+frame rate of the driver. The GUI server thereby asserts the role of steering
+the sleep cycles of drivers and applications. Unless anything happens on
+screen, neither the GUI server nor the display driver is active. When two
+applications are visible on distinct monitors, the change of one application
+does not induce any activity regarding the unrelated display. This allows for
+scaling up the number of monitors without increasing the idle CPU load.
+
+This change implies that the former practice of using sync signals as a
+time source for application-side animation timing is no longer viable.
+Sync signals occur only when a driver is active after all. GUI applications
+may best use sync signals for redraw scheduling but need to use a real time
+source as basis for calculating the progress of animations.
+
+
+Paving the ground for tearing-free motion
+=========================================
+
+Tearing artifacts during animations are rightfully frowned upon. It goes
+without saying that we strive to attain tearing-free motion in Genode. Two
+preconditions must be met. First, the GUI server must be able to get hold
+of a _consistent_ picture at any time. Second, the flushing of the picture
+to the display hardware must be timed with _vsync_ of the physical display.
+
+Up to now, the GUI stack was unable to meet the first precondition by design.
+If the picture is composed of multiple clients, the visual representation of
+each client must be present in a consistent state.
+The textures used as input of the compositing of the final picture are buffers
+shared between server and client. Even though clients traditionally employ
+double-buffering to hide intermediate drawing states, the final back-to-front
+copy into the shared buffer violated the consistency of the buffer during
+the client-side copy operation - when looking at the buffer from the server
+side. To overcome this deficiency, we have now equipped the GUI server with
+atomic blitting and panning operations, which support atomic updates in two
+fashions.
+
+_Atomic back-to-front blitting_ allows GUI clients that partially update their
+user interface - like regular application dialogs - to implement double
+buffering by placing both the back buffer and front buffer within the GUI
+session's shared buffer and configuring a view that shows only the front
+buffer. The new blit operation ('Framebuffer::Session::blit') allows the client
+to atomically flush pixels from the back buffer to the front buffer.
+
+_Atomic buffer flipping_ allows GUI clients that always update all pixels -
+like a media player or a game - to leverage panning
+('Framebuffer::Session::panning') to atomically redirect the displayed pixels to
+a different portion of the GUI session's shared buffer without any copy
+operation needed. The buffer contains two frames, the displayed one and the
+next one. Once the next frame is complete, the client changes the panning
+position to the portion containing the next frame.
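The client-side bookkeeping for such buffer flipping boils down to toggling the panning offset. The sketch below assumes two frames stacked vertically in the shared buffer; the types and names are illustrative, not the actual 'Framebuffer' session API:

```c
/* illustrative only, not Genode's Framebuffer API: with two frames
 * stacked vertically, flipping toggles the panning y-offset between
 * 0 and the frame height - no pixel data is copied */
struct panning_pos { int x, y; };

struct panning_pos flip(struct panning_pos cur, int frame_h)
{
	struct panning_pos next = { cur.x, cur.y ? 0 : frame_h };
	return next;
}
```

After drawing the next frame into the currently hidden half, the client submits the flipped panning position to make it visible atomically.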
+
+Almost all GUI clients of the Genode OS framework have been updated to use
+these new facilities.
+
+The vsync timing as the second precondition for tearing-free motion lies in
+the hands of the display driver, which can in principle capture pixel updates
+from the GUI server driven by vsync interrupts. In the presence of multiple
+monitors with different vsync rates, a GUI client may deliberately select
+a synchronization source ('Framebuffer::Session::sync_source'). That said,
+even though the interfaces are in place, vsync timing is not yet provided by
+the current display drivers.
+
+
+Mirrored and panoramic monitor setups
+=====================================
+
+A display driver interacts with the nitpicker GUI server as a capture client.
+One can think of a display driver as a screen-capturing application.
+Up until now, the nitpicker GUI server handed out the same picture to each
+capture client. So each client obtained a mirror of the same picture. By
+subjecting each client to a policy defining a window within a larger panorama,
+a driver creating one capture session per monitor becomes able to display the
+larger panorama spanning the connected displays. The assignment of capture
+clients to different parts of the panorama follows Genode's established
+label-based policy-selection approach as explained in the
+[https://github.com/genodelabs/genode/blob/master/repos/os/src/server/nitpicker/README - documentation]
+of the nitpicker GUI server.
+
+Special care has been taken to ensure that the pointer is always visible. It
+cannot be moved to any area that is not captured. Should the only capture
+client displaying the pointer disappear, the pointer is warped to the center
+of (any) remaining capture client.
+
+A mirrored monitor setup can in principle be attained by placing multiple
+capture clients at the same part of nitpicker's panorama. However, there is
+a better way: Our Intel display-driver component supports both discrete and
+merged output channels. The driver's configuration subsumes all connectors
+listed within a '<merge>' node as a single encompassing capture session at the
+GUI server. The mirroring of the picture is done by the hardware. Each
+connector declared outside the '<merge>' node is handled as a discrete capture
+session labeled after the corresponding connector. The driver's
+[https://github.com/genodelabs/genode/blob/master/repos/pc/src/driver/framebuffer/intel/pc/README - documentation]
+describes the configuration in detail.
+
+
+Sculpt OS integration
+=====================
+
+All the changes described above are featured in the recently released
+Sculpt OS version 24.10, which gives the user the ability to attain mirrored
+or panoramic monitor setups or a combination thereof by the means of manual
+configuration or by using interactive controls.
+
+[image sculpt_24_10_intel_fb]
+
+You can find the multi-monitor use of Sculpt OS covered by the
+[https://genode.org/documentation/articles/sculpt-24-10#Multi-monitor_support - documentation].
+
+
+Revised inter-component interfaces
+==================================
+
+Strict resource partitioning between GUI clients
+------------------------------------------------
+
+Even though Genode gives server components the opportunity to strictly operate
+on client-provided resources only, the two prominent GUI servers - nitpicker
+and the window manager (wm) - did not leverage these mechanisms to their full
+extent. In particular, the wm eschewed strict resource accounting by paying out
+of its own pocket. This deficiency has been rectified by the current release,
+thereby making the GUI stack much more robust against potential resource
+denial-of-service issues. Both the nitpicker GUI server and the window manager
+now account all allocations to the resource budgets of the respective clients.
+This change has the effect that GUI clients must now be equipped with the
+actual cap and RAM quotas needed.
+
+Note that not all central parts of the GUI stack operate on client-provided
+resources. In particular, a window decorator is a mere client of the window
+manager despite playing a role transcending multiple applications. As the
+cost of the decorations depends on the number of applications present
+on screen, the resources of the decorator must be dimensioned with a sensible
+upper bound. Fortunately, however, as the decorator is a plain client of the
+window manager, it can be restarted, replaced, and upgraded without affecting
+any application.
+
+
+Structured mode information for applications
+--------------------------------------------
+
+Up to now, GUI clients were able to request mode information via a plain
+RPC call that returned the dimensions and color depth of the display.
+Multi-monitor setups call for more flexibility, which prompted us to
+replace the mode information with XML-structured information delivered as
+an 'info' dataspace. This is in line with how meta information is handled
+in other modern session interfaces like the platform or USB sessions.
+The new representation gives us room to annotate information that could
+previously not be exposed to GUI clients, in particular:
+
+* The total panorama dimensions.
+* Captured areas within the panorama, which can be used by multi-monitor
+  aware GUI clients as guidance for placing GUI views.
+* DPI information carried by 'width_mm' and 'height_mm' attributes.
+ This information is defined by the display driver and passed to the GUI
+ server as 'Capture::Connection::buffer' argument.
+* The closed state of a window interactively closed by the user.
+
+Note that the window manager (wm) virtualizes the information of the nitpicker
+GUI server. Instead of exposing nitpicker's panorama to its clients, the wm
+reports the logical screen hosting the client's window as panorama and the
+window size as a single captured rectangle within the panorama.
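+
+As an illustration, the structured mode info of a GUI client might look like
+the following sketch of two captured monitors within one panorama. The node
+and attribute names are merely illustrative of the structure described above;
+the authoritative definition resides in the GUI-session interface:
+
+! <panorama width="3840" height="1080">
+!   <capture name="eDP-1" xpos="0" ypos="0" width="1920" height="1080"
+!            width_mm="290" height_mm="170"/>
+!   <capture name="HDMI-A-1" xpos="1920" ypos="0" width="1920" height="1080"/>
+! </panorama>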
+
+
+Mouse grabbing
+--------------
+
+Since the inception of the nitpicker GUI server, its clients observed absolute
+pointer positions only. The GUI server unconditionally translated relative
+mouse-motion events to absolute motion events.
+To accommodate applications like games or a VM emulating a relative pointer
+device, we have now extended the GUI server(s) with the ability to selectively
+expose relative motion events while locking the absolute pointer position.
+This is usually called pointer grabbing. It goes without saying that the user
+must always retain a way to forcefully reassert control over the pointer
+without the cooperation of the application.
+
+The solution is the enhancement of the 'Input::Session' interface by a new RPC
+function that allows a client to request exclusive input. The nitpicker GUI
+server grants this request if the application owns the focus. In scenarios
+using the window manager (wm), the focus is always defined by the wm, which
+happens to intercept all input sessions of GUI applications. Hence, the wm is
+in the natural position of arbitrating the grabbing/ungrabbing of the pointer.
+For each GUI client, the wm records whether the client is interested in
+exclusive input but does not forward this request to nitpicker. Only when a
+focused GUI client has requested exclusive input does the wm enable
+exclusive input for this client at nitpicker upon observing a mouse click on
+the application window. Whenever the user presses the global wm key (super),
+the wm forcefully releases the exclusive input at nitpicker until the user
+clicks into the client window the next time.
+
+Furthermore, an application may enable exclusive input transiently during a
+key sequence, e.g., when dragging the mouse while holding the mouse button.
+Transient exclusive input is revoked as soon as the last button/key is
+released. This in principle allows GUI controls like knobs to lock the
+pointer position while the user adjusts the value by moving the mouse with
+the button held, so that the pointer retains its original position at the
+knob.
+
+While operating in exclusive input mode, there is no useful notion of an
+absolute pointer position at the nitpicker GUI server. Hence, nitpicker hides
+GUI domains that use the pointer position as coordinate origin. Thereby, the
+mouse cursor automatically disappears while the pointer is grabbed.
+
+
+Current state and ongoing work
+==============================
+
+All the advances described above are in full effect in the recently released
+version 24.10 of [https://genode.org/download/sculpt - Sculpt OS]. All
+components hosted in Genode's main and world repositories have been updated
+accordingly, ranging from Genode-specific components like the widget toolkit
+used by the administrative user interface of Sculpt OS and the window
+decorators, through Qt5 and Qt6, to SDL and SDL2.
+
+[image multiple_monitors]
+
+Current work is underway to implement multi-monitor window management and to
+make multiple monitors seamlessly available to guest OSes hosted in VirtualBox.
+Furthermore, the Intel display driver is currently getting equipped with the
+ability to use vsync interrupts for driving the interaction with the GUI
+server, taking the final step to attain tearing-free motion.
+
+
+Device drivers and platforms
+############################
+
+Linux device-driver environment (DDE)
+=====================================
+
+With our
+[https://genode.org/documentation/release-notes/24.08#Linux_device-driver_environment__DDE_ - recent]
+update of the DDE Linux kernel to version 6.6 for PC platforms and as a
+prerequisite to support the MNT Pocket Reform, we have adapted all drivers for
+the i.MX5/6/7/8 platforms to Linux kernel version 6.6.47. The list of drivers
+includes wifi, NIC, display, GPU, USB, and SD-card.
+
+
+MNT Pocket Reform
+~~~~~~~~~~~~~~~~~
+
+The [https://shop.mntre.com/products/mnt-pocket-reform - MNT Pocket Reform] is
+a mini laptop by MNT aiming to be modular, upgradable, and repairable while
+being assembled entirely from open-source hardware. Being modular implies
+that a range of CPU modules is available for the MNT Pocket. Some of these
+chips, like the Rockchip-based modules, are not officially supported by
+Genode yet. But an i.MX8MP-based module is available, which fits nicely into
+Genode's i.MX infrastructure.
+
+Genode already supports the MNT Reform 2 i.MX8MQ based
+[https://genodians.org/skalk/2020-06-29-mnt-reform - laptop]. So an update from
+MQ to MP doesn't sound like a big issue because only one letter changed,
+right? It turns out that there are more changes to the platform than mere
+adjustments of I/O resources and interrupt numbers. Additionally, the MNT
+Reform team offers quite a large patch set for each supported Linux kernel
+version. Luckily there is
+[https://source.mnt.re/reform/reform-debian-packages/-/tree/main/linux/patches6.6?ref_type=heads - one]
+for our just updated Linux 6.6 kernel. With this patch set, we were able to
+produce a Linux source tree (imx_linux) that we now take as the basis for driver
+development on Genode. Note that these Linux kernel sources are shared by all
+supported i.MX platforms. Of course, additional patch series were necessary to
+include device-tree sources from other vendor kernels, for instance from
+Compulab.
+
+With the development environment in place and after putting in a lot of effort,
+we ultimately achieved initial Genode support for the MNT Pocket Reform with
+Genode 24.11.
+
+On the device-driver side of things, we did not have to port lots of new
+drivers but were able to extend drivers already available for the i.MX8MQ
+platform. In particular, these are the drivers for the wired network card,
+USB host controller, display, and SD card.
+
+For the wireless network device that is found on the i.MX8MP SoM in the MNT
+Pocket Reform, we needed to port a new driver. It has a Qualcomm QCA9377
+chipset and is attached via SDIO. Unfortunately, the available _ath10k_ driver
+in the vanilla kernel does not work properly with such a device and therefore
+is also not used in the regular Linux kernel for the MNT Pocket Reform. A
+slightly adapted external QCACLD2 reference driver is used instead. So we
+followed suit by incorporating this particular driver in our _imx_linux_
+source tree as well.
+
+[image sculpt_mnt_pocket]
+ Sculpt OS running on the MNT Pocket Reform
+
+Being the initial enablement, there are still some limitations.
+For example, the display of the MNT Pocket is physically
+[https://mntre.com/documentation/pocket-reform-handbook.pdf - rotated] by 90
+degrees. So, we had to find a way to accommodate that. Unfortunately,
+there seems to be no hardware support other than using the GPU to perform
+a fast rotation. With GPU support still missing on this system, we had to
+resort to performing the rotation in software on the CPU, which is obviously
+far from optimal.
+Those early inefficiencies notwithstanding, Sculpt OS has become able to run
+on the MNT Pocket Reform. We will provide a preview image that exercises the
+available features soon.
+
+
+Platform driver for i.MX 8M Plus
+================================
+
+While enabling support for the MNT Pocket Reform (Section [MNT Pocket Reform]),
+it was necessary to adjust the i.MX8MP specific platform driver, which was
+originally introduced in the previous
+[https://genode.org/documentation/release-notes/24.08#Improvements_for_NXP_s_i.MX_family - release 24.08]
+to drive the Compulab i.MX 8M Plus IOT Gateway.
+
+Some of the I/O pin configurations necessary to set up the SoC properly are
+statically compiled into this driver because they do not change at runtime.
+However, the pin configuration is specific to the actual board. Therefore, the
+i.MX8MP platform driver now needs to distinguish between different boards (IOT
+Gateway and MNT Pocket) by evaluating the 'platform_info' ROM provided by
+core.
+
+Moreover, while working on different drivers, we detected a few missing clocks
+that were added to the platform driver. It turned out that some clocks, which
+we initially turned off to save energy, have to be enabled to ensure the
+liveness of the ARM Trusted Firmware (ATF) and thereby the platform. Also,
+we had to adapt the communication between ATF and our platform driver to
+control power domains. The first version of the i.MX8MP platform driver shared
+the ATF power-domain protocol with the i.MX8MQ version. However, the
+power-domain enumerations of the two firmware versions differ as well, which
+we adapted accordingly.
+
+Finally, the watchdog hardware is now served by the platform driver in a
+recurrent way. Originally our driver used the watchdog only to implement reset
+functionality. But in the case of the MNT Pocket Reform, the watchdog hardware
+is already armed by the bootloader. Therefore, it needs to be served in time to
+prevent the system from rebooting. As a consequence, the platform driver is
+mandatory on this platform if the system needs to run longer than a minute.
+
+
+Wifi management rework
+======================
+
+Our management interface in the wifi driver served us well over the years
+and concealed the underlying complexity of the wireless stack. At the same
+time, it gained some complexity itself to satisfy a variety of use cases.
+Thus, we took the past release cycle as an opportunity to rework the
+management layer, reducing its complexity by streamlining the interaction
+between its various parts - the manager layer itself, the 'wpa_supplicant',
+and the device driver - in order to provide a sound foundation for future
+adaptations.
+Included is also an update of the 'wpa_supplicant' to version 2.11.
+
+The following segments detail the changes made to the configuration options as
+they were altered quite a bit to no longer mix different tasks (e.g. joining a
+network and scanning for hidden networks) while removing obsolete options.
+
+At the top-level '<wifi_config>' node, the following alterations were made:
+
+* The 'log_level' attribute was added and configures the supplicant's
+ verbosity. Valid values correspond to levels used by the supplicant
+ and are as follows: 'excessive', 'msgdump', 'debug', 'info', 'warning',
+ and 'error'. The default value is 'error' and configures the least
+ amount of verbosity. This option was introduced to ease the investigation
+ of connectivity issues.
+
+* The 'bgscan' attribute may be used to configure the way the
+  supplicant performs background scanning to steer or rather optimize
+  roaming decisions within the same network. The default value is set
+  to 'simple:30:-70:600', which instructs the supplicant's simple bgscan
+  module to scan every 30 seconds while the signal is below -70 dBm and
+  every 600 seconds otherwise. The attribute is forwarded unmodified to the
+  WPA supplicant and thus follows the syntax supported by the supplicant
+  implementation. It can be disabled by specifying an empty value, e.g.
+  'bgscan=""'.
+
+* The 'connected_scan_interval' attribute was removed as this functionality
+ is now covered by background scanning.
+
+* The 'verbose_state' attribute was removed altogether and similar
+ functionality is now covered by the 'verbose' attribute.
+
+The network management received the following changes:
+
+* Every configured network, denoted by a '<network>' node, is now implicitly
+  considered an option for joining. The 'auto_connect' attribute was
+  removed and a '<network>' node must be renamed or removed to deactivate
+  automatic connection establishment.
+
+* The intent to scan for a hidden network is now managed by the newly
+  introduced '<explicit_scan>' node that, like the '<network>' node, has
+  an 'ssid' attribute. If the specified SSID is valid, it is incorporated
+  into the scan request to actively probe for this network. As the node
+  requests explicit scanning only, a corresponding '<network>' node is
+  required to actually connect to the hidden network.
+  The 'explicit_scan' attribute of the '<network>' node has been removed.
+
+The following exemplary configuration shows how to configure the driver
+for attempting to join two different networks where one of them is hidden.
+The initial scan interval is set to 10 seconds and the signal quality will be
+updated every 30 seconds while connected to a network.
+
+! <wifi_config scan_interval="10" update_quality_interval="30">
+!   <network ssid="Home" protection="WPA2" passphrase="...secret..."/>
+!   <explicit_scan ssid="Hidden Home"/>
+!   <network ssid="Hidden Home" protection="WPA2" passphrase="...secret..."/>
+! </wifi_config>
+
+For more information, please consult the driver's
+[https://github.com/genodelabs/genode/blob/master/repos/dde_linux/src/driver/wifi/README - documentation]
+that now features a best-practices section explaining how the driver is best
+operated, and highlights the difference between a managed (as used in
+Sculpt OS) and a user-generated configuration.
+
+
+Audio driver updated to OpenBSD 7.6
+===================================
+
+With this release, we updated our OpenBSD-based audio driver to a more recent
+revision that corresponds to version 7.6. It supports newer devices, e.g. Alder
+Lake-N, and includes a fix for using message-signaled interrupts (MSI) with
+HDA devices as found in AMD-based systems.
+
+
+AVX and hardware-based AES in virtual machines
+==============================================
+
+The current release adds support for requesting and transferring the AVX FPU
+state via Genode's VM-session interface. With this prerequisite fulfilled, we
+enabled the announcement of the AVX feature to guest VMs in our port of
+VirtualBox 6.
+
+Additionally, we enabled the announcement of AES and RDRAND CPU features to
+guest VMs to further improve the utilization of the hardware.
+
+
+Build system and tools
+######################
+
+Extended depot-tool safeguards
+==============================
+
+When using the run tool's '--depot-auto-update' feature while switching
+between different git topic branches with committed recipe hashes, a binary
+archive present in the depot may accidentally not match its ingredients
+because the depot/build tool's 'REBUILD=' mode - as used by the depot
+auto-update mechanism - merely looks at the archive versions. This situation
+is arguably rare. But when it occurs, its reach and effects are hard to
+predict. To rule out this corner case early, the depot/build tool has now been
+extended by recording the hashes of the ingredients of binary archives. When
+skipping a rebuild because the desired version presumably already exists as a
+binary archive, the recorded hashes are compared to the current state of the
+ingredients (src and api archives). Thereby inconsistencies are promptly
+reported to the user.
+
+Users of the depot tool will notice .hash files appearing alongside src and
+api archives. Those files contain the hash value of the content of the
+respective archive. Each binary archive built is now also accompanied by
+a .hash file, which contains a list of hash values of the ingredients that went
+into the binary archive. Thanks to these .hash files, the consistency between
+binaries and their ingredients can be checked quickly.
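+
+For example, a depot may now contain entries like the following, where each
+.hash file accompanies the archive of the same version (versions abbreviated
+as placeholders):
+
+! depot/genodelabs/src/nitpicker/<version>/
+! depot/genodelabs/src/nitpicker/<version>.hash
+! depot/genodelabs/bin/x86_64/nitpicker/<version>/
+! depot/genodelabs/bin/x86_64/nitpicker/<version>.hash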
+
+_As a note of caution, when switching to Genode 24.11 with an existing depot,_
+_one will possibly need to remove existing depot archives (as listed by the_
+_diagnostic messages) because the existing archives are not accompanied by_
+_.hash files yet._
diff --git a/repos/base-fiasco/recipes/src/base-fiasco/hash b/repos/base-fiasco/recipes/src/base-fiasco/hash
index f8029e84b9..7b847c95e6 100644
--- a/repos/base-fiasco/recipes/src/base-fiasco/hash
+++ b/repos/base-fiasco/recipes/src/base-fiasco/hash
@@ -1 +1 @@
-2024-08-28 8f1db0e604a283f5d3aafea61d38d6852ee91911
+2024-12-10 408b474f632eefaaa19db35812a9aa94a48e6bdb
diff --git a/repos/base-fiasco/src/core/include/platform_thread.h b/repos/base-fiasco/src/core/include/platform_thread.h
index 0d753a7fe0..aabf222984 100644
--- a/repos/base-fiasco/src/core/include/platform_thread.h
+++ b/repos/base-fiasco/src/core/include/platform_thread.h
@@ -61,8 +61,9 @@ class Core::Platform_thread : Interface
/**
* Constructor
*/
- Platform_thread(Platform_pd &pd, size_t, const char *name,
- unsigned, Affinity::Location, addr_t)
+ Platform_thread(Platform_pd &pd, Rpc_entrypoint &, Ram_allocator &,
+ Region_map &, size_t, const char *name, unsigned,
+ Affinity::Location, addr_t)
: _name(name), _pd(pd) { }
/**
diff --git a/repos/base-fiasco/src/core/io_mem_session_support.cc b/repos/base-fiasco/src/core/io_mem_session_support.cc
index 55bcfab142..e55aa04ae3 100644
--- a/repos/base-fiasco/src/core/io_mem_session_support.cc
+++ b/repos/base-fiasco/src/core/io_mem_session_support.cc
@@ -38,8 +38,11 @@ static inline bool can_use_super_page(addr_t, size_t)
}
-addr_t Io_mem_session_component::_map_local(addr_t phys_base, size_t size)
+Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const phys_base,
+ size_t const size_in)
{
+ size_t const size = size_in;
+
auto map_io_region = [] (addr_t phys_base, addr_t local_base, size_t size)
{
using namespace Fiasco;
@@ -91,14 +94,16 @@ addr_t Io_mem_session_component::_map_local(addr_t phys_base, size_t size)
size_t align = (size >= get_super_page_size()) ? get_super_page_size_log2()
: get_page_size_log2();
- return platform().region_alloc().alloc_aligned(size, align).convert(
+ return platform().region_alloc().alloc_aligned(size, align).convert(
[&] (void *ptr) {
addr_t const core_local_base = (addr_t)ptr;
map_io_region(phys_base, core_local_base, size);
- return core_local_base; },
+ return Map_local_result { .core_local_addr = core_local_base, .success = true };
+ },
- [&] (Range_allocator::Alloc_error) -> addr_t {
+ [&] (Range_allocator::Alloc_error) {
error("core-local mapping of memory-mapped I/O range failed");
- return 0; });
+ return Map_local_result();
+ });
}
diff --git a/repos/base-fiasco/src/core/pager.cc b/repos/base-fiasco/src/core/pager.cc
index e6c1d9257f..cba6f257af 100644
--- a/repos/base-fiasco/src/core/pager.cc
+++ b/repos/base-fiasco/src/core/pager.cc
@@ -103,3 +103,6 @@ Untyped_capability Pager_entrypoint::_pager_object_cap(unsigned long badge)
{
return Capability_space::import(native_thread().l4id, Rpc_obj_key(badge));
}
+
+
+void Core::init_page_fault_handling(Rpc_entrypoint &) { }
diff --git a/repos/base-fiasco/src/core/thread_start.cc b/repos/base-fiasco/src/core/thread_start.cc
index 48871704ac..53b4302d94 100644
--- a/repos/base-fiasco/src/core/thread_start.cc
+++ b/repos/base-fiasco/src/core/thread_start.cc
@@ -20,7 +20,6 @@
/* core includes */
#include
-#include
using namespace Core;
diff --git a/repos/base-foc/include/foc/thread_state.h b/repos/base-foc/include/foc/thread_state.h
index 19d25d1723..b1b7a7be25 100644
--- a/repos/base-foc/include/foc/thread_state.h
+++ b/repos/base-foc/include/foc/thread_state.h
@@ -26,7 +26,7 @@ namespace Genode { struct Foc_thread_state; }
struct Genode::Foc_thread_state : Thread_state
{
Foc::l4_cap_idx_t kcap { Foc::L4_INVALID_CAP }; /* thread's gate cap in its PD */
- uint16_t id { }; /* ID of gate capability */
+ uint32_t id { }; /* ID of gate capability */
addr_t utcb { }; /* thread's UTCB in its PD */
};
diff --git a/repos/base-foc/recipes/src/base-foc-imx6q_sabrelite/hash b/repos/base-foc/recipes/src/base-foc-imx6q_sabrelite/hash
index 9c6fbc231c..ef89a5e1a5 100644
--- a/repos/base-foc/recipes/src/base-foc-imx6q_sabrelite/hash
+++ b/repos/base-foc/recipes/src/base-foc-imx6q_sabrelite/hash
@@ -1 +1 @@
-2024-08-28 deb70ebec813a19ba26a28cd94fa7d25bbe52e78
+2024-12-10 4247239f4d3ce9a840be368ac9e054e8064c01c6
diff --git a/repos/base-foc/recipes/src/base-foc-imx7d_sabre/hash b/repos/base-foc/recipes/src/base-foc-imx7d_sabre/hash
index 121f52746c..fba642577e 100644
--- a/repos/base-foc/recipes/src/base-foc-imx7d_sabre/hash
+++ b/repos/base-foc/recipes/src/base-foc-imx7d_sabre/hash
@@ -1 +1 @@
-2024-08-28 a4ae12d703c38248ac22905163479000020e0bb0
+2024-12-10 39609d3553422b8c7c6acff2db845c67c5f8912b
diff --git a/repos/base-foc/recipes/src/base-foc-pbxa9/hash b/repos/base-foc/recipes/src/base-foc-pbxa9/hash
index cb9030ecd7..3e12c37bf0 100644
--- a/repos/base-foc/recipes/src/base-foc-pbxa9/hash
+++ b/repos/base-foc/recipes/src/base-foc-pbxa9/hash
@@ -1 +1 @@
-2024-08-28 4c4d4d5d96bc345947e90c42559e45fec4dcc4c0
+2024-12-10 7867db59531dc9086e76b74800125ee61ccc310e
diff --git a/repos/base-foc/recipes/src/base-foc-pc/hash b/repos/base-foc/recipes/src/base-foc-pc/hash
index 837cb151a0..1cbe570570 100644
--- a/repos/base-foc/recipes/src/base-foc-pc/hash
+++ b/repos/base-foc/recipes/src/base-foc-pc/hash
@@ -1 +1 @@
-2024-08-28 b0160be55c422f860753dbd375f04ff8f7ffc7e9
+2024-12-10 3fc7c1b2cae2b9af835c97bf384b10411ec9c511
diff --git a/repos/base-foc/recipes/src/base-foc-rpi3/hash b/repos/base-foc/recipes/src/base-foc-rpi3/hash
index 0843a59f28..ae24ab1091 100644
--- a/repos/base-foc/recipes/src/base-foc-rpi3/hash
+++ b/repos/base-foc/recipes/src/base-foc-rpi3/hash
@@ -1 +1 @@
-2024-08-28 3e92e9cf1ec41d5de0bfa754ff48c63476e60d67
+2024-12-10 68ee5bc5640e1d32c33f46072256d5b1c71bef9b
diff --git a/repos/base-foc/src/core/include/cap_id_alloc.h b/repos/base-foc/src/core/include/cap_id_alloc.h
index 39c34ca571..6cbeaed546 100644
--- a/repos/base-foc/src/core/include/cap_id_alloc.h
+++ b/repos/base-foc/src/core/include/cap_id_alloc.h
@@ -30,17 +30,15 @@ class Core::Cap_id_allocator
{
public:
- using id_t = uint16_t;
-
- enum { ID_MASK = 0xffff };
+ using id_t = unsigned;
private:
enum {
- CAP_ID_RANGE = ~0UL,
- CAP_ID_MASK = ~3UL,
- CAP_ID_NUM_MAX = CAP_ID_MASK >> 2,
- CAP_ID_OFFSET = 1 << 2
+ CAP_ID_OFFSET = 1 << 2,
+ CAP_ID_MASK = CAP_ID_OFFSET - 1,
+ CAP_ID_RANGE = 1u << 28,
+ ID_MASK = CAP_ID_RANGE - 1,
};
Synced_range_allocator _id_alloc;
diff --git a/repos/base-foc/src/core/include/platform_thread.h b/repos/base-foc/src/core/include/platform_thread.h
index 4d2d94dfd6..e2b2d681e5 100644
--- a/repos/base-foc/src/core/include/platform_thread.h
+++ b/repos/base-foc/src/core/include/platform_thread.h
@@ -75,8 +75,8 @@ class Core::Platform_thread : Interface
/**
* Constructor for non-core threads
*/
- Platform_thread(Platform_pd &, size_t, const char *name, unsigned priority,
- Affinity::Location, addr_t);
+ Platform_thread(Platform_pd &, Rpc_entrypoint &, Ram_allocator &, Region_map &,
+ size_t, const char *name, unsigned priority, Affinity::Location, addr_t);
/**
* Constructor for core main-thread
diff --git a/repos/base-foc/src/core/include/vm_session_component.h b/repos/base-foc/src/core/include/vm_session_component.h
index 17c3a1f01e..15c8de65c8 100644
--- a/repos/base-foc/src/core/include/vm_session_component.h
+++ b/repos/base-foc/src/core/include/vm_session_component.h
@@ -125,7 +125,7 @@ class Core::Vm_session_component
** Vm session interface **
**************************/
- Capability create_vcpu(Thread_capability);
+ Capability create_vcpu(Thread_capability) override;
void attach_pic(addr_t) override { /* unused on Fiasco.OC */ }
void attach(Dataspace_capability, addr_t, Attach_attr) override; /* vm_session_common.cc */
diff --git a/repos/base-foc/src/core/io_mem_session_support.cc b/repos/base-foc/src/core/io_mem_session_support.cc
index b7fdaea88d..778ffb7e69 100644
--- a/repos/base-foc/src/core/io_mem_session_support.cc
+++ b/repos/base-foc/src/core/io_mem_session_support.cc
@@ -6,7 +6,7 @@
*/
/*
- * Copyright (C) 2006-2017 Genode Labs GmbH
+ * Copyright (C) 2006-2024 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
@@ -21,31 +21,37 @@
using namespace Core;
-void Io_mem_session_component::_unmap_local(addr_t base, size_t, addr_t)
+void Io_mem_session_component::_unmap_local(addr_t base, size_t size, addr_t)
{
+ if (!base)
+ return;
+
+ unmap_local(base, size >> 12);
platform().region_alloc().free(reinterpret_cast(base));
}
-addr_t Io_mem_session_component::_map_local(addr_t base, size_t size)
+Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const base,
+ size_t const size)
{
/* align large I/O dataspaces on a super-page boundary within core */
size_t alignment = (size >= get_super_page_size()) ? get_super_page_size_log2()
: get_page_size_log2();
- /* find appropriate region for mapping */
- return platform().region_alloc().alloc_aligned(size, (unsigned)alignment).convert(
+ /* find appropriate region and map it locally */
+ return platform().region_alloc().alloc_aligned(size, (unsigned)alignment).convert(
[&] (void *local_base) {
if (!map_local_io(base, (addr_t)local_base, size >> get_page_size_log2())) {
- error("map_local_io failed");
+ error("map_local_io failed ", Hex_range(base, size));
platform().region_alloc().free(local_base, base);
- return 0UL;
+ return Map_local_result();
}
- return (addr_t)local_base;
+ return Map_local_result { .core_local_addr = addr_t(local_base),
+ .success = true };
},
[&] (Range_allocator::Alloc_error) {
error("allocation of virtual memory for local I/O mapping failed");
- return 0UL; });
+ return Map_local_result(); });
}
diff --git a/repos/base-foc/src/core/pager.cc b/repos/base-foc/src/core/pager.cc
index 891967266d..102df00cb3 100644
--- a/repos/base-foc/src/core/pager.cc
+++ b/repos/base-foc/src/core/pager.cc
@@ -153,3 +153,6 @@ Pager_capability Pager_entrypoint::manage(Pager_object &obj)
},
[&] (Cpu_session::Create_thread_error) { return Pager_capability(); });
}
+
+
+void Core::init_page_fault_handling(Rpc_entrypoint &) { }
diff --git a/repos/base-foc/src/core/platform.cc b/repos/base-foc/src/core/platform.cc
index 15f4ed5779..e56a0135e9 100644
--- a/repos/base-foc/src/core/platform.cc
+++ b/repos/base-foc/src/core/platform.cc
@@ -18,6 +18,7 @@
#include
#include
#include
+#include
#include
/* base-internal includes */
@@ -342,6 +343,76 @@ void Core::Platform::_setup_irq_alloc()
}
+struct Acpi_rsdp : public Genode::Mmio<32>
+{
+ using Mmio<32>::Mmio;
+
+ struct Signature : Register< 0, 64> { };
+ struct Revision : Register<15, 8> { };
+ struct Rsdt : Register<16, 32> { };
+ struct Length : Register<20, 32> { };
+ struct Xsdt : Register<24, 64> { };
+
+ bool valid() const
+ {
+ const char sign[] = "RSD PTR ";
+ return read() == *(Genode::uint64_t *)sign;
+ }
+
+} __attribute__((packed));
+
+
+static void add_acpi_rsdp(auto ®ion_alloc, auto &xml)
+{
+ using namespace Foc;
+ using Foc::L4::Kip::Mem_desc;
+
+ l4_kernel_info_t const &kip = sigma0_map_kip();
+ Mem_desc const * const desc = Mem_desc::first(&kip);
+
+ if (!desc)
+ return;
+
+ for (unsigned i = 0; i < Mem_desc::count(&kip); ++i) {
+ if (desc[i].type() != Mem_desc::Mem_type::Info ||
+ desc[i].sub_type() != Mem_desc::Info_sub_type::Info_acpi_rsdp)
+ continue;
+
+ auto offset = desc[i].start() & 0xffful;
+ auto pages = align_addr(offset + desc[i].size(), 12) >> 12;
+
+ region_alloc.alloc_aligned(pages * 4096, 12).with_result([&] (void *core_local_ptr) {
+
+ if (!map_local_io(desc[i].start(), (addr_t)core_local_ptr, pages))
+ return;
+
+ Byte_range_ptr const ptr((char *)(addr_t(core_local_ptr) + offset),
+ pages * 4096 - offset);
+ auto const rsdp = Acpi_rsdp(ptr);
+
+ if (!rsdp.valid())
+ return;
+
+ xml.node("acpi", [&] {
+ xml.attribute("revision", rsdp.read());
+ if (rsdp.read())
+ xml.attribute("rsdt", String<32>(Hex(rsdp.read())));
+ if (rsdp.read())
+ xml.attribute("xsdt", String<32>(Hex(rsdp.read())));
+ });
+
+ unmap_local(addr_t(core_local_ptr), pages);
+ region_alloc.free(core_local_ptr);
+
+ pages = 0;
+ }, [&] (Range_allocator::Alloc_error) { });
+
+ if (!pages)
+ return;
+ }
+}
+
+
void Core::Platform::_setup_basics()
{
using namespace Foc;
@@ -412,6 +483,10 @@ void Core::Platform::_setup_basics()
/* image is accessible by core */
add_region(Region(img_start, img_end), _core_address_ranges());
+
+ /* requested as I/O memory by the VESA driver and ACPI (rsdp search) */
+ _io_mem_alloc.add_range (0, 0x2000);
+ ram_alloc() .remove_range(0, 0x2000);
}
@@ -517,7 +592,10 @@ Core::Platform::Platform()
xml.node("affinity-space", [&] {
xml.attribute("width", affinity_space().width());
- xml.attribute("height", affinity_space().height()); });
+ xml.attribute("height", affinity_space().height());
+ });
+
+ add_acpi_rsdp(region_alloc(), xml);
});
}
);
diff --git a/repos/base-foc/src/core/platform_thread.cc b/repos/base-foc/src/core/platform_thread.cc
index d24ace8782..378f232671 100644
--- a/repos/base-foc/src/core/platform_thread.cc
+++ b/repos/base-foc/src/core/platform_thread.cc
@@ -18,7 +18,6 @@
/* core includes */
#include
#include
-#include
/* Fiasco.OC includes */
#include
@@ -210,7 +209,7 @@ Foc_thread_state Platform_thread::state()
s = _pager_obj->state.state;
s.kcap = _gate.remote;
- s.id = (uint16_t)_gate.local.local_name();
+ s.id = Cap_index::id_t(_gate.local.local_name());
s.utcb = _utcb;
return s;
@@ -278,7 +277,8 @@ void Platform_thread::_finalize_construction()
}
-Platform_thread::Platform_thread(Platform_pd &pd, size_t, const char *name, unsigned prio,
+Platform_thread::Platform_thread(Platform_pd &pd, Rpc_entrypoint &, Ram_allocator &,
+ Region_map &, size_t, const char *name, unsigned prio,
Affinity::Location location, addr_t)
:
_name(name),
diff --git a/repos/base-foc/src/core/rpc_cap_factory.cc b/repos/base-foc/src/core/rpc_cap_factory.cc
index fa0c2ea5a0..caf7a226a9 100644
--- a/repos/base-foc/src/core/rpc_cap_factory.cc
+++ b/repos/base-foc/src/core/rpc_cap_factory.cc
@@ -38,7 +38,7 @@ using namespace Core;
Cap_index_allocator &Genode::cap_idx_alloc()
{
- static Cap_index_allocator_tpl alloc;
+ static Cap_index_allocator_tpl alloc;
return alloc;
}
@@ -190,7 +190,7 @@ Cap_id_allocator::Cap_id_allocator(Allocator &alloc)
:
_id_alloc(&alloc)
{
- _id_alloc.add_range(CAP_ID_OFFSET, CAP_ID_RANGE);
+ _id_alloc.add_range(CAP_ID_OFFSET, unsigned(CAP_ID_RANGE) - unsigned(CAP_ID_OFFSET));
}
@@ -213,7 +213,7 @@ void Cap_id_allocator::free(id_t id)
Mutex::Guard lock_guard(_mutex);
if (id < CAP_ID_RANGE)
- _id_alloc.free((void*)(id & CAP_ID_MASK), CAP_ID_OFFSET);
+ _id_alloc.free((void*)(addr_t(id & CAP_ID_MASK)), CAP_ID_OFFSET);
}
diff --git a/repos/base-foc/src/core/spec/x86/platform_services.cc b/repos/base-foc/src/core/spec/x86/platform_services.cc
index 8e4888c65a..051c42bc31 100644
--- a/repos/base-foc/src/core/spec/x86/platform_services.cc
+++ b/repos/base-foc/src/core/spec/x86/platform_services.cc
@@ -12,7 +12,6 @@
*/
/* core includes */
-#include
#include
#include
#include
@@ -23,15 +22,16 @@
void Core::platform_add_local_services(Rpc_entrypoint &ep,
Sliced_heap &heap,
Registry &services,
- Trace::Source_registry &trace_sources)
+ Trace::Source_registry &trace_sources,
+ Ram_allocator &core_ram,
+ Region_map &core_rm,
+ Range_allocator &io_port_ranges)
{
- static Vm_root vm_root(ep, heap, core_env().ram_allocator(),
- core_env().local_rm(), trace_sources);
+ static Vm_root vm_root(ep, heap, core_ram, core_rm, trace_sources);
static Core_service vm(services, vm_root);
- static Io_port_root io_root(*core_env().pd_session(),
- platform().io_port_alloc(), heap);
+ static Io_port_root io_root(io_port_ranges, heap);
static Core_service io_port(services, io_root);
}
diff --git a/repos/base-foc/src/core/thread_start.cc b/repos/base-foc/src/core/thread_start.cc
index 98f731f670..48a8d7afa5 100644
--- a/repos/base-foc/src/core/thread_start.cc
+++ b/repos/base-foc/src/core/thread_start.cc
@@ -22,7 +22,6 @@
/* core includes */
#include
-#include
/* Fiasco.OC includes */
#include
diff --git a/repos/base-foc/src/include/base/internal/capability_data.h b/repos/base-foc/src/include/base/internal/capability_data.h
index 2f083cf190..8c8481192a 100644
--- a/repos/base-foc/src/include/base/internal/capability_data.h
+++ b/repos/base-foc/src/include/base/internal/capability_data.h
@@ -30,12 +30,13 @@ class Genode::Native_capability::Data : public Avl_node
{
public:
- using id_t = uint16_t;
+ using id_t = unsigned;
+
+ constexpr static id_t INVALID_ID = ~0u;
private:
- constexpr static uint16_t INVALID_ID = ~0;
- constexpr static uint16_t UNUSED = 0;
+ constexpr static id_t UNUSED = 0;
uint8_t _ref_cnt; /* reference counter */
id_t _id; /* global capability id */
@@ -46,8 +47,8 @@ class Genode::Native_capability::Data : public Avl_node
bool valid() const { return _id != INVALID_ID; }
bool used() const { return _id != UNUSED; }
- uint16_t id() const { return _id; }
- void id(uint16_t id) { _id = id; }
+ id_t id() const { return _id; }
+ void id(id_t id) { _id = id; }
uint8_t inc();
uint8_t dec();
addr_t kcap() const;
diff --git a/repos/base-foc/src/lib/base/cap_map.cc b/repos/base-foc/src/lib/base/cap_map.cc
index f97b45766a..f076a9d5c5 100644
--- a/repos/base-foc/src/lib/base/cap_map.cc
+++ b/repos/base-foc/src/lib/base/cap_map.cc
@@ -3,11 +3,11 @@
* \author Stefan Kalkowski
* \date 2010-12-06
*
- * This is a Fiasco.OC-specific addition to the process enviroment.
+ * This is a Fiasco.OC-specific addition to the process environment.
*/
/*
- * Copyright (C) 2010-2017 Genode Labs GmbH
+ * Copyright (C) 2010-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
@@ -59,7 +59,7 @@ static volatile int _cap_index_spinlock = SPINLOCK_UNLOCKED;
bool Cap_index::higher(Cap_index *n) { return n->_id > _id; }
-Cap_index* Cap_index::find_by_id(uint16_t id)
+Cap_index* Cap_index::find_by_id(id_t id)
{
if (_id == id) return this;
@@ -116,8 +116,8 @@ Cap_index* Capability_map::insert(Cap_index::id_t id)
{
Spin_lock::Guard guard(_lock);
- ASSERT(!_tree.first() || !_tree.first()->find_by_id(id),
- "Double insertion in cap_map()!");
+ if (_tree.first() && _tree.first()->find_by_id(id))
+ return { };
Cap_index * const i = cap_idx_alloc().alloc_range(1);
if (i) {
@@ -184,9 +184,16 @@ Cap_index* Capability_map::insert_map(Cap_index::id_t id, addr_t kcap)
_tree.insert(i);
/* map the given cap to our registry entry */
- l4_task_map(L4_BASE_TASK_CAP, L4_BASE_TASK_CAP,
- l4_obj_fpage(kcap, 0, L4_FPAGE_RWX),
- i->kcap() | L4_ITEM_MAP | L4_MAP_ITEM_GRANT);
+ auto const msg = l4_task_map(L4_BASE_TASK_CAP, L4_BASE_TASK_CAP,
+ l4_obj_fpage(kcap, 0, L4_FPAGE_RWX),
+ i->kcap() | L4_ITEM_MAP | L4_MAP_ITEM_GRANT);
+
+ if (l4_error(msg)) {
+ _tree.remove(i);
+ cap_idx_alloc().free(i, 1);
+ return 0;
+ }
+
return i;
}
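The rewritten `Capability_map::insert_map` above replaces a fire-and-forget `l4_task_map` with a checked call that removes the tree entry and frees the index when the map fails. A standalone sketch of that insert-then-roll-back pattern — `Registry`, `Id`, and the `map_succeeds` flag are illustrative stand-ins, not Genode or Fiasco.OC API:

```cpp
#include <cassert>
#include <map>
#include <optional>

/*
 * Sketch of the insert-then-roll-back pattern used by the patched
 * Capability_map::insert_map(): the entry is inserted first, the map
 * operation is attempted afterwards, and a failed map undoes the
 * insertion instead of leaving an unmapped index in the tree.
 */
using Id = unsigned;

struct Registry
{
	std::map<Id, int> tree;   /* id -> kcap, simplified */

	std::optional<Id> insert_map(Id id, int kcap, bool map_succeeds)
	{
		auto [it, inserted] = tree.emplace(id, kcap);
		if (!inserted)
			return std::nullopt;   /* double insertion */

		if (!map_succeeds) {       /* l4_task_map reported an error */
			tree.erase(it);        /* remove entry, free the index */
			return std::nullopt;
		}
		return id;
	}
};
```

The same ordering shows up twice in the patch: `insert()` now returns an empty result on double insertion instead of asserting, and `insert_map()` undoes its bookkeeping when the kernel map call errors out.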
diff --git a/repos/base-foc/src/lib/base/ipc.cc b/repos/base-foc/src/lib/base/ipc.cc
index 1b6985d9c6..f096586385 100644
--- a/repos/base-foc/src/lib/base/ipc.cc
+++ b/repos/base-foc/src/lib/base/ipc.cc
@@ -55,9 +55,6 @@ static inline bool ipc_error(l4_msgtag_t tag, bool print)
}
-static constexpr Cap_index::id_t INVALID_BADGE = 0xffff;
-
-
/**
* Representation of a capability during UTCB marshalling/unmarshalling
*/
@@ -116,7 +113,7 @@ static int extract_msg_from_utcb(l4_msgtag_t tag,
Cap_index::id_t const badge = (Cap_index::id_t)(*msg_words++);
- if (badge == INVALID_BADGE)
+ if (badge == Cap_index::INVALID_ID)
continue;
/* received a delegated capability */
@@ -227,7 +224,7 @@ static l4_msgtag_t copy_msgbuf_to_utcb(Msgbuf_base &snd_msg,
for (unsigned i = 0; i < num_caps; i++) {
/* store badge as normal message word */
- *msg_words++ = caps[i].valid ? caps[i].badge : INVALID_BADGE;
+ *msg_words++ = caps[i].valid ? caps[i].badge : Cap_index::INVALID_ID;
/* setup flexpage for valid capability to delegate */
if (caps[i].valid) {
diff --git a/repos/base-foc/src/lib/base/x86/vm.cc b/repos/base-foc/src/lib/base/x86/vm.cc
index 58f3d94c86..9d9685a94d 100644
--- a/repos/base-foc/src/lib/base/x86/vm.cc
+++ b/repos/base-foc/src/lib/base/x86/vm.cc
@@ -42,7 +42,6 @@ namespace Foc {
using namespace Genode;
using Exit_config = Vm_connection::Exit_config;
-using Call_with_state = Vm_connection::Call_with_state;
enum Virt { VMX, SVM, UNKNOWN };
@@ -72,8 +71,7 @@ struct Foc_native_vcpu_rpc : Rpc_client, Noncopyable
Capability _create_vcpu(Vm_connection &vm,
Thread_capability &cap)
{
- return vm.with_upgrade([&] {
- return vm.call(cap); });
+ return vm.create_vcpu(cap);
}
public:
@@ -400,6 +398,7 @@ struct Foc_vcpu : Thread, Noncopyable
if (state.fpu.charged()) {
state.fpu.charge([&] (Vcpu_state::Fpu::State &fpu) {
asm volatile ("fxrstor %0" : : "m" (fpu) : "memory");
+ return 512;
});
} else
asm volatile ("fxrstor %0" : : "m" (_fpu_vcpu) : "memory");
@@ -412,6 +411,7 @@ struct Foc_vcpu : Thread, Noncopyable
state.fpu.charge([&] (Vcpu_state::Fpu::State &fpu) {
asm volatile ("fxsave %0" : "=m" (fpu) :: "memory");
asm volatile ("fxsave %0" : "=m" (_fpu_vcpu) :: "memory");
+ return 512;
});
asm volatile ("fxrstor %0" : : "m" (_fpu_ep) : "memory");
@@ -1340,7 +1340,7 @@ struct Foc_vcpu : Thread, Noncopyable
_wake_up.up();
}
- void with_state(Call_with_state &cw)
+ void with_state(auto const &fn)
{
if (!_dispatching) {
if (Thread::myself() != _ep_handler) {
@@ -1373,7 +1373,7 @@ struct Foc_vcpu : Thread, Noncopyable
_state_ready.down();
}
- if (cw.call_with_state(_vcpu_state)
+ if (fn(_vcpu_state)
|| _extra_dispatch_up)
resume();
@@ -1415,7 +1415,10 @@ static enum Virt virt_type(Env &env)
** vCPU API **
**************/
-void Vm_connection::Vcpu::_with_state(Call_with_state &cw) { static_cast(_native_vcpu).vcpu.with_state(cw); }
+void Vm_connection::Vcpu::_with_state(With_state::Ft const &fn)
+{
+ static_cast(_native_vcpu).vcpu.with_state(fn);
+}
Vm_connection::Vcpu::Vcpu(Vm_connection &vm, Allocator &alloc,
diff --git a/repos/base-hw/include/kernel/interface.h b/repos/base-hw/include/kernel/interface.h
index 0925f0c6dc..0bef5bffa8 100644
--- a/repos/base-hw/include/kernel/interface.h
+++ b/repos/base-hw/include/kernel/interface.h
@@ -382,13 +382,10 @@ namespace Kernel {
* Halt processing of a signal context synchronously
*
* \param context capability ID of the targeted signal context
- *
- * \retval 0 suceeded
- * \retval -1 failed
*/
- inline int kill_signal_context(capid_t const context)
+ inline void kill_signal_context(capid_t const context)
{
- return (int)call(call_id_kill_signal_context(), context);
+ call(call_id_kill_signal_context(), context);
}
/**
diff --git a/repos/base-hw/src/core/spec/x86_64/port_io.h b/repos/base-hw/include/spec/x86_64/port_io.h
similarity index 72%
rename from repos/base-hw/src/core/spec/x86_64/port_io.h
rename to repos/base-hw/include/spec/x86_64/port_io.h
index ebfcf2095c..5075448a14 100644
--- a/repos/base-hw/src/core/spec/x86_64/port_io.h
+++ b/repos/base-hw/include/spec/x86_64/port_io.h
@@ -11,13 +11,15 @@
* under the terms of the GNU Affero General Public License version 3.
*/
-#ifndef _CORE__SPEC__X86_64__PORT_IO_H_
-#define _CORE__SPEC__X86_64__PORT_IO_H_
+#ifndef _INCLUDE__SPEC__X86_64__PORT_IO_H_
+#define _INCLUDE__SPEC__X86_64__PORT_IO_H_
-/* core includes */
-#include
+#include
-namespace Core {
+namespace Hw {
+
+ using Genode::uint8_t;
+ using Genode::uint16_t;
/**
* Read byte from I/O port
@@ -38,4 +40,4 @@ namespace Core {
}
}
-#endif /* _CORE__SPEC__X86_64__PORT_IO_H_ */
+#endif /* _INCLUDE__SPEC__X86_64__PORT_IO_H_ */
diff --git a/repos/base-hw/lib/mk/core-hw.inc b/repos/base-hw/lib/mk/core-hw.inc
index 5ff6634311..832c4de40d 100644
--- a/repos/base-hw/lib/mk/core-hw.inc
+++ b/repos/base-hw/lib/mk/core-hw.inc
@@ -46,7 +46,6 @@ SRC_CC += ram_dataspace_factory.cc
SRC_CC += signal_transmitter_noinit.cc
SRC_CC += thread_start.cc
SRC_CC += env.cc
-SRC_CC += region_map_support.cc
SRC_CC += pager.cc
SRC_CC += _main.cc
SRC_CC += kernel/cpu.cc
@@ -55,13 +54,16 @@ SRC_CC += kernel/ipc_node.cc
SRC_CC += kernel/irq.cc
SRC_CC += kernel/main.cc
SRC_CC += kernel/object.cc
-SRC_CC += kernel/signal_receiver.cc
+SRC_CC += kernel/signal.cc
SRC_CC += kernel/thread.cc
SRC_CC += kernel/timer.cc
SRC_CC += capability.cc
SRC_CC += stack_area_addr.cc
SRC_CC += heartbeat.cc
+BOARD ?= unknown
+CC_OPT_platform += -DBOARD_NAME="\"$(BOARD)\""
+
# provide Genode version information
include $(BASE_DIR)/src/core/version.inc
diff --git a/repos/base-hw/lib/mk/spec/x86_64/core-hw-pc.mk b/repos/base-hw/lib/mk/spec/x86_64/core-hw-pc.mk
index 872990a6c4..ab6f733f84 100644
--- a/repos/base-hw/lib/mk/spec/x86_64/core-hw-pc.mk
+++ b/repos/base-hw/lib/mk/spec/x86_64/core-hw-pc.mk
@@ -22,12 +22,9 @@ SRC_CC += kernel/vm_thread_on.cc
SRC_CC += spec/x86_64/virtualization/kernel/vm.cc
SRC_CC += spec/x86_64/virtualization/kernel/svm.cc
SRC_CC += spec/x86_64/virtualization/kernel/vmx.cc
-SRC_CC += spec/x86_64/virtualization/vm_session_component.cc
-SRC_CC += vm_session_common.cc
-SRC_CC += vm_session_component.cc
SRC_CC += kernel/lock.cc
SRC_CC += spec/x86_64/pic.cc
-SRC_CC += spec/x86_64/pit.cc
+SRC_CC += spec/x86_64/timer.cc
SRC_CC += spec/x86_64/kernel/thread_exception.cc
SRC_CC += spec/x86_64/platform_support.cc
SRC_CC += spec/x86_64/virtualization/platform_services.cc
diff --git a/repos/base-hw/recipes/src/base-hw-pbxa9/hash b/repos/base-hw/recipes/src/base-hw-pbxa9/hash
index b5ccec7f13..42afb46388 100644
--- a/repos/base-hw/recipes/src/base-hw-pbxa9/hash
+++ b/repos/base-hw/recipes/src/base-hw-pbxa9/hash
@@ -1 +1 @@
-2024-08-28 de31628804f8541b6c0cf5a43ed621432befd5cb
+2024-12-10 ca4eabba0cf0313545712015ae6e9ebb4d968b2a
diff --git a/repos/base-hw/recipes/src/base-hw-pc/hash b/repos/base-hw/recipes/src/base-hw-pc/hash
index 8f549a54cd..0e8a145703 100644
--- a/repos/base-hw/recipes/src/base-hw-pc/hash
+++ b/repos/base-hw/recipes/src/base-hw-pc/hash
@@ -1 +1 @@
-2024-11-08-j 84d5a44cde007081915979748933030b05113be5
+2024-12-10 dad50ef2ab70aa5a7bd316ad116bfb1d59c5df5c
diff --git a/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v7a/hash b/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v7a/hash
index bcd440379c..5c22b9792a 100644
--- a/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v7a/hash
+++ b/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v7a/hash
@@ -1 +1 @@
-2024-08-28 73ea0cda27023fee8a56c5c104f85875e0ce2597
+2024-12-10 58d8cb90d04a52f53a9797d964568dc0d1e7c45d
diff --git a/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v8a/hash b/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v8a/hash
index 36f1d2dd44..5968b234b6 100644
--- a/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v8a/hash
+++ b/repos/base-hw/recipes/src/base-hw-virt_qemu_arm_v8a/hash
@@ -1 +1 @@
-2024-08-28 268365a21014538c4524a43c86f1e4b1b9709a96
+2024-12-10 1a5d21d207bb12797d285e1c3173cdaec7559afe
diff --git a/repos/base-hw/recipes/src/base-hw_content.inc b/repos/base-hw/recipes/src/base-hw_content.inc
index 17240b4489..74b6f28336 100644
--- a/repos/base-hw/recipes/src/base-hw_content.inc
+++ b/repos/base-hw/recipes/src/base-hw_content.inc
@@ -200,6 +200,7 @@ generalize_target_names: $(CONTENT)
# supplement BOARD definition that normally comes from the build dir
sed -i "s/\?= unknown/:= $(BOARD)/" src/core/hw/target.mk
sed -i "s/\?= unknown/:= $(BOARD)/" src/bootstrap/hw/target.mk
+ sed -i "s/\?= unknown/:= $(BOARD)/" lib/mk/core-hw.inc
# discharge targets when building for mismatching architecture
sed -i "1aREQUIRES := $(ARCH)" src/core/hw/target.mk
sed -i "1aREQUIRES := $(ARCH)" src/bootstrap/hw/target.mk
diff --git a/repos/base-hw/src/bootstrap/init.cc b/repos/base-hw/src/bootstrap/init.cc
index 9fcda2fec3..e29fe4b91b 100644
--- a/repos/base-hw/src/bootstrap/init.cc
+++ b/repos/base-hw/src/bootstrap/init.cc
@@ -16,7 +16,6 @@
/* base includes */
#include
-#include
using namespace Genode;
@@ -26,13 +25,23 @@ size_t bootstrap_stack_size = STACK_SIZE;
uint8_t bootstrap_stack[Board::NR_OF_CPUS][STACK_SIZE]
__attribute__((aligned(get_page_size())));
-Bootstrap::Platform & Bootstrap::platform() {
- return *unmanaged_singleton(); }
+
+Bootstrap::Platform & Bootstrap::platform()
+{
+ /*
+ * Don't use static local variable because cmpxchg cannot be executed
+ * w/o MMU on ARMv6.
+ */
+ static long _obj[(sizeof(Bootstrap::Platform)+sizeof(long))/sizeof(long)];
+ static Bootstrap::Platform *ptr;
+ if (!ptr)
+ ptr = construct_at(_obj);
+
+ return *ptr;
+}
extern "C" void init() __attribute__ ((noreturn));
-
-
extern "C" void init()
{
Bootstrap::Platform & p = Bootstrap::platform();
diff --git a/repos/base-hw/src/bootstrap/log.cc b/repos/base-hw/src/bootstrap/log.cc
index fedee7e2cd..4877c33e1b 100644
--- a/repos/base-hw/src/bootstrap/log.cc
+++ b/repos/base-hw/src/bootstrap/log.cc
@@ -20,7 +20,6 @@
#include
#include
#include
-#include
#include
@@ -55,7 +54,11 @@ struct Buffer
};
-Genode::Log &Genode::Log::log() { return unmanaged_singleton()->log; }
+Genode::Log &Genode::Log::log()
+{
+ static Buffer buffer { };
+ return buffer.log;
+}
void Genode::raw_write_string(char const *str) { log(str); }
diff --git a/repos/base-hw/src/bootstrap/platform.h b/repos/base-hw/src/bootstrap/platform.h
index f9aa11657b..8fb1b5743f 100644
--- a/repos/base-hw/src/bootstrap/platform.h
+++ b/repos/base-hw/src/bootstrap/platform.h
@@ -27,6 +27,7 @@ namespace Bootstrap {
using Genode::addr_t;
using Genode::size_t;
+ using Genode::uint32_t;
using Boot_info = Hw::Boot_info<::Board::Boot_info>;
using Hw::Mmio_space;
using Hw::Mapping;
diff --git a/repos/base-hw/src/bootstrap/spec/x86_64/multiboot2.h b/repos/base-hw/src/bootstrap/spec/x86_64/multiboot2.h
index 3fb5fdf4ad..0f1fc8bd60 100644
--- a/repos/base-hw/src/bootstrap/spec/x86_64/multiboot2.h
+++ b/repos/base-hw/src/bootstrap/spec/x86_64/multiboot2.h
@@ -73,7 +73,8 @@ class Genode::Multiboot2_info : Mmio<0x8>
Multiboot2_info(addr_t mbi) : Mmio({(char *)mbi, Mmio::SIZE}) { }
void for_each_tag(auto const &mem_fn,
- auto const &acpi_fn,
+ auto const &acpi_rsdp_v1_fn,
+ auto const &acpi_rsdp_v2_fn,
auto const &fb_fn,
auto const &systab64_fn)
{
@@ -103,6 +104,7 @@ class Genode::Multiboot2_info : Mmio<0x8>
if (tag.read() == Tag::Type::ACPI_RSDP_V1 ||
tag.read() == Tag::Type::ACPI_RSDP_V2) {
+
size_t const sizeof_tag = 1UL << Tag::LOG2_SIZE;
addr_t const rsdp_addr = tag_addr + sizeof_tag;
@@ -113,10 +115,12 @@ class Genode::Multiboot2_info : Mmio<0x8>
Hw::Acpi_rsdp rsdp_v1;
memset (&rsdp_v1, 0, sizeof(rsdp_v1));
memcpy (&rsdp_v1, rsdp, 20);
- acpi_fn(rsdp_v1);
+ acpi_rsdp_v1_fn(rsdp_v1);
+ } else
+ if (sizeof(*rsdp) <= tag.read() - sizeof_tag) {
+ /* ACPI RSDP v2 */
+ acpi_rsdp_v2_fn(*rsdp);
}
- if (sizeof(*rsdp) <= tag.read() - sizeof_tag)
- acpi_fn(*rsdp);
}
if (tag.read() == Tag::Type::FRAMEBUFFER) {
diff --git a/repos/base-hw/src/bootstrap/spec/x86_64/platform.cc b/repos/base-hw/src/bootstrap/spec/x86_64/platform.cc
index ed38b1d5e5..8ec2e444ac 100644
--- a/repos/base-hw/src/bootstrap/spec/x86_64/platform.cc
+++ b/repos/base-hw/src/bootstrap/spec/x86_64/platform.cc
@@ -18,10 +18,12 @@
#include
#include
#include
+#include
#include
#include
#include
+#include
using namespace Genode;
@@ -61,11 +63,113 @@ static Hw::Acpi_rsdp search_rsdp(addr_t area, addr_t area_size)
}
}
- Hw::Acpi_rsdp invalid;
+ Hw::Acpi_rsdp invalid { };
return invalid;
}
+static uint32_t calibrate_tsc_frequency(addr_t fadt_addr)
+{
+ uint32_t const default_freq = 2'400'000;
+
+ if (!fadt_addr) {
+ warning("FADT not found, returning fixed TSC frequency of ", default_freq, "kHz");
+ return default_freq;
+ }
+
+ uint32_t const sleep_ms = 10;
+
+ Hw::Acpi_fadt fadt(reinterpret_cast(fadt_addr));
+
+ uint32_t const freq = fadt.calibrate_freq_khz(sleep_ms, []() { return Hw::Tsc::rdtsc(); });
+
+ if (!freq) {
+ warning("Unable to calibrate TSC, returning fixed TSC frequency of ", default_freq, "kHz");
+ return default_freq;
+ }
+
+ return freq;
+}
+
+
+static Hw::Local_apic::Calibration calibrate_lapic_frequency(addr_t fadt_addr)
+{
+ uint32_t const default_freq = TIMER_MIN_TICKS_PER_MS;
+
+ if (!fadt_addr) {
+ warning("FADT not found, setting minimum Local APIC frequency of ", default_freq, "kHz");
+ return { default_freq, 1 };
+ }
+
+ uint32_t const sleep_ms = 10;
+
+ Hw::Acpi_fadt fadt(reinterpret_cast(fadt_addr));
+
+ Hw::Local_apic lapic(Hw::Cpu_memory_map::lapic_phys_base());
+
+ auto const result =
+ lapic.calibrate_divider([&] {
+ return fadt.calibrate_freq_khz(sleep_ms, [&] {
+ return lapic.read(); }, true); });
+
+ if (!result.freq_khz) {
+ warning("Unable to calibrate Local APIC, setting minimum frequency of ", default_freq, "kHz");
+ return { default_freq, 1 };
+ }
+
+ return result;
+}
+
+
+static void disable_pit()
+{
+ using Hw::outb;
+
+ enum {
+ /* PIT constants */
+ PIT_CH0_DATA = 0x40,
+ PIT_MODE = 0x43,
+ };
+
+ /*
+ * Disable the PIT timer channel. This is necessary since the BIOS sets up
+ * channel 0 to fire periodically.
+ */
+ outb(PIT_MODE, 0x30);
+ outb(PIT_CH0_DATA, 0);
+ outb(PIT_CH0_DATA, 0);
+}
+
+
+/*
+ * Enable dispatch serializing lfence instruction on AMD processors
+ *
+ * See "Software techniques for managing speculation on AMD processors",
+ * revision 5.09.23, mitigation G-2
+ */
+static void amd_enable_serializing_lfence()
+{
+ using Cpu = Hw::X86_64_cpu;
+
+ if (Hw::Vendor::get_vendor_id() != Hw::Vendor::Vendor_id::AMD)
+ return;
+
+ unsigned const family = Hw::Vendor::get_family();
+
+ /*
+ * In family 0Fh and 11h, lfence is always dispatch serializing and
+ * "AMD plans support for this MSR and access to this bit for all future
+ * processors." from family 14h on.
+ */
+ if ((family == 0x10) || (family == 0x12) || (family >= 0x14)) {
+ Cpu::Amd_lfence::access_t amd_lfence = Cpu::Amd_lfence::read();
+ Cpu::Amd_lfence::Enable_dispatch_serializing::set(amd_lfence);
+ Cpu::Amd_lfence::write(amd_lfence);
+ }
+}
+
+
Bootstrap::Platform::Board::Board()
:
core_mmio(Memory_region { 0, 0x1000 },
@@ -143,10 +247,14 @@ Bootstrap::Platform::Board::Board()
lambda(base, size);
},
- [&] (Hw::Acpi_rsdp const &rsdp) {
- /* prefer higher acpi revisions */
- if (!acpi_rsdp.valid() || acpi_rsdp.revision < rsdp.revision)
- acpi_rsdp = rsdp;
+ [&] (Hw::Acpi_rsdp const &rsdp_v1) {
+ /* use the ACPI RSDP v1 only if no valid RSDP is available yet */
+ if (!acpi_rsdp.valid())
+ acpi_rsdp = rsdp_v1;
+ },
+ [&] (Hw::Acpi_rsdp const &rsdp_v2) {
+ /* always prefer v2, potentially overriding a previously stored v1 RSDP */
+ acpi_rsdp = rsdp_v2;
},
[&] (Hw::Framebuffer const &fb) {
info.framebuffer = fb;
@@ -246,6 +354,21 @@ Bootstrap::Platform::Board::Board()
cpus = !cpus ? 1 : max_cpus;
}
+ /*
+ * Enable serializing lfence on supported AMD processors
+ *
+ * For APs this will be set up later, but we need it already to obtain
+ * the most accurate results when calibrating the TSC frequency.
+ */
+ amd_enable_serializing_lfence();
+
+ auto r = calibrate_lapic_frequency(info.acpi_fadt);
+ info.lapic_freq_khz = r.freq_khz;
+ info.lapic_div = r.div;
+ info.tsc_freq_khz = calibrate_tsc_frequency(info.acpi_fadt);
+
+ disable_pit();
+
/* copy 16 bit boot code for AP CPUs and for ACPI resume */
addr_t ap_code_size = (addr_t)&_start - (addr_t)&_ap;
memcpy((void *)AP_BOOT_CODE_PAGE, &_ap, ap_code_size);
@@ -315,9 +438,12 @@ unsigned Bootstrap::Platform::enable_mmu()
if (board.cpus <= 1)
return (unsigned)cpu_id;
- if (!Cpu::IA32_apic_base::Bsp::get(lapic_msr))
+ if (!Cpu::IA32_apic_base::Bsp::get(lapic_msr)) {
/* AP - done */
+ /* enable serializing lfence on supported AMD processors. */
+ amd_enable_serializing_lfence();
return (unsigned)cpu_id;
+ }
/* BSP - we're primary CPU - wake now all other CPUs */
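The new `calibrate_tsc_frequency()` above counts TSC ticks across a fixed delay measured by the ACPI PM timer and divides by the delay length in milliseconds, falling back to a default when the result is unusable. A self-contained sketch of that idea — `calibrate_khz`, `read_counter`, and `wait_ms` are assumed names, not the `Hw::Acpi_fadt` API:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>

/*
 * Sketch of the calibration idea behind calibrate_tsc_frequency():
 * count counter ticks across a reference delay of known length and
 * divide by the delay in milliseconds (ticks per ms equals kHz).
 * A zero result signals the caller to fall back to a default.
 */
inline std::uint32_t calibrate_khz(std::uint32_t sleep_ms,
                                   std::function<std::uint64_t()> const &read_counter,
                                   std::function<void(std::uint32_t)> const &wait_ms)
{
	std::uint64_t const start = read_counter();
	wait_ms(sleep_ms);                 /* busy-wait on the reference timer */
	std::uint64_t const end = read_counter();

	if (sleep_ms == 0 || end <= start)
		return 0;                      /* calibration failed */

	return std::uint32_t((end - start) / sleep_ms);
}
```

With a counter that advances 24,000,000 ticks during a 10 ms reference delay, this yields 2,400,000 kHz — the same 2.4 GHz figure the patch uses as its fixed fallback.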
diff --git a/repos/base-hw/src/core/board/pc/board.h b/repos/base-hw/src/core/board/pc/board.h
index f07a971d08..1a9327fbfd 100644
--- a/repos/base-hw/src/core/board/pc/board.h
+++ b/repos/base-hw/src/core/board/pc/board.h
@@ -21,7 +21,7 @@
/* base-hw core includes */
#include
-#include
+#include
#include
namespace Board {
diff --git a/repos/base-hw/src/core/core_region_map.cc b/repos/base-hw/src/core/core_region_map.cc
index 8965688c8a..f56d302efb 100644
--- a/repos/base-hw/src/core/core_region_map.cc
+++ b/repos/base-hw/src/core/core_region_map.cc
@@ -82,4 +82,11 @@ Core_region_map::attach(Dataspace_capability ds_cap, Attr const &attr)
}
-void Core_region_map::detach(addr_t) { }
+void Core_region_map::detach(addr_t core_local_addr)
+{
+ size_t size = platform_specific().region_alloc_size_at((void *)core_local_addr);
+
+ unmap_local(core_local_addr, size >> get_page_size_log2());
+
+ platform().region_alloc().free((void *)core_local_addr);
+}
diff --git a/repos/base-hw/src/core/guest_memory.h b/repos/base-hw/src/core/guest_memory.h
new file mode 100644
index 0000000000..5cfe658043
--- /dev/null
+++ b/repos/base-hw/src/core/guest_memory.h
@@ -0,0 +1,275 @@
+/*
+ * \brief Guest memory abstraction
+ * \author Stefan Kalkowski
+ * \author Benjamin Lamowski
+ * \date 2024-11-25
+ */
+
+/*
+ * Copyright (C) 2015-2024 Genode Labs GmbH
+ *
+ * This file is part of the Genode OS framework, which is distributed
+ * under the terms of the GNU Affero General Public License version 3.
+ */
+
+#ifndef _CORE__GUEST_MEMORY_H_
+#define _CORE__GUEST_MEMORY_H_
+
+/* base includes */
+#include
+#include
+#include
+#include
+
+/* core includes */
+#include
+#include
+
+namespace Core { class Guest_memory; }
+
+using namespace Core;
+
+
+class Core::Guest_memory
+{
+ private:
+
+ using Avl_region = Allocator_avl_tpl;
+
+ using Attach_attr = Genode::Vm_session::Attach_attr;
+
+ Sliced_heap _sliced_heap;
+ Avl_region _map { &_sliced_heap };
+
+ uint8_t _remaining_print_count { 10 };
+
+ void _with_region(addr_t const addr, auto const &fn)
+ {
+ Rm_region *region = _map.metadata((void *)addr);
+ if (region)
+ fn(*region);
+ else
+ if (_remaining_print_count) {
+ error(__PRETTY_FUNCTION__, " unknown region");
+ _remaining_print_count--;
+ }
+ }
+
+ public:
+
+ enum class Attach_result {
+ OK,
+ INVALID_DS,
+ OUT_OF_RAM,
+ OUT_OF_CAPS,
+ REGION_CONFLICT,
+ };
+
+
+ Attach_result attach(Region_map_detach &rm_detach,
+ Dataspace_component &dsc,
+ addr_t const guest_phys,
+ Attach_attr attr,
+ auto const &map_fn)
+ {
+ /*
+ * unsupported - deny, since otherwise arbitrary physical
+ * memory could be mapped into a VM
+ */
+ if (dsc.managed())
+ return Attach_result::INVALID_DS;
+
+ if (guest_phys & 0xffful || attr.offset & 0xffful ||
+ attr.size & 0xffful)
+ return Attach_result::INVALID_DS;
+
+ if (!attr.size) {
+ attr.size = dsc.size();
+
+ if (attr.offset < attr.size)
+ attr.size -= attr.offset;
+ }
+
+ if (attr.size > dsc.size())
+ attr.size = dsc.size();
+
+ if (attr.offset >= dsc.size() ||
+ attr.offset > dsc.size() - attr.size)
+ return Attach_result::INVALID_DS;
+
+ using Alloc_error = Range_allocator::Alloc_error;
+
+ Attach_result const retval = _map.alloc_addr(attr.size, guest_phys).convert(
+
+ [&] (void *) {
+
+ Rm_region::Attr const region_attr
+ {
+ .base = guest_phys,
+ .size = attr.size,
+ .write = dsc.writeable() && attr.writeable,
+ .exec = attr.executable,
+ .off = attr.offset,
+ .dma = false,
+ };
+
+ /* store attachment info in meta data */
+ try {
+ _map.construct_metadata((void *)guest_phys,
+ dsc, rm_detach, region_attr);
+
+ } catch (Allocator_avl_tpl::Assign_metadata_failed) {
+ if (_remaining_print_count) {
+ error("failed to store attachment info");
+ _remaining_print_count--;
+ }
+ return Attach_result::INVALID_DS;
+ }
+
+ Rm_region ®ion = *_map.metadata((void *)guest_phys);
+
+ /* inform dataspace about attachment */
+ dsc.attached_to(region);
+
+ return Attach_result::OK;
+ },
+
+ [&] (Alloc_error error) {
+
+ switch (error) {
+
+ case Alloc_error::OUT_OF_RAM:
+ return Attach_result::OUT_OF_RAM;
+ case Alloc_error::OUT_OF_CAPS:
+ return Attach_result::OUT_OF_CAPS;
+ case Alloc_error::DENIED:
+ {
+ /*
+ * Handle attach after partial detach
+ */
+ Rm_region *region_ptr = _map.metadata((void *)guest_phys);
+ if (!region_ptr)
+ return Attach_result::REGION_CONFLICT;
+
+ Rm_region ®ion = *region_ptr;
+
+ bool conflict = false;
+ region.with_dataspace([&] (Dataspace_component &dataspace) {
+ (void)dataspace;
+ if (!(dsc.cap() == dataspace.cap()))
+ conflict = true;
+ });
+ if (conflict)
+ return Attach_result::REGION_CONFLICT;
+
+ if (guest_phys < region.base() ||
+ guest_phys > region.base() + region.size() - 1)
+ return Attach_result::REGION_CONFLICT;
+ }
+
+ };
+
+ return Attach_result::OK;
+ }
+ );
+
+ if (retval == Attach_result::OK) {
+ addr_t phys_addr = dsc.phys_addr() + attr.offset;
+ size_t size = attr.size;
+
+ map_fn(guest_phys, phys_addr, size);
+ }
+
+ return retval;
+ }
+
+
+ void detach(addr_t guest_phys,
+ size_t size,
+ auto const &unmap_fn)
+ {
+ if (!size || (guest_phys & 0xffful) || (size & 0xffful)) {
+ if (_remaining_print_count) {
+ warning("vm_session: skipping invalid memory detach addr=",
+ (void *)guest_phys, " size=", (void *)size);
+ _remaining_print_count--;
+ }
+ return;
+ }
+
+ addr_t const guest_phys_end = guest_phys + (size - 1);
+ addr_t addr = guest_phys;
+ do {
+ Rm_region *region = _map.metadata((void *)addr);
+
+ /* walk region holes page-by-page */
+ size_t iteration_size = 0x1000;
+
+ if (region) {
+ iteration_size = region->size();
+ detach_at(region->base(), unmap_fn);
+ }
+
+ if (addr >= guest_phys_end - (iteration_size - 1))
+ break;
+
+ addr += iteration_size;
+ } while (true);
+ }
+
+
+ Guest_memory(Constrained_ram_allocator &constrained_md_ram_alloc,
+ Region_map ®ion_map)
+ :
+ _sliced_heap(constrained_md_ram_alloc, region_map)
+ {
+ /* configure managed VM area */
+ _map.add_range(0UL, ~0UL);
+ }
+
+ ~Guest_memory()
+ {
+ /* detach all regions */
+ while (true) {
+ addr_t out_addr = 0;
+
+ if (!_map.any_block_addr(&out_addr))
+ break;
+
+ detach_at(out_addr, [](addr_t, size_t) { });
+ }
+ }
+
+
+ void detach_at(addr_t addr,
+ auto const &unmap_fn)
+ {
+ _with_region(addr, [&] (Rm_region ®ion) {
+
+ if (!region.reserved())
+ reserve_and_flush(addr, unmap_fn);
+
+ /* free the reserved region */
+ _map.free(reinterpret_cast(region.base()));
+ });
+ }
+
+
+ void reserve_and_flush(addr_t addr,
+ auto const &unmap_fn)
+ {
+ _with_region(addr, [&] (Rm_region ®ion) {
+
+ /* inform dataspace */
+ region.with_dataspace([&] (Dataspace_component &dataspace) {
+ dataspace.detached_from(region);
+ });
+
+ region.mark_as_reserved();
+
+ unmap_fn(region.base(), region.size());
+ });
+ }
+};
+
+#endif /* _CORE__GUEST_MEMORY_H_ */
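`Guest_memory::detach()` above walks the requested range by looking up a region at the current address: a known region is detached and skipped as one unit, while holes are advanced page-by-page. A self-contained sketch of that walk, with a `std::map` standing in for the `Allocator_avl` metadata and hypothetical names throughout (the break condition is written underflow-safe):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

using addr_t = std::uint64_t;
enum { PAGE = 0x1000 };

/* return the base addresses of all regions detached within the range */
inline std::vector<addr_t>
detach_walk(std::map<addr_t, std::size_t> const &regions,
            addr_t guest_phys, std::size_t size)
{
	std::vector<addr_t> detached;
	if (!size || (guest_phys & 0xffful) || (size & 0xffful))
		return detached;                   /* invalid request, skipped */

	addr_t const end = guest_phys + (size - 1);
	addr_t addr = guest_phys;
	for (;;) {
		auto it = regions.find(addr);      /* region starting here? */

		std::size_t step = PAGE;           /* walk holes page-by-page */
		if (it != regions.end()) {
			step = it->second;             /* skip region as a whole */
			detached.push_back(it->first);
		}
		if (end - addr < step)             /* next step leaves the range */
			break;
		addr += step;
	}
	return detached;
}
```

Detaching 0x4000 bytes starting at 0 with one region at 0x1000 of size 0x2000 touches that region exactly once and steps over the surrounding holes one page at a time.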
diff --git a/repos/base-hw/src/core/io_mem_session_support.cc b/repos/base-hw/src/core/io_mem_session_support.cc
index a25f199fd8..42092b1e6a 100644
--- a/repos/base-hw/src/core/io_mem_session_support.cc
+++ b/repos/base-hw/src/core/io_mem_session_support.cc
@@ -21,5 +21,7 @@ using namespace Core;
void Io_mem_session_component::_unmap_local(addr_t, size_t, addr_t) { }
-addr_t Io_mem_session_component::_map_local(addr_t base, size_t) { return base; }
-
+Io_mem_session_component::Map_local_result Io_mem_session_component::_map_local(addr_t const base, size_t)
+{
+ return { .core_local_addr = base, .success = true };
+}
diff --git a/repos/base-hw/src/core/irq_session_component.cc b/repos/base-hw/src/core/irq_session_component.cc
index 6c33684f91..daa4a71c96 100644
--- a/repos/base-hw/src/core/irq_session_component.cc
+++ b/repos/base-hw/src/core/irq_session_component.cc
@@ -18,7 +18,7 @@
/* core includes */
#include
#include
-#include
+#include
/* base-internal includes */
#include
diff --git a/repos/base-hw/src/core/kernel/core_interface.h b/repos/base-hw/src/core/kernel/core_interface.h
index 4e606ee1be..a57c0acf6e 100644
--- a/repos/base-hw/src/core/kernel/core_interface.h
+++ b/repos/base-hw/src/core/kernel/core_interface.h
@@ -66,6 +66,7 @@ namespace Kernel {
constexpr Call_arg call_id_set_cpu_state() { return 125; }
constexpr Call_arg call_id_exception_state() { return 126; }
constexpr Call_arg call_id_single_step() { return 127; }
+ constexpr Call_arg call_id_ack_pager_signal() { return 128; }
/**
* Invalidate TLB entries for the `pd` in region `addr`, `sz`
@@ -137,10 +138,9 @@ namespace Kernel {
* \retval 0 suceeded
* \retval !=0 failed
*/
- inline int start_thread(Thread & thread, unsigned const cpu_id,
- Pd & pd, Native_utcb & utcb)
+ inline int start_thread(Thread & thread, Pd & pd, Native_utcb & utcb)
{
- return (int)call(call_id_start_thread(), (Call_arg)&thread, cpu_id,
+ return (int)call(call_id_start_thread(), (Call_arg)&thread,
(Call_arg)&pd, (Call_arg)&utcb);
}
@@ -148,13 +148,16 @@ namespace Kernel {
/**
* Set or unset the handler of an event that can be triggered by a thread
*
- * \param thread pointer to thread kernel object
+ * \param thread reference to thread kernel object
+ * \param pager reference to pager kernel object
* \param signal_context_id capability id of the page-fault handler
*/
- inline void thread_pager(Thread & thread,
+ inline void thread_pager(Thread &thread,
+ Thread &pager,
capid_t const signal_context_id)
{
- call(call_id_thread_pager(), (Call_arg)&thread, signal_context_id);
+ call(call_id_thread_pager(), (Call_arg)&thread, (Call_arg)&pager,
+ signal_context_id);
}
@@ -203,6 +206,18 @@ namespace Kernel {
{
call(call_id_single_step(), (Call_arg)&thread, (Call_arg)&on);
}
+
+ /**
+ * Acknowledge a signal transmitted to a pager
+ *
+ * \param context signal context to acknowledge
+ * \param thread reference to faulting thread kernel object
+ * \param resolved whether fault got resolved
+ */
+ inline void ack_pager_signal(capid_t const context, Thread &thread, bool resolved)
+ {
+ call(call_id_ack_pager_signal(), context, (Call_arg)&thread, resolved);
+ }
}
#endif /* _CORE__KERNEL__CORE_INTERFACE_H_ */
diff --git a/repos/base-hw/src/core/kernel/cpu.cc b/repos/base-hw/src/core/kernel/cpu.cc
index 0e0c686984..7964230ebf 100644
--- a/repos/base-hw/src/core/kernel/cpu.cc
+++ b/repos/base-hw/src/core/kernel/cpu.cc
@@ -27,35 +27,35 @@
using namespace Kernel;
-/*************
- ** Cpu_job **
- *************/
+/*****************
+ ** Cpu_context **
+ *****************/
-void Cpu_job::_activate_own_share() { _cpu->schedule(this); }
+void Cpu_context::_activate() { _cpu().schedule(*this); }
-void Cpu_job::_deactivate_own_share()
+void Cpu_context::_deactivate()
{
- assert(_cpu->id() == Cpu::executing_id());
- _cpu->scheduler().unready(*this);
+ assert(_cpu().id() == Cpu::executing_id());
+ _cpu().scheduler().unready(*this);
}
-void Cpu_job::_yield()
+void Cpu_context::_yield()
{
- assert(_cpu->id() == Cpu::executing_id());
- _cpu->scheduler().yield();
+ assert(_cpu().id() == Cpu::executing_id());
+ _cpu().scheduler().yield();
}
-void Cpu_job::_interrupt(Irq::Pool &user_irq_pool, unsigned const /* cpu_id */)
+void Cpu_context::_interrupt(Irq::Pool &user_irq_pool)
{
/* let the IRQ controller take a pending IRQ for handling, if any */
unsigned irq_id;
- if (_cpu->pic().take_request(irq_id))
+ if (_cpu().pic().take_request(irq_id))
- /* let the CPU of this job handle the IRQ if it is a CPU-local one */
- if (!_cpu->handle_if_cpu_local_interrupt(irq_id)) {
+ /* let the CPU of this context handle the IRQ if it is a CPU-local one */
+ if (!_cpu().handle_if_cpu_local_interrupt(irq_id)) {
/* it isn't a CPU-local IRQ, so, it must be a user IRQ */
User_irq * irq = User_irq::object(user_irq_pool, irq_id);
@@ -64,38 +64,37 @@ void Cpu_job::_interrupt(Irq::Pool &user_irq_pool, unsigned const /* cpu_id */)
}
/* let the IRQ controller finish the currently taken IRQ */
- _cpu->pic().finish_request();
+ _cpu().pic().finish_request();
}
-void Cpu_job::affinity(Cpu &cpu)
+void Cpu_context::affinity(Cpu &cpu)
{
- _cpu = &cpu;
- _cpu->scheduler().insert(*this);
+ _cpu().scheduler().remove(*this);
+ _cpu_ptr = &cpu;
+ _cpu().scheduler().insert(*this);
}
-void Cpu_job::quota(unsigned const q)
+void Cpu_context::quota(unsigned const q)
{
- if (_cpu)
- _cpu->scheduler().quota(*this, q);
- else
- Context::quota(q);
+ _cpu().scheduler().quota(*this, q);
}
-Cpu_job::Cpu_job(Priority const p, unsigned const q)
+Cpu_context::Cpu_context(Cpu &cpu,
+ Priority const priority,
+ unsigned const quota)
:
- Context(p, q), _cpu(0)
-{ }
-
-
-Cpu_job::~Cpu_job()
+ Context(priority, quota), _cpu_ptr(&cpu)
{
- if (!_cpu)
- return;
+ _cpu().scheduler().insert(*this);
+}
- _cpu->scheduler().remove(*this);
+
+Cpu_context::~Cpu_context()
+{
+ _cpu().scheduler().remove(*this);
}
@@ -112,19 +111,17 @@ Cpu::Idle_thread::Idle_thread(Board::Address_space_id_allocator &addr_space_id_a
Cpu &cpu,
Pd &core_pd)
:
- Thread { addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd,
- Priority::min(), 0, "idle", Thread::IDLE }
+ Thread { addr_space_id_alloc, user_irq_pool, cpu_pool, cpu,
+ core_pd, Priority::min(), 0, "idle", Thread::IDLE }
{
regs->ip = (addr_t)&idle_thread_main;
-
- affinity(cpu);
Thread::_pd = &core_pd;
}
-void Cpu::schedule(Job * const job)
+void Cpu::schedule(Context &context)
{
- _scheduler.ready(job->context());
+ _scheduler.ready(static_cast<Scheduler::Context&>(context));
if (_id != executing_id() && _scheduler.need_to_schedule())
trigger_ip_interrupt();
}
@@ -142,33 +139,34 @@ bool Cpu::handle_if_cpu_local_interrupt(unsigned const irq_id)
}
-Cpu_job & Cpu::schedule()
+Cpu::Context & Cpu::handle_exception_and_schedule()
{
- /* update scheduler */
- Job & old_job = scheduled_job();
- old_job.exception(*this);
+ Context &context = current_context();
+ context.exception();
if (_state == SUSPEND || _state == HALT)
return _halt_job;
+ /* update schedule if necessary */
if (_scheduler.need_to_schedule()) {
_timer.process_timeouts();
_scheduler.update(_timer.time());
time_t t = _scheduler.current_time_left();
_timer.set_timeout(&_timeout, t);
time_t duration = _timer.schedule_timeout();
- old_job.update_execution_time(duration);
+ context.update_execution_time(duration);
}
- /* return new job */
- return scheduled_job();
+ /* return current context */
+ return current_context();
}
addr_t Cpu::stack_start()
{
return Abi::stack_align(Hw::Mm::cpu_local_memory().base +
- (1024*1024*_id) + (64*1024));
+ (Hw::Mm::CPU_LOCAL_MEMORY_SLOT_SIZE*_id)
+ + Hw::Mm::KERNEL_STACK_SIZE);
}
diff --git a/repos/base-hw/src/core/kernel/cpu.h b/repos/base-hw/src/core/kernel/cpu.h
index 83f2d5b7cc..e7bf8ff30c 100644
--- a/repos/base-hw/src/core/kernel/cpu.h
+++ b/repos/base-hw/src/core/kernel/cpu.h
@@ -39,9 +39,11 @@ namespace Kernel {
class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
public Genode::List::Element
{
- private:
+ public:
- using Job = Cpu_job;
+ using Context = Cpu_context;
+
+ private:
/**
* Inter-processor-interrupt object of the cpu
@@ -83,16 +85,14 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
Pd &core_pd);
};
- struct Halt_job : Job
+ struct Halt_job : Cpu_context
{
- Halt_job() : Job (0, 0) { }
+ Halt_job(Cpu &cpu)
+ : Cpu_context(cpu, 0, 0) { }
- void exception(Kernel::Cpu &) override { }
-
- void proceed(Kernel::Cpu &) override;
-
- Kernel::Cpu_job* helping_destination() override { return this; }
- } _halt_job { };
+ void exception() override { }
+ void proceed() override;
+ } _halt_job { *this };
enum State { RUN, HALT, SUSPEND };
@@ -143,14 +143,14 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
bool handle_if_cpu_local_interrupt(unsigned const irq_id);
/**
- * Schedule 'job' at this CPU
+ * Schedule 'context' at this CPU
*/
- void schedule(Job * const job);
+ void schedule(Context& context);
/**
- * Return the job that should be executed at next
+ * Return the context that should be executed next
*/
- Cpu_job& schedule();
+ Context& handle_exception_and_schedule();
Board::Pic & pic() { return _pic; }
Timer & timer() { return _timer; }
@@ -158,10 +158,10 @@ class Kernel::Cpu : public Core::Cpu, private Irq::Pool,
addr_t stack_start();
/**
- * Returns the currently active job
+ * Returns the currently scheduled context
*/
- Job & scheduled_job() {
- return *static_cast<Job*>(&_scheduler.current())->helping_destination(); }
+ Context & current_context() {
+ return static_cast<Context&>(_scheduler.current().helping_destination()); }
unsigned id() const { return _id; }
Scheduler &scheduler() { return _scheduler; }
diff --git a/repos/base-hw/src/core/kernel/cpu_context.h b/repos/base-hw/src/core/kernel/cpu_context.h
index 8c7444ac3d..ad062bc097 100644
--- a/repos/base-hw/src/core/kernel/cpu_context.h
+++ b/repos/base-hw/src/core/kernel/cpu_context.h
@@ -22,46 +22,39 @@
namespace Kernel {
class Cpu;
-
- /**
- * Context of a job (thread, VM, idle) that shall be executed by a CPU
- */
- class Cpu_job;
+ class Cpu_context;
}
-class Kernel::Cpu_job : private Scheduler::Context
+/**
+ * Context (thread, vcpu) that shall be executed by a CPU
+ */
+class Kernel::Cpu_context : private Scheduler::Context
{
private:
- friend class Cpu; /* static_cast from 'Scheduler::Context' to 'Cpu_job' */
+ friend class Cpu;
time_t _execution_time { 0 };
+ Cpu *_cpu_ptr;
/*
* Noncopyable
*/
- Cpu_job(Cpu_job const &);
- Cpu_job &operator = (Cpu_job const &);
+ Cpu_context(Cpu_context const &);
+ Cpu_context &operator = (Cpu_context const &);
protected:
- Cpu * _cpu;
+ Cpu &_cpu() const { return *_cpu_ptr; }
/**
- * Handle interrupt exception that occured during execution on CPU 'id'
+ * Handle interrupt exception
*/
- void _interrupt(Irq::Pool &user_irq_pool, unsigned const id);
+ void _interrupt(Irq::Pool &user_irq_pool);
- /**
- * Activate our own CPU-share
- */
- void _activate_own_share();
-
- /**
- * Deactivate our own CPU-share
- */
- void _deactivate_own_share();
+ void _activate();
+ void _deactivate();
/**
* Yield the currently scheduled CPU share of this context
@@ -69,55 +62,37 @@ class Kernel::Cpu_job : private Scheduler::Context
void _yield();
/**
- * Return wether we are allowed to help job 'j' with our CPU-share
+ * Return whether helping context 'j' is possible scheduling-wise
*/
- bool _helping_possible(Cpu_job const &j) const { return j._cpu == _cpu; }
+ bool _helping_possible(Cpu_context const &j) const {
+ return j._cpu_ptr == _cpu_ptr; }
+
+ void _help(Cpu_context &context) { Context::help(context); }
+
+ using Context::ready;
+ using Context::helping_finished;
public:
using Context = Scheduler::Context;
using Priority = Scheduler::Priority;
- /**
- * Handle exception that occured during execution on CPU 'id'
- */
- virtual void exception(Cpu & cpu) = 0;
+ Cpu_context(Cpu &cpu,
+ Priority const priority,
+ unsigned const quota);
+
+ virtual ~Cpu_context();
/**
- * Continue execution on CPU 'id'
- */
- virtual void proceed(Cpu & cpu) = 0;
-
- /**
- * Return which job currently uses our CPU-share
- */
- virtual Cpu_job * helping_destination() = 0;
-
- /**
- * Construct a job with scheduling priority 'p' and time quota 'q'
- */
- Cpu_job(Priority const p, unsigned const q);
-
- /**
- * Destructor
- */
- virtual ~Cpu_job();
-
- /**
- * Link job to CPU 'cpu'
+ * Link context to CPU 'cpu'
*/
void affinity(Cpu &cpu);
/**
- * Set CPU quota of the job to 'q'
+ * Set CPU quota of the context to 'q'
*/
void quota(unsigned const q);
- /**
- * Return wether our CPU-share is currently active
- */
- bool own_share_active() { return Context::ready(); }
-
/**
* Update total execution time
*/
@@ -128,14 +103,15 @@ class Kernel::Cpu_job : private Scheduler::Context
*/
time_t execution_time() const { return _execution_time; }
+ /**
+ * Handle exception that occured during execution of this context
+ */
+ virtual void exception() = 0;
- /***************
- ** Accessors **
- ***************/
-
- void cpu(Cpu &cpu) { _cpu = &cpu; }
-
- Context &context() { return *this; }
+ /**
+ * Continue execution of this context
+ */
+ virtual void proceed() = 0;
};
#endif /* _CORE__KERNEL__CPU_CONTEXT_H_ */
diff --git a/repos/base-hw/src/core/kernel/inter_processor_work.h b/repos/base-hw/src/core/kernel/inter_processor_work.h
index f2791ccac7..3a4d078a65 100644
--- a/repos/base-hw/src/core/kernel/inter_processor_work.h
+++ b/repos/base-hw/src/core/kernel/inter_processor_work.h
@@ -11,8 +11,8 @@
* under the terms of the GNU Affero General Public License version 3.
*/
-#ifndef _CORE__KERNEL__SMP_H_
-#define _CORE__KERNEL__SMP_H_
+#ifndef _CORE__KERNEL__INTER_PROCESSOR_WORK_H_
+#define _CORE__KERNEL__INTER_PROCESSOR_WORK_H_
#include
@@ -32,11 +32,11 @@ class Kernel::Inter_processor_work : Genode::Interface
{
public:
- virtual void execute(Cpu &) = 0;
+ virtual void execute(Cpu & cpu) = 0;
protected:
Genode::List_element<Inter_processor_work> _le { this };
};
-#endif /* _CORE__KERNEL__SMP_H_ */
+#endif /* _CORE__KERNEL__INTER_PROCESSOR_WORK_H_ */
diff --git a/repos/base-hw/src/core/kernel/ipc_node.cc b/repos/base-hw/src/core/kernel/ipc_node.cc
index f06b557c36..e323d21e75 100644
--- a/repos/base-hw/src/core/kernel/ipc_node.cc
+++ b/repos/base-hw/src/core/kernel/ipc_node.cc
@@ -57,19 +57,13 @@ void Ipc_node::_cancel_send()
}
-bool Ipc_node::_helping() const
-{
- return _out.state == Out::SEND_HELPING && _out.node;
-}
-
-
bool Ipc_node::ready_to_send() const
{
return _out.state == Out::READY && !_in.waiting();
}
-void Ipc_node::send(Ipc_node &node, bool help)
+void Ipc_node::send(Ipc_node &node)
{
node._in.queue.enqueue(_queue_item);
@@ -78,13 +72,7 @@ void Ipc_node::send(Ipc_node &node, bool help)
node._thread.ipc_await_request_succeeded();
}
_out.node = &node;
- _out.state = help ? Out::SEND_HELPING : Out::SEND;
-}
-
-
-Thread &Ipc_node::helping_destination()
-{
- return _helping() ? _out.node->helping_destination() : _thread;
+ _out.state = Out::SEND;
}
diff --git a/repos/base-hw/src/core/kernel/ipc_node.h b/repos/base-hw/src/core/kernel/ipc_node.h
index df9f3d7d19..6bdd899eaf 100644
--- a/repos/base-hw/src/core/kernel/ipc_node.h
+++ b/repos/base-hw/src/core/kernel/ipc_node.h
@@ -50,14 +50,14 @@ class Kernel::Ipc_node
struct Out
{
- enum State { READY, SEND, SEND_HELPING, DESTRUCT };
+ enum State { READY, SEND, DESTRUCT };
State state { READY };
Ipc_node *node { nullptr };
bool sending() const
{
- return state == SEND_HELPING || state == SEND;
+ return state == SEND;
}
};
@@ -76,11 +76,6 @@ class Kernel::Ipc_node
*/
void _cancel_send();
- /**
- * Return wether this IPC node is helping another one
- */
- bool _helping() const;
-
/**
* Noncopyable
*/
@@ -102,28 +97,8 @@ class Kernel::Ipc_node
* Send a message and wait for the according reply
*
* \param node targeted IPC node
- * \param help wether the request implies a helping relationship
*/
- void send(Ipc_node &node, bool help);
-
- /**
- * Return final destination of the helping-chain
- * this IPC node is part of, or its own thread otherwise
- */
- Thread &helping_destination();
-
- /**
- * Call 'fn' of type 'void (Ipc_node *)' for each helper
- */
- void for_each_helper(auto const &fn)
- {
- _in.queue.for_each([fn] (Queue_item &item) {
- Ipc_node &node { item.object() };
-
- if (node._helping())
- fn(node._thread);
- });
- }
+ void send(Ipc_node &node);
/**
* Return whether this IPC node is ready to wait for messages
diff --git a/repos/base-hw/src/core/kernel/irq.h b/repos/base-hw/src/core/kernel/irq.h
index 9b8be0bf65..bdfb858fc1 100644
--- a/repos/base-hw/src/core/kernel/irq.h
+++ b/repos/base-hw/src/core/kernel/irq.h
@@ -20,7 +20,7 @@
#include
/* core includes */
-#include <kernel/signal_receiver.h>
+#include <kernel/signal.h>
namespace Board {
@@ -161,9 +161,7 @@ class Kernel::User_irq : public Kernel::Irq
*/
void occurred() override
{
- if (_context.can_submit(1)) {
- _context.submit(1);
- }
+ _context.submit(1);
disable();
}
diff --git a/repos/base-hw/src/core/kernel/main.cc b/repos/base-hw/src/core/kernel/main.cc
index e14b17f1c5..a66f1405ce 100644
--- a/repos/base-hw/src/core/kernel/main.cc
+++ b/repos/base-hw/src/core/kernel/main.cc
@@ -63,16 +63,16 @@ Kernel::Main *Kernel::Main::_instance;
void Kernel::Main::_handle_kernel_entry()
{
- Cpu &cpu = _cpu_pool.cpu(Cpu::executing_id());
- Cpu_job * new_job;
+ Cpu::Context * context;
{
Lock::Guard guard(_data_lock);
- new_job = &cpu.schedule();
+ context =
+ &_cpu_pool.cpu(Cpu::executing_id()).handle_exception_and_schedule();
}
- new_job->proceed(cpu);
+ context->proceed();
}
diff --git a/repos/base-hw/src/core/kernel/scheduler.cc b/repos/base-hw/src/core/kernel/scheduler.cc
index 5dbd7f2c18..da8d8accb2 100644
--- a/repos/base-hw/src/core/kernel/scheduler.cc
+++ b/repos/base-hw/src/core/kernel/scheduler.cc
@@ -19,6 +19,38 @@
using namespace Kernel;
+void Scheduler::Context::help(Scheduler::Context &c)
+{
+ _destination = &c;
+ c._helper_list.insert(&_helper_le);
+}
+
+
+void Scheduler::Context::helping_finished()
+{
+ if (!_destination)
+ return;
+
+ _destination->_helper_list.remove(&_helper_le);
+ _destination = nullptr;
+}
+
+
+Scheduler::Context& Scheduler::Context::helping_destination()
+{
+ return (_destination) ? _destination->helping_destination() : *this;
+}
+
+
+Scheduler::Context::~Context()
+{
+ helping_finished();
+
+ for (Context::List_element *h = _helper_list.first(); h; h = h->next())
+ h->object()->helping_finished();
+}
+
+
void Scheduler::_consumed(unsigned const time)
{
if (_super_period_left > time) {
@@ -149,7 +181,10 @@ void Scheduler::update(time_t time)
void Scheduler::ready(Context &c)
{
- assert(!c.ready() && &c != &_idle);
+ assert(&c != &_idle);
+
+ if (c.ready())
+ return;
c._ready = true;
@@ -170,23 +205,33 @@ void Scheduler::ready(Context &c)
_slack_list.insert_head(&c._slack_le);
if (!keep_current && _state == UP_TO_DATE) _state = OUT_OF_DATE;
+
+ for (Context::List_element *helper = c._helper_list.first();
+ helper; helper = helper->next())
+ if (!helper->object()->ready()) ready(*helper->object());
}
void Scheduler::unready(Context &c)
{
- assert(c.ready() && &c != &_idle);
+ assert(&c != &_idle);
+
+ if (!c.ready())
+ return;
if (&c == _current && _state == UP_TO_DATE) _state = OUT_OF_DATE;
c._ready = false;
_slack_list.remove(&c._slack_le);
- if (!c._quota)
- return;
+ if (c._quota) {
+ _rpl[c._priority].remove(&c._priotized_le);
+ _upl[c._priority].insert_tail(&c._priotized_le);
+ }
- _rpl[c._priority].remove(&c._priotized_le);
- _upl[c._priority].insert_tail(&c._priotized_le);
+ for (Context::List_element *helper = c._helper_list.first();
+ helper; helper = helper->next())
+ if (helper->object()->ready()) unready(*helper->object());
}
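The helping mechanism that the scheduler.cc hunks above introduce can be modeled in isolation: a context records its helping destination, the destination keeps a list of its helpers, `helping_destination()` follows the chain to its end, and readiness propagates along the helper list as in `Scheduler::ready()`/`unready()`. The following is an illustrative sketch with simplified names (plain `std::vector` instead of the kernel's intrusive lists), not the actual Genode code:

```cpp
#include <cassert>
#include <vector>
#include <algorithm>

struct Context
{
    Context                *destination { nullptr };
    std::vector<Context *>  helpers     { };
    bool                    ready       { false };

    /* donate our CPU share to context 'c' */
    void help(Context &c)
    {
        destination = &c;
        c.helpers.push_back(this);
    }

    /* end the helping relationship, mirroring Context::helping_finished() */
    void helping_finished()
    {
        if (!destination)
            return;
        auto &h = destination->helpers;
        h.erase(std::remove(h.begin(), h.end(), this), h.end());
        destination = nullptr;
    }

    /* follow the helping chain to its final destination */
    Context &helping_destination()
    {
        return destination ? destination->helping_destination() : *this;
    }

    /* readiness propagates to all helpers, as in Scheduler::ready()/unready() */
    void set_ready(bool r)
    {
        ready = r;
        for (Context *helper : helpers)
            if (helper->ready != r)
                helper->set_ready(r);
    }
};
```

For example, if `a` helps `b` and `b` helps `c`, then `a.helping_destination()` resolves to `c`, and making `c` ready also wakes `b` and `a`.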
diff --git a/repos/base-hw/src/core/kernel/scheduler.h b/repos/base-hw/src/core/kernel/scheduler.h
index 7727b24995..4f4af83714 100644
--- a/repos/base-hw/src/core/kernel/scheduler.h
+++ b/repos/base-hw/src/core/kernel/scheduler.h
@@ -65,6 +65,7 @@ class Kernel::Scheduler
friend class Scheduler_test::Context;
using List_element = Genode::List_element<Context>;
+ using List = Genode::List<List_element>;
unsigned _priority;
unsigned _quota;
@@ -74,10 +75,20 @@ class Kernel::Scheduler
List_element _slack_le { this };
unsigned _slack_time_left { 0 };
+ List_element _helper_le { this };
+ List _helper_list {};
+ Context *_destination { nullptr };
+
bool _ready { false };
void _reset() { _priotized_time_left = _quota; }
+ /**
+ * Noncopyable
+ */
+ Context(const Context&) = delete;
+ Context& operator=(const Context&) = delete;
+
public:
Context(Priority const priority,
@@ -85,9 +96,14 @@ class Kernel::Scheduler
:
_priority(priority.value),
_quota(quota) { }
+ ~Context();
bool ready() const { return _ready; }
void quota(unsigned const q) { _quota = q; }
+
+ void help(Context &c);
+ void helping_finished();
+ Context& helping_destination();
};
private:
diff --git a/repos/base-hw/src/core/kernel/signal_receiver.cc b/repos/base-hw/src/core/kernel/signal.cc
similarity index 83%
rename from repos/base-hw/src/core/kernel/signal_receiver.cc
rename to repos/base-hw/src/core/kernel/signal.cc
index 5c99894103..ad5017386a 100644
--- a/repos/base-hw/src/core/kernel/signal_receiver.cc
+++ b/repos/base-hw/src/core/kernel/signal.cc
@@ -1,18 +1,19 @@
/*
* \brief Kernel backend for asynchronous inter-process communication
* \author Martin Stein
+ * \author Stefan Kalkowski
* \date 2012-11-30
*/
/*
- * Copyright (C) 2012-2019 Genode Labs GmbH
+ * Copyright (C) 2012-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
/* core includes */
-#include <kernel/signal_receiver.h>
+#include <kernel/signal.h>
#include <kernel/thread.h>
using namespace Kernel;
@@ -26,7 +27,7 @@ void Signal_handler::cancel_waiting()
{
if (_receiver) {
_receiver->_handler_cancelled(*this);
- _receiver = 0;
+ _receiver = nullptr;
}
}
@@ -71,28 +72,20 @@ void Signal_context::_deliverable()
void Signal_context::_delivered()
{
_submits = 0;
- _ack = 0;
+ _ack = false;
}
-void Signal_context::_killer_cancelled() { _killer = 0; }
-
-
-bool Signal_context::can_submit(unsigned const n) const
-{
- if (_killed || _submits >= (unsigned)~0 - n)
- return false;
-
- return true;
-}
+void Signal_context::_killer_cancelled() { _killer = nullptr; }
void Signal_context::submit(unsigned const n)
{
- if (_killed || _submits >= (unsigned)~0 - n)
+ if (_killed)
return;
- _submits += n;
+ if (_submits < ((unsigned)~0 - n))
+ _submits += n;
if (_ack)
_deliverable();
@@ -105,32 +98,19 @@ void Signal_context::ack()
return;
if (!_killed) {
- _ack = 1;
+ _ack = true;
_deliverable();
return;
}
if (_killer) {
- _killer->_context = 0;
+ _killer->_context = nullptr;
_killer->_thread.signal_context_kill_done();
- _killer = 0;
+ _killer = nullptr;
}
}
-bool Signal_context::can_kill() const
-{
- /* check if in a kill operation or already killed */
- if (_killed) {
- if (_ack)
- return true;
-
- return false;
- }
- return true;
-}
-
-
void Signal_context::kill(Signal_context_killer &k)
{
/* check if in a kill operation or already killed */
@@ -139,13 +119,13 @@ void Signal_context::kill(Signal_context_killer &k)
/* kill directly if there is no unacknowledged delivery */
if (_ack) {
- _killed = 1;
+ _killed = true;
return;
}
/* wait for delivery acknowledgement */
_killer = &k;
- _killed = 1;
+ _killed = true;
_killer->_context = this;
_killer->_thread.signal_context_kill_pending();
}
@@ -231,24 +211,17 @@ void Signal_receiver::_add_context(Signal_context &c) {
_contexts.enqueue(c._contexts_fe); }
-bool Signal_receiver::can_add_handler(Signal_handler const &h) const
+
+bool Signal_receiver::add_handler(Signal_handler &h)
{
if (h._receiver)
return false;
- return true;
-}
-
-
-void Signal_receiver::add_handler(Signal_handler &h)
-{
- if (h._receiver)
- return;
-
_handlers.enqueue(h._handlers_fe);
h._receiver = this;
h._thread.signal_wait_for_signal();
_listen();
+ return true;
}
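The signal.cc hunks above replace the old two-step `can_submit()`/`submit()` protocol with a single `submit()` that silently saturates near the counter limit. A minimal stand-in (not the kernel's `Signal_context`) showing just that guard:

```cpp
#include <cassert>
#include <limits>

struct Signal_counter
{
    unsigned submits { 0 };
    bool     killed  { false };

    void submit(unsigned const n)
    {
        /* submits to a killed context are dropped */
        if (killed)
            return;

        /* same overflow guard as in Signal_context::submit() */
        if (submits < (std::numeric_limits<unsigned>::max() - n))
            submits += n;
    }
};
```

A submit that would overflow the counter is simply not counted, so callers such as `User_irq::occurred()` and `timeout_triggered()` no longer need the separate `can_submit()` query.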
diff --git a/repos/base-hw/src/core/kernel/signal_receiver.h b/repos/base-hw/src/core/kernel/signal.h
similarity index 91%
rename from repos/base-hw/src/core/kernel/signal_receiver.h
rename to repos/base-hw/src/core/kernel/signal.h
index f5b2df09f8..3fa729e481 100644
--- a/repos/base-hw/src/core/kernel/signal_receiver.h
+++ b/repos/base-hw/src/core/kernel/signal.h
@@ -1,18 +1,19 @@
/*
* \brief Kernel backend for asynchronous inter-process communication
* \author Martin Stein
+ * \author Stefan Kalkowski
* \date 2012-11-30
*/
/*
- * Copyright (C) 2012-2017 Genode Labs GmbH
+ * Copyright (C) 2012-2025 Genode Labs GmbH
*
* This file is part of the Genode OS framework, which is distributed
* under the terms of the GNU Affero General Public License version 3.
*/
-#ifndef _CORE__KERNEL__SIGNAL_RECEIVER_H_
-#define _CORE__KERNEL__SIGNAL_RECEIVER_H_
+#ifndef _CORE__KERNEL__SIGNAL_H_
+#define _CORE__KERNEL__SIGNAL_H_
/* Genode includes */
#include
@@ -158,20 +159,14 @@ class Kernel::Signal_context
*
* \param r receiver that the context shall be assigned to
* \param imprint userland identification of the context
- *
- * \throw Assign_to_receiver_failed
*/
- Signal_context(Signal_receiver & r, addr_t const imprint);
+ Signal_context(Signal_receiver &, addr_t const imprint);
/**
* Submit the signal
*
* \param n number of submits
- *
- * \retval 0 succeeded
- * \retval -1 failed
*/
- bool can_submit(unsigned const n) const;
void submit(unsigned const n);
/**
@@ -182,12 +177,8 @@ class Kernel::Signal_context
/**
* Destruct context or prepare to do it as soon as delivery is done
*
- * \param killer object that shall receive progress reports
- *
- * \retval 0 succeeded
- * \retval -1 failed
+ * \param k object that shall receive progress reports
*/
- bool can_kill() const;
void kill(Signal_context_killer &k);
/**
@@ -272,8 +263,7 @@ class Kernel::Signal_receiver
* \retval 0 succeeded
* \retval -1 failed
*/
- bool can_add_handler(Signal_handler const &h) const;
- void add_handler(Signal_handler &h);
+ bool add_handler(Signal_handler &h);
/**
* Syscall to create a signal receiver
diff --git a/repos/base-hw/src/core/kernel/thread.cc b/repos/base-hw/src/core/kernel/thread.cc
index b4febf8070..f749276bed 100644
--- a/repos/base-hw/src/core/kernel/thread.cc
+++ b/repos/base-hw/src/core/kernel/thread.cc
@@ -33,45 +33,42 @@ extern "C" void _core_start(void);
using namespace Kernel;
-void Thread::_ipc_alloc_recv_caps(unsigned cap_count)
+Thread::Ipc_alloc_result Thread::_ipc_alloc_recv_caps(unsigned cap_count)
{
using Allocator = Genode::Allocator;
+ using Result = Ipc_alloc_result;
Allocator &slab = pd().platform_pd().capability_slab();
for (unsigned i = 0; i < cap_count; i++) {
if (_obj_id_ref_ptr[i] != nullptr)
continue;
- slab.try_alloc(sizeof(Object_identity_reference)).with_result(
+ Result const result =
+ slab.try_alloc(sizeof(Object_identity_reference)).convert<Result>(
[&] (void *ptr) {
- _obj_id_ref_ptr[i] = ptr; },
+ _obj_id_ref_ptr[i] = ptr;
+ return Result::OK; },
[&] (Allocator::Alloc_error e) {
- switch (e) {
- case Allocator::Alloc_error::DENIED:
-
- /*
- * Slab is exhausted, reflect condition to the client.
- */
- throw Genode::Out_of_ram();
-
- case Allocator::Alloc_error::OUT_OF_CAPS:
- case Allocator::Alloc_error::OUT_OF_RAM:
-
- /*
- * These conditions cannot happen because the slab
- * does not try to grow automatically. It is
- * explicitely expanded by the client as response to
- * the 'Out_of_ram' condition above.
- */
+ /*
+ * Conditions other than DENIED cannot happen because the slab
+ * does not try to grow automatically. It is explicitly
+ * expanded by the client in response to the EXHAUSTED return
+ * value.
+ */
+ if (e != Allocator::Alloc_error::DENIED)
Genode::raw("unexpected recv_caps allocation failure");
- }
+
+ return Result::EXHAUSTED;
}
);
+ if (result == Result::EXHAUSTED)
+ return result;
}
_ipc_rcv_caps = cap_count;
+ return Result::OK;
}
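The `_ipc_alloc_recv_caps()` rework above turns the old `throw Out_of_ram` into an explicit `Ipc_alloc_result` that the syscall handlers reflect to the client as the `-2` return value. A sketch of the same protocol, with a hypothetical `Fixed_pool` standing in for the capability slab:

```cpp
#include <cassert>
#include <optional>

enum class Ipc_alloc_result { OK, EXHAUSTED };

struct Fixed_pool
{
    unsigned slots;

    /* stand-in for slab.try_alloc(); empty optional models Alloc_error::DENIED */
    std::optional<int> try_alloc()
    {
        if (slots == 0)
            return std::nullopt;
        --slots;
        return 1;
    }
};

/* allocate one slot per receive capability; report exhaustion to the caller */
Ipc_alloc_result alloc_recv_caps(Fixed_pool &pool, unsigned cap_count)
{
    for (unsigned i = 0; i < cap_count; i++)
        if (!pool.try_alloc())
            return Ipc_alloc_result::EXHAUSTED;
    return Ipc_alloc_result::OK;
}
```

The caller can then expand the pool and retry, which is exactly the client-driven slab expansion the comment in the hunk above describes.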
@@ -87,11 +84,20 @@ void Thread::_ipc_free_recv_caps()
}
-void Thread::_ipc_init(Genode::Native_utcb &utcb, Thread &starter)
+Thread::Ipc_alloc_result Thread::_ipc_init(Genode::Native_utcb &utcb, Thread &starter)
{
_utcb = &utcb;
- _ipc_alloc_recv_caps((unsigned)(starter._utcb->cap_cnt()));
- ipc_copy_msg(starter);
+
+ switch (_ipc_alloc_recv_caps((unsigned)(starter._utcb->cap_cnt()))) {
+
+ case Ipc_alloc_result::OK:
+ ipc_copy_msg(starter);
+ break;
+
+ case Ipc_alloc_result::EXHAUSTED:
+ return Ipc_alloc_result::EXHAUSTED;
+ }
+ return Ipc_alloc_result::OK;
}
@@ -163,7 +169,7 @@ Thread::Destroy::Destroy(Thread & caller, Core::Kernel_object<Thread> & to_delet
:
caller(caller), thread_to_destroy(to_delete)
{
- thread_to_destroy->_cpu->work_list().insert(&_le);
+ thread_to_destroy->_cpu().work_list().insert(&_le);
caller._become_inactive(AWAITS_RESTART);
}
@@ -171,7 +177,7 @@ Thread::Destroy::Destroy(Thread & caller, Core::Kernel_object<Thread> & to_delet
void
Thread::Destroy::execute(Cpu &)
{
- thread_to_destroy->_cpu->work_list().remove(&_le);
+ thread_to_destroy->_cpu().work_list().remove(&_le);
thread_to_destroy.destruct();
caller._restart();
}
@@ -233,7 +239,8 @@ void Thread::ipc_send_request_succeeded()
assert(_state == AWAITS_IPC);
user_arg_0(0);
_state = ACTIVE;
- if (!Cpu_job::own_share_active()) { _activate_used_shares(); }
+ _activate();
+ helping_finished();
}
@@ -242,7 +249,8 @@ void Thread::ipc_send_request_failed()
assert(_state == AWAITS_IPC);
user_arg_0(-1);
_state = ACTIVE;
- if (!Cpu_job::own_share_active()) { _activate_used_shares(); }
+ _activate();
+ helping_finished();
}
@@ -262,32 +270,16 @@ void Thread::ipc_await_request_failed()
}
-void Thread::_deactivate_used_shares()
-{
- Cpu_job::_deactivate_own_share();
- _ipc_node.for_each_helper([&] (Thread &thread) {
- thread._deactivate_used_shares(); });
-}
-
-
-void Thread::_activate_used_shares()
-{
- Cpu_job::_activate_own_share();
- _ipc_node.for_each_helper([&] (Thread &thread) {
- thread._activate_used_shares(); });
-}
-
-
void Thread::_become_active()
{
- if (_state != ACTIVE && !_paused) { _activate_used_shares(); }
+ if (_state != ACTIVE && !_paused) Cpu_context::_activate();
_state = ACTIVE;
}
void Thread::_become_inactive(State const s)
{
- if (_state == ACTIVE && !_paused) { _deactivate_used_shares(); }
+ if (_state == ACTIVE && !_paused) Cpu_context::_deactivate();
_state = s;
}
@@ -295,17 +287,13 @@ void Thread::_become_inactive(State const s)
void Thread::_die() { _become_inactive(DEAD); }
-Cpu_job * Thread::helping_destination() {
- return &_ipc_node.helping_destination(); }
-
-
size_t Thread::_core_to_kernel_quota(size_t const quota) const
{
using Genode::Cpu_session;
/* we assert at timer construction that cpu_quota_us in ticks fits size_t */
size_t const ticks = (size_t)
- _cpu->timer().us_to_ticks(Kernel::cpu_quota_us);
+ _cpu().timer().us_to_ticks(Kernel::cpu_quota_us);
return Cpu_session::quota_lim_downscale(quota, ticks);
}
@@ -313,24 +301,26 @@ size_t Thread::_core_to_kernel_quota(size_t const quota) const
void Thread::_call_thread_quota()
{
Thread * const thread = (Thread *)user_arg_1();
- thread->Cpu_job::quota((unsigned)(_core_to_kernel_quota(user_arg_2())));
+ thread->Cpu_context::quota((unsigned)(_core_to_kernel_quota(user_arg_2())));
}
void Thread::_call_start_thread()
{
- /* lookup CPU */
- Cpu & cpu = _cpu_pool.cpu((unsigned)user_arg_2());
user_arg_0(0);
Thread &thread = *(Thread*)user_arg_1();
assert(thread._state == AWAITS_START);
- thread.affinity(cpu);
-
/* join protection domain */
- thread._pd = (Pd *) user_arg_3();
- thread._ipc_init(*(Native_utcb *)user_arg_4(), *this);
+ thread._pd = (Pd *) user_arg_2();
+ switch (thread._ipc_init(*(Native_utcb *)user_arg_3(), *this)) {
+ case Ipc_alloc_result::OK:
+ break;
+ case Ipc_alloc_result::EXHAUSTED:
+ user_arg_0(-2);
+ return;
+ }
/*
* Sanity check core threads!
@@ -344,7 +334,8 @@ void Thread::_call_start_thread()
* semantic changes, and additional core threads are started
* across cpu cores.
*/
- if (thread._pd == &_core_pd && cpu.id() != _cpu_pool.primary_cpu().id())
+ if (thread._pd == &_core_pd &&
+ thread._cpu().id() != _cpu_pool.primary_cpu().id())
Genode::raw("Error: do not start core threads"
" on CPU cores different than boot cpu");
@@ -355,8 +346,8 @@ void Thread::_call_start_thread()
void Thread::_call_pause_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
- if (thread._state == ACTIVE && !thread._paused) {
- thread._deactivate_used_shares(); }
+ if (thread._state == ACTIVE && !thread._paused)
+ thread._deactivate();
thread._paused = true;
}
@@ -365,8 +356,8 @@ void Thread::_call_pause_thread()
void Thread::_call_resume_thread()
{
Thread &thread = *reinterpret_cast<Thread*>(user_arg_1());
- if (thread._state == ACTIVE && thread._paused) {
- thread._activate_used_shares(); }
+ if (thread._state == ACTIVE && thread._paused)
+ thread._activate();
thread._paused = false;
}
@@ -394,6 +385,7 @@ void Thread::_call_restart_thread()
_die();
return;
}
+
user_arg_0(thread._restart());
}
@@ -401,7 +393,10 @@ void Thread::_call_restart_thread()
bool Thread::_restart()
{
assert(_state == ACTIVE || _state == AWAITS_RESTART);
- if (_state != AWAITS_RESTART) { return false; }
+
+ if (_state == ACTIVE && _exception_state == NO_EXCEPTION)
+ return false;
+
_exception_state = NO_EXCEPTION;
_become_active();
return true;
@@ -439,7 +434,7 @@ void Thread::_cancel_blocking()
void Thread::_call_yield_thread()
{
- Cpu_job::_yield();
+ Cpu_context::_yield();
}
@@ -449,12 +444,11 @@ void Thread::_call_delete_thread()
*(Core::Kernel_object<Thread>*)user_arg_1();
/**
- * Delete a thread immediately if it has no cpu assigned yet,
- * or it is assigned to this cpu, or the assigned cpu did not scheduled it.
+ * Delete a thread immediately if it is assigned to this cpu,
+ * or the assigned cpu did not schedule it.
*/
- if (!to_delete->_cpu ||
- (to_delete->_cpu->id() == Cpu::executing_id() ||
- &to_delete->_cpu->scheduled_job() != &*to_delete)) {
+ if (to_delete->_cpu().id() == Cpu::executing_id() ||
+ &to_delete->_cpu().current_context() != &*to_delete) {
_call_delete();
return;
}
@@ -463,7 +457,7 @@ void Thread::_call_delete_thread()
* Construct a cross-cpu work item and send an IPI
*/
_destroy.construct(*this, to_delete);
- to_delete->_cpu->trigger_ip_interrupt();
+ to_delete->_cpu().trigger_ip_interrupt();
}
@@ -472,8 +466,8 @@ void Thread::_call_delete_pd()
Core::Kernel_object<Pd> & pd =
*(Core::Kernel_object<Pd>*)user_arg_1();
- if (_cpu->active(pd->mmu_regs))
- _cpu->switch_to(_core_pd.mmu_regs);
+ if (_cpu().active(pd->mmu_regs))
+ _cpu().switch_to(_core_pd.mmu_regs);
_call_delete();
}
@@ -482,7 +476,14 @@ void Thread::_call_delete_pd()
void Thread::_call_await_request_msg()
{
if (_ipc_node.ready_to_wait()) {
- _ipc_alloc_recv_caps((unsigned)user_arg_1());
+
+ switch (_ipc_alloc_recv_caps((unsigned)user_arg_1())) {
+ case Ipc_alloc_result::OK:
+ break;
+ case Ipc_alloc_result::EXHAUSTED:
+ user_arg_0(-2);
+ return;
+ }
_ipc_node.wait();
if (_ipc_node.waiting()) {
_become_inactive(AWAITS_IPC);
@@ -498,7 +499,7 @@ void Thread::_call_await_request_msg()
void Thread::_call_timeout()
{
- Timer & t = _cpu->timer();
+ Timer & t = _cpu().timer();
_timeout_sigid = (Kernel::capid_t)user_arg_2();
t.set_timeout(this, t.us_to_ticks(user_arg_1()));
}
@@ -506,13 +507,13 @@ void Thread::_call_timeout()
void Thread::_call_timeout_max_us()
{
- user_ret_time(_cpu->timer().timeout_max_us());
+ user_ret_time(_cpu().timer().timeout_max_us());
}
void Thread::_call_time()
{
- Timer & t = _cpu->timer();
+ Timer & t = _cpu().timer();
user_ret_time(t.ticks_to_us(t.time()));
}
@@ -521,11 +522,8 @@ void Thread::timeout_triggered()
{
Signal_context * const c =
pd().cap_tree().find<Signal_context>(_timeout_sigid);
- if (!c || !c->can_submit(1)) {
- Genode::raw(*this, ": failed to submit timeout signal");
- return;
- }
- c->submit(1);
+ if (c) c->submit(1);
+ else Genode::warning(*this, ": failed to submit timeout signal");
}
@@ -539,19 +537,26 @@ void Thread::_call_send_request_msg()
_become_inactive(DEAD);
return;
}
- bool const help = Cpu_job::_helping_possible(*dst);
+ bool const help = Cpu_context::_helping_possible(*dst);
oir = oir->find(dst->pd());
if (!_ipc_node.ready_to_send()) {
Genode::raw("IPC send request: bad state");
} else {
- _ipc_alloc_recv_caps((unsigned)user_arg_2());
- _ipc_capid = oir ? oir->capid() : cap_id_invalid();
- _ipc_node.send(dst->_ipc_node, help);
+ switch (_ipc_alloc_recv_caps((unsigned)user_arg_2())) {
+ case Ipc_alloc_result::OK:
+ break;
+ case Ipc_alloc_result::EXHAUSTED:
+ user_arg_0(-2);
+ return;
+ }
+ _ipc_capid = oir ? oir->capid() : cap_id_invalid();
+ _ipc_node.send(dst->_ipc_node);
}
_state = AWAITS_IPC;
- if (!help || !dst->own_share_active()) { _deactivate_used_shares(); }
+ if (help) Cpu_context::_help(*dst);
+ if (!help || !dst->ready()) _deactivate();
}
@@ -568,7 +573,9 @@ void Thread::_call_pager()
{
/* override event route */
Thread &thread = *(Thread *)user_arg_1();
- thread._pager = pd().cap_tree().find((Kernel::capid_t)user_arg_2());
+ Thread &pager = *(Thread *)user_arg_2();
+ Signal_context &sc = *pd().cap_tree().find((Kernel::capid_t)user_arg_3());
+ thread._fault_context.construct(pager, sc);
}
@@ -592,12 +599,11 @@ void Thread::_call_await_signal()
return;
}
/* register handler at the receiver */
- if (!r->can_add_handler(_signal_handler)) {
+ if (!r->add_handler(_signal_handler)) {
Genode::raw("failed to register handler at signal receiver");
user_arg_0(-1);
return;
}
- r->add_handler(_signal_handler);
user_arg_0(0);
}
@@ -614,11 +620,10 @@ void Thread::_call_pending_signal()
}
/* register handler at the receiver */
- if (!r->can_add_handler(_signal_handler)) {
+ if (!r->add_handler(_signal_handler)) {
user_arg_0(-1);
return;
}
- r->add_handler(_signal_handler);
if (_state == AWAITS_SIGNAL) {
_cancel_blocking();
@@ -653,20 +658,7 @@ void Thread::_call_submit_signal()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find((Kernel::capid_t)user_arg_1());
- if(!c) {
- /* cannot submit unknown signal context */
- user_arg_0(-1);
- return;
- }
-
- /* trigger signal context */
- if (!c->can_submit((unsigned)user_arg_2())) {
- Genode::raw("failed to submit signal context");
- user_arg_0(-1);
- return;
- }
- c->submit((unsigned)user_arg_2());
- user_arg_0(0);
+ if(c) c->submit((unsigned)user_arg_2());
}
@@ -674,13 +666,8 @@ void Thread::_call_ack_signal()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find((Kernel::capid_t)user_arg_1());
- if (!c) {
- Genode::raw(*this, ": cannot ack unknown signal context");
- return;
- }
-
- /* acknowledge */
- c->ack();
+ if (c) c->ack();
+ else Genode::warning(*this, ": cannot ack unknown signal context");
}
@@ -688,19 +675,8 @@ void Thread::_call_kill_signal_context()
{
/* lookup signal context */
Signal_context * const c = pd().cap_tree().find((Kernel::capid_t)user_arg_1());
- if (!c) {
- Genode::raw(*this, ": cannot kill unknown signal context");
- user_arg_0(-1);
- return;
- }
-
- /* kill signal context */
- if (!c->can_kill()) {
- Genode::raw("failed to kill signal context");
- user_arg_0(-1);
- return;
- }
- c->kill(_signal_context_killer);
+ if (c) c->kill(_signal_context_killer);
+ else Genode::warning(*this, ": cannot kill unknown signal context");
}
@@ -719,7 +695,7 @@ void Thread::_call_new_irq()
(Genode::Irq_session::Polarity) (user_arg_3() & 0b11);
_call_new((unsigned)user_arg_2(), trigger, polarity, *c,
- _cpu->pic(), _user_irq_pool);
+ _cpu().pic(), _user_irq_pool);
}
@@ -820,10 +796,27 @@ void Thread::_call_single_step() {
}
+void Thread::_call_ack_pager_signal()
+{
+ Signal_context * const c = pd().cap_tree().find((Kernel::capid_t)user_arg_1());
+ if (!c)
+ Genode::raw(*this, ": cannot ack unknown signal context");
+ else
+ c->ack();
+
+ Thread &thread = *(Thread*)user_arg_2();
+ thread.helping_finished();
+
+ bool resolved = user_arg_3() ||
+ thread._exception_state == NO_EXCEPTION;
+ if (resolved) thread._restart();
+ else thread._become_inactive(AWAITS_RESTART);
+}
+
+
+
void Thread::_call()
{
- try {
-
/* switch over unrestricted kernel calls */
unsigned const call_id = (unsigned)user_arg_0();
switch (call_id) {
@@ -863,13 +856,15 @@ void Thread::_call()
switch (call_id) {
case call_id_new_thread():
_call_new(_addr_space_id_alloc, _user_irq_pool, _cpu_pool,
- _core_pd, (unsigned) user_arg_2(),
- (unsigned) _core_to_kernel_quota(user_arg_3()),
- (char const *) user_arg_4(), USER);
+ _cpu_pool.cpu((unsigned)user_arg_2()),
+ _core_pd, (unsigned) user_arg_3(),
+ (unsigned) _core_to_kernel_quota(user_arg_4()),
+ (char const *) user_arg_5(), USER);
return;
case call_id_new_core_thread():
_call_new(_addr_space_id_alloc, _user_irq_pool, _cpu_pool,
- _core_pd, (char const *) user_arg_2());
+ _cpu_pool.cpu((unsigned)user_arg_2()),
+ _core_pd, (char const *) user_arg_3());
return;
case call_id_thread_quota(): _call_thread_quota(); return;
case call_id_delete_thread(): _call_delete_thread(); return;
@@ -902,40 +897,70 @@ void Thread::_call()
case call_id_set_cpu_state(): _call_set_cpu_state(); return;
case call_id_exception_state(): _call_exception_state(); return;
case call_id_single_step(): _call_single_step(); return;
+ case call_id_ack_pager_signal(): _call_ack_pager_signal(); return;
default:
Genode::raw(*this, ": unknown kernel call");
_die();
return;
}
- } catch (Genode::Allocator::Out_of_memory &e) { user_arg_0(-2); }
+}
+
+
+void Thread::_signal_to_pager()
+{
+ if (!_fault_context.constructed()) {
+ Genode::warning(*this, " could not send signal to pager");
+ _die();
+ return;
+ }
+
+ /* first signal to pager to wake it up */
+ _fault_context->sc.submit(1);
+
+ /* only help pager thread if runnable and scheduler allows it */
+ bool const help = Cpu_context::_helping_possible(_fault_context->pager)
+ && (_fault_context->pager._state == ACTIVE);
+ if (help) Cpu_context::_help(_fault_context->pager);
+ else _become_inactive(AWAITS_RESTART);
}
void Thread::_mmu_exception()
{
- _become_inactive(AWAITS_RESTART);
+ using namespace Genode;
+ using Genode::log;
+
_exception_state = MMU_FAULT;
Cpu::mmu_fault(*regs, _fault);
_fault.ip = regs->ip;
if (_fault.type == Thread_fault::UNKNOWN) {
- Genode::raw(*this, " raised unhandled MMU fault ", _fault);
+ Genode::warning(*this, " raised unhandled MMU fault ", _fault);
+ _die();
return;
}
- if (_type != USER)
- Genode::raw(*this, " raised a fault, which should never happen ",
- _fault);
+ if (_type != USER) {
+ error(*this, " raised a fault, which should never happen ",
+ _fault);
+ log("Register dump: ", *regs);
+ log("Backtrace:");
- if (_pager && _pager->can_submit(1)) {
- _pager->submit(1);
+ Const_byte_range_ptr const stack {
+ (char const*)Hw::Mm::core_stack_area().base,
+ Hw::Mm::core_stack_area().size };
+ regs->for_each_return_address(stack, [&] (void **p) {
+ log(*p); });
+ _die();
+ return;
}
+
+ _signal_to_pager();
}
void Thread::_exception()
{
- _become_inactive(AWAITS_RESTART);
_exception_state = EXCEPTION;
if (_type != USER) {
@@ -943,18 +968,14 @@ void Thread::_exception()
_die();
}
- if (_pager && _pager->can_submit(1)) {
- _pager->submit(1);
- } else {
- Genode::raw(*this, " could not send signal to pager on exception");
- _die();
- }
+ _signal_to_pager();
}
Thread::Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
+ Cpu &cpu,
Pd &core_pd,
unsigned const priority,
unsigned const quota,
@@ -962,7 +983,7 @@ Thread::Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Type type)
:
Kernel::Object { *this },
- Cpu_job { priority, quota },
+ Cpu_context { cpu, priority, quota },
_addr_space_id_alloc { addr_space_id_alloc },
_user_irq_pool { user_irq_pool },
_cpu_pool { cpu_pool },
@@ -999,8 +1020,8 @@ Core_main_thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Cpu_pool &cpu_pool,
Pd &core_pd)
:
- Core_object(
- core_pd, addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd, "core")
+ Core_object(core_pd, addr_space_id_alloc, user_irq_pool, cpu_pool,
+ cpu_pool.primary_cpu(), core_pd, "core")
{
using namespace Core;
@@ -1016,7 +1037,6 @@ Core_main_thread(Board::Address_space_id_allocator &addr_space_id_alloc,
regs->sp = (addr_t)&__initial_stack_base[0] + DEFAULT_STACK_SIZE;
regs->ip = (addr_t)&_core_start;
- affinity(_cpu_pool.primary_cpu());
_utcb = &_utcb_instance;
Thread::_pd = &core_pd;
_become_active();
diff --git a/repos/base-hw/src/core/kernel/thread.h b/repos/base-hw/src/core/kernel/thread.h
index 5feedebdf8..74fc257250 100644
--- a/repos/base-hw/src/core/kernel/thread.h
+++ b/repos/base-hw/src/core/kernel/thread.h
@@ -20,7 +20,7 @@
/* base-hw core includes */
#include
#include
-#include
+#include
#include
#include
#include
@@ -53,7 +53,7 @@ struct Kernel::Thread_fault
/**
* Kernel back-end for userland execution-contexts
*/
-class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
+class Kernel::Thread : private Kernel::Object, public Cpu_context, private Timeout
{
public:
@@ -173,7 +173,15 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
size_t _ipc_rcv_caps { 0 };
Genode::Native_utcb *_utcb { nullptr };
Pd *_pd { nullptr };
- Signal_context *_pager { nullptr };
+
+ struct Fault_context
+ {
+ Thread &pager;
+ Signal_context &sc;
+ };
+
+ Genode::Constructible<Fault_context> _fault_context {};
+
Thread_fault _fault { };
State _state;
Signal_handler _signal_handler { *this };
@@ -216,21 +224,16 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
*/
void _become_inactive(State const s);
- /**
- * Activate our CPU-share and those of our helpers
- */
- void _activate_used_shares();
-
- /**
- * Deactivate our CPU-share and those of our helpers
- */
- void _deactivate_used_shares();
-
/**
* Suspend unrecoverably from execution
*/
void _die();
+ /**
+ * In case of fault, signal to pager, and help or block
+ */
+ void _signal_to_pager();
+
/**
* Handle an exception thrown by the memory management unit
*/
@@ -306,6 +309,7 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
void _call_set_cpu_state();
void _call_exception_state();
void _call_single_step();
+ void _call_ack_pager_signal();
template <typename T>
void _call_new(auto &&... args)
@@ -322,9 +326,13 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
kobj.destruct();
}
- void _ipc_alloc_recv_caps(unsigned rcv_cap_count);
+ enum Ipc_alloc_result { OK, EXHAUSTED };
+
+ [[nodiscard]] Ipc_alloc_result _ipc_alloc_recv_caps(unsigned rcv_cap_count);
+
void _ipc_free_recv_caps();
- void _ipc_init(Genode::Native_utcb &utcb, Thread &callee);
+
+ [[nodiscard]] Ipc_alloc_result _ipc_init(Genode::Native_utcb &utcb, Thread &callee);
public:
@@ -341,6 +349,7 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
+ Cpu &cpu,
Pd &core_pd,
unsigned const priority,
unsigned const quota,
@@ -355,11 +364,12 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
Thread(Board::Address_space_id_allocator &addr_space_id_alloc,
Irq::Pool &user_irq_pool,
Cpu_pool &cpu_pool,
+ Cpu &cpu,
Pd &core_pd,
char const *const label)
:
- Thread(addr_space_id_alloc, user_irq_pool, cpu_pool, core_pd,
- Scheduler::Priority::min(), 0, label, CORE)
+ Thread(addr_space_id_alloc, user_irq_pool, cpu_pool, cpu,
+ core_pd, Scheduler::Priority::min(), 0, label, CORE)
{ }
~Thread();
@@ -396,13 +406,14 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
* \retval capability id of the new kernel object
*/
static capid_t syscall_create(Core::Kernel_object<Thread> &t,
+ unsigned const cpu_id,
unsigned const priority,
size_t const quota,
char const * const label)
{
return (capid_t)call(call_id_new_thread(), (Call_arg)&t,
- (Call_arg)priority, (Call_arg)quota,
- (Call_arg)label);
+ (Call_arg)cpu_id, (Call_arg)priority,
+ (Call_arg)quota, (Call_arg)label);
}
/**
@@ -414,10 +425,11 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
* \retval capability id of the new kernel object
*/
static capid_t syscall_create(Core::Kernel_object<Thread> &t,
+ unsigned const cpu_id,
char const * const label)
{
return (capid_t)call(call_id_new_core_thread(), (Call_arg)&t,
- (Call_arg)label);
+ (Call_arg)cpu_id, (Call_arg)label);
}
/**
@@ -454,13 +466,12 @@ class Kernel::Thread : private Kernel::Object, public Cpu_job, private Timeout
void signal_receive_signal(void * const base, size_t const size);
- /*************
- ** Cpu_job **
- *************/
+ /*****************
+ ** Cpu_context **
+ *****************/
- void exception(Cpu & cpu) override;
- void proceed(Cpu & cpu) override;
- Cpu_job * helping_destination() override;
+ void exception() override;
+ void proceed() override;
/*************
diff --git a/repos/base-hw/src/core/kernel/vm.h b/repos/base-hw/src/core/kernel/vm.h
index b742822f7e..7b82c81803 100644
--- a/repos/base-hw/src/core/kernel/vm.h
+++ b/repos/base-hw/src/core/kernel/vm.h
@@ -18,7 +18,7 @@
/* core includes */
#include
#include
-#include
+#include
#include
@@ -31,7 +31,7 @@ namespace Kernel {
}
-class Kernel::Vm : private Kernel::Object, public Cpu_job
+class Kernel::Vm : private Kernel::Object, public Cpu_context
{
public:
@@ -66,7 +66,7 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
void _pause_vcpu()
{
if (_scheduled != INACTIVE)
- Cpu_job::_deactivate_own_share();
+ Cpu_context::_deactivate();
_scheduled = INACTIVE;
}
@@ -135,7 +135,7 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
void run()
{
_sync_from_vmm();
- if (_scheduled != ACTIVE) Cpu_job::_activate_own_share();
+ if (_scheduled != ACTIVE) Cpu_context::_activate();
_scheduled = ACTIVE;
}
@@ -146,13 +146,12 @@ class Kernel::Vm : private Kernel::Object, public Cpu_job
}
- /*************
- ** Cpu_job **
- *************/
+ /*****************
+ ** Cpu_context **
+ *****************/
- void exception(Cpu & cpu) override;
- void proceed(Cpu & cpu) override;
- Cpu_job * helping_destination() override { return this; }
+ void exception() override;
+ void proceed() override;
};
#endif /* _CORE__KERNEL__VM_H_ */
diff --git a/repos/base-hw/src/core/pager.cc b/repos/base-hw/src/core/pager.cc
index 4fcd3d651e..98855756a8 100644
--- a/repos/base-hw/src/core/pager.cc
+++ b/repos/base-hw/src/core/pager.cc
@@ -19,9 +19,30 @@
/* base-internal includes */
#include
+#include
using namespace Core;
+static unsigned _nr_of_cpus = 0;
+static void *_pager_thread_memory = nullptr;
+
+
+void Core::init_pager_thread_per_cpu_memory(unsigned const cpus, void * mem)
+{
+ _nr_of_cpus = cpus;
+ _pager_thread_memory = mem;
+}
+
+
+void Core::init_page_fault_handling(Rpc_entrypoint &) { }
+
+
+/*************
+ ** Mapping **
+ *************/
+
+void Mapping::prepare_map_operation() const { }
+
/***************
** Ipc_pager **
@@ -51,13 +72,11 @@ void Pager_object::wake_up()
}
-void Pager_object::start_paging(Kernel_object<Kernel::Signal_receiver> & receiver)
+void Pager_object::start_paging(Kernel_object<Kernel::Signal_receiver> &receiver,
+ Platform_thread &pager_thread)
{
- using Object = Kernel_object<Kernel::Signal_receiver>;
- using Entry = Object_pool<Pager_object>::Entry;
-
create(*receiver, (unsigned long)this);
- Entry::cap(Object::_cap);
+ _pager_thread = &pager_thread;
}
@@ -75,11 +94,11 @@ void Pager_object::print(Output &out) const
Pager_object::Pager_object(Cpu_session_capability cpu_session_cap,
Thread_capability thread_cap, addr_t const badge,
- Affinity::Location, Session_label const &,
+ Affinity::Location location, Session_label const &,
Cpu_session::Name const &)
:
- Object_pool<Pager_object>::Entry(Kernel_object<Kernel::Signal_context>::_cap),
- _badge(badge), _cpu_session_cap(cpu_session_cap), _thread_cap(thread_cap)
+ _badge(badge), _location(location),
+ _cpu_session_cap(cpu_session_cap), _thread_cap(thread_cap)
{ }
@@ -87,27 +106,115 @@ Pager_object::Pager_object(Cpu_session_capability cpu_session_cap,
** Pager_entrypoint **
**********************/
-void Pager_entrypoint::dissolve(Pager_object &o)
+void Pager_entrypoint::Thread::entry()
{
- Kernel::kill_signal_context(Capability_space::capid(o.cap()));
- remove(&o);
+ while (1) {
+
+ /* receive fault */
+ if (Kernel::await_signal(Capability_space::capid(_kobj.cap())))
+ continue;
+
+ Pager_object *po = *(Pager_object**)Thread::myself()->utcb()->data();
+ if (!po)
+ continue;
+
+ Untyped_capability cap = po->cap();
+
+ /* fetch fault data */
+ Platform_thread * const pt = (Platform_thread *)po->badge();
+ if (!pt) {
+ warning("failed to get platform thread of faulter");
+ Kernel::ack_signal(Capability_space::capid(cap));
+ continue;
+ }
+
+ if (pt->exception_state() ==
+ Kernel::Thread::Exception_state::EXCEPTION) {
+ if (!po->submit_exception_signal())
+ warning("unresolvable exception: "
+ "pd='", pt->pd().label(), "', "
+ "thread='", pt->label(), "', "
+ "ip=", Hex(pt->state().cpu.ip));
+ pt->fault_resolved(cap, false);
+ continue;
+ }
+
+ _fault = pt->fault_info();
+
+ /* try to resolve fault directly via local region managers */
+ if (po->pager(*this) == Pager_object::Pager_result::STOP) {
+ pt->fault_resolved(cap, false);
+ continue;
+ }
+
+ /* apply mapping that was determined by the local region managers */
+ {
+ Locked_ptr<Address_space> locked_ptr(pt->address_space());
+ if (!locked_ptr.valid()) {
+ pt->fault_resolved(cap, false);
+ continue;
+ }
+
+ Hw::Address_space * as = static_cast<Hw::Address_space *>(&*locked_ptr);
+
+ Cache cacheable = Genode::CACHED;
+ if (!_mapping.cached)
+ cacheable = Genode::UNCACHED;
+ if (_mapping.write_combined)
+ cacheable = Genode::WRITE_COMBINED;
+
+ Hw::Page_flags const flags {
+ .writeable = _mapping.writeable ? Hw::RW : Hw::RO,
+ .executable = _mapping.executable ? Hw::EXEC : Hw::NO_EXEC,
+ .privileged = Hw::USER,
+ .global = Hw::NO_GLOBAL,
+ .type = _mapping.io_mem ? Hw::DEVICE : Hw::RAM,
+ .cacheable = cacheable
+ };
+
+ as->insert_translation(_mapping.dst_addr, _mapping.src_addr,
+ 1UL << _mapping.size_log2, flags);
+ }
+
+ pt->fault_resolved(cap, true);
+ }
}
-Pager_entrypoint::Pager_entrypoint(Rpc_cap_factory &)
+Pager_entrypoint::Thread::Thread(Affinity::Location cpu)
:
- Thread(Weight::DEFAULT_WEIGHT, "pager_ep", PAGER_EP_STACK_SIZE,
- Type::NORMAL),
-
+ Genode::Thread(Weight::DEFAULT_WEIGHT, "pager_ep", PAGER_EP_STACK_SIZE, cpu),
_kobj(_kobj.CALLED_FROM_CORE)
{
start();
}
+void Pager_entrypoint::dissolve(Pager_object &o)
+{
+ Kernel::kill_signal_context(Capability_space::capid(o.cap()));
+}
+
+
Pager_capability Pager_entrypoint::manage(Pager_object &o)
{
- o.start_paging(_kobj);
- insert(&o);
+ unsigned const cpu = o.location().xpos();
+ if (cpu >= _cpus) {
+ error("Invalid location of pager object ", cpu);
+ } else {
+ o.start_paging(_threads[cpu]._kobj,
+ *_threads[cpu].native_thread().platform_thread);
+ }
+
return reinterpret_cap_cast<Pager_object>(o.cap());
}
+
+
+Pager_entrypoint::Pager_entrypoint(Rpc_cap_factory &)
+:
+ _cpus(_nr_of_cpus),
+ _threads((Thread*)_pager_thread_memory)
+{
+ for (unsigned i = 0; i < _cpus; i++)
+ construct_at<Thread>((void*)&_threads[i], Affinity::Location(i, 0));
+}
diff --git a/repos/base-hw/src/core/pager.h b/repos/base-hw/src/core/pager.h
index 29f650e187..3932bb22fe 100644
--- a/repos/base-hw/src/core/pager.h
+++ b/repos/base-hw/src/core/pager.h
@@ -17,12 +17,11 @@
/* Genode includes */
#include
#include
-#include
#include
#include
/* core includes */
-#include
+#include
#include
#include
#include
@@ -30,6 +29,9 @@
namespace Core {
+ class Platform;
+ class Platform_thread;
+
/**
* Interface used by generic region_map code
*/
@@ -53,6 +55,10 @@ namespace Core {
using Pager_capability = Capability<Pager_object>;
enum { PAGER_EP_STACK_SIZE = sizeof(addr_t) * 2048 };
+
+ extern void init_page_fault_handling(Rpc_entrypoint &);
+
+ void init_pager_thread_per_cpu_memory(unsigned const cpus, void * mem);
}
@@ -93,17 +99,17 @@ class Core::Ipc_pager
};
-class Core::Pager_object : private Object_pool<Pager_object>::Entry,
- private Kernel_object<Kernel::Signal_context>
+class Core::Pager_object : private Kernel_object<Kernel::Signal_context>
{
friend class Pager_entrypoint;
- friend class Object_pool<Pager_object>;
private:
unsigned long const _badge;
+ Affinity::Location _location;
Cpu_session_capability _cpu_session_cap;
Thread_capability _thread_cap;
+ Platform_thread *_pager_thread { nullptr };
/**
* User-level signal handler registered for this pager object via
@@ -111,6 +117,12 @@ class Core::Pager_object : private Object_pool::Entry,
*/
Signal_context_capability _exception_sigh { };
+ /*
+ * Noncopyable
+ */
+ Pager_object(const Pager_object&) = delete;
+ Pager_object& operator=(const Pager_object&) = delete;
+
public:
/**
@@ -123,11 +135,15 @@ class Core::Pager_object : private Object_pool::Entry,
Affinity::Location, Session_label const&,
Cpu_session::Name const&);
+ virtual ~Pager_object() {}
+
/**
* User identification of pager object
*/
unsigned long badge() const { return _badge; }
+ Affinity::Location location() { return _location; }
+
/**
* Resume faulter
*/
@@ -158,7 +174,8 @@ class Core::Pager_object : private Object_pool::Entry,
*
* \param receiver signal receiver that receives the page faults
*/
- void start_paging(Kernel_object<Kernel::Signal_receiver> & receiver);
+ void start_paging(Kernel_object<Kernel::Signal_receiver> &receiver,
+ Platform_thread &pager_thread);
/**
* Called when a page-fault finally could not be resolved
@@ -167,6 +184,11 @@ class Core::Pager_object : private Object_pool::Entry,
void print(Output &out) const;
+ void with_pager(auto const &fn)
+ {
+ if (_pager_thread) fn(*_pager_thread);
+ }
+
/******************
** Pure virtual **
@@ -192,24 +214,44 @@ class Core::Pager_object : private Object_pool::Entry,
Cpu_session_capability cpu_session_cap() const { return _cpu_session_cap; }
Thread_capability thread_cap() const { return _thread_cap; }
- using Object_pool<Pager_object>::Entry::cap;
+ Untyped_capability cap() {
+ return Kernel_object<Kernel::Signal_context>::_cap; }
};
-class Core::Pager_entrypoint : public Object_pool<Pager_object>,
- public Thread,
- private Ipc_pager
+class Core::Pager_entrypoint
{
private:
- Kernel_object<Kernel::Signal_receiver> _kobj;
+ friend class Platform;
+
+ class Thread : public Genode::Thread,
+ private Ipc_pager
+ {
+ private:
+
+ friend class Pager_entrypoint;
+
+ Kernel_object<Kernel::Signal_receiver> _kobj;
+
+ public:
+
+ explicit Thread(Affinity::Location);
+
+
+ /**********************
+ ** Thread interface **
+ **********************/
+
+ void entry() override;
+ };
+
+ unsigned const _cpus;
+ Thread *_threads;
public:
- /**
- * Constructor
- */
- Pager_entrypoint(Rpc_cap_factory &);
+ explicit Pager_entrypoint(Rpc_cap_factory &);
/**
* Associate pager object 'obj' with entry point
@@ -220,13 +262,6 @@ class Core::Pager_entrypoint : public Object_pool,
* Dissolve pager object 'obj' from entry point
*/
void dissolve(Pager_object &obj);
-
-
- /**********************
- ** Thread interface **
- **********************/
-
- void entry() override;
};
#endif /* _CORE__PAGER_H_ */
diff --git a/repos/base-hw/src/core/phys_allocated.h b/repos/base-hw/src/core/phys_allocated.h
new file mode 100644
index 0000000000..dd640930b5
--- /dev/null
+++ b/repos/base-hw/src/core/phys_allocated.h
@@ -0,0 +1,79 @@
+/*
+ * \brief Allocate an object with a physical address
+ * \author Norman Feske
+ * \author Benjamin Lamowski
+ * \date 2024-12-02
+ */
+
+/*
+ * Copyright (C) 2024 Genode Labs GmbH
+ *
+ * This file is part of the Genode OS framework, which is distributed
+ * under the terms of the GNU Affero General Public License version 3.
+ */
+
+#ifndef _CORE__PHYS_ALLOCATED_H_
+#define _CORE__PHYS_ALLOCATED_H_
+
+/* base includes */
+#include
+#include
+#include
+
+/* core-local includes */
+#include
+
+namespace Core {
+ template <typename T>
+ class Phys_allocated;
+}
+
+using namespace Core;
+
+
+template <typename T>
+class Core::Phys_allocated : Genode::Noncopyable
+{
+ private:
+
+ Rpc_entrypoint &_ep;
+ Ram_allocator &_ram;
+ Region_map &_rm;
+
+ Attached_ram_dataspace _ds { _ram, _rm, sizeof(T) };
+ public:
+
+ T &obj = *_ds.local_addr<T>();
+
+ Phys_allocated(Rpc_entrypoint &ep,
+ Ram_allocator &ram,
+ Region_map &rm)
+ :
+ _ep(ep), _ram(ram), _rm(rm)
+ {
+ construct_at<T>(&obj);
+ }
+
+ Phys_allocated(Rpc_entrypoint &ep,
+ Ram_allocator &ram,
+ Region_map &rm,
+ auto const &construct_fn)
+ :
+ _ep(ep), _ram(ram), _rm(rm)
+ {
+ construct_fn(*this, &obj);
+ }
+
+ ~Phys_allocated() { obj.~T(); }
+
+ addr_t phys_addr() {
+ addr_t phys_addr { };
+ _ep.apply(_ds.cap(), [&](Dataspace_component *dsc) {
+ phys_addr = dsc->phys_addr();
+ });
+
+ return phys_addr;
+ }
+};
+
+#endif /* _CORE__PHYS_ALLOCATED_H_ */
diff --git a/repos/base-hw/src/core/platform.cc b/repos/base-hw/src/core/platform.cc
index 4df49e25f3..5e2cf3a982 100644
--- a/repos/base-hw/src/core/platform.cc
+++ b/repos/base-hw/src/core/platform.cc
@@ -19,6 +19,7 @@
/* base-hw core includes */
#include
+#include
#include
#include
#include
@@ -31,7 +32,6 @@
/* base internal includes */
#include
#include
-#include
/* base includes */
#include
@@ -60,8 +60,9 @@ Hw::Page_table::Allocator & Platform::core_page_table_allocator()
using Allocator = Hw::Page_table::Allocator;
using Array = Allocator::Array;
addr_t virt_addr = Hw::Mm::core_page_tables().base + sizeof(Hw::Page_table);
- return *unmanaged_singleton(_boot_info().table_allocator,
- virt_addr);
+
+ static Array::Allocator alloc { _boot_info().table_allocator, virt_addr };
+ return alloc;
}
@@ -70,6 +71,7 @@ addr_t Platform::core_main_thread_phys_utcb()
return core_phys_addr(_boot_info().core_main_thread_utcb);
}
+
void Platform::_init_io_mem_alloc()
{
/* add entire adress space minus the RAM memory regions */
@@ -81,8 +83,9 @@ void Platform::_init_io_mem_alloc()
Hw::Memory_region_array const & Platform::_core_virt_regions()
{
- return *unmanaged_singleton(
- Hw::Memory_region(stack_area_virtual_base(), stack_area_virtual_size()));
+ static Hw::Memory_region_array array {
+ Hw::Memory_region(stack_area_virtual_base(), stack_area_virtual_size()) };
+ return array;
}
@@ -161,6 +164,9 @@ void Platform::_init_platform_info()
xml.attribute("acpi", true);
xml.attribute("msi", true);
});
+ xml.node("board", [&] {
+ xml.attribute("name", BOARD_NAME);
+ });
_init_additional_platform_info(xml);
xml.node("affinity-space", [&] {
xml.attribute("width", affinity_space().width());
@@ -248,6 +254,10 @@ Platform::Platform()
);
}
+ unsigned const cpus = _boot_info().cpus;
+ size_t size = cpus * sizeof(Pager_entrypoint::Thread);
+ init_pager_thread_per_cpu_memory(cpus, _core_mem_alloc.alloc(size));
+
class Idle_thread_trace_source : public Trace::Source::Info_accessor,
private Trace::Control,
private Trace::Source
diff --git a/repos/base-hw/src/core/platform.h b/repos/base-hw/src/core/platform.h
index de24fa5150..62fd61469e 100644
--- a/repos/base-hw/src/core/platform.h
+++ b/repos/base-hw/src/core/platform.h
@@ -119,6 +119,18 @@ class Core::Platform : public Platform_generic
static addr_t core_page_table();
static Hw::Page_table::Allocator & core_page_table_allocator();
+ /**
+ * Determine size of a core local mapping required for a
+ * Core_region_map::detach().
+ */
+ size_t region_alloc_size_at(void * addr)
+ {
+ using Size_at_error = Allocator_avl::Size_at_error;
+
+ return (_core_mem_alloc.virt_alloc())()->size_at(addr).convert<size_t>(
+ [ ] (size_t s) { return s; },
+ [ ] (Size_at_error) { return 0U; });
+ }
/********************************
** Platform_generic interface **
diff --git a/repos/base-hw/src/core/platform_pd.cc b/repos/base-hw/src/core/platform_pd.cc
index b96e1ee8b2..97d3e93977 100644
--- a/repos/base-hw/src/core/platform_pd.cc
+++ b/repos/base-hw/src/core/platform_pd.cc
@@ -60,6 +60,13 @@ bool Hw::Address_space::insert_translation(addr_t virt, addr_t phys,
_tt.insert_translation(virt, phys, size, flags, _tt_alloc);
return true;
} catch(Hw::Out_of_tables &) {
+
+ /* core/kernel's page-tables should never get flushed */
+ if (_tt_phys == Platform::core_page_table()) {
+ error("core's page-table allocator is empty!");
+ return false;
+ }
+
flush(platform().vm_start(), platform().vm_size());
}
}
diff --git a/repos/base-hw/src/core/platform_thread.cc b/repos/base-hw/src/core/platform_thread.cc
index 7ae23acd04..c538306fa9 100644
--- a/repos/base-hw/src/core/platform_thread.cc
+++ b/repos/base-hw/src/core/platform_thread.cc
@@ -15,7 +15,6 @@
/* core includes */
#include
#include
-#include
#include
#include
@@ -30,6 +29,48 @@
using namespace Core;
+addr_t Platform_thread::Utcb::_attach(Region_map &core_rm)
+{
+ Region_map::Attr attr { };
+ attr.writeable = true;
+ return core_rm.attach(_ds, attr).convert<addr_t>(
+ [&] (Region_map::Range range) { return range.start; },
+ [&] (Region_map::Attach_error) {
+ error("failed to attach UTCB of new thread within core");
+ return 0ul; });
+}
+
+
+static addr_t _alloc_core_local_utcb(addr_t core_addr)
+{
+ /*
+ * All non-core threads use the typical dataspace/rm_session
+ * mechanisms to allocate and attach its UTCB.
+ * But for the very first core threads, we need to use plain
+ * physical and virtual memory allocators to create/attach its
+ * UTCBs. Therefore, we've to allocate and map those here.
+ */
+ return platform().ram_alloc().try_alloc(sizeof(Native_utcb)).convert<addr_t>(
+
+ [&] (void *utcb_phys) {
+ map_local((addr_t)utcb_phys, core_addr,
+ sizeof(Native_utcb) / get_page_size());
+ return addr_t(utcb_phys);
+ },
+ [&] (Range_allocator::Alloc_error) {
+ error("failed to allocate UTCB for core/kernel thread!");
+ return 0ul;
+ });
+}
+
+
+Platform_thread::Utcb::Utcb(addr_t core_addr)
+:
+ core_addr(core_addr),
+ phys_addr(_alloc_core_local_utcb(core_addr))
+{ }
+
+
void Platform_thread::_init() { }
@@ -37,21 +78,6 @@ Weak_ptr<Address_space>& Platform_thread::address_space() {
return _address_space; }
-Platform_thread::~Platform_thread()
-{
- /* detach UTCB of main threads */
- if (_main_thread) {
- Locked_ptr<Address_space> locked_ptr(_address_space);
- if (locked_ptr.valid())
- locked_ptr->flush((addr_t)_utcb_pd_addr, sizeof(Native_utcb),
- Address_space::Core_local_addr{0});
- }
-
- /* free UTCB */
- core_env().pd_session()->free(_utcb);
-}
-
-
void Platform_thread::quota(size_t const quota)
{
_quota = (unsigned)quota;
@@ -64,65 +90,57 @@ Platform_thread::Platform_thread(Label const &label, Native_utcb &utcb)
_label(label),
_pd(_kernel_main_get_core_platform_pd()),
_pager(nullptr),
- _utcb_core_addr(&utcb),
- _utcb_pd_addr(&utcb),
+ _utcb((addr_t)&utcb),
_main_thread(false),
_location(Affinity::Location()),
- _kobj(_kobj.CALLED_FROM_CORE, _label.string())
-{
- /* create UTCB for a core thread */
- platform().ram_alloc().try_alloc(sizeof(Native_utcb)).with_result(
-
- [&] (void *utcb_phys) {
- map_local((addr_t)utcb_phys, (addr_t)_utcb_core_addr,
- sizeof(Native_utcb) / get_page_size());
- },
- [&] (Range_allocator::Alloc_error) {
- error("failed to allocate UTCB");
- /* XXX distinguish error conditions */
- throw Out_of_ram();
- }
- );
-}
+ _kobj(_kobj.CALLED_FROM_CORE, _location.xpos(), _label.string())
+{ }
Platform_thread::Platform_thread(Platform_pd &pd,
+ Rpc_entrypoint &ep,
+ Ram_allocator &ram,
+ Region_map &core_rm,
size_t const quota,
Label const &label,
unsigned const virt_prio,
Affinity::Location const location,
- addr_t const utcb)
+ addr_t /* utcb */)
:
_label(label),
_pd(pd),
_pager(nullptr),
- _utcb_pd_addr((Native_utcb *)utcb),
+ _utcb(ep, ram, core_rm),
_priority(_scale_priority(virt_prio)),
_quota((unsigned)quota),
_main_thread(!pd.has_any_thread),
_location(location),
- _kobj(_kobj.CALLED_FROM_CORE, _priority, _quota, _label.string())
+ _kobj(_kobj.CALLED_FROM_CORE, _location.xpos(),
+ _priority, _quota, _label.string())
{
- try {
- _utcb = core_env().pd_session()->alloc(sizeof(Native_utcb), CACHED);
- } catch (...) {
- error("failed to allocate UTCB");
- throw Out_of_ram();
- }
-
- Region_map::Attr attr { };
- attr.writeable = true;
- core_env().rm_session()->attach(_utcb, attr).with_result(
- [&] (Region_map::Range range) {
- _utcb_core_addr = (Native_utcb *)range.start; },
- [&] (Region_map::Attach_error) {
- error("failed to attach UTCB of new thread within core"); });
-
_address_space = pd.weak_ptr();
pd.has_any_thread = true;
}
+Platform_thread::~Platform_thread()
+{
+ /* core/kernel threads have no dataspace, but plain memory as UTCB */
+ if (!_utcb._ds.valid()) {
+ error("UTCB of core/kernel thread gets destructed!");
+ return;
+ }
+
+ /* detach UTCB of main threads */
+ if (_main_thread) {
+ Locked_ptr<Address_space> locked_ptr(_address_space);
+ if (locked_ptr.valid())
+ locked_ptr->flush(user_utcb_main_thread(), sizeof(Native_utcb),
+ Address_space::Core_local_addr{0});
+ }
+}
+
+
void Platform_thread::affinity(Affinity::Location const &)
{
/* yet no migration support, don't claim wrong location, e.g. for tracing */
@@ -137,36 +155,23 @@ void Platform_thread::start(void * const ip, void * const sp)
/* attach UTCB in case of a main thread */
if (_main_thread) {
- /* lookup dataspace component for physical address */
- auto lambda = [&] (Dataspace_component *dsc) {
- if (!dsc) return -1;
-
- /* lock the address space */
- Locked_ptr<Address_space> locked_ptr(_address_space);
- if (!locked_ptr.valid()) {
- error("invalid RM client");
- return -1;
- };
- _utcb_pd_addr = (Native_utcb *)user_utcb_main_thread();
- Hw::Address_space * as = static_cast<Hw::Address_space *>(&*locked_ptr);
- if (!as->insert_translation((addr_t)_utcb_pd_addr, dsc->phys_addr(),
- sizeof(Native_utcb), Hw::PAGE_FLAGS_UTCB)) {
- error("failed to attach UTCB");
- return -1;
- }
- return 0;
- };
- if (core_env().entrypoint().apply(_utcb, lambda))
+ Locked_ptr<Address_space> locked_ptr(_address_space);
+ if (!locked_ptr.valid()) {
+ error("unable to start thread in invalid address space");
return;
+ };
+ Hw::Address_space * as = static_cast<Hw::Address_space *>(&*locked_ptr);
+ if (!as->insert_translation(user_utcb_main_thread(), _utcb.phys_addr,
+ sizeof(Native_utcb), Hw::PAGE_FLAGS_UTCB)) {
+ error("failed to attach UTCB");
+ return;
+ }
}
/* initialize thread registers */
_kobj->regs->ip = reinterpret_cast<addr_t>(ip);
_kobj->regs->sp = reinterpret_cast<addr_t>(sp);
- /* start executing new thread */
- unsigned const cpu = _location.xpos();
-
Native_utcb &utcb = *Thread::myself()->utcb();
/* reset capability counter */
@@ -174,18 +179,22 @@ void Platform_thread::start(void * const ip, void * const sp)
utcb.cap_add(Capability_space::capid(_kobj.cap()));
if (_main_thread) {
utcb.cap_add(Capability_space::capid(_pd.parent()));
- utcb.cap_add(Capability_space::capid(_utcb));
+ utcb.cap_add(Capability_space::capid(_utcb._ds));
}
- Kernel::start_thread(*_kobj, cpu, _pd.kernel_pd(), *_utcb_core_addr);
+
+ Kernel::start_thread(*_kobj, _pd.kernel_pd(),
+ *(Native_utcb*)_utcb.core_addr);
}
-void Platform_thread::pager(Pager_object &pager)
+void Platform_thread::pager(Pager_object &po)
{
using namespace Kernel;
- thread_pager(*_kobj, Capability_space::capid(pager.cap()));
- _pager = &pager;
+ po.with_pager([&] (Platform_thread &pt) {
+ thread_pager(*_kobj, *pt._kobj,
+ Capability_space::capid(po.cap())); });
+ _pager = &po;
}
@@ -231,3 +240,9 @@ void Platform_thread::restart()
{
Kernel::restart_thread(Capability_space::capid(_kobj.cap()));
}
+
+
+void Platform_thread::fault_resolved(Untyped_capability cap, bool resolved)
+{
+ Kernel::ack_pager_signal(Capability_space::capid(cap), *_kobj, resolved);
+}
diff --git a/repos/base-hw/src/core/platform_thread.h b/repos/base-hw/src/core/platform_thread.h
index 4c386eb1ef..85ddf9dd15 100644
--- a/repos/base-hw/src/core/platform_thread.h
+++ b/repos/base-hw/src/core/platform_thread.h
@@ -19,6 +19,7 @@
#include
#include
#include
+#include
/* base-internal includes */
#include
@@ -26,6 +27,7 @@
/* core includes */
#include
#include
+#include
/* kernel includes */
#include
@@ -55,13 +57,66 @@ class Core::Platform_thread : Noncopyable
using Label = String<32>;
+ struct Utcb : Noncopyable
+ {
+ struct {
+ Ram_allocator *_ram_ptr = nullptr;
+ Region_map *_core_rm_ptr = nullptr;
+ };
+
+ Ram_dataspace_capability _ds { }; /* UTCB ds of non-core threads */
+
+ addr_t const core_addr; /* UTCB address within core/kernel */
+ addr_t const phys_addr;
+
+ /*
+ * \throw Out_of_ram
+ * \throw Out_of_caps
+ */
+ Ram_dataspace_capability _allocate(Ram_allocator &ram)
+ {
+ return ram.alloc(sizeof(Native_utcb), CACHED);
+ }
+
+ addr_t _attach(Region_map &);
+
+ static addr_t _ds_phys(Rpc_entrypoint &ep, Dataspace_capability ds)
+ {
+ return ep.apply(ds, [&] (Dataspace_component *dsc) {
+ return dsc ? dsc->phys_addr() : 0; });
+ }
+
+ /**
+ * Constructor used for core-local threads
+ */
+ Utcb(addr_t core_addr);
+
+ /**
+ * Constructor used for threads outside of core
+ */
+ Utcb(Rpc_entrypoint &ep, Ram_allocator &ram, Region_map &core_rm)
+ :
+ _core_rm_ptr(&core_rm),
+ _ds(_allocate(ram)),
+ core_addr(_attach(core_rm)),
+ phys_addr(_ds_phys(ep, _ds))
+ { }
+
+ ~Utcb()
+ {
+ if (_core_rm_ptr)
+ _core_rm_ptr->detach(core_addr);
+
+ if (_ram_ptr && _ds.valid())
+ _ram_ptr->free(_ds);
+ }
+ };
+
Label const _label;
Platform_pd &_pd;
Weak_ptr<Address_space> _address_space { };
Pager_object * _pager;
- Native_utcb * _utcb_core_addr { }; /* UTCB addr in core */
- Native_utcb * _utcb_pd_addr; /* UTCB addr in pd */
- Ram_dataspace_capability _utcb { }; /* UTCB dataspace */
+ Utcb _utcb;
unsigned _priority {0};
unsigned _quota {0};
@@ -115,7 +170,8 @@ class Core::Platform_thread : Noncopyable
* \param virt_prio unscaled processor-scheduling priority
* \param utcb core local pointer to userland stack
*/
- Platform_thread(Platform_pd &, size_t const quota, Label const &label,
+ Platform_thread(Platform_pd &, Rpc_entrypoint &, Ram_allocator &,
+ Region_map &, size_t const quota, Label const &label,
unsigned const virt_prio, Affinity::Location,
addr_t const utcb);
@@ -160,6 +216,8 @@ class Core::Platform_thread : Noncopyable
void restart();
+ void fault_resolved(Untyped_capability, bool);
+
/**
* Pause this thread
*/
@@ -241,7 +299,7 @@ class Core::Platform_thread : Noncopyable
Platform_pd &pd() const { return _pd; }
- Ram_dataspace_capability utcb() const { return _utcb; }
+ Ram_dataspace_capability utcb() const { return _utcb._ds; }
};
#endif /* _CORE__PLATFORM_THREAD_H_ */
diff --git a/repos/base-hw/src/core/region_map_support.cc b/repos/base-hw/src/core/region_map_support.cc
deleted file mode 100644
index 4f7f4df79d..0000000000
--- a/repos/base-hw/src/core/region_map_support.cc
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * \brief RM- and pager implementations specific for base-hw and core
- * \author Martin Stein
- * \author Stefan Kalkowski
- * \date 2012-02-12
- */
-
-/*
- * Copyright (C) 2012-2017 Genode Labs GmbH
- *
- * This file is part of the Genode OS framework, which is distributed
- * under the terms of the GNU Affero General Public License version 3.
- */
-
-/* base-hw core includes */
-#include
-#include
-#include
-
-using namespace Core;
-
-
-void Pager_entrypoint::entry()
-{
- Untyped_capability cap;
-
- while (1) {
-
- if (cap.valid()) Kernel::ack_signal(Capability_space::capid(cap));
-
- /* receive fault */
- if (Kernel::await_signal(Capability_space::capid(_kobj.cap()))) continue;
-
- Pager_object *po = *(Pager_object**)Thread::myself()->utcb()->data();
- cap = po->cap();
-
- if (!po) continue;
-
- /* fetch fault data */
- Platform_thread * const pt = (Platform_thread *)po->badge();
- if (!pt) {
- warning("failed to get platform thread of faulter");
- continue;
- }
-
- if (pt->exception_state() ==
- Kernel::Thread::Exception_state::EXCEPTION) {
- if (!po->submit_exception_signal())
- warning("unresolvable exception: "
- "pd='", pt->pd().label(), "', "
- "thread='", pt->label(), "', "
- "ip=", Hex(pt->state().cpu.ip));
- continue;
- }
-
- _fault = pt->fault_info();
-
- /* try to resolve fault directly via local region managers */
- if (po->pager(*this) == Pager_object::Pager_result::STOP)
- continue;
-
- /* apply mapping that was determined by the local region managers */
- {
- Locked_ptr<Address_space> locked_ptr(pt->address_space());
- if (!locked_ptr.valid()) continue;
-
- Hw::Address_space * as = static_cast<Hw::Address_space *>(&*locked_ptr);
-
- Cache cacheable = Genode::CACHED;
- if (!_mapping.cached)
- cacheable = Genode::UNCACHED;
- if (_mapping.write_combined)
- cacheable = Genode::WRITE_COMBINED;
-
- Hw::Page_flags const flags {
- .writeable = _mapping.writeable ? Hw::RW : Hw::RO,
- .executable = _mapping.executable ? Hw::EXEC : Hw::NO_EXEC,
- .privileged = Hw::USER,
- .global = Hw::NO_GLOBAL,
- .type = _mapping.io_mem ? Hw::DEVICE : Hw::RAM,
- .cacheable = cacheable
- };
-
- as->insert_translation(_mapping.dst_addr, _mapping.src_addr,
- 1UL << _mapping.size_log2, flags);
- }
-
- /* let pager object go back to no-fault state */
- po->wake_up();
- }
-}
-
-
-void Mapping::prepare_map_operation() const { }
diff --git a/repos/base-hw/src/core/signal_source_component.h b/repos/base-hw/src/core/signal_source_component.h
index 82186608f3..e4449fc3e7 100644
--- a/repos/base-hw/src/core/signal_source_component.h
+++ b/repos/base-hw/src/core/signal_source_component.h
@@ -19,7 +19,7 @@
/* core includes */
#include
-#include
+#include
#include
namespace Core {
diff --git a/repos/base-hw/src/core/spec/arm/cpu.cc b/repos/base-hw/src/core/spec/arm/cpu.cc
index 06e6f1baf5..697b9a0cf0 100644
--- a/repos/base-hw/src/core/spec/arm/cpu.cc
+++ b/repos/base-hw/src/core/spec/arm/cpu.cc
@@ -22,6 +22,32 @@
using namespace Core;
+void Arm_cpu::Context::print(Output &output) const
+{
+ using namespace Genode;
+ using Genode::print;
+
+ print(output, "\n");
+ print(output, " r0 = ", Hex(r0), "\n");
+ print(output, " r1 = ", Hex(r1), "\n");
+ print(output, " r2 = ", Hex(r2), "\n");
+ print(output, " r3 = ", Hex(r3), "\n");
+ print(output, " r4 = ", Hex(r4), "\n");
+ print(output, " r5 = ", Hex(r5), "\n");
+ print(output, " r6 = ", Hex(r6), "\n");
+ print(output, " r7 = ", Hex(r7), "\n");
+ print(output, " r8 = ", Hex(r8), "\n");
+ print(output, " r9 = ", Hex(r9), "\n");
+ print(output, " r10 = ", Hex(r10), "\n");
+ print(output, " r11 = ", Hex(r11), "\n");
+ print(output, " r12 = ", Hex(r12), "\n");
+ print(output, " ip = ", Hex(ip), "\n");
+ print(output, " sp = ", Hex(sp), "\n");
+ print(output, " lr = ", Hex(lr), "\n");
+ print(output, " cpsr = ", Hex(cpsr));
+}
+
+
Arm_cpu::Context::Context(bool privileged)
{
using Psr = Arm_cpu::Psr;
diff --git a/repos/base-hw/src/core/spec/arm/cpu_support.h b/repos/base-hw/src/core/spec/arm/cpu_support.h
index ef7fabdaf5..012b04eceb 100644
--- a/repos/base-hw/src/core/spec/arm/cpu_support.h
+++ b/repos/base-hw/src/core/spec/arm/cpu_support.h
@@ -49,6 +49,18 @@ struct Core::Arm_cpu : public Hw::Arm_cpu
struct alignas(8) Context : Cpu_state, Fpu_context
{
Context(bool privileged);
+
+ void print(Output &output) const;
+
+ void for_each_return_address(Const_byte_range_ptr const &stack,
+ auto const &fn)
+ {
+ void **fp = (void**)r11;
+ while (stack.contains(fp-1) && stack.contains(fp) && fp[0]) {
+ fn(fp);
+ fp = (void **) fp[-1];
+ }
+ }
};
/**
diff --git a/repos/base-hw/src/core/spec/arm/kernel/thread.cc b/repos/base-hw/src/core/spec/arm/kernel/thread.cc
index c353745e46..39b3e2030a 100644
--- a/repos/base-hw/src/core/spec/arm/kernel/thread.cc
+++ b/repos/base-hw/src/core/spec/arm/kernel/thread.cc
@@ -23,32 +23,35 @@
using namespace Kernel;
-extern "C" void kernel_to_user_context_switch(Cpu::Context*, Cpu::Fpu_context*);
+extern "C" void kernel_to_user_context_switch(Core::Cpu::Context*,
+ Core::Cpu::Fpu_context*);
void Thread::_call_suspend() { }
-void Thread::exception(Cpu & cpu)
+void Thread::exception()
{
+ using Ctx = Core::Cpu::Context;
+
switch (regs->cpu_exception) {
- case Cpu::Context::SUPERVISOR_CALL:
+ case Ctx::SUPERVISOR_CALL:
_call();
return;
- case Cpu::Context::PREFETCH_ABORT:
- case Cpu::Context::DATA_ABORT:
+ case Ctx::PREFETCH_ABORT:
+ case Ctx::DATA_ABORT:
_mmu_exception();
return;
- case Cpu::Context::INTERRUPT_REQUEST:
- case Cpu::Context::FAST_INTERRUPT_REQUEST:
- _interrupt(_user_irq_pool, cpu.id());
+ case Ctx::INTERRUPT_REQUEST:
+ case Ctx::FAST_INTERRUPT_REQUEST:
+ _interrupt(_user_irq_pool);
return;
- case Cpu::Context::UNDEFINED_INSTRUCTION:
+ case Ctx::UNDEFINED_INSTRUCTION:
Genode::raw(*this, ": undefined instruction at ip=",
Genode::Hex(regs->ip));
_die();
return;
- case Cpu::Context::RESET:
+ case Ctx::RESET:
return;
default:
Genode::raw(*this, ": triggered an unknown exception ",
@@ -71,17 +74,17 @@ void Kernel::Thread::Tlb_invalidation::execute(Cpu &) { }
void Thread::Flush_and_stop_cpu::execute(Cpu &) { }
-void Cpu::Halt_job::proceed(Kernel::Cpu &) { }
+void Cpu::Halt_job::proceed() { }
-void Thread::proceed(Cpu & cpu)
+void Thread::proceed()
{
- if (!cpu.active(pd().mmu_regs) && type() != CORE)
- cpu.switch_to(pd().mmu_regs);
+ if (!_cpu().active(pd().mmu_regs) && type() != CORE)
+ _cpu().switch_to(pd().mmu_regs);
- regs->cpu_exception = cpu.stack_start();
- kernel_to_user_context_switch((static_cast<Cpu::Context*>(&*regs)),
- (static_cast<Cpu::Fpu_context*>(&*regs)));
+ regs->cpu_exception = _cpu().stack_start();
+ kernel_to_user_context_switch((static_cast<Core::Cpu::Context*>(&*regs)),
+ (static_cast<Core::Cpu::Fpu_context*>(&*regs)));
}
diff --git a/repos/base-hw/src/core/spec/arm/virtualization/platform_services.cc b/repos/base-hw/src/core/spec/arm/virtualization/platform_services.cc
index 67c15ca117..a7d355235d 100644
--- a/repos/base-hw/src/core/spec/arm/virtualization/platform_services.cc
+++ b/repos/base-hw/src/core/spec/arm/virtualization/platform_services.cc
@@ -16,12 +16,11 @@
/* core includes */
#include
+#include
#include
-#include
#include
#include
#include
-#include
using namespace Core;
@@ -32,10 +31,13 @@ extern addr_t hypervisor_exception_vector;
/*
* Add ARM virtualization specific vm service
*/
-void Core::platform_add_local_services(Rpc_entrypoint &ep,
- Sliced_heap &sh,
- Registry<Service> &services,
- Core::Trace::Source_registry &trace_sources)
+void Core::platform_add_local_services(Rpc_entrypoint &ep,
+ Sliced_heap &sh,
+ Registry<Service> &services,
+ Trace::Source_registry &trace_sources,
+ Ram_allocator &core_ram,
+ Region_map &core_rm,
+ Range_allocator &)
{
map_local(Platform::core_phys_addr((addr_t)&hypervisor_exception_vector),
Hw::Mm::hypervisor_exception_vector().base,
@@ -50,8 +52,7 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
Hw::Mm::hypervisor_stack().size / get_page_size(),
Hw::PAGE_FLAGS_KERN_DATA);
- static Vm_root vm_root(ep, sh, core_env().ram_allocator(),
- core_env().local_rm(), trace_sources);
+ static Vm_root vm_root(ep, sh, core_ram, core_rm, trace_sources);
static Core_service<Vm_session_component> vm_service(services, vm_root);
},
[&] (Range_allocator::Alloc_error) {
diff --git a/repos/base-hw/src/core/spec/arm/virtualization/vm_session_component.cc b/repos/base-hw/src/core/spec/arm/virtualization/vm_session_component.cc
index 9eb54ab608..fdd40179e5 100644
--- a/repos/base-hw/src/core/spec/arm/virtualization/vm_session_component.cc
+++ b/repos/base-hw/src/core/spec/arm/virtualization/vm_session_component.cc
@@ -14,15 +14,11 @@
/* Genode includes */
#include
-/* base internal includes */
-#include
-
/* core includes */
#include
#include
#include
#include
-#include
using namespace Core;
@@ -87,29 +83,14 @@ void * Vm_session_component::_alloc_table()
}
-using Vmid_allocator = Bit_allocator<256>;
-
-static Vmid_allocator &alloc()
-{
- static Vmid_allocator * allocator = nullptr;
- if (!allocator) {
- allocator = unmanaged_singleton<Vmid_allocator>();
-
- /* reserve VM ID 0 for the hypervisor */
- addr_t id = allocator->alloc();
- assert (id == 0);
- }
- return *allocator;
-}
-
-
Genode::addr_t Vm_session_component::_alloc_vcpu_data(Genode::addr_t ds_addr)
{
return ds_addr;
}
-Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
+Vm_session_component::Vm_session_component(Vmid_allocator & vmid_alloc,
+ Rpc_entrypoint &ds_ep,
Resources resources,
Label const &,
Diag,
@@ -127,7 +108,8 @@ Vm_session_component::Vm_session_component(Rpc_entrypoint &ds_ep,
_table(*construct_at<Board::Vm_page_table>(_alloc_table())),
_table_array(*(new (cma()) Board::Vm_page_table_array([] (void * virt) {
return (addr_t)cma().phys_addr(virt);}))),
- _id({(unsigned)alloc().alloc(), cma().phys_addr(&_table)})
+ _vmid_alloc(vmid_alloc),
+ _id({(unsigned)_vmid_alloc.alloc(), cma().phys_addr(&_table)})
{
/* configure managed VM area */
_map.add_range(0, 0UL - 0x1000);
@@ -162,5 +144,5 @@ Vm_session_component::~Vm_session_component()
/* free guest-to-host page tables */
destroy(platform().core_mem_alloc(), &_table);
destroy(platform().core_mem_alloc(), &_table_array);
- alloc().free(_id.id);
+ _vmid_alloc.free(_id.id);
}
diff --git a/repos/base-hw/src/core/spec/arm_v7/trustzone/kernel/vm.cc b/repos/base-hw/src/core/spec/arm_v7/trustzone/kernel/vm.cc
index 16fddfcb6d..54256ead92 100644
--- a/repos/base-hw/src/core/spec/arm_v7/trustzone/kernel/vm.cc
+++ b/repos/base-hw/src/core/spec/arm_v7/trustzone/kernel/vm.cc
@@ -28,14 +28,13 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Identity & id)
:
Kernel::Object { *this },
- Cpu_job(Scheduler::Priority::min(), 0),
+ Cpu_context(cpu, Scheduler::Priority::min(), 0),
_user_irq_pool(user_irq_pool),
_state(data),
_context(context),
_id(id),
_vcpu_context(cpu)
{
- affinity(cpu);
/* once constructed, exit with a startup exception */
pause();
_state.cpu_exception = Genode::VCPU_EXCEPTION_STARTUP;
@@ -46,12 +45,12 @@ Vm::Vm(Irq::Pool & user_irq_pool,
Vm::~Vm() {}
-void Vm::exception(Cpu & cpu)
+void Vm::exception()
{
switch(_state.cpu_exception) {
case Genode::Cpu_state::INTERRUPT_REQUEST: [[fallthrough]];
case Genode::Cpu_state::FAST_INTERRUPT_REQUEST:
- _interrupt(_user_irq_pool, cpu.id());
+ _interrupt(_user_irq_pool);
return;
case Genode::Cpu_state::DATA_ABORT:
_state.dfar = Cpu::Dfar::read();
@@ -69,19 +68,19 @@ bool secure_irq(unsigned const i);
extern "C" void monitor_mode_enter_normal_world(Genode::Vcpu_state&, void*);
-void Vm::proceed(Cpu & cpu)
+void Vm::proceed()
{
unsigned const irq = _state.irq_injection;
if (irq) {
- if (cpu.pic().secure(irq)) {
+ if (_cpu().pic().secure(irq)) {
Genode::raw("Refuse to inject secure IRQ into VM");
} else {
- cpu.pic().trigger(irq);
+ _cpu().pic().trigger(irq);
_state.irq_injection = 0;
}
}
- monitor_mode_enter_normal_world(_state, (void*) cpu.stack_start());
+ monitor_mode_enter_normal_world(_state, (void*) _cpu().stack_start());
}
diff --git a/repos/base-hw/src/core/spec/arm_v7/trustzone/platform_services.cc b/repos/base-hw/src/core/spec/arm_v7/trustzone/platform_services.cc
index 2a3919d40a..3d121ef192 100644
--- a/repos/base-hw/src/core/spec/arm_v7/trustzone/platform_services.cc
+++ b/repos/base-hw/src/core/spec/arm_v7/trustzone/platform_services.cc
@@ -17,7 +17,6 @@
/* core includes */
#include
#include
-#include
#include
#include
#include
@@ -29,10 +28,13 @@ extern int monitor_mode_exception_vector;
/*
* Add TrustZone specific vm service
*/
-void Core::platform_add_local_services(Rpc_entrypoint &ep,
- Sliced_heap &sliced_heap,
- Registry<Service> &local_services,
- Core::Trace::Source_registry &trace_sources)
+void Core::platform_add_local_services(Rpc_entrypoint &ep,
+ Sliced_heap &sliced_heap,
+ Registry<Service> &services,
+ Trace::Source_registry &trace_sources,
+ Ram_allocator &core_ram,
+ Region_map &core_rm,
+ Range_allocator &)
{
static addr_t const phys_base =
Platform::core_phys_addr((addr_t)&monitor_mode_exception_vector);
@@ -40,8 +42,7 @@ void Core::platform_add_local_services(Rpc_entrypoint &ep,
map_local(phys_base, Hw::Mm::system_exception_vector().base, 1,
Hw::PAGE_FLAGS_KERN_TEXT);
- static Vm_root vm_root(ep, sliced_heap, core_env().ram_allocator(),
- core_env().local_rm(), trace_sources);
+ static Vm_root vm_root(ep, sliced_heap, core_ram, core_rm, trace_sources);
- static Core_service