Merge branch 'genodelabs:master' into master

Michael Müller
2025-01-21 15:00:29 +01:00
committed by GitHub
568 changed files with 3244 additions and 5353 deletions


@@ -1 +1 @@
-24.08
+24.11


@@ -1,517 +0,0 @@
=======================
The Genode build system
=======================
Norman Feske
Abstract
########
The Genode OS Framework comes with a custom build system that is designed for
the creation of highly modular and portable systems software. Understanding
its basic concepts is pivotal for using the full potential of the framework.
This document introduces those concepts and the best practices of putting them
to good use. Besides building software components from source code, common
and repetitive development tasks include testing individual components
and integrating those components into complex system scenarios. To
streamline such tasks, the build system is accompanied by special tooling
support. This document introduces those tools.
Build directories and repositories
##################################
The build system is designed to never touch the source tree. The procedure of
building components and integrating them into system scenarios is done in
a distinct build directory. One build directory targets a specific platform,
i.e., a kernel and hardware architecture. Because the source tree is decoupled
from the build directory, one source tree can have many different build
directories associated with it, each targeting a different platform.
The recommended way for creating a build directory is the use of the
'create_builddir' tool located at '<genode-dir>/tool/'. When started without
arguments, the tool prints its usage information. For creating a new
build directory, one of the listed target platforms must be specified.
Furthermore, the location of the new build directory has to be specified via
the 'BUILD_DIR=' argument. For example:
! cd <genode-dir>
! ./tool/create_builddir linux_x86 BUILD_DIR=/tmp/build.linux_x86
This command will create a new build directory for the Linux/x86 platform
at _/tmp/build.linux_x86/_.
Build-directory configuration via 'build.conf'
==============================================
The fresh build directory will contain a 'Makefile', which is a symlink to
_tool/builddir/build.mk_. This makefile is the front end of the build system
and is not supposed to be edited. Besides the makefile, there is an _etc/_
subdirectory that contains the build-directory configuration. For most
platforms, there is only a single _build.conf_ file, which defines the parts of
the Genode source tree incorporated in the build process. Those parts are
called _repositories_.
The repository concept allows for keeping the source code well separated for
different concerns. For example, the platform-specific code for each target
platform is located in a dedicated _base-<platform>_ repository. Also, different
abstraction levels and features of the system are residing in different
repositories. The _etc/build.conf_ file defines the set of repositories to
consider in the build process. At build time, the build system overlays the
directory structures of all repositories specified via the 'REPOSITORIES'
declaration to form a single logical source tree. By changing the list of
'REPOSITORIES', the view of the build system on the source tree can be altered.
The _etc/build.conf_ as found in a freshly created build directory will list the
_base-<platform>_ repository of the platform selected at the 'create_builddir'
command line as well as the 'base', 'os', and 'demo' repositories needed for
compiling Genode's default demonstration scenario. Furthermore, there are a
number of commented-out lines that can be uncommented for enabling additional
repositories.
Note that the order of the repositories listed in the 'REPOSITORIES' declaration
is important. Front-most repositories shadow subsequent repositories. This
makes the repository mechanism a powerful tool for tweaking existing repositories:
By adding a custom repository in front of another one, customized versions of
single files (e.g., header files or target description files) can be supplied to
the build system without changing the original repository.
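For a Linux build directory, such a declaration might look roughly as follows.
This is only a sketch - the exact variable names and default entries depend on
the Genode version and the selected platform:
! REPOSITORIES  = $(GENODE_DIR)/base-linux
! REPOSITORIES += $(GENODE_DIR)/base
! REPOSITORIES += $(GENODE_DIR)/os
! REPOSITORIES += $(GENODE_DIR)/demo
! #REPOSITORIES += $(GENODE_DIR)/libports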
Building targets
================
To build all targets contained in the list of 'REPOSITORIES' as defined in
_etc/build.conf_, simply issue 'make'. This way, all components that are
compatible with the build directory's base platform will be built. In practice,
however, only some of those components may be of interest. Hence, the build
can be tailored to those components which are of actual interest by specifying
source-code subtrees. For example, using the following command
! make core server/nitpicker
the build system builds all targets found in the 'core' and 'server/nitpicker'
source directories. You may specify any number of subtrees to the build
system. As indicated by the build output, the build system revisits
each library that is used by each target found in the specified subtrees.
This is very handy for developing libraries because instead of re-building
your library and then your library-using program, you just build your program
and that's it. This concept even works recursively, which means that libraries
may depend on other libraries.
In practice, you won't ever need to build the _whole tree_ but only the
targets that you are interested in.
Cleaning the build directory
============================
To remove all but kernel-related generated files, use
! make clean
To remove all generated files, use
! make cleanall
Both 'clean' and 'cleanall' won't remove any files from the _bin/_
subdirectory. This makes the _bin/_ a safe place for files that are
unrelated to the build process, yet required for the integration stage, e.g.,
binary data.
Controlling the verbosity of the build process
==============================================
To understand the inner workings of the build process in more detail, you can
tell the build system to display each directory change by specifying
! make VERBOSE_DIR=
If you are interested in the arguments that are passed to each invocation of
'make', you can make them visible via
! make VERBOSE_MK=
Furthermore, you can observe each single shell-command invocation by specifying
! make VERBOSE=
Of course, you can combine these verbosity toggles for maximizing the noise.
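For example, all three toggles can be combined in a single invocation:
! make VERBOSE= VERBOSE_DIR= VERBOSE_MK=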
Enabling parallel builds
========================
To utilize multiple CPU cores during the build process, you may invoke 'make'
with the '-j' argument. If manually specifying this argument becomes an
inconvenience, you may add the following line to your _etc/build.conf_ file:
! MAKE += -j<N>
This way, the build system will always use '<N>' CPUs for building.
Caching inter-library dependencies
==================================
The build system allows for repeating the last build without performing any
library-dependency checks by using:
! make again
The use of this feature can significantly improve the work flow during
development because, in contrast to source code, library dependencies rarely
change. Hence, the time needed for re-creating inter-library dependencies at
each build can be saved.
Repository directory layout
###########################
Each Genode repository has the following layout:
Directory  | Description
------------------------------------------------------------
'doc/'     | Documentation, specific for the repository
------------------------------------------------------------
'etc/'     | Default configuration of the build process
------------------------------------------------------------
'mk/'      | The build system
------------------------------------------------------------
'include/' | Globally visible header files
------------------------------------------------------------
'src/'     | Source code and target build descriptions
------------------------------------------------------------
'lib/mk/'  | Library build descriptions
Creating targets and libraries
##############################
Target descriptions
===================
A good starting point is to look at the init target. The source code of init is
located at _os/src/init/_. In this directory, you will find a target description
file named _target.mk_. This file contains the build instructions and is
usually very simple. The build process is controlled by defining the following
variables.
Build variables to be defined by you
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:'TARGET': is the name of the binary to be created. This is the
only *mandatory variable* to be defined in a _target.mk_ file.
:'REQUIRES': expresses the requirements that must be satisfied in order to
build the target. You find more details about the underlying mechanism in
Section [Specializations].
:'LIBS': is the list of libraries that are used by the target.
:'SRC_CC': contains the list of '.cc' source files. The default search location
for source code is the directory where the _target.mk_ file resides.
:'SRC_C': contains the list of '.c' source files.
:'SRC_S': contains the list of assembly '.s' source files.
:'SRC_BIN': contains binary data files to be linked to the target.
:'INC_DIR': is the list of include search locations. Directories should
always be appended by using +=. Never use an assignment!
:'EXT_OBJECTS': is a list of Genode-external objects or libraries. This
variable is mostly used for interfacing Genode with legacy software
components.
Rarely used variables
---------------------
:'CC_OPT': contains additional compiler options to be used for '.c' as
well as for '.cc' files.
:'CC_CXX_OPT': contains additional compiler options to be used for the
C++ compiler only.
:'CC_C_OPT': contains additional compiler options to be used for the
C compiler only.
Specifying search locations
~~~~~~~~~~~~~~~~~~~~~~~~~~~
When specifying search locations for header files via the 'INC_DIR' variable or
for source files via 'vpath', relative pathnames must not be used. Instead,
you can use the following variables to reference locations within the
source-code repository where your target lives:
:'REP_DIR': is the base directory of the current source-code repository.
Specifying locations relative to the base of the repository is normally
not needed in _target.mk_ files but is needed by library descriptions.
:'PRG_DIR': is the directory where your _target.mk_ file resides. This
variable should always be used when specifying a relative path.
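To illustrate how these variables play together, the following sketch shows
a hypothetical _target.mk_ file for a simple component. All names are made up
for illustration:
! TARGET   = example_server
! SRC_CC   = main.cc service.cc
! LIBS     = base
! INC_DIR += $(PRG_DIR)/include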
Library descriptions
====================
In contrast to target descriptions that are scattered across the whole source
tree, library descriptions are located at the central place _lib/mk_. Each
library corresponds to a _<libname>.mk_ file. The base name of the description
file is the name of the library. Therefore, no 'TARGET' variable needs to be set.
The source-code locations are expressed as '$(REP_DIR)'-relative 'vpath'
commands.
Library-description files support the following additional declarations:
:'SHARED_LIB = yes': declares that the library should be built as a shared
object rather than a static library. The resulting object will be called
_<libname>.lib.so_.
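As an illustrative sketch, a hypothetical _lib/mk/example_lib.mk_ description
following these conventions might look like this, with all file and directory
names made up:
! SRC_CC   = util.cc
! INC_DIR += $(REP_DIR)/include/example_lib
! LIBS     = base
!
! vpath %.cc $(REP_DIR)/src/lib/example_lib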
Specializations
===============
Building components for different platforms likely involves portions of code
that are tied to certain aspects of the target platform. For example, a target
platform may be characterized by
* A kernel API such as L4v2, Linux, L4.sec,
* A hardware architecture such as x86, ARM, Coldfire,
* A certain hardware facility such as a custom device, or
* Other properties such as software license requirements.
Each of these attributes expresses a specialization of the build process. The
build system provides a generic mechanism to handle such specializations.
The _programmer_ of a software component knows the properties on which the
software relies and thus specifies these requirements in the build-description
file.
The _user/customer/builder_ decides to build software for a specific platform
and defines the platform specifics via the 'SPECS' variable per build
directory in _etc/specs.conf_. In addition to an (optional) _etc/specs.conf_
file within the build directory, the build system incorporates the first
_etc/specs.conf_ file found in the repositories as configured for the
build directory. For example, for a 'linux_x86' build directory, the
_base-linux/etc/specs.conf_ file is used by default. The build directory's
'specs.conf' file can still be used to extend the 'SPECS' declarations, for
example to enable special features.
Each '<specname>' in the 'SPECS' variable instructs the build system to
* Include the 'make'-rules of a corresponding _base/mk/spec-<specname>.mk_
file. This enables the customization of the build process for each platform.
* Search for _<libname>.mk_ files in the _lib/mk/<specname>/_ subdirectory.
This way, we can provide alternative implementations of one and the same
library interface for different platforms.
Before a target or library gets built, the build system checks if the 'REQUIRES'
entries of the build description file are satisfied by entries of the 'SPECS'
variable. The compilation is executed only if each entry in the 'REQUIRES'
variable is present in the 'SPECS' variable as supplied by the build directory
configuration.
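For illustration, a target that only makes sense on x86 hardware could state
this requirement in its _target.mk_ file, and the build-directory configuration
would have to provide a matching spec value. The following lines are a sketch
only:
! # target.mk
! REQUIRES = x86
!
! # etc/specs.conf (excerpt)
! SPECS += x86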
Building tools to be executed on the host platform
===================================================
Sometimes, software requires custom tools that are used to generate source
code or other ingredients for the build process, for example IDL compilers.
Such tools won't be executed on top of Genode but on the host platform
during the build process. Hence, they must be compiled with the tool chain
installed on the host, not the Genode tool chain.
The Genode build system accommodates the building of such host tools as a side
effect of building a library or a target. Even though it is possible to add
the tool compilation step to a regular build description file, it is
recommended to introduce a dedicated pseudo library for building such tools.
This way, the rules for building host tools are kept separate from rules that
refer to Genode programs. By convention, the pseudo library should be named
_<package>_host_tools_ and the host tools should be built at
_<build-dir>/tool/<package>/_. With _<package>_, we refer to the name of the
software package the tool belongs to, e.g., qt5 or mupdf. To build a tool
named _<tool>_, the pseudo library contains a custom make rule like the
following:
! $(BUILD_BASE_DIR)/tool/<package>/<tool>:
! $(MSG_BUILD)$(notdir $@)
! $(VERBOSE)mkdir -p $(dir $@)
! $(VERBOSE)...build commands...
To let the build system trigger the rule, add the custom target to the
'HOST_TOOLS' variable:
! HOST_TOOLS += $(BUILD_BASE_DIR)/tool/<package>/<tool>
Once the pseudo library for building the host tools is in place, it can be
referenced by each target or library that relies on the respective tools via
the 'LIBS' declaration. The tool can be invoked by referring to
'$(BUILD_BASE_DIR)/tool/<package>/<tool>'.
For an example of using custom host tools, please refer to the mupdf package
found within the libports repository. During the build of the mupdf library,
two custom tools fontdump and cmapdump are invoked. The tools are built via
the _lib/mk/mupdf_host_tools.mk_ library description file. The actual mupdf
library (_lib/mk/mupdf.mk_) has the pseudo library 'mupdf_host_tools' listed
in its 'LIBS' declaration and refers to the tools relative to
'$(BUILD_BASE_DIR)'.
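For illustration, a _target.mk_ file relying on such a tool might look roughly
like the following sketch, where 'example', 'example_host_tools', and
'gen_tool' are hypothetical names:
! TARGET = example_app
! SRC_C  = main.c generated.c
! LIBS  += example_host_tools
!
! generated.c: $(BUILD_BASE_DIR)/tool/example/gen_tool
! 	$(VERBOSE)$(BUILD_BASE_DIR)/tool/example/gen_tool > $@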
Building additional custom targets accompanying a library or program
=====================================================================
There are cases where it is important to build additional targets besides the
standard files built for a library or program. It is no problem to write
specific make rules for the commands that generate those files, but a proper
dependency must be specified for them to be built. To achieve this, add the
additional targets to the 'CUSTOM_TARGET_DEPS' variable, as done, for example,
in the iwl_firmware library of the dde_linux repository:
! CUSTOM_TARGET_DEPS += $(addprefix $(BIN_DIR)/,$(IMAGES))
Automated integration and testing
#################################
Genode's cross-kernel portability is one of the prime features of the
framework. However, each kernel takes a different route when it comes to
configuring, integrating, and booting the system. Hence, for using a particular
kernel, profound knowledge about the boot concept and the kernel-specific tools
is required. To streamline the testing of Genode-based systems across the many
different supported kernels, the framework comes equipped with tools that
relieve you of these peculiarities.
Run scripts
===========
Using so-called run scripts, complete Genode systems can be described in a
concise and kernel-independent way. Once created, a run script can be used
to integrate and test-drive a system scenario directly from the build directory.
The best way to get acquainted with the concept is reviewing the run script
for the 'hello_tutorial' located at _hello_tutorial/run/hello.run_.
Let's revisit each step expressed in the _hello.run_ script:
* Building the components needed for the system using the 'build' command.
This command instructs the build system to compile the targets listed in
the brace block. It has the same effect as manually invoking 'make' with
the specified argument from within the build directory.
* Creating a new boot directory using the 'create_boot_directory' command.
The integration of the scenario is performed in a dedicated directory at
_<build-dir>/var/run/<run-script-name>/_. When the run script is finished,
this directory will contain all components of the final system. In the
following, we will refer to this directory as run directory.
* Installing the Genode 'config' file into the run directory using the
'install_config' command. The argument to this command will be written
to a file called 'config' in the run directory, which is picked up by
Genode's init process.
* Creating a bootable system image using the 'build_boot_image' command.
This command copies the specified list of files from the _<build-dir>/bin/_
directory to the run directory and executes the platform-specific steps
needed to transform the content of the run directory into a bootable
form. This form depends on the actual base platform and may be an ISO
image or a bootable ELF image.
* Executing the system image using the 'run_genode_until' command. Depending
on the base platform, the system image will be executed using an emulator.
For most platforms, Qemu is the tool of choice used by default. On Linux,
the scenario is executed by starting 'core' directly from the run
directory. The 'run_genode_until' command takes a regular expression
as argument. If the log output of the scenario matches the specified
pattern, the 'run_genode_until' command returns. If specifying 'forever'
as argument (as done in 'hello.run'), this command will never return.
If a regular expression is specified, an additional argument determines
a timeout in seconds. If the regular expression does not match before
the timeout is reached, the run script will abort.
Please note that the _hello.run_ script does not contain kernel-specific
information. Therefore it can be executed from the build directory of any base
platform by using:
! make run/hello
When invoking 'make' with an argument of the form 'run/*', the build system
will look in all repositories for a run script with the specified name. The run
script must be located in one of the repositories' 'run/' subdirectories and
have the file extension '.run'.
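Put together, the steps described above correspond to a run script along the
following lines. This is a minimal sketch in the spirit of _hello.run_, with
the component name and configuration content abbreviated for illustration:
! build { core init hello }
!
! create_boot_directory
!
! install_config {
! <config>
! 	<!-- parent-provides, default routes, and a <start> node for 'hello' -->
! </config>
! }
!
! build_boot_image { core init hello }
!
! run_genode_until forever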
For a more comprehensive run script, _os/run/demo.run_ serves as a good
example. This run script describes Genode's default demo scenario. As seen in
'demo.run', parts of init's configuration can be made dependent on the
platform's properties expressed as spec values. For example, the PCI driver
gets included in init's configuration only on platforms with a PCI bus. For
appending conditional snippets to the _config_ file, there exists the 'append_if'
command, which takes a condition as its first and the snippet as its second
argument. To test for a SPEC value, the command '[have_spec <spec-value>]' is
used as the condition. Analogously to how 'append_if' appends strings, there
exists 'lappend_if' for appending list items. The latter command is used to
conditionally include binaries in the list of boot modules passed to the
'build_boot_image' command.
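For illustration, conditionally including a PCI driver could look roughly as
follows, where 'config' and 'boot_modules' denote the variables that hold
init's configuration and the list of boot modules in the respective run script
(a sketch, not a literal excerpt of _demo.run_):
! append_if [have_spec pci] config {
! 	<start name="pci_drv"> ... </start>}
!
! lappend_if [have_spec pci] boot_modules pci_drv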
The run mechanism explained
===========================
Under the hood, run scripts are executed by an expect interpreter. When the
user invokes a run script via _make run/<run-script>_, the build system invokes
the run tool at _<genode-dir>/tool/run_ with the run script as argument. The
run tool is an expect script that has no other purpose than defining several
commands used by run scripts, including a platform-specific script snippet
called run environment ('env'), and finally including the actual run script.
Whereas _tool/run_ provides the implementations of generic and largely
platform-independent commands, the _env_ snippet included from the platform's
respective _base-<platform>/run/env_ file contains all platform-specific
commands. For reference, the most simplistic run environment is the one at
_base-linux/run/env_, which implements the 'create_boot_directory',
'install_config', 'build_boot_image', and 'run_genode_until' commands for Linux
as base platform. For the other platforms, the run environments are far more
elaborate and document precisely how the integration and boot concept works
on each platform. Hence, the _base-<platform>/run/env_ files are not only
necessary parts of Genode's tooling support but also serve as a resource on
the peculiarities of using each kernel.
Using run scripts to implement test cases
=========================================
Because run scripts are actually expect scripts, the whole arsenal of
language features of the Tcl scripting language is available to them. This
turns run scripts into powerful tools for the automated execution of test
cases. A good example is the run script at _libports/run/lwip.run_, which tests
the lwIP stack by running a simple Genode-based HTTP server on Qemu. It fetches
and validates an HTML page from this server. The run script makes use of a
regular expression as argument to the 'run_genode_until' command to detect the
state when the web server becomes ready, subsequently executes the 'lynx' shell
command to fetch the web site, and employs Tcl's support for regular
expressions to validate the result. The run script works across base platforms
that use Qemu as execution environment.
To get the most out of the run mechanism, a basic understanding of the Tcl
scripting language is required. Furthermore, the functions provided by
_tool/run_ and _base-<platform>/run/env_ should be studied.
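The overall pattern of such a test looks roughly like the following sketch.
The log pattern, URL, and expected content are hypothetical and merely
illustrate the combination of 'run_genode_until', 'exec', and Tcl's 'regexp':
! run_genode_until {.*web server started.*} 30
!
! set html [exec lynx -dump http://localhost:5555/]
!
! if {![regexp {Welcome} $html]} {
! 	puts stderr "test failed"
! 	exit 1
! }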
Automated testing across base platforms
=======================================
To execute one or multiple test cases on more than one base platform, there
exists a dedicated tool at _tool/autopilot_. Its primary purpose is the
nightly execution of test cases. The tool takes a list of platforms and run
scripts as arguments and executes each run script on each platform. The
build directory for each platform is created at
_/tmp/autopilot.<username>/<platform>_ and the output of each run script is
written to a file called _<platform>.<run-script>.log_. On stderr, autopilot
prints statistics about whether or not each run script executed
successfully on each platform. If at least one run script failed, autopilot
returns a non-zero exit code, which makes it straightforward to include
autopilot in an automated build-and-test environment.


@@ -1,299 +0,0 @@
Coding style guidelines for Genode
##################################
Things to avoid
===============
Please avoid using pre-processor macros. C++ provides language
features for almost any case, for which a C programmer uses
macros.
:Defining constants:
Use 'enum' instead of '#define'
! enum { MAX_COLORS = 3 };
! enum {
! COLOR_RED = 1,
! COLOR_BLUE = 2,
! COLOR_GREEN = 3
! };
:Meta programming:
Use templates instead of pre-processor macros. In contrast to macros,
templates are type-safe and fit well with the implementation syntax.
:Conditional-code inclusion:
Please avoid C-hacker style '#ifdef CONFIG_PLATFORM' - '#endif'
constructs. Instead, factor out the encapsulated code into a
separate file and introduce a proper function interface.
The build process should then be used to select the appropriate
platform-specific files at compile time. Keep platform-dependent
code as small as possible. Never pollute existing generic code
with platform-specific code.
Header of each file
===================
! /*
! * \brief Short description of the file
! * \author Original author
! * \date Creation date
! *
! * Some more detailed description. This is optional.
! */
Identifiers
===========
* The first character of class names is uppercase, all other characters are
lowercase.
* Function and variable names are lower case.
* 'Multi_word_identifiers' use underscores to separate words.
* 'CONSTANTS' and template arguments are upper case.
* Private and protected members of a class begin with an '_'-character.
* Accessor methods are named after their corresponding attributes:
! /**
! * Request private member variable
! */
! int value() const { return _value; }
!
! /**
! * Set the private member variable
! */
! void value(int value) { _value = value; }
* Accessors that return a boolean value do not carry an 'is_' prefix. E.g.,
a method for requesting the validity of an object should be named
'valid()', not 'is_valid()'.
Indentation
===========
* Use one tab per indentation step. *Do not mix tabs and spaces!*
* Use no tabs except at the beginning of a line.
* Use spaces for the alignment of continuation lines such as function
arguments that span multiple lines. The alignment spaces of such lines
should start after the (tab-indented) indentation level. For example:
! {
! <tab>function_with_many_arguments(arg1,
! <tab><--- spaces for alignment --->arg2,
! ...
! }
* Remove trailing spaces at the end of lines
This way, each developer can set his preferred tab size in his editor
and the source code always looks good.
_Hint:_ In VIM, use the 'set list' and 'set listchars' commands to make tabs
and spaces visible.
* If class initializers span multiple lines, put the colon on a separate
line and indent the initializers using one tab. For example:
! Complicated_machinery(Material &material, Deadline deadline)
! :
! <tab>_material(material),
! <tab>_deadline(deadline),
! <tab>...
! {
! ...
! }
* Preferably place statements that alter the control flow - such as
'break', 'continue', or 'return' - at the beginning of a separate line,
followed by vertical space (a blank line or the closing brace of the
surrounding scope).
! if (early_return_possible)
! return;
Switch statements
~~~~~~~~~~~~~~~~~
Switch-statement blocks should be indented as follows:
! switch (color) {
!
! case BLUE:
! <tab>break;
!
! case GREEN:
! <tab>{
! <tab><tab>int declaration_required;
! <tab><tab>...
! <tab>}
!
! default:
! }
Please note that the case labels have the same indentation
level as the switch statement. This avoids a two-level
indentation-change at the end of the switch block that
would occur otherwise.
Vertical whitespaces
====================
In header files:
* Leave two empty lines between classes.
* Leave one empty line between member functions.
In implementation files:
* Leave two empty lines between functions.
Braces
======
* Braces after class, struct, and function names are placed on a new line:
! class Foo
! {
! public:
!
! void method(void)
! {
! ...
! }
! };
except for one-line functions.
* All other occurrences of open braces (for 'if', 'while', 'do', 'for',
'namespace', 'enum' etc.) are at the end of a line:
! if (flag) {
! ..
! } else {
! ..
! }
* One-line functions should be written on a single line as long as the line
length does not exceed approximately 80 characters.
Typically, this applies for accessor functions.
If slightly more space than one line is needed, indent as follows:
! int heavy_computation(int a, int lot, int of, int args) {
! return a + lot + of + args; }
Comments
========
Function/method header
~~~~~~~~~~~~~~~~~~~~~~
Each public or protected (but not private) method in a header file should be
preceded by a header as follows:
! /**
! * Short description
! *
! * \param a meaning of parameter a
! * \param b meaning of parameter b
! * \param c,d meaning of parameters c and d
! *
! * \throw Exception_type meaning of the exception
! *
! * \return meaning of return value
! *
! * More detailed information about the function. This is optional.
! */
Descriptions of parameters and return values should be lower-case and brief.
More elaborate descriptions can be documented in the text area below.
In implementation files, only local and private functions should feature
function headers.
Single-line comments
~~~~~~~~~~~~~~~~~~~~
! /* use this syntax for single line comments */
A single-line comment should be preceded by an empty line.
Single-line comments should be short - no complete sentences. Use lower-case.
C++-style comments ('//') should only be used for temporarily commenting-out
code. Such commented-out garbage is easy to 'grep' and there are handy
'vim'-macros available for creating and removing such comments.
Variable descriptions
~~~~~~~~~~~~~~~~~~~~~
Use the same syntax as for single-line comments. Insert two or more
spaces before your comment starts.
! int size; /* in kilobytes */
Multi-line comments
~~~~~~~~~~~~~~~~~~~
Multi-line comments are more detailed descriptions in the form of
sentences.
A multi-line comment should be enclosed by empty lines.
! /*
! * This is some tricky
! * algorithm that works
! * as follows:
! * ...
! */
The first and last line of a multi-line comment contain no words.
Source-code blocks
~~~~~~~~~~~~~~~~~~
For structuring your source code, you can entitle the different
parts of a file like this:
! <- two empty lines
!
! /********************
! ** Event handlers **
! ********************/
! <- one empty line
Note the two stars at the left and right. There are two of them to
make the visible width of the border match its height (typically,
characters are ca. twice as high as wide).
A source-code block header represents a headline for the following
code. To couple this headline with the following code closer than
with previous code, leave two empty lines above and one empty line
below the source-code block header.
Order of public, protected, and private blocks
==============================================
For consistency reasons, use the following class layout:
! class Sandstein
! {
! private:
! ...
! protected:
! ...
! public:
! };
Typically, the private section contains member variables that are used
by public accessor functions below. In this common case, we only reference
symbols that are defined above as it is done when programming plain C.
Leave one empty line (or a line that contains only a brace) above and below
a 'private', 'protected', or 'public' label. This also applies when the
label is followed by a source-code block header.


@@ -1,70 +1,333 @@
-Conventions for the Genode development
-Norman Feske
+==================================================
+Conventions and coding-style guidelines for Genode
+==================================================

-Documentation
-#############
+Documentation and naming of files
+#################################

We use the GOSH syntax [https://github.com/nfeske/gosh] for documentation and
README files.

+We encourage that each directory contains a file called 'README' that briefly
+explains what the directory is about.

-README files
-############
-
-Each directory should contain a file called 'README' that briefly explains
-what the directory is about. In 'doc/Makefile' is a rule for
-generating a directory overview from the 'README' files automatically.
-You can structure your 'README' file by using the GOSH style for subsections:
-
-! Subsection
-! ~~~~~~~~~~
-
-Do not use chapters or sections in your 'README' files.
-
-Filenames
-#########
-
-All normal filenames are lowercase. Filenames should be chosen to be
+File names
+----------
+
+All normal file names are lowercase. Filenames should be chosen to be
expressive. Someone who explores your files for the first time might not
understand what 'mbi.cc' means but 'multiboot_info.cc' would ring a bell. If a
-filename contains multiple words, use the '_' to separate them (instead of
+file name contains multiple words, use the '_' to separate them (instead of
'miscmath.h', use 'misc_math.h').

Coding style
############

-A common coding style helps a lot to ease collaboration. The official coding
-style of the Genode base components is described in 'doc/coding_style.txt'.
-If you consider working closely together with the Genode main developers,
-your adherence to this style is greatly appreciated.
+Things to avoid
+===============
+
+Please avoid using pre-processor macros. C++ provides language
+features for almost any case, for which a C programmer uses
+macros.
:Defining constants:
Use 'enum' instead of '#define'
! enum { MAX_COLORS = 3 };
! enum {
! COLOR_RED = 1,
! COLOR_BLUE = 2,
! COLOR_GREEN = 3
! };
:Meta programming:
Use templates instead of pre-processor macros. In contrast to macros,
templates are type-safe and fit well with the implementation syntax.
:Conditional-code inclusion:
Please avoid C-hacker style '#ifdef CONFIG_PLATFORM' - '#endif'
constructs. Instead, factor out the encapsulated code into a
separate file and introduce a proper function interface.
The build process should then be used to select the appropriate
platform-specific files at compile time. Keep platform-dependent
code as small as possible. Never pollute existing generic code
with platform-specific code.
-Include files and RPC interfaces
-################################
-
-Never place include files directly into the '<repository>/include/' directory
-but use a meaningful subdirectory that corresponds to the component that
-provides the interfaces.
-
-Each RPC interface is represented by a separate include subdirectory. For
-an example, see 'base/include/ram_session/'. The header file that defines
-the RPC function interface has the same base name as the directory. The RPC
-stubs are called 'client.h' and 'server.h'. If your interface uses a custom
-capability type, it is defined in 'capability.h'. Furthermore, if your
-interface is a session interface of a service, it is good practice to
-provide a connection class in a 'connection.h' file for managing session-
-construction arguments and the creation and destruction of sessions.
-
-Specialization-dependent include directories are placed in 'include/<specname>/'.
-
-Service Names
-#############
+Header of each file
+===================
+
+! /*
+! * \brief Short description of the file
+! * \author Original author
+! * \date Creation date
+! *
+! * Some more detailed description. This is optional.
+! */
+
+Identifiers
+===========
* The first character of class names is uppercase, all other characters are
lowercase.
* Function and variable names are lower case.
* 'Multi_word_identifiers' use underscores to separate words.
* 'CONSTANTS' and template arguments are upper case.
* Private and protected members of a class begin with an '_'-character.
* Accessor methods are named after their corresponding attributes:
! /**
! * Request private member variable
! */
! int value() const { return _value; }
!
! /**
! * Set the private member variable
! */
! void value(int value) { _value = value; }
* Accessors that return a boolean value do not carry an 'is_' prefix. E.g.,
a method for requesting the validity of an object should be named
'valid()', not 'is_valid()'.
Indentation
===========
* Use one tab per indentation step. *Do not mix tabs and spaces!*
* Use no tabs except at the beginning of a line.
* Use spaces for the alignment of continuation lines such as function
arguments that span multiple lines. The alignment spaces of such lines
should start after the (tab-indented) indentation level. For example:
! {
! <tab>function_with_many_arguments(arg1,
! <tab><--- spaces for alignment --->arg2,
! ...
! }
* Remove trailing spaces at the end of lines
This way, each developer can set his preferred tab size in his editor
and the source code always looks good.
_Hint:_ In VIM, use the 'set list' and 'set listchars' commands to make tabs
and spaces visible.
* If class initializers span multiple lines, put the colon on a separate
line and indent the initializers using one tab. For example:
! Complicated_machinery(Material &material, Deadline deadline)
! :
! <tab>_material(material),
! <tab>_deadline(deadline),
! <tab>...
! {
! ...
! }
* Preferably place statements that alter the control flow - such as
'break', 'continue', or 'return' - at the beginning of a separate line,
followed by vertical space (a blank line or the closing brace of the
surrounding scope).
! if (early_return_possible)
! return;
Switch statements
~~~~~~~~~~~~~~~~~
Switch-statement blocks should be indented as follows:
! switch (color) {
!
! case BLUE:
! <tab>break;
!
! case GREEN:
! <tab>{
! <tab><tab>int declaration_required;
! <tab><tab>...
! <tab>}
!
! default:
! }
Please note that the case labels have the same indentation
level as the switch statement. This avoids a two-level
indentation-change at the end of the switch block that
would occur otherwise.
Vertical whitespaces
====================
In header files:
* Leave two empty lines between classes.
* Leave one empty line between member functions.
In implementation files:
* Leave two empty lines between functions.
Braces
======
* Braces after class, struct, and function names are placed on a new line:
! class Foo
! {
! public:
!
! void method(void)
! {
! ...
! }
! };
except for one-line functions.
* All other occurrences of open braces (for 'if', 'while', 'do', 'for',
'namespace', 'enum' etc.) are at the end of a line:
! if (flag) {
! ..
! } else {
! ..
! }
* One-line functions should be written on a single line as long as the line
length does not exceed approximately 80 characters.
Typically, this applies for accessor functions.
If slightly more space than one line is needed, indent as follows:
! int heavy_computation(int a, int lot, int of, int args) {
! return a + lot + of + args; }
Comments
========
Function/method header
~~~~~~~~~~~~~~~~~~~~~~
Each public or protected (but not private) method in a header file should be
preceded by a header as follows:
! /**
! * Short description
! *
! * \param a meaning of parameter a
! * \param b meaning of parameter b
! * \param c,d meaning of parameters c and d
! *
! * \throw Exception_type meaning of the exception
! *
! * \return meaning of return value
! *
! * More detailed information about the function. This is optional.
! */
Descriptions of parameters and return values should be lower-case and brief.
More elaborate descriptions can be documented in the text area below.
In implementation files, only local and private functions should feature
function headers.
Single-line comments
~~~~~~~~~~~~~~~~~~~~
! /* use this syntax for single line comments */
A single-line comment should be preceded by an empty line.
Single-line comments should be short - no complete sentences. Use lower-case.
C++-style comments ('//') should only be used for temporarily commenting-out
code. Such commented-out garbage is easy to 'grep' and there are handy
'vim'-macros available for creating and removing such comments.
Variable descriptions
~~~~~~~~~~~~~~~~~~~~~
Use the same syntax as for single-line comments. Insert two or more
spaces before your comment starts.
! int size; /* in kilobytes */
Multi-line comments
~~~~~~~~~~~~~~~~~~~
Multi-line comments are more detailed descriptions in the form of
sentences.
A multi-line comment should be enclosed by empty lines.
! /*
! * This is some tricky
! * algorithm that works
! * as follows:
! * ...
! */
The first and last line of a multi-line comment contain no words.
Source-code blocks
~~~~~~~~~~~~~~~~~~
For structuring your source code, you can entitle the different
parts of a file like this:
! <- two empty lines
!
! /********************
! ** Event handlers **
! ********************/
! <- one empty line
Note the two stars at the left and right. There are two of them to
make the visible width of the border match its height (typically,
characters are ca. twice as high as wide).
A source-code block header represents a headline for the following
code. To couple this headline with the following code closer than
with previous code, leave two empty lines above and one empty line
below the source-code block header.
Order of public, protected, and private blocks
==============================================
For consistency reasons, use the following class layout:
! class Sandstein
! {
! private:
! ...
! protected:
! ...
! public:
! };
Typically, the private section contains member variables that are used
by public accessor functions below. In this common case, we only reference
symbols that are defined above as it is done when programming plain C.
Leave one empty line (or a line that contains only a brace) above and below
a 'private', 'protected', or 'public' label. This also applies when the
label is followed by a source-code block header.
Naming of Genode services
=========================
Service names as announced via the 'parent()->announce()' function follow
the following convention:


@@ -1,514 +0,0 @@
============================
Package management on Genode
============================
Norman Feske
Motivation and inspiration
##########################
The established system-integration work flow with Genode is based on
the 'run' tool, which automates the building, configuration, integration,
and testing of Genode-based systems. Whereas the run tool succeeds in
overcoming the challenges that come with Genode's diversity of kernels and
supported hardware platforms, it does not scale well beyond appliance-like
system scenarios: The result of the integration process is
a system image with a certain feature set. Whenever requirements change,
the system image is replaced with a newly created image that takes those
requirements into account. In practice, there are two limitations of this
system-integration approach:
First, since the run tool implicitly builds all components required for a
system scenario, the system integrator has to compile all components from
source. E.g., if a system includes a component based on Qt5, one needs to
compile the entire Qt5 application framework, which adds significant
overhead to the actual system-integration tasks of composing and configuring
components.
Second, general-purpose systems tend to become too complex and diverse to be
treated as system images. When looking at commodity OSes, each installation
differs with respect to the installed set of applications, user preferences,
used device drivers and system preferences. A system based on the run tool's
work flow would require the user to customize the run script of the system for
each tweak. To stay up to date, the user would need to re-create the
system image from time to time while manually maintaining any customizations.
In practice, this is a burden that very few end users are willing to endure.
The primary goal of Genode's package management is to overcome these
scalability limitations, in particular:
* Alleviating the need to build everything that goes into system scenarios
from scratch,
* Facilitating modular system compositions while abstracting from technical
details,
* On-target system update and system development,
* Assuring the user that system updates are safe to apply by providing the
ability to easily roll back the system or parts thereof to previous versions,
* Securing the integrity of the deployed software,
* Fostering a federalistic evolution of Genode systems,
* Low friction for existing developers.
The design of Genode's package-management concept is largely influenced by Git
as well as the [https://nixos.org/nix/ - Nix] package manager. In particular
the latter opened our eyes to discover the potential that lies beyond the
package management employed in state-of-the art commodity systems. Even though
we considered adapting Nix for Genode and actually conducted intensive
experiments in this direction (thanks to Emery Hemingway who pushed forward
this line of work), we settled on a custom solution that leverages Genode's
holistic view on all levels of the operating system including the build system
and tooling, source structure, ABI design, framework API, system
configuration, inter-component interaction, and the components themselves.
Whereas Nix is designed to be used on top of Linux, Genode's whole-system view
led us to simplifications that eliminated the need for Nix's powerful features
such as its custom description language.
Nomenclature
############
When speaking about "package management", one has to clarify what a "package"
in the context of an operating system represents. Traditionally, a package
is the unit of delivery of a bunch of "dumb" files, usually wrapped up in
a compressed archive. A package may depend on the presence of other
packages. Thereby, a dependency graph is formed. To express how packages fit
with each other, a package is usually accompanied by meta data
(description). Depending on the package manager, package descriptions follow
certain formalisms (e.g., package-description language) and express
more-or-less complex concepts such as versioning schemes or the distinction
between hard and soft dependencies.
Genode's package management does not follow this notion of a "package".
Instead of subsuming all deliverable content under one term, we distinguish
different kinds of content, each in a tailored and simple form. To avoid the
clash of the notions of the common meaning of a "package", we speak of
"archives" as the basic unit of delivery. The following subsections introduce
the different categories.
Archives are named with their version as suffix, appended via a slash. The
suffix is maintained by the author of the archive. The recommended naming
scheme is the use of the release date as version suffix, e.g.,
'report_rom/2017-05-14'.
Raw-data archives
=================
A raw-data archive contains arbitrary data that is - in contrast to executable
binaries - independent from the processor architecture. Examples are
configuration data, game assets, images, or fonts. The content of raw-data
archives is expected to be consumed by components at runtime. It is not
relevant for the build process for executable binaries. Each raw-data
archive contains merely a collection of data files. There is no meta data.
API archive
===========
An API archive has the structure of a Genode source-code repository. It may
contain all the typical content of such a source-code repository such as header
files (in the _include/_ subdirectory), source codes (in the _src/_
subdirectory), library-description files (in the _lib/mk/_ subdirectory), or
ABI symbols (_lib/symbols/_ subdirectory). At the top level, a LICENSE file is
expected that clarifies the license of the contained source code. There is no
meta data contained in an API archive.
An API archive is meant to provide _ingredients_ for building components. The
canonical example is the public programming interface of a library (header
files) and the library's binary interface in the form of an ABI-symbols file.
One API archive may contain the interfaces of multiple libraries. For example,
the interfaces of libc and libm may be contained in a single "libc" API
archive because they are closely related to each other. Conversely, an API
archive may contain a single header file only. The granularity of those
archives may vary. But they have in common that they are used at build time
only, not at runtime.
Source archive
==============
Like an API archive, a source archive has the structure of a Genode
source-tree repository and is expected to contain all the typical content of
such a source repository along with a LICENSE file. But unlike an API archive,
it contains descriptions of actual build targets in the form of Genode's usual
'target.mk' files.
In addition to the source code, a source archive contains a file
called 'used_apis', which contains a list of API-archive names with each
name on a separate line. For example, the 'used_apis' file of the 'report_rom'
source archive looks as follows:
! base/2017-05-14
! os/2017-05-13
! report_session/2017-05-13
The 'used_apis' file declares the APIs needed to incorporate into the build
process when building the source archive. Hence, they represent _build-time_
_dependencies_ on the specific API versions.
A source archive may be equipped with a top-level file called 'api' containing
the name of exactly one API archive. If present, it declares that the source
archive _implements_ the specified API. For example, the 'libc/2017-05-14'
source archive contains the actual source code of the libc and libm as well as
an 'api' file with the content 'libc/2017-04-13'. The latter refers to the API
implemented by this version of the libc source archive (note the differing
versions of the API and source archives).
Binary archive
==============
A binary archive contains the build result of the equally-named source archive
when built for a particular architecture. That is, all files that would appear
at the _<build-dir>/bin/_ subdirectory when building all targets present in
the source archive. There is no meta data present in a binary archive.
A binary archive is created out of the content of its corresponding source
archive and all API archives listed in the source archive's 'used_apis' file.
Note that since a binary archive depends on only one source archive, which
has no further dependencies, all binary archives can be built independently
from each other.
For example, a libc-using application needs the source code of the
application as well as the libc's API archive (the libc's header file and
ABI) but it does not need the actual libc library to be present.
Package archive
===============
A package archive contains an 'archives' file with a list of archive names
that belong together at runtime. Each listed archive appears on a separate line.
For example, the 'archives' file of the package archive for the window
manager 'wm/2018-02-26' looks as follows:
! genodelabs/raw/wm/2018-02-14
! genodelabs/src/wm/2018-02-26
! genodelabs/src/report_rom/2018-02-26
! genodelabs/src/decorator/2018-02-26
! genodelabs/src/floating_window_layouter/2018-02-26
In contrast to the list of 'used_apis' of a source archive, the content of
the 'archives' file denotes the origin of the respective archives
("genodelabs"), the archive type, followed by the versioned name of the
archive.
An 'archives' file may specify raw archives, source archives, or package
archives (as type 'pkg'). It thereby allows the expression of _runtime
dependencies_. If a package archive lists another package archive, it inherits
the content of the listed archive. This way, a new package archive may easily
customize an existing package archive.
A package archive does not specify binary archives directly as they differ
between architectures and are already implied by the listed source archives.
In addition to an 'archives' file, a package archive is expected to contain
a 'README' file explaining the purpose of the collection.
Depot structure
###############
Archives are stored within a directory tree called _depot/_. The depot
is structured as follows:
! <user>/pubkey
! <user>/download
! <user>/src/<name>/<version>/
! <user>/api/<name>/<version>/
! <user>/raw/<name>/<version>/
! <user>/pkg/<name>/<version>/
! <user>/bin/<arch>/<src-name>/<src-version>/
The <user> stands for the origin of the contained archives. For example, the
official archives provided by Genode Labs reside in a _genodelabs/_
subdirectory. Within this directory, there is a 'pubkey' file with the
user's public key that is used to verify the integrity of archives downloaded
from the user. The file 'download' specifies the download location as a URL.
Subsuming archives in a subdirectory that corresponds to their origin
(user) serves two purposes. First, it provides a user-local name space for
versioning archives. E.g., there might be two versions of a
'nitpicker/2017-04-15' source archive, one by "genodelabs" and one by
"nfeske". However, since each version resides under its origin's subdirectory,
version-naming conflicts between different origins cannot happen. Second, by
allowing multiple archive origins in the depot side-by-side, package archives
may incorporate archives of different origins, which fosters the goal of a
federalistic development, where contributions of different origins can be
easily combined.
The actual archives are stored in the subdirectories named after the archive
types ('raw', 'api', 'src', 'bin', 'pkg'). Archives contained in the _bin/_
subdirectories are further subdivided in the various architectures (like
'x86_64', or 'arm_v7').
Depot management
################
The tools for managing the depot content reside under the _tool/depot/_
directory. When invoked without arguments, each tool prints a brief
description of the tool and its arguments.
Unless stated otherwise, the tools are able to consume any number of archives
as arguments. By default, they perform their work sequentially. This can be
changed by the '-j<N>' argument, where <N> denotes the desired level of
parallelization. For example, by specifying '-j4' to the _tool/depot/build_
tool, four concurrent jobs are executed during the creation of binary archives.
Downloading archives
====================
The depot can be populated with archives in two ways, either by creating
the content from locally available source codes as explained by Section
[Automated extraction of archives from the source tree], or by downloading
ready-to-use archives from a web server.
In order to download archives originating from a specific user, the depot's
corresponding user subdirectory must contain two files:
:_pubkey_: contains the public key of the GPG key pair used by the creator
(aka "user") of the to-be-downloaded archives for signing the archives. The
file contains the ASCII-armored version of the public key.
:_download_: contains the base URL of the web server from which to fetch
archives. The web server is expected to mirror the structure of the depot.
That is, the base URL is followed by a sub directory for the user,
which contains the archive-type-specific subdirectories.
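For example, preparing the depot for a hypothetical user "alice" could look
as follows, where the URL is a mere placeholder:
! depot/alice/pubkey    (ASCII-armored GPG public key)
! depot/alice/download  (single line containing, e.g., https://example.org/depot)
With this in place, the download tool would fetch archives from URLs of the
form _https://example.org/depot/alice/..._ and verify them against the public
key.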
If both the public key and the download locations are defined, the download
tool can be used as follows:
! ./tool/depot/download genodelabs/src/zlib/2018-01-10
The tool automatically downloads the specified archives and their
dependencies. For example, as the zlib depends on the libc API, the libc API
archive is downloaded as well. All archive types are accepted as arguments
including binary and package archives. Furthermore, it is possible to download
all binary archives referenced by a package archive. For example, the
following command downloads the window-manager (wm) package archive including
all binary archives for the 64-bit x86 architecture. Downloaded binary
archives are always accompanied with their corresponding source and used API
archives.
! ./tool/depot/download genodelabs/pkg/x86_64/wm/2018-02-26
Archive content is not downloaded directly to the depot. Instead, the
individual archives and signature files are downloaded to a quarantine area in
the form of a _public/_ directory located in the root of Genode's source tree.
As its name suggests, the _public/_ directory contains data that is imported
from or to be exported to the public. The download tool populates it with the
downloaded archives in their compressed form accompanied with their
signatures.
The compressed archives are not extracted before their signature is checked
against the public key defined at _depot/<user>/pubkey_. Only if the
signature is valid is the archive content imported to the target destination
within the depot. This procedure ensures that depot content - whenever
downloaded - is blessed by a cryptographic signature of its creator.
Building binary archives from source archives
=============================================
With the depot populated with source and API archives, one can use the
_tool/depot/build_ tool to produce binary archives. The arguments have the
form '<user>/bin/<arch>/<src-name>' where '<arch>' stands for the targeted
CPU architecture. For example, the following command builds the 'zlib'
library for the 64-bit x86 architecture. It executes four concurrent jobs
during the build process.
! ./tool/depot/build genodelabs/bin/x86_64/zlib/2018-01-10 -j4
Note that the command expects a specific version of the source archive as
argument. The depot may contain several versions, so the user has to decide
which one to build.
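The versions available for a given archive can be inspected by simply listing
the corresponding depot subdirectory, for example:
! ls depot/genodelabs/src/zlib/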
After the tool is finished, the freshly built binary archive can be found in
the depot within the _genodelabs/bin/<arch>/<src>/<version>/_ subdirectory.
Only the final result of the build process is preserved. In the example above,
that would be the _zlib.lib.so_ library.
For debugging purposes, it might be interesting to inspect the intermediate
state of the build. This is possible by adding 'KEEP_BUILD_DIR=1' as argument
to the build command. The binary's intermediate build directory can then be
found next to the binary archive's location, named with a '.build' suffix.
By default, the build tool won't attempt to rebuild a binary archive that is
already present in the depot. However, it is possible to force a rebuild via
the 'REBUILD=1' argument.
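For instance, to force the re-creation of the binary archive of the example
above while retaining its intermediate build directory, both arguments can be
combined:
! ./tool/depot/build genodelabs/bin/x86_64/zlib/2018-01-10 REBUILD=1 KEEP_BUILD_DIR=1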
Publishing archives
===================
Archives located in the depot can be conveniently made available to the public
using the _tool/depot/publish_ tool. Given an archive path, the tool takes
care of determining all archives that are implicitly needed by the specified
one, wrapping the archive's content into compressed tar archives, and signing
those.
As a precondition, the tool requires you to possess the private key that
matches the _depot/<you>/pubkey_ file within your depot. The key pair should
be present in the key ring of your GNU privacy guard.
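Assuming the key pair is managed by GnuPG, the public key can be exported into
the depot along these lines (the key identifier is a placeholder):
! gpg --armor --export you@example.org > depot/<you>/pubkey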
To publish archives, one needs to specify the specific version to publish.
For example:
! ./tool/depot/publish <you>/pkg/x86_64/wm/2018-02-26
The command checks that the specified archive and all dependencies are present
in the depot. It then proceeds with the archiving and signing operations. For
the latter, the pass phrase for your private key will be requested. The
publish tool prints the information about the processed archives, e.g.:
! publish /.../public/<you>/api/base/2018-02-26.tar.xz
! publish /.../public/<you>/api/framebuffer_session/2017-05-31.tar.xz
! publish /.../public/<you>/api/gems/2018-01-28.tar.xz
! publish /.../public/<you>/api/input_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/nitpicker_gfx/2018-01-05.tar.xz
! publish /.../public/<you>/api/nitpicker_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/os/2018-02-13.tar.xz
! publish /.../public/<you>/api/report_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/scout_gfx/2018-01-05.tar.xz
! publish /.../public/<you>/bin/x86_64/decorator/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/report_rom/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/wm/2018-02-26.tar.xz
! publish /.../public/<you>/pkg/wm/2018-02-26.tar.xz
! publish /.../public/<you>/raw/wm/2018-02-14.tar.xz
! publish /.../public/<you>/src/decorator/2018-02-26.tar.xz
! publish /.../public/<you>/src/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<you>/src/report_rom/2018-02-26.tar.xz
! publish /.../public/<you>/src/wm/2018-02-26.tar.xz
According to the output, the tool populates a directory called _public/_
at the root of the Genode source tree with the to-be-published archives.
The content of the _public/_ directory is now ready to be copied to a
web server, e.g., by using rsync.
Automated extraction of archives from the source tree
#####################################################
Genode users are expected to populate their local depot with content obtained
via the _tool/depot/download_ tool. However, Genode developers need a way to
create depot archives locally in order to make them available to users. Thanks
to the _tool/depot/extract_ tool, the assembly of archives does not need to be
a manual process. Instead, archives can be conveniently generated out of the
source codes present in the Genode source tree and the _contrib/_ directory.
However, the granularity of splitting source code into archives, the
definition of what a particular API entails, and the relationship between
archives must be provided by the archive creator as this kind of information
is not present in the source tree as is. This is where so-called "archive
recipes" enter the picture. An archive recipe defines the content of an
archive. Such recipes can be located at a _recipes/_ subdirectory of any
source-code repository, similar to how port descriptions and run scripts
are organized. Each _recipes/_ directory contains subdirectories for the
archive types, which, in turn, contain a directory for each archive. The
latter is called a _recipe directory_.
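Schematically, recipe directories are hence found at locations of the
following form:
! <repository>/recipes/api/<name>/   # recipe of an API archive
! <repository>/recipes/src/<name>/   # recipe of a source archive
! <repository>/recipes/pkg/<name>/   # recipe of a package archive
! <repository>/recipes/raw/<name>/   # recipe of a raw-data archive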
Recipe directory
----------------
The recipe directory is named after the archive _omitting the archive version_
and contains at least one file named _hash_. This file defines the version
of the archive along with a hash value of the archive's content
separated by a space character. By tying the version name to a particular hash
value, the _extract_ tool is able to detect the appropriate points in time
whenever the version should be increased due to a change of the archive's
content.
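A _hash_ file thereby contains a single line of the following form (the
version name and hash value are made up):
! 2018-02-26 5e4bd8297e966ee502a74450e1dde3dca4a39158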
API, source, and raw-data archive recipes
-----------------------------------------
Recipe directories for API, source, or raw-data archives contain a
_content.mk_ file that defines the archive content in the form of make
rules. The content.mk file is executed from the archive's location within
the depot. Hence, the contained rules can refer to archive-relative files as targets.
The first (default) rule of the content.mk file is executed with a customized
make environment:
:GENODE_DIR: A variable that holds the path to the root of the Genode source tree
:REP_DIR: A variable with the path to the source-code repository where the recipe
is located
:port_dir: A make function that returns the directory of a port within the
_contrib/_ directory. The function expects the location of the
corresponding port file as argument, for example, the 'zlib' recipe
residing in the _libports/_ repository may specify '$(REP_DIR)/ports/zlib'
to access the 3rd-party zlib source code.
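Building on these variables and the 'port_dir' function, a content.mk might
look like the following minimal sketch (the selected files are purely
illustrative):
! # the default rule defines the archive content
! content: include/zlib.h
!
! include/zlib.h:
! 	mkdir -p include
! 	cp $(call port_dir,$(REP_DIR)/ports/zlib)/include/zlib.h include/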
Source archive recipes contain simplified versions of the 'used_apis' and
(for libraries) 'api' files as found in the archives. In contrast to the
depot's counterparts of these files, which contain version-suffixed names,
the files contained in recipe directories omit the version suffix. This
is possible because the extract tool always extracts the _current_ version
of a given archive from the source tree. This current version is already
defined in the corresponding recipe directory.
Package-archive recipes
-----------------------
The recipe directory for a package archive contains the verbatim content of
the to-be-created package archive except for the _archives_ file. All other
files are copied verbatim to the archive. The content of the recipe's
_archives_ file may omit the version information from the listed ingredients.
Furthermore, the user part of each entry can be left blank by using '_' as a
wildcard. When generating the package archive from the recipe, the extract
tool will replace this wildcard with the user that creates the archive.
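For example, the _archives_ file of a window-manager-like package recipe could
list its ingredients as follows:
! _/src/wm
! _/src/decorator
! _/src/floating_window_layouter
! _/src/report_rom
! _/raw/wm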
Convenience front-end to the extract and build tools
####################################################
For developers, the work flow of interacting with the depot is most often the
combination of the _extract_ and _build_ tools, where the latter expects
concrete version names as arguments. The _create_ tool accelerates this common
usage pattern by allowing the user to omit the version names. Operations
implicitly refer to the _current_ version of the archives as defined in
the recipes.
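For example, the following command creates (or re-uses) the current version of
the zlib binary archive without the need to look up any version name:
! ./tool/depot/create genodelabs/bin/x86_64/zlib -j4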
Furthermore, the _create_ tool is able to manage version updates for the
developer. If invoked with the argument 'UPDATE_VERSIONS=1', it automatically
updates hash files of the involved recipes by taking the current date as
version name. This is a valuable assistance in situations where a commonly
used API changes. In this case, the versions of the API and all dependent
archives must be increased, which would be a labour-intensive task otherwise.
If the depot already contains an archive of the current version, the create
tool won't re-create the depot archive by default. Local modifications of
the source code in the repository do not automatically result in a new archive.
To ensure that the depot archive is current, one can specify 'FORCE=1' to
the create tool. With this argument, existing depot archives are replaced by
freshly extracted ones and version updates are detected. When specified for
creating binary archives, 'FORCE=1' normally implies 'REBUILD=1'. To prevent
the superfluous rebuild of binary archives whose source versions remain
unchanged, 'FORCE=1' can be combined with the argument 'REBUILD='.
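For example, to refresh the depot content from a locally modified source tree
while skipping the rebuild of binary archives whose sources are unchanged, one
might invoke:
! ./tool/depot/create genodelabs/bin/x86_64/zlib FORCE=1 REBUILD= -j4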
Accessing depot content from run scripts
########################################
The depot tools are not meant to replace the run tool but rather to complement
it. When both tools are combined, the run tool implicitly refers to "current"
archive versions as defined for the archive's corresponding recipes. This way,
the regular run-tool work flow can be maintained while attaining a
productivity boost by fetching content from the depot instead of building it.
Run scripts can use the 'import_from_depot' function to incorporate archive
content from the depot into a scenario. The function must be called after the
'create_boot_directory' function and takes any number of pkg, src, or raw
archives as arguments. An archive is specified as depot-relative path of the
form '<user>/<type>/<name>'. Run scripts may call 'import_from_depot'
repeatedly. Each argument can refer to a specific version of an archive or
just the version-less archive name. In the latter case, the current version
(as defined by a corresponding archive recipe in the source tree) is used.
If a 'src' archive is specified, the run tool integrates the content of
the corresponding binary archive into the scenario. The binary archives
are selected according to the spec values as defined for the build directory.
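Put together, a run script using the depot might hence start along these lines
(the selected archives are merely illustrative):
! create_boot_directory
! import_from_depot genodelabs/src/report_rom \
!                   genodelabs/raw/wm \
!                   genodelabs/pkg/wm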


@@ -1,154 +0,0 @@
=============================
How to start exploring Genode
=============================
Norman Feske
Abstract
########
This guide is meant to provide you a painless start with using the Genode OS
Framework. It explains the steps needed to get a simple demo system running
on Linux first, followed by the instructions on how to run the same scenario
on a microkernel.
Quick start to build Genode for Linux
#####################################
The best starting point for exploring Genode is to run it on Linux. Make sure
that your system satisfies the following requirements:
* GNU Make version 3.81 or newer
* 'libsdl2-dev', 'libdrm-dev', and 'libgbm-dev' (needed to run interactive
system scenarios directly on Linux)
* 'tclsh' and 'expect'
* 'byacc' (only needed for the L4/Fiasco kernel)
* 'qemu' and 'xorriso' (for testing non-Linux platforms via Qemu)
For using the entire collection of ported 3rd-party software, the following
packages should be installed additionally: 'autoconf2.64', 'autogen', 'bison',
'flex', 'g++', 'git', 'gperf', 'libxml2-utils', 'subversion', and 'xsltproc'.
Your exploration of Genode starts with obtaining the source code of the
[https://sourceforge.net/projects/genode/files/latest/download - latest version]
of the framework. For detailed instructions and alternatives to the
download from Sourceforge please refer to [https://genode.org/download].
Furthermore, you will need to install the official Genode tool chain, which
you can download at [https://genode.org/download/tool-chain].
The Genode build system never touches the source tree but generates object
files, libraries, and programs in a dedicated build directory. We do not have a
build directory yet. For a quick start, let us create one for the Linux base
platform:
! cd <genode-dir>
! ./tool/create_builddir x86_64
This creates a new build directory for building x86_64 binaries at
'build/x86_64'. The build system creates unified binaries that work on the
given architecture independently of the underlying base platform, in this
case Linux.
Now change into the fresh build directory:
! cd build/x86_64
Please uncomment the following line in 'etc/build.conf' to make the
build process as smooth as possible.
! RUN_OPT += --depot-auto-update
To give Genode a try, build and execute a simple demo scenario via:
! make KERNEL=linux BOARD=linux run/demo
By invoking 'make' with the 'run/demo' argument, all components needed by the
demo scenario are built and the demo is executed. This includes all components
which are implicitly needed by the base platform. The base platform that the
components will be executed on is selected via the 'KERNEL' and 'BOARD'
variables. If you are interested in looking behind the scenes of the demo
scenario, please refer to 'doc/build_system.txt' and the run script at
'os/run/demo.run'.
Using platforms other than Linux
================================
Running Genode on Linux is the most convenient way to get acquainted with the
framework. However, the point where Genode starts to shine is when used as the
user land executed on a microkernel. The framework supports a variety of
different kernels such as L4/Fiasco, L4ka::Pistachio, OKL4, and NOVA. Those
kernels largely differ in terms of feature sets, build systems, tools, and boot
concepts. To relieve you from dealing with those peculiarities, Genode provides
you with a unified way of using them. For each kernel platform, there exists
a dedicated description file that enables the 'prepare_port' tool to fetch and
prepare the designated 3rd-party sources. Just issue the following command
within the toplevel directory of the Genode source tree:
! ./tool/ports/prepare_port <platform>
Note that each 'base-<platform>' directory comes with a 'README' file, which
you should revisit first when exploring the base platform. Additionally, most
'base-<platform>' directories provide more in-depth information within their
respective 'doc/' subdirectories.
For the VESA driver on x86, the x86emu library is required and can be
downloaded and prepared by again invoking the 3rd-party sources preparation
tool:
! ./tool/ports/prepare_port x86emu
On x86 base platforms the GRUB2 boot loader is required and can be
downloaded and prepared by invoking:
! ./tool/ports/prepare_port grub2
Now that the base platform is prepared, the 'create_builddir' tool can be used
to create a build directory for your architecture of choice by giving the
architecture as argument. To see the list of available architectures, execute
'create_builddir' with no arguments. Note that not all kernels support all
architectures.
For example, to give the demo scenario a spin on the OKL4 kernel, the following
steps are required:
# Download the kernel:
! cd <genode-dir>
! ./tool/ports/prepare_port okl4
# Create a build directory
! ./tool/create_builddir x86_32
# Uncomment the following line in 'x86_32/etc/build.conf'
! REPOSITORIES += $(GENODE_DIR)/repos/libports
# Build and execute the demo using Qemu
! make -C build/x86_32 KERNEL=okl4 BOARD=pc run/demo
The procedure works analogously for the other base platforms. You can, however,
reuse the already created build directory and skip its creation step if the
architecture matches.
How to proceed with exploring Genode
####################################
Now that you have taken the first steps into using Genode, you may seek to
get more in-depth knowledge and practical experience. The foundation for doing
so is a basic understanding of the build system. The documentation at
'build_system.txt' provides you with the information about the layout of the
source tree, how new components are integrated, and how complete system
scenarios can be expressed. Equipped with this knowledge, it is time to get
hands-on experience with creating custom Genode components. A good start is the
'hello_tutorial', which shows you how to implement a simple client-server
scenario. To compose complex scenarios out of many small components, the
documentation of Genode's configuration concept at 'os/doc/init.txt' is an
essential reference.
Certainly, you will have further questions on your way with exploring Genode.
The best place to get these questions answered is the Genode mailing list.
Please feel welcome to ask your questions and to join the discussions:
:Genode Mailing Lists:
[https://genode.org/community/mailing-lists]


@@ -1,236 +0,0 @@
==========================
Google Summer of Code 2012
==========================
Genode Labs has applied as mentoring organization for the Google Summer of Code
program in 2012. This document summarizes all information important to Genode's
participation in the program.
:[http://www.google-melange.com/gsoc/homepage/google/gsoc2012]:
Visit the official homepage of the Google Summer of Code program.
*Update* Genode Labs was not accepted as mentoring organization for GSoC 2012.
Application of Genode Labs as mentoring organization
####################################################
:Organization ID: genodelabs
:Organization name: Genode Labs
:Organization description:
Genode Labs is a self-funded company founded by the original creators of the
Genode OS project. Its primary mission is to bring the Genode operating-system
technology, which started off as an academic research project, to the real
world. At present, Genode Labs is the driving force behind the Genode OS
project.
:Organization home page url:
http://www.genode-labs.com
:Main organization license:
GNU General Public License version 2
:Admins:
nfeske, chelmuth
:What is the URL for your Ideas page?:
[http://genode.org/community/gsoc_2012]
:What is the main IRC channel for your organization?:
#genode
:What is the main development mailing list for your organization?:
genode-main@lists.sourceforge.net
:Why is your organization applying to participate? What do you hope to gain?:
During the past three months, our project underwent the transition from a
formerly company-internal development to a completely open and transparent
endeavour. By inviting a broad community for participation in shaping the
project, we hope to advance Genode to become a broadly used and recognised
technology. GSoC would help us to build our community.
The project has its roots at the University of Technology Dresden where the
Genode founders were former members of the academic research staff. We have
a long and successful track record with regard to supervising students. GSoC
would provide us with the opportunity to establish and cultivate
relationships with new students and to spawn excitement about Genode OS
technology.
:Does your organization have an application template?:
GSoC student projects follow the same procedure as regular community
contributions, in particular the student is expected to sign the Genode
Contributor's Agreement. (see [http://genode.org/community/contributions])
:What criteria did you use to select your mentors?:
We selected the mentors on the basis of their long-time involvement with the
project and their time-tested communication skills. For each proposed working
topic, there is at least one stakeholder with profound technical background within
Genode Labs. This person will be the primary contact person for the student
working on the topic. However, we will encourage the student to make his/her
development transparent to all community members (i.e., via GitHub). So any
community member interested in the topic is able to bring in his/her ideas at
any stage of development. Consequently, in practice, there will be multiple
persons mentoring each student.
:What is your plan for dealing with disappearing students?:
Actively contact them using all channels of communication available to us,
find out the reason for the disappearance, and try to resolve the problems (if
they are related to GSoC or our project for that matter).
:What is your plan for dealing with disappearing mentors?:
All designated mentors are local to Genode Labs, so the chance for them to
disappear is very low. However, if a mentor disappears for any serious reason
(e.g., serious illness), our organization will provide a back-up mentor.
:What steps will you take to encourage students to interact with your community?:
First, we discussed GSoC on our mailing list where we received an
overwhelmingly positive response. We checked back with other Open-Source projects related to
our topics, exchanged ideas, and tried to find synergies between our
respective projects. For most project ideas, we have created issues in our
issue tracker to collect technical information and discuss the topic.
For several topics, we already observed interests of students to participate.
During the work on the topics, the mentors will try to encourage the
students to play an active role in discussions on our mailing list, also on
topics that are not strictly related to the student project. We regard
active participation as key to enabling new community members to develop a
holistic view of our project and gather a profound understanding of our
methodologies.
Student projects will be carried out in a transparent fashion at GitHub.
This makes it easy for each community member to get involved, discuss
the rationale behind design decisions, and audit solutions.
Topics
######
While discussing GSoC participation on our mailing list, we identified the
following topics as being well suited for GSoC projects. However, if none of
those topics resonates with students, there is a more comprehensive list of
topics available on our road map and in our collection of future challenges:
:[http://genode.org/about/road-map]: Road-map
:[http://genode.org/about/challenges]: Challenges
Combining Genode with the HelenOS/SPARTAN kernel
================================================
[http://www.helenos.org - HelenOS] is a microkernel-based multi-server OS
developed at Charles University in Prague. It is based on the SPARTAN
microkernel, which runs on a wide variety of CPU architectures including
SPARC, MIPS, and
PowerPC. This broad platform support makes SPARTAN an interesting kernel to
look at alone. But a further motivation is the fact that SPARTAN does not
follow the classical L4 road, providing a kernel API that comes with an own
terminology and different kernel primitives. This makes the mapping of
SPARTAN's kernel API to Genode a challenging endeavour and would provide us
with feedback regarding the universality of Genode's internal interfaces.
Finally, this project has the potential to ignite a further collaboration
between the HelenOS and Genode communities.
Block-level encryption
======================
Protecting privacy is one of the strongest motivational factors for developing
Genode. One pivotal element with that respect is the persistence of information
via block-level encryption. For example, to use Genode every day at Genode
Labs, it's crucial to protect the confidentiality of some information that's
not part of the Genode code base, e.g., emails and reports. There are several
expansion stages imaginable to reach the goal and the basic building blocks
(block-device interface, ATA/SATA driver for Qemu) are already in place.
:[https://github.com/genodelabs/genode/issues/55 - Discuss the issue...]:
Virtual NAT
===========
For sharing one physical network interface among multiple applications, Genode
comes with a component called nic_bridge, which implements proxy ARP. Through
this component, each application receives a distinct (virtual) network
interface that is visible to the real network. I.e., each application requests
an IP address via a DHCP request at the local network. An alternative approach
would be a component that implements NAT on Genode's NIC session interface.
This way, the whole Genode system would use only one IP address visible to the
local network. (by stacking multiple nat and nic_bridge components together, we
could even form complex virtual networks inside a single Genode system)
The implementation of the virtual NAT could follow the lines of the existing
nic_bridge component. For parsing network packets, there are already some handy
utilities available (at os/include/net/).
:[https://github.com/genodelabs/genode/issues/114 - Discuss the issue...]:
Runtime for the Go or D programming language
============================================
Genode is implemented in C++. However, we are repeatedly receiving requests
for offering more safe alternatives for implementing OS-level functionality
such as device drivers, file systems, and other protocol stacks. The goals
for this project are to investigate the Go and D programming languages with
respect to their use within Genode, port the runtime of those languages
to Genode, and provide a useful level of integration with Genode.
Block cache
===========
Currently, there exists only the iso9660 server that is able to cache block
accesses. A generic solution for caching block-device accesses would be nice.
One suggestion is a component that requests a block session (routed to a block
device driver) as back end and also announces a block service (front end)
itself. Such a block-cache server waits for requests at the front end and
forwards them to the back end. But it uses its own memory to cache blocks.
The first version could support only read-only block devices (such as CDROM) by
caching the results of read accesses. In this version, we already need an
eviction strategy that kicks in once the block cache gets saturated. For a
start this could be FIFO or LRU (least recently used).
A more sophisticated version would support write accesses, too. Here we need a
way to sync blocks to the back end at regular intervals in order to guarantee
that all block-write accesses are becoming persistent after a certain time. We
would also need a way to explicitly flush the block cache (i.e., when the
front-end block session gets closed).
:[https://github.com/genodelabs/genode/issues/113 - Discuss the issue...]:
; _Since Genode Labs was not accepted as GSoC mentoring organization, the_
; _following section has become irrelevant. Hence, it is commented-out_
;
; Student applications
; ####################
;
; The formal steps for applying to the GSoC program will be posted once Genode
; Labs is accepted as mentoring organization. If you are a student interested
; in working on a Genode-related GSoC project, now is a good time to get
; involved with the Genode community. The best way is joining the discussions
; at our mailing list and the issue tracker. This way, you will learn about
; the currently relevant topics, our discussion culture, and the people behind
; the project.
;
; :[http://genode.org/community/mailing-lists]: Join our mailing list
; :[https://github.com/genodelabs/genode/issues]: Discuss issues around Genode


@@ -4,6 +4,51 @@
===========
Genode OS Framework release 24.11 | 2024-11-22
##############################################
| With mirrored and panoramic multi-monitor setups, pointer grabbing,
| atomic blitting and panning, and panel-self-refresh support, Genode's GUI
| stack gets ready for the next decade. Hardware-wise, version 24.11 brings
| a massive driver update for the i.MX SoC family. As a special highlight, the
| release is accompanied by the first edition of the free book "Genode
| Applications" as a gateway for application developers into Genode.
Closing out the Year of Sculpt OS usability as the theme of our road map
for 2024, we are excited to unveil the results of two intense lines of
usability-concerned work with the release of Genode 24.11.
For the usability of the Genode-based Sculpt OS as day-to-day operating
system, the support of multi-monitor setups has been an unmet desire
for a long time. Genode 24.11 not only delivers a solution as a
singular feature but improves the entire GUI stack in a holistic way,
addressing panel self-refresh, mechanisms needed to overcome tearing
artifacts, rigid resource partitioning between GUI applications, up to
pointer-grabbing support.
The second line of work addresses the usability of application development for
Genode and Sculpt OS in particular. Over the course of the year, our Goa SDK
has seen a succession of improvements that make the development, porting,
debugging, and publishing of software a breeze. Still, given Genode's
novelties, the learning curve to get started has remained challenging. Our new
book "Genode Applications" is intended as a gateway into the world of Genode
for those of us who enjoy dwelling in architectural beauty but foremost want
to get things done. It features introductory material, explains fundamental
concepts and components, and invites the reader on a ride through a series
of beginner-friendly as well as advanced tutorials. The book can be downloaded
for free at [https://genode.org].
Regarding hardware support, our work during the release cycle was hugely
motivated by the prospect of bringing Genode to the MNT Pocket Reform laptop,
which is based on the NXP i.MX8MP SoC. Along this way, we upgraded all
Linux-based i.MX drivers to kernel version 6.6 while consolidating a variety
of vendor kernels, equipped our platform driver with watchdog support, and
added board support for this platform to Sculpt OS.
You can find these among more topics covered in the detailed
[https://genode.org/documentation/release-notes/24.11 - release documentation of version 24.11...]
Sculpt OS release 24.10 | 2024-10-30
####################################

File diff suppressed because it is too large
doc/release_notes/24-11.txt Normal file

@@ -0,0 +1,579 @@
===============================================
Release notes for the Genode OS Framework 24.11
===============================================
Genode Labs
During the discussion of this year's road-map roughly one year ago, the
usability concerns of Sculpt OS stood out.
Besides suspend/resume, which we addressed
[https://genode.org/documentation/release-notes/24.05#Suspend_resume_infrastructure - earlier this year],
multi-monitor support ranked highest on the list of desires. We are more than
happy to wrap up the year with the realization of this feature.
Section [Multi-monitor support] presents the many facets and outcomes of this
intensive line of work.
Over the course of 2024, our Goa SDK has received tremendous advances, which
make the development, porting, debugging, and publishing of software for
Genode - and Sculpt OS in particular - a breeze.
So far however, the learning curve for getting started remained rather steep
because the underlying concepts largely deviate from the beaten tracks known
from traditional operating systems. Even though there is plenty of
documentation, it is rather scattered and overwhelming.
All the more happy we are to announce that the current release is accompanied
by a new book "Genode Applications" that can be downloaded for free and
provides a smooth gateway for application developers into the world of Genode
(Section [New "Genode Applications" book]).
Regarding hardware-related technical topics, the release focuses on the
ARM-based i.MX SoC family, taking our ambition to run Sculpt OS on the MNT
Pocket Reform laptop as guiding theme. Section [Device drivers and platforms]
covers our driver and platform-related work in detail.
New "Genode Applications" book
##############################
Complementary to our _Genode Foundations_ and _Genode Platforms_ books, we have
been working on a new book that concentrates on application development.
_Genode Applications_ centers on the Goa SDK that we introduced with
[https://genode.org/documentation/release-notes/19.11#New_tooling_for_bridging_existing_build_systems_with_Genode - Genode 19.11]
and which has seen significant improvements over the past year
([https://genode.org/documentation/release-notes/23.08#Goa_tool_gets_usability_improvements_and_depot-index_publishing_support - 23.08],
[https://genode.org/documentation/release-notes/24.02#Sculpt_OS_as_remote_test_target_for_the_Goa_SDK - 24.02],
[https://genode.org/documentation/release-notes/24.08#Goa_SDK - 24.08]).
: <div class="visualClear"><!-- --></div>
: <p>
: <div style="clear: both; float: left; margin-right:20px;">
: <a class="internal-link" href="https://genode.org">
: <img class="image-inline" src="https://genode.org/documentation/genode-applications-title.png">
: </a>
: </div>
: </p>
The book intends to provide a beginner-friendly starting point for application
development and porting for Genode and Sculpt OS in particular. It starts off
with a getting-started tutorial for the Goa tool, and further recapitulates
Genode's architecture and a subset of its libraries, components, and
conventions such as the C runtime, VFS, NIC router, and package management.
With these essentials in place, the book is topped off with instructions for
application debugging and a collection of advanced tutorials.
Aligned with the release of Sculpt 24.10, we updated the Goa tool with the
corresponding depot archive versions. Furthermore, the Sculpt-integrated and
updated _Goa testbed_ preset is now prepared for remote debugging.
: <div class="visualClear"><!-- --></div>
:First revision of the Genode Applications document:
[https://genode.org/documentation/genode-applications-24-11.pdf]
Multi-monitor support
#####################
Among the users of the Genode-based Sculpt OS, the flexible use of multiple
monitors was certainly the most longed-after desire raised during our public
road-map discussion roughly one year ago. We quickly identified that a
profound solution cannot focus on piecemeal extensions of individual
components but must embrace an architectural step forward. The step turned
out to be quite a leap.
In fact, besides reconsidering the roles of display and input drivers in
[https://genode.org/documentation/release-notes/20.08#The_GUI_stack__restacked - version 20.08],
the GUI stack has remained largely unchanged since
[https://genode.org/documentation/release-notes/14.08#New_GUI_architecture - version 14.08].
So we took our multi-monitor ambitions as welcome opportunity to incorporate
our experiences of the past ten years into a new design for the next ten
years.
Tickless GUI server and display drivers
=======================================
Up to now, the nitpicker GUI server as well as the display drivers used to
operate in a strictly periodic fashion. At a rate of 10 milliseconds, the GUI
server would route input events to the designated GUI clients and flush
graphical changes of the GUI clients to the display driver.
This simple mode of execution has benefits such as the natural ability of
batching input events and the robustness of the GUI server against overload
situations. However, in Sculpt OS, we observed that the fixed rate induces
little but constant load into an otherwise idle system, rendering
energy-saving regimes of modern CPUs less effective than they could be.
This problem would become amplified in the presence of multiple output channels
operating at independent frame rates. Moreover, with panel self-refresh
support of recent Intel graphics devices, the notion of a fixed continuous
frame rate has become antiquated.
Hence, it was time to move to a tickless GUI-server design where the GUI
server acts as a mere broker between events triggered by applications (e.g.,
pushing pixels) and drivers (e.g., occurrence of input, scanout to a display).
Depending on the behavior of its clients (GUI applications and drivers alike),
the GUI server notifies the affected parties about events of interest but
does not assert an active role.
For example, if a display driver does not observe any changed pixels for 50
ms, it goes to sleep. Once an application updates pixels affecting a display,
the GUI server wakes up the respective display driver, which then polls the
pixels at a driver-defined frame rate until it observes that the pixels have
remained static for 50 ms. Vice versa, the point in time when a display driver
requests
updated pixels is reflected as a sync event to GUI applications visible on
that display, enabling such applications to synchronize their output to the
frame rate of the driver. The GUI server thereby asserts the role of steering
the sleep cycles of drivers and applications. Unless anything happens on
screen, neither the GUI server nor the display driver are active. When two
applications are visible on distinct monitors, the change of one application
does not induce any activity regarding the unrelated display. This allows for
scaling up the number of monitors without increasing the idle CPU load.
This change implies that the former practice of using sync signals as a
time source for application-side animation timing is no longer viable.
Sync signals occur only when a driver is active after all. GUI applications
may best use sync signals for redraw scheduling but need to use a proper time
source as the basis for calculating the progress of animations.
Paving the ground for tearing-free motion
=========================================
Tearing artifacts during animations are rightfully frowned upon. It goes
without saying that we strive to attain tearing-free motion in Genode. Two
preconditions must be met. First, the GUI server must be able to get hold
of a _consistent_ picture at any time. Second, the flushing of the picture
to the display hardware must be timed with _vsync_ of the physical display.
Up to now, the GUI stack was unable to meet the first precondition by design.
If the picture is composed of multiple clients, the visual representation of
each client must be present in a consistent state.
The textures used as input of the compositing of the final picture are buffers
shared between server and client. Even though clients traditionally employ
double-buffering to hide intermediate drawing states, the final back-to-front
copy into the shared buffer violated the consistency of the buffer during
the client-side copy operation - when looking at the buffer from the server
side. To overcome this deficiency, we have now equipped the GUI server with
atomic blitting and panning operations, which support atomic updates in two
fashions.
_Atomic back-to-front blitting_ allows GUI clients that partially update their
user interface - like regular application dialogs - to implement double
buffering by placing both the back buffer and front buffer within the GUI
session's shared buffer and configuring a view that shows only the front
buffer. The new blit operation ('Framebuffer::Session::blit') allows the client
to atomically flush pixels from the back buffer to the front buffer.
_Atomic buffer flipping_ allows GUI clients that always update all pixels -
like a media player or a game - to leverage panning
('Framebuffer::Session::panning') to atomically redirect the displayed pixels to
a different portion of the GUI session's shared buffer without any copy
operation needed. The buffer contains two frames, the displayed one and the
next one. Once the next frame is complete, the client changes the panning
position to the portion containing the next frame.
Almost all GUI clients of the Genode OS framework have been updated to use
these new facilities.
The vsync timing as the second precondition for tearing-free motion lies in
the hands of the display driver, which can in principle capture pixel updates
from the GUI server driven by vsync interrupts. In the presence of multiple
monitors with different vsync rates, a GUI client may deliberately select
a synchronization source ('Framebuffer::Session::sync_source'). That said,
even though the interfaces are in place, vsync timing is not yet provided by
the current display drivers.
Mirrored and panoramic monitor setups
=====================================
A display driver interacts with the nitpicker GUI server as a capture client.
One can think of a display driver as a screen-capturing application.
Up until now, the nitpicker GUI server handed out the same picture to each
capture client. So each client obtained a mirror of the same picture. By
subjecting each client to a policy defining a window within a larger panorama,
a driver creating one capture session per monitor becomes able to display the
larger panorama spanning the connected displays. The assignment of capture
clients to different parts of the panorama follows Genode's established
label-based policy-selection approach as explained in the
[https://github.com/genodelabs/genode/blob/master/repos/os/src/server/nitpicker/README - documentation]
of the nitpicker GUI server.
Special care has been taken to ensure that the pointer is always visible. It
cannot be moved to any area that is not captured. Should the only capture
client displaying the pointer disappear, the pointer is warped to the center
of (any) remaining capture client.
A mirrored monitor setup can in principle be attained by placing multiple
capture clients at the same part of nitpicker's panorama. However, there is
a better way: Our Intel display-driver component supports both discrete and
merged output channels. The driver's configuration subsumes all connectors
listed within a '<merge>' node as a single encompassing capture session at the
GUI server. The mirroring of the picture is done by the hardware. Each
connector declared outside the '<merge>' node is handled as a discrete capture
session labeled after the corresponding connector. The driver's
[https://github.com/genodelabs/genode/blob/master/repos/pc/src/driver/framebuffer/intel/pc/README - documentation]
describes the configuration in detail.
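Schematically, such a configuration might combine mirrored and discrete
outputs as sketched below. The connector names and the exact node syntax apart
from '<merge>' are assumptions here - please consult the driver's README for
the authoritative format:
!<config>
! <merge name="mirror">
!  <connector name="eDP-1"/>
!  <connector name="HDMI-A-1"/>
! </merge>
! <connector name="DP-1"/>
!</config>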
Sculpt OS integration
=====================
All the changes described above are featured in the recently released
Sculpt OS version 24.10, which gives the user the ability to attain mirrored
or panoramic monitor setups or a combination thereof by the means of manual
configuration or by using interactive controls.
[image sculpt_24_10_intel_fb]
You can find the multi-monitor use of Sculpt OS covered by the
[https://genode.org/documentation/articles/sculpt-24-10#Multi-monitor_support - documentation].
Revised inter-component interfaces
==================================
Strict resource partitioning between GUI clients
------------------------------------------------
Even though Genode gives server components the opportunity to strictly operate
on client-provided resources only, the two prominent GUI servers - nitpicker
and the window manager (wm) - did not leverage these mechanisms to full
extent. In particular the wm eschewed strict resource accounting by paying out
of its own pocket. This deficiency has been rectified by the current release,
thereby making the GUI stack much more robust against potential resource
denial-of-service issues. Both the nitpicker GUI server and the window manager
now account all allocations to the resource budgets of the respective clients.
This change has the effect that GUI clients must now be equipped with the
actual cap and RAM quotas needed.
Note that not all central parts of the GUI stack operate on client-provided
resources. In particular, a window decorator is a mere client of the window
manager despite playing a role transcending multiple applications. As the
costs needed for the decorations depend on the number of applications present
on screen, the resources of the decorator must be dimensioned with a sensible
upper bound. Fortunately, however, as the decorator is a plain client of the
window manager, it can be restarted, replaced, and upgraded without affecting
any application.
Structured mode information for applications
--------------------------------------------
Up to now, GUI clients were able to request mode information via a plain
RPC call that returned the dimensions and color depth of the display.
Multi-monitor setups call for more flexibility, which prompted us to
replace the mode information by XML-structured information delivered as
an 'info' dataspace. This is in line with how meta information is handled
in other modern session interfaces like the platform or USB sessions.
The new representation gives us room to annotate information that could
previously not be exposed to GUI clients, in particular:
* The total panorama dimensions.
* Captured areas within the panorama, which can be used by multi-monitor-aware
  GUI clients as guidance for placing GUI views.
* DPI information carried by 'width_mm' and 'height_mm' attributes.
This information is defined by the display driver and passed to the GUI
server as 'Capture::Connection::buffer' argument.
* The closed state of a window interactively closed by the user.
Note that the window manager (wm) virtualizes the information of the nitpicker
GUI server. Instead of exposing nitpicker's panorama to its clients, the wm
reports the logical screen hosting the client's window as panorama and the
window size as a single captured rectangle within the panorama.
Mouse grabbing
--------------
Since the inception of the nitpicker GUI server, its clients observed absolute
pointer positions only. The GUI server unconditionally translated relative
mouse-motion events to absolute motion events.
To accommodate applications like games or a VM emulating a relative pointer
device, we have now extended the GUI server(s) with the ability to selectively
expose relative motion events while locking the absolute pointer position.
This is usually called pointer grabbing. It goes without saying that the user
must always retain a way to forcefully reassert control over the pointer
without the cooperation of the application.
The solution is the enhancement of the 'Input::Session' interface by a new RPC
function that allows a client to request exclusive input. The nitpicker GUI
server grants this request if the application owns the focus. In scenarios
using the window manager (wm), the focus is always defined by the wm, which
happens to intercept all input sessions of GUI applications. Hence, the wm is
in the natural position of arbitrating the grabbing/ungrabbing of the pointer.
For each GUI client, the wm records whether the client is interested in
exclusive input but does not forward this request to nitpicker. Only if a GUI
client has the focus and has requested exclusive input does the wm enable
exclusive input for this client at nitpicker when observing a mouse click on
the application window. Whenever the user presses the global wm key (super),
the wm forcefully releases the exclusive input at nitpicker until the user
clicks into the client window the next time.
Furthermore, an application may enable exclusive input transiently during a
key sequence, e.g., when dragging the mouse while holding the mouse button.
Transient exclusive input is revoked as soon as the last button/key is
released. It thereby would in principle allow for GUI controls like knobs to
lock the pointer position while the user adjusts the value by moving the mouse
while the mouse button is held. So the pointer retains its original position
at the knob.
While operating in exclusive input mode, there is no useful notion of an
absolute pointer position at the nitpicker GUI server. Hence, nitpicker hides
GUI domains that use the pointer position as coordinate origin. Thereby, the
mouse cursor automatically disappears while the pointer is grabbed.
Current state and ongoing work
==============================
All the advances described above are in full effect in the recently released
version 24.10 of [https://genode.org/download/sculpt - Sculpt OS]. All
components hosted in Genode's main and world repositories have been updated
accordingly, ranging from Genode-specific components like the widget toolkit
used by the administrative user interface of Sculpt OS and the window
decorators, over Qt5 and Qt6, to SDL and SDL2.
[image multiple_monitors]
Current work is underway to implement multi-monitor window management and to
make multiple monitors seamlessly available to guest OSes hosted in VirtualBox.
Furthermore, the Intel display driver is currently getting equipped with the
ability to use vsync interrupts for driving the interaction with the GUI
server, taking the final step to attain tearing-free motion.
Device drivers and platforms
############################
Linux device-driver environment (DDE)
=====================================
With our
[https://genode.org/documentation/release-notes/24.08#Linux_device-driver_environment__DDE_ - recent]
update of the DDE Linux kernel to version 6.6 for PC platforms and as a
prerequisite to support the MNT Pocket Reform, we have adapted all drivers for
the i.MX5/6/7/8 platforms to Linux kernel version 6.6.47. The list of drivers
includes Wifi, NIC, display, GPU, USB and SD-card.
MNT Pocket Reform
~~~~~~~~~~~~~~~~~
The [https://shop.mntre.com/products/mnt-pocket-reform - MNT Pocket Reform] is
a Mini Laptop by MNT aiming to be modular, upgradable, and repairable while
being assembled completely using open-source hardware. Being modular implies
that a range of CPU modules is available for the MNT Pocket. Some of these
chips, like the Rockchip-based modules, are not officially supported by
Genode yet. But there is an i.MX8MP-based module available, which
fits nicely into Genode's i.MX infrastructure.
Genode already supports the MNT Reform 2 i.MX8MQ based
[https://genodians.org/skalk/2020-06-29-mnt-reform - laptop]. So an update from
MQ to MP doesn't sound like a big issue because only one letter changed,
right? It turns out that there are more changes to the platform than mere
adjustments of I/O resources and interrupt numbers. Additionally, the MNT
Reform team offers quite a large patch set for each supported Linux kernel
version. Luckily there is
[https://source.mnt.re/reform/reform-debian-packages/-/tree/main/linux/patches6.6?ref_type=heads - one]
for our just updated Linux 6.6 kernel. With this patch set, we were able to
produce a Linux source tree (imx_linux) that we now take as basis for driver
development on Genode. Note that these Linux kernel sources are shared by all
supported i.MX platforms. Of course, additional patch series were necessary to
include device-tree sources from other vendor kernels, for instance from
Compulab.
With the development environment in place and after putting lots of effort in,
we ultimately achieved initial Genode support for the MNT Pocket Reform with
Genode 24.11.
On the device-driver side of things, we did not have to port lots of new
drivers but were able to extend drivers already available for the i.MX8MQ
platform. In particular these drivers are for the wired network card, USB host
controller, display, and SD card.
For the wireless network device that is found on the i.MX8MP SoM in the MNT
Pocket Reform, we needed to port a new driver. It has a Qualcomm QCA9377
chipset and is attached via SDIO. Unfortunately the available _ath10k_ driver
in the vanilla kernel does not work properly with such a device and therefore
is also not used in the regular Linux kernel for the MNT Pocket Reform. A
slightly adapted external QCACLD2 reference driver is used instead. So we
followed suit by incorporating this particular driver in our _imx_linux_
source tree as well.
[image sculpt_mnt_pocket]
Sculpt OS running on the MNT Pocket Reform
Being the initial enablement, there are still some limitations.
For example, the display of the MNT Pocket is physically
[https://mntre.com/documentation/pocket-reform-handbook.pdf - rotated] by 90
degrees. So, we had to find a way to accommodate for that. Unfortunately,
there seems to be no hardware support other than using the GPU to perform
a fast rotation. With GPU support still missing on this system, we had to
resort to perform the rotation in software on the CPU, which is obviously
far from optimal.
Those early inefficiencies notwithstanding, Sculpt OS has become able to run
on the MNT Pocket Reform. We will provide a preview image that exercises the
available features soon.
Platform driver for i.MX 8M Plus
================================
While enabling support for the MNT Pocket Reform (Section [MNT Pocket Reform]),
it was necessary to adjust the i.MX8MP specific platform driver, which was
originally introduced in the previous
[https://genode.org/documentation/release-notes/24.08#Improvements_for_NXP_s_i.MX_family - release 24.08]
to drive the Compulab i.MX 8M Plus IOT Gateway.
Some of the I/O pin configurations necessary to set up the SoC properly are
statically compiled into this driver because they do not change at runtime.
However, the pin configuration is specific to the actual board. Therefore, the
i.MX8MP platform driver now needs to distinguish between different boards (IOT
Gateway and MNT Pocket) by evaluating the 'platform_info' ROM provided by
core.
Moreover, while working on different drivers, we detected a few missing clocks
that were added to the platform driver. It turned out that some clocks we
initially turned off to save energy have to be enabled to ensure the
liveness of the ARM Trusted Firmware (ATF) and thereby of the platform. Also,
we had to adapt the communication between ATF and our platform driver to
control power domains. The first version of the i.MX8MP platform driver shared
the ATF power-domain protocol with the i.MX8MQ version. However, the
power-domain enumeration differs between the firmware versions as well, and we
adapted that accordingly.
Finally, the watchdog hardware is now served by the platform driver in a
recurrent way. Originally our driver used the watchdog only to implement reset
functionality. But in case of the MNT Pocket Reform, the watchdog hardware is
already armed by the bootloader. Therefore, it needs to be served in time to
prevent the system from rebooting. As a consequence, the platform driver is
mandatory on this platform if the system is to run for longer than a minute.
Wifi management rework
======================
Our management interface in the wifi driver served us well over the years
and concealed the underlying complexity of the wireless stack. At the same
time it gained some complexity itself to satisfy a variety of use-cases.
Thus, we took the past release cycle as an opportunity to rework the
management layer and reduce its complexity by streamlining the interaction
between its various parts - the manager layer itself, the 'wpa_supplicant', as
well as the device driver - in order to provide a sound foundation for future
adaptations.
Included is also an update of the 'wpa_supplicant' to version 2.11.
The following segments detail the changes made to the configuration options as
they were altered quite a bit to no longer mix different tasks (e.g. joining a
network and scanning for hidden networks) while removing obsolete options.
At the top-level '<wifi_config>' node, the following alterations were made:
* The 'log_level' attribute was added and configures the supplicant's
verbosity. Valid values correspond to levels used by the supplicant
and are as follows: 'excessive', 'msgdump', 'debug', 'info', 'warning',
and 'error'. The default value is 'error' and configures the least
amount of verbosity. This option was introduced to ease the investigation
of connectivity issues.
* The 'bgscan' attribute may be used to configure the way the
supplicant performs background-scanning to steer or rather optimize
roaming decision within the same network. The default value is set
to 'simple:30:-70:600'. The attribute is forwarded unmodified to the WPA
supplicant and thus provides the syntax supported by the supplicant
implementation. It can be disabled by specifying an empty value, e.g.
'bgscan=""'.
* The 'connected_scan_interval' attribute was removed as this functionality
is now covered by background scanning.
* The 'verbose_state' attribute was removed altogether and similar
functionality is now covered by the 'verbose' attribute.
The network management received the following changes:
* Every configured network, denoted by a '<network>' node, is now implicitly
considered an option for joining. The 'auto_connect' attribute was
removed and a '<network>' node must be renamed or removed to deactivate
automatic connection establishment.
* The intent to scan for a hidden network is now managed by the newly
introduced '<explicit_scan>' node that like the '<network>' node has
an 'ssid' attribute. If the specified SSID is valid, it is incorporated
into the scan request to actively probe for this network. As the node
requests explicit scanning only, a corresponding '<network>' node is
required to actually connect to the hidden network.
The 'explicit_scan' attribute of the '<network>' node has been removed.
The following exemplary configuration shows how to set up the driver to
attempt joining two different networks, one of which is hidden.
The initial scan interval is set to 10 seconds, and the signal quality is
updated every 30 seconds while connected to a network.
!<wifi_config scan_interval="10" update_quality_interval="30">
! <explicit_scan ssid="Skynet"/>
! <network ssid="Zero" protection="WPA2" passphrase="allyourbase"/>
! <network ssid="Skynet" protection="WPA3" passphrase="illbeback"/>
!</wifi_config>
For more information, please consult the driver's
[https://github.com/genodelabs/genode/blob/master/repos/dde_linux/src/driver/wifi/README - documentation],
which now features a best-practices section that explains how the driver is
best operated and highlights the difference between a managed configuration
(as used in Sculpt OS) and a user-generated one.
Audio driver updated to OpenBSD 7.6
===================================
With this release, we updated our OpenBSD-based audio driver to a more recent
revision that corresponds to version 7.6. It supports newer devices, e.g. Alder
Lake-N, and includes a fix for using message-signaled interrupts (MSI) with
HDA devices as found in AMD-based systems.
AVX and hardware-based AES in virtual machines
==============================================
The current release adds support for requesting and transferring the AVX FPU
state via Genode's VM-session interface. With this prerequisite fulfilled, we
enabled the announcement of the AVX feature to guest VMs in our port of
VirtualBox 6.
Additionally, we enabled the announcement of the AES and RDRAND CPU features to
guest VMs to further improve the utilization of the hardware.
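Whether these features actually reach a guest can be verified from inside the
VM, for example with a small CPUID-based check. The following sketch assumes a
Linux guest with a GCC toolchain and is not part of Genode; it merely reads the
feature bits of CPUID leaf 1:
!#include <cpuid.h>
!#include <stdio.h>
!
!int main()
!{
!	unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
!
!	/* ECX of CPUID leaf 1 carries the AES-NI, AVX, and RDRAND feature bits */
!	__get_cpuid(1, &eax, &ebx, &ecx, &edx);
!
!	printf("AVX:    %s\n", (ecx & bit_AVX)   ? "announced" : "missing");
!	printf("AES-NI: %s\n", (ecx & bit_AES)   ? "announced" : "missing");
!	printf("RDRAND: %s\n", (ecx & bit_RDRND) ? "announced" : "missing");
!	return 0;
!}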
Build system and tools
######################
Extended depot-tool safeguards
------------------------------
When using the run tool's '--depot-auto-update' feature while switching
between different git topic branches with committed recipe hashes, a binary
archive present in the depot may accidentally not match its ingredients
because the depot/build tool's 'REBUILD=' mode - as used by the depot
auto-update mechanism - merely looks at the archive versions. This situation
is arguably rare. But when it occurs, its reach and effects are hard to
predict. To rule out this corner case early, the depot/build tool has now been
extended to record the hashes of the ingredients of binary archives. When a
rebuild is skipped because the desired version presumably already exists as a
binary archive, the recorded hashes are compared to the current state of the
ingredients (src and api archives). Thereby, inconsistencies are promptly
reported to the user.
Users of the depot tool will notice .hash files appearing alongside src and
api archives. Those files contain the hash value of the content of the
respective archive. Each binary archive built is now also accompanied by
a .hash file, which contains a list of hash values of the ingredients that went
into the binary archive. Thanks to these .hash files, the consistency between
binaries and their ingredients can be checked quickly.
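For a quick consistency glance, the recorded hash files can be inspected with
plain shell tools. The following sketch assumes the default depot location at
<genode-dir>/depot:
! # list each recorded .hash file together with its content
! find <genode-dir>/depot -name '*.hash' -print -exec cat {} \;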
_As a note of caution, when switching to Genode 24.11 with an existing depot,_
_one will possibly need to remove existing depot archives (as listed by the_
_diagnostic messages) because those archives are not yet accompanied by_
_.hash files._