Initial commit

This commit is contained in:
ssimnb 2026-03-04 07:19:48 +01:00
commit ef80f65fbf
136 changed files with 13728 additions and 0 deletions

LICENSE
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

94
Makefile Normal file
View file

@ -0,0 +1,94 @@
BUILD_DIR=build
CC = gcc
AS = nasm
LD = ld
SRC_DIR := src build/flanterm
C_SOURCES := $(shell find $(SRC_DIR) -type f -name '*.c')
C_OBJECTS := $(patsubst %.c,$(BUILD_DIR)/%.o,$(C_SOURCES))
ASM_SOURCES := $(shell find $(SRC_DIR) -type f -name '*.asm')
ASM_OBJECTS := $(patsubst %.asm,$(BUILD_DIR)/%asm.o,$(ASM_SOURCES))
CFLAGS += -Wall \
-Wextra \
-std=gnu11 \
-ffreestanding \
-fno-stack-protector \
-fno-stack-check \
-fno-lto \
-fPIE \
-m64 \
-march=x86-64 \
-mno-80387 \
-mno-mmx \
-mno-sse \
-mno-sse2 \
-mno-red-zone \
-I ./include \
-O0 \
-ggdb3 \
-g
LDFLAGS += -m elf_x86_64 \
-nostdlib \
-static \
-pie \
--no-dynamic-linker \
-z text \
-z max-page-size=0x1000 \
-T linker.ld
NASMFLAGS = -f elf64 -g -F dwarf
all: amd64

deps:
	mkdir -p $(BUILD_DIR) || true
	rm -rf build/limine
	git clone https://github.com/limine-bootloader/limine.git --branch=v10.x-binary --depth=1 build/limine
	git clone https://codeberg.org/Limine/limine-protocol/ build/limine-protocol
	make -C build/limine
	cp build/limine-protocol/include/limine.h include/
	rm -rf build/flanterm
	git clone https://codeberg.org/mintsuki/flanterm build/flanterm
	rm -rf build/uACPI
	rm -rf include/uACPI
	git clone https://github.com/uACPI/uACPI.git build/uACPI
	mkdir include/uACPI
	cp -r build/uACPI/include/* include/

$(BUILD_DIR)/%.o: %.c
	mkdir -p $(dir $@)
	$(CC) -c $< -o $@ $(CFLAGS)

$(BUILD_DIR)/%asm.o: %.asm
	mkdir -p $(dir $@)
	$(AS) $< -o $@ $(NASMFLAGS)

amd64: $(C_OBJECTS) $(ASM_OBJECTS)
	$(LD) -o $(BUILD_DIR)/Neobbo.elf $(C_OBJECTS) $(ASM_OBJECTS) $(LDFLAGS)
	mkdir -p iso_root
	cp -v $(BUILD_DIR)/Neobbo.elf limine.conf build/limine/limine-bios.sys \
		build/limine/limine-bios-cd.bin build/limine/limine-uefi-cd.bin iso_root/
	mkdir -p iso_root/EFI/BOOT
	cp -v build/limine/BOOTX64.EFI iso_root/EFI/BOOT/
	cp -v build/limine/BOOTIA32.EFI iso_root/EFI/BOOT/
	xorriso -as mkisofs -b limine-bios-cd.bin \
		-no-emul-boot -boot-load-size 4 -boot-info-table \
		--efi-boot limine-uefi-cd.bin \
		-efi-boot-part --efi-boot-image --protective-msdos-label \
		iso_root -o $(BUILD_DIR)/Neobbo.iso
	./build/limine/limine bios-install $(BUILD_DIR)/Neobbo.iso

disk:
	dd if=/dev/zero of=disk.img bs=1M count=128

elftest:
	$(CC) src/elf/elftest.c -o $(BUILD_DIR)/elftest -ffreestanding -Isrc/include -static -fPIE -nostdlib

clean:
	rm -rf build/ iso_root

21
README.md Normal file
View file

@ -0,0 +1,21 @@
# Neobbo
Hobby operating system for the x86_64 architecture, written in C. Licensed under GPLv3.
## How to build
First run `make deps` to clone and build Limine, Flanterm, uACPI and the limine-protocol headers.
Then run `make all` - make sure to override the `CC`, `AS` and `LD` variables to match your cross-compiling toolchain.
In the `build` folder you should then have a `Neobbo.iso` file.
To try out Neobbo you can use QEMU:
`qemu-system-x86_64 -cdrom build/Neobbo.iso -machine q35 -m 512M`
## External projects
- [Limine](https://github.com/limine-bootloader/limine) for the bootloader
- [Flanterm](https://codeberg.org/mintsuki/flanterm) for the terminal
- [uACPI](https://github.com/uacpi/uacpi) for the AML interpreter and other ACPI functionality

18
autodebug.sh Executable file
View file

@ -0,0 +1,18 @@
#!/bin/bash
# How it works:
# the arguments are split between QEMU and GDB, and the rest happens automatically
# 1st arg: terminal emulator to spawn GDB in
# 2nd arg: location to set a breakpoint on
# 3rd+ args: all passed through to QEMU
termname=$1
breakpoint=$2
shift 2
qemu-system-x86_64 -s -S "$@" &
sleep 1
"$termname" -e gdb -ex 'target remote localhost:1234' -ex "break $breakpoint" build/Neobbo.elf

5
bochsrc Normal file
View file

@ -0,0 +1,5 @@
display_library: x, options="gui_debug"
ata0-master: type=cdrom, path="build/Neobbo.iso", status=inserted
boot: cdrom
memory: guest=512, host=512
cpu: count=3, ips=95000000

BIN
build/Neobbo.elf Executable file

Binary file not shown.

BIN
build/Neobbo.iso Normal file

Binary file not shown.

Binary file not shown.

Binary file not shown.

1
build/flanterm Submodule

@ -0,0 +1 @@
Subproject commit 26f631fcc15bb7faea83572213cae5a0287fc3de

1
build/limine Submodule

@ -0,0 +1 @@
Subproject commit 38ff2c855aabb92e4cfa2cc7ef0c8af665ecba94

1
build/limine-protocol Submodule

@ -0,0 +1 @@
Subproject commit fd3197997ec608484a2eb4e3d2a8591378087e7d

BIN
build/src/amd64_smp.o Normal file

Binary file not shown.

BIN
build/src/gdt.o Normal file

Binary file not shown.

BIN
build/src/gdtasm.o Normal file

Binary file not shown.

BIN
build/src/idt.o Normal file

Binary file not shown.

BIN
build/src/idtasm.o Normal file

Binary file not shown.

BIN
build/src/io.o Normal file

Binary file not shown.

BIN
build/src/kinfo.o Normal file

Binary file not shown.

BIN
build/src/lib/assert.o Normal file

Binary file not shown.

BIN
build/src/lib/kprint.o Normal file

Binary file not shown.

BIN
build/src/lib/lock.o Normal file

Binary file not shown.

BIN
build/src/lib/string.o Normal file

Binary file not shown.

BIN
build/src/main.o Normal file

Binary file not shown.

BIN
build/src/mm/kmalloc.o Normal file

Binary file not shown.

BIN
build/src/mm/page.o Normal file

Binary file not shown.

BIN
build/src/mm/pmm.o Normal file

Binary file not shown.

BIN
build/src/mm/slab.o Normal file

Binary file not shown.

BIN
build/src/mm/vmm.o Normal file

Binary file not shown.

BIN
build/src/smp.o Normal file

Binary file not shown.

1
build/uACPI Submodule

@ -0,0 +1 @@
Subproject commit e05715b2e6a3ae913aecdb86f4fd2dba30304e45

26
bx_enh_dbg.ini Normal file
View file

@ -0,0 +1,26 @@
# bx_enh_dbg_ini
SeeReg[0] = TRUE
SeeReg[1] = TRUE
SeeReg[2] = TRUE
SeeReg[3] = TRUE
SeeReg[4] = FALSE
SeeReg[5] = FALSE
SeeReg[6] = FALSE
SeeReg[7] = FALSE
SingleCPU = FALSE
ShowIOWindows = TRUE
ShowButtons = TRUE
SeeRegColors = TRUE
ignoreNxtT = TRUE
ignSSDisasm = TRUE
UprCase = 0
DumpInAsciiMode = 3
isLittleEndian = TRUE
DefaultAsmLines = 512
DumpWSIndex = 0
DockOrder = 0x123
ListWidthPix[0] = 158
ListWidthPix[1] = 218
ListWidthPix[2] = 250
MainWindow = 0, 0, 714, 500
FontName = Normal

4
compile_flags.txt Normal file
View file

@ -0,0 +1,4 @@
-I./include
-Wall
-Wno-incompatible-library-redeclaration
-Wextra

View file

@ -0,0 +1,4 @@
void apic_init(void);
void ap_apic_init();
void apic_sleep(int ms);

View file

@ -0,0 +1,17 @@
#include <stdint.h>
typedef struct gdt_descriptor {
uint16_t limit_low;
uint16_t base_low;
uint8_t base_middle;
uint8_t access;
uint8_t granularity;
uint8_t base_high;
} __attribute__((packed)) gdt_descriptor;
typedef struct gdt_register {
uint16_t limit;
uint64_t base_address;
} __attribute__((packed)) gdt_register;
void set_gdt(void);
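The descriptor layout above scatters the 32-bit segment base across three fields and the 20-bit limit across two. A minimal sketch of the packing (the helper name `make_descriptor` is hypothetical, not part of the header):

```c
#include <assert.h>
#include <stdint.h>

typedef struct gdt_descriptor {
    uint16_t limit_low;
    uint16_t base_low;
    uint8_t  base_middle;
    uint8_t  access;
    uint8_t  granularity; /* low nibble: limit bits 16-19, high nibble: flags */
    uint8_t  base_high;
} __attribute__((packed)) gdt_descriptor;

/* Hypothetical helper: pack a base/limit pair into the descriptor layout. */
static gdt_descriptor make_descriptor(uint32_t base, uint32_t limit,
                                      uint8_t access, uint8_t flags) {
    gdt_descriptor d;
    d.limit_low   = limit & 0xFFFF;
    d.base_low    = base & 0xFFFF;
    d.base_middle = (base >> 16) & 0xFF;
    d.access      = access;
    d.granularity = ((limit >> 16) & 0x0F) | ((flags & 0x0F) << 4);
    d.base_high   = (base >> 24) & 0xFF;
    return d;
}
```

The packed struct must come out to exactly 8 bytes, which is what the CPU expects for each GDT slot.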

View file

@ -0,0 +1,42 @@
#include <error.h>
#include <stdbool.h>
#include <stdint.h>
typedef struct idt_descriptor {
uint16_t offset_low;
uint16_t segment_sel;
uint8_t ist;
uint8_t attributes;
uint16_t offset_high;
uint32_t offset_higher;
uint32_t reserved;
} __attribute__((packed)) idt_descriptor;
typedef struct idt_register {
uint16_t limit;
uint64_t base_address;
} __attribute__((packed)) idt_register;
typedef struct interrupt_frame {
uint64_t r15, r14, r13, r12, r11, r10, r9, r8, rdi, rsi, rbp, rdx, rcx, rbx, rax;
uint64_t int_no, err;
uint64_t rip, cs, rflags, rsp, ss;
} __attribute__((packed)) interrupt_frame;
typedef struct stack_frame {
struct stack_frame *rbp;
uint64_t rip;
} __attribute__((packed)) stack_frame;
typedef struct irq_t {
void *base;
bool in_use;
}irq_t;
void set_idt_descriptor(uint8_t vector, void *base, uint8_t flags);
kstatus register_irq_vector(uint8_t vector, void *base, uint8_t flags);
int register_irq(void *base, uint8_t flags);
void set_idt(void);
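A 64-bit IDT gate splits the handler address across the three offset fields. A sketch of what `set_idt_descriptor` has to compute, assuming the standard long-mode gate layout (the helper names `encode_gate`/`gate_offset` are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

typedef struct idt_descriptor {
    uint16_t offset_low;
    uint16_t segment_sel;
    uint8_t  ist;
    uint8_t  attributes;
    uint16_t offset_high;
    uint32_t offset_higher;
    uint32_t reserved;
} __attribute__((packed)) idt_descriptor;

/* Split a 64-bit handler address into the gate's three offset fields. */
static idt_descriptor encode_gate(uint64_t handler, uint16_t sel, uint8_t flags) {
    idt_descriptor d = {0};
    d.offset_low    = handler & 0xFFFF;
    d.offset_high   = (handler >> 16) & 0xFFFF;
    d.offset_higher = (handler >> 32) & 0xFFFFFFFF;
    d.segment_sel   = sel;
    d.attributes    = flags;
    return d;
}

/* Reassemble the handler address, e.g. for debugging a loaded IDT. */
static uint64_t gate_offset(const idt_descriptor *d) {
    return (uint64_t)d->offset_low
         | ((uint64_t)d->offset_high << 16)
         | ((uint64_t)d->offset_higher << 32);
}
```

Round-tripping an address through the two helpers should be lossless, and the packed gate must be 16 bytes.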

View file

@ -0,0 +1,13 @@
#include "error.h"
#include <stdint.h>
void ioapic_init(void);
void write_redir_entry(uint8_t reg, uint64_t data);
kstatus set_redir_entry(uint8_t pin, uint8_t vector, uint8_t delivery, uint8_t trigger, uint8_t destination_field, uint8_t destination_mode);
#define IOREGSEL 0x0
#define IOWIN 0x10
#define IOAPICID 0x0
#define IOAPICVER 0x1
#define IOAPICARB 0x2
#define IOREDTBL(x) (0x10 + ((x) * 2)) // redirection entries 0-23
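Each of the 24 redirection entries is 64 bits wide but the I/O APIC exposes only 32-bit registers, so every entry occupies two consecutive register indices starting at 0x10. A small sketch of the index math (with the macro argument parenthesized; `redir_low`/`redir_high` are hypothetical helper names):

```c
#include <assert.h>
#include <stdint.h>

/* Two consecutive 32-bit registers per redirection entry, starting at 0x10. */
#define IOREDTBL(x) (0x10 + ((x) * 2))

/* Register index holding the low/high half of a pin's redirection entry. */
static uint8_t redir_low(uint8_t pin)  { return IOREDTBL(pin); }
static uint8_t redir_high(uint8_t pin) { return IOREDTBL(pin) + 1; }
```

`write_redir_entry` would select `redir_low(pin)` via IOREGSEL, write the low dword through IOWIN, then repeat for `redir_high(pin)`.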

View file

@ -0,0 +1,11 @@
#include <stdint.h>
enum USABLE_TIMERS {
HPET = 0,
PMT,
PIT,
};
void timer_init(void);
void apic_timer_handler(void);
void sleep(int ms);

View file

@ -0,0 +1,6 @@
#include "error.h"
#include <stdint.h>
kstatus tsc_init();
uint64_t tsc_get_timestamp();

12
include/arch/amd64/io.h Normal file
View file

@ -0,0 +1,12 @@
#include <stdint.h>
void outb(uint16_t port, uint8_t val);
void outw(uint16_t port, uint16_t val);
void outl(uint16_t port, uint32_t val);
uint8_t inb(uint16_t port);
uint16_t inw(uint16_t port);
uint32_t inl(uint16_t port);
void wrmsr(uint64_t msr, uint64_t value);
uint64_t rdmsr(uint64_t msr);

10
include/assert.h Normal file
View file

@ -0,0 +1,10 @@
#pragma once
// Thanks to Managarm:
// https://github.com/managarm/managarm/blob/master/kernel/klibc/assert.h
void __assert_fail(const char *assertion, const char *file, unsigned int line,
const char *function);
#define assert(assertion) ((void)((assertion) \
|| (__assert_fail(#assertion, __FILE__, __LINE__, __func__), 0)))
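The macro above relies on `||` short-circuiting: when the assertion holds, `__assert_fail` never runs, and the `, 0` comma expression gives the right-hand side an `int` value so both operands of `||` type-check. A sketch that makes the expansion observable by counting failures instead of panicking (`kassert` and `kassert_fail` are hypothetical names chosen to avoid colliding with the host's `assert.h`):

```c
#include <assert.h>

static int failures = 0;

/* Stub standing in for __assert_fail: the kernel would print the
 * assertion text and halt; here we just count the failure. */
static void kassert_fail(const char *assertion, const char *file,
                         unsigned int line, const char *function) {
    (void)assertion; (void)file; (void)line; (void)function;
    failures++;
}

/* Same shape as the header's assert macro. */
#define kassert(assertion) ((void)((assertion) \
    || (kassert_fail(#assertion, __FILE__, __LINE__, __func__), 0)))
```

A passing assertion leaves the counter untouched; a failing one bumps it exactly once.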

1
include/drivers/ahci.h Normal file
View file

@ -0,0 +1 @@
void ahci_init();

3
include/drivers/pmt.h Normal file
View file

@ -0,0 +1,3 @@
#include <stdint.h>
int pmt_init();
void pmt_delay(uint64_t us);

8
include/drivers/serial.h Normal file
View file

@ -0,0 +1,8 @@
#include <stdint.h>
void serial_write(uint8_t data);
uint8_t serial_read();
void serial_print(char *str);
void serial_init();

17
include/error.h Normal file
View file

@ -0,0 +1,17 @@
#ifndef ERROR_H
#define ERROR_H
typedef enum {
/* Success */
KERNEL_STATUS_SUCCESS,
KERNEL_MUTEX_ACQUIRED,
KERNEL_MUTEX_LOCKED,
/* General error */
KERNEL_STATUS_ERROR,
} kstatus;
#endif

1
include/kmath.h Normal file
View file

@ -0,0 +1 @@
#define abs(x) (((x) < 0) ? -(x) : (x))
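Function-like macros need parentheses around every argument use and around the whole expansion, otherwise operator precedence mangles expressions like `abs(a - b)` or `abs(x) + 1` (an unparenthesized `(x<0) ? -x : x` makes `abs(-1) + 1` expand to `(-1 < 0) ? -(-1) : -1 + 1`, which evaluates to 1). A fully parenthesized sketch (`kabs` is a hypothetical name used to avoid shadowing the libc `abs`):

```c
#include <assert.h>

/* Fully parenthesized absolute-value macro: safe in larger expressions. */
#define kabs(x) (((x) < 0) ? -(x) : (x))
```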

42
include/kprint.h Normal file
View file

@ -0,0 +1,42 @@
#include <stddef.h>
#include <stdint.h>
#include "../build/flanterm/src/flanterm.h"
#include "../build/flanterm/src/flanterm_backends/fb.h"
enum {
LOG_INFO = 0,
LOG_WARN,
LOG_ERROR,
LOG_SUCCESS,
};
void klog(const char *func, const char *msg, ...);
int kprintf(const char *format_string, ...);
int serial_kprintf(const char *format_string, ...);
void print_char(struct flanterm_context *ft_ctx, char c);
void print_str(struct flanterm_context *ft_ctx, char *str);
void print_int(struct flanterm_context *ft_ctx, uint64_t i);
void print_hex(struct flanterm_context *ft_ctx, uint64_t num);
void print_bin(struct flanterm_context *ft_ctx, uint64_t num);
void serial_print_char(char c);
void serial_print_int(uint64_t i);
void serial_print_hex(uint64_t num);
void serial_print_bin(uint64_t num);
void kernel_framebuffer_print(char *buffer, size_t n);
void kernel_serial_print(char *buffer, size_t n);
char toupper(char c);
char dtoc(int digit);
#define ANSI_COLOR_RED "\x1b[31m"
#define ANSI_COLOR_GREEN "\x1b[32m"
#define ANSI_COLOR_YELLOW "\x1b[33m"
#define ANSI_COLOR_BLUE "\x1b[34m"
#define ANSI_COLOR_MAGENTA "\x1b[35m"
#define ANSI_COLOR_CYAN "\x1b[36m"
#define ANSI_COLOR_RESET "\x1b[0m"

587
include/limine.h Normal file
View file

@ -0,0 +1,587 @@
/* SPDX-License-Identifier: 0BSD */
/* Copyright (C) 2022-2026 Mintsuki and contributors.
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted.
*
* THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
* WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
* MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
* SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
* WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
* OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
* CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
*/
#ifndef LIMINE_H
#define LIMINE_H 1
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
/* Misc */
#ifdef LIMINE_NO_POINTERS
# define LIMINE_PTR(TYPE) uint64_t
#else
# define LIMINE_PTR(TYPE) TYPE
#endif
#define LIMINE_REQUESTS_START_MARKER { 0xf6b8f4b39de7d1ae, 0xfab91a6940fcb9cf, \
0x785c6ed015d3e316, 0x181e920a7852b9d9 }
#define LIMINE_REQUESTS_END_MARKER { 0xadc0e0531bb10d03, 0x9572709f31764c62 }
#define LIMINE_BASE_REVISION(N) { 0xf9562b2d5c95a6c8, 0x6a7b384944536bdc, (N) }
#define LIMINE_BASE_REVISION_SUPPORTED(VAR) ((VAR)[2] == 0)
#define LIMINE_LOADED_BASE_REVISION_VALID(VAR) ((VAR)[1] != 0x6a7b384944536bdc)
#define LIMINE_LOADED_BASE_REVISION(VAR) ((VAR)[1])
#define LIMINE_COMMON_MAGIC 0xc7b1dd30df4c8b88, 0x0a82e883a194f07b
struct limine_uuid {
uint32_t a;
uint16_t b;
uint16_t c;
uint8_t d[8];
};
#define LIMINE_MEDIA_TYPE_GENERIC 0
#define LIMINE_MEDIA_TYPE_OPTICAL 1
#define LIMINE_MEDIA_TYPE_TFTP 2
struct limine_file {
uint64_t revision;
LIMINE_PTR(void *) address;
uint64_t size;
LIMINE_PTR(char *) path;
LIMINE_PTR(char *) string;
uint32_t media_type;
uint32_t unused;
uint32_t tftp_ip;
uint32_t tftp_port;
uint32_t partition_index;
uint32_t mbr_disk_id;
struct limine_uuid gpt_disk_uuid;
struct limine_uuid gpt_part_uuid;
struct limine_uuid part_uuid;
};
/* Boot info */
#define LIMINE_BOOTLOADER_INFO_REQUEST_ID { LIMINE_COMMON_MAGIC, 0xf55038d8e2a1202f, 0x279426fcf5f59740 }
struct limine_bootloader_info_response {
uint64_t revision;
LIMINE_PTR(char *) name;
LIMINE_PTR(char *) version;
};
struct limine_bootloader_info_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_bootloader_info_response *) response;
};
/* Executable command line */
#define LIMINE_EXECUTABLE_CMDLINE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x4b161536e598651e, 0xb390ad4a2f1f303a }
struct limine_executable_cmdline_response {
uint64_t revision;
LIMINE_PTR(char *) cmdline;
};
struct limine_executable_cmdline_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_executable_cmdline_response *) response;
};
/* Firmware type */
#define LIMINE_FIRMWARE_TYPE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x8c2f75d90bef28a8, 0x7045a4688eac00c3 }
#define LIMINE_FIRMWARE_TYPE_X86BIOS 0
#define LIMINE_FIRMWARE_TYPE_EFI32 1
#define LIMINE_FIRMWARE_TYPE_EFI64 2
#define LIMINE_FIRMWARE_TYPE_SBI 3
struct limine_firmware_type_response {
uint64_t revision;
uint64_t firmware_type;
};
struct limine_firmware_type_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_firmware_type_response *) response;
};
/* Stack size */
#define LIMINE_STACK_SIZE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x224ef0460a8e8926, 0xe1cb0fc25f46ea3d }
struct limine_stack_size_response {
uint64_t revision;
};
struct limine_stack_size_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_stack_size_response *) response;
uint64_t stack_size;
};
/* HHDM */
#define LIMINE_HHDM_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x48dcf1cb8ad2b852, 0x63984e959a98244b }
struct limine_hhdm_response {
uint64_t revision;
uint64_t offset;
};
struct limine_hhdm_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_hhdm_response *) response;
};
/* Framebuffer */
#define LIMINE_FRAMEBUFFER_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x9d5827dcd881dd75, 0xa3148604f6fab11b }
#define LIMINE_FRAMEBUFFER_RGB 1
struct limine_video_mode {
uint64_t pitch;
uint64_t width;
uint64_t height;
uint16_t bpp;
uint8_t memory_model;
uint8_t red_mask_size;
uint8_t red_mask_shift;
uint8_t green_mask_size;
uint8_t green_mask_shift;
uint8_t blue_mask_size;
uint8_t blue_mask_shift;
};
struct limine_framebuffer {
LIMINE_PTR(void *) address;
uint64_t width;
uint64_t height;
uint64_t pitch;
uint16_t bpp;
uint8_t memory_model;
uint8_t red_mask_size;
uint8_t red_mask_shift;
uint8_t green_mask_size;
uint8_t green_mask_shift;
uint8_t blue_mask_size;
uint8_t blue_mask_shift;
uint8_t unused[7];
uint64_t edid_size;
LIMINE_PTR(void *) edid;
/* Response revision 1 */
uint64_t mode_count;
LIMINE_PTR(struct limine_video_mode **) modes;
};
struct limine_framebuffer_response {
uint64_t revision;
uint64_t framebuffer_count;
LIMINE_PTR(struct limine_framebuffer **) framebuffers;
};
struct limine_framebuffer_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_framebuffer_response *) response;
};
/* Paging mode */
#define LIMINE_PAGING_MODE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x95c1a0edab0944cb, 0xa4e5cb3842f7488a }
#define LIMINE_PAGING_MODE_X86_64_4LVL 0
#define LIMINE_PAGING_MODE_X86_64_5LVL 1
#define LIMINE_PAGING_MODE_X86_64_MIN LIMINE_PAGING_MODE_X86_64_4LVL
#define LIMINE_PAGING_MODE_X86_64_DEFAULT LIMINE_PAGING_MODE_X86_64_4LVL
#define LIMINE_PAGING_MODE_AARCH64_4LVL 0
#define LIMINE_PAGING_MODE_AARCH64_5LVL 1
#define LIMINE_PAGING_MODE_AARCH64_MIN LIMINE_PAGING_MODE_AARCH64_4LVL
#define LIMINE_PAGING_MODE_AARCH64_DEFAULT LIMINE_PAGING_MODE_AARCH64_4LVL
#define LIMINE_PAGING_MODE_RISCV_SV39 0
#define LIMINE_PAGING_MODE_RISCV_SV48 1
#define LIMINE_PAGING_MODE_RISCV_SV57 2
#define LIMINE_PAGING_MODE_RISCV_MIN LIMINE_PAGING_MODE_RISCV_SV39
#define LIMINE_PAGING_MODE_RISCV_DEFAULT LIMINE_PAGING_MODE_RISCV_SV48
#define LIMINE_PAGING_MODE_LOONGARCH_4LVL 0
#define LIMINE_PAGING_MODE_LOONGARCH_MIN LIMINE_PAGING_MODE_LOONGARCH_4LVL
#define LIMINE_PAGING_MODE_LOONGARCH_DEFAULT LIMINE_PAGING_MODE_LOONGARCH_4LVL
struct limine_paging_mode_response {
uint64_t revision;
uint64_t mode;
};
struct limine_paging_mode_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_paging_mode_response *) response;
uint64_t mode;
uint64_t max_mode;
uint64_t min_mode;
};
/* MP */
#define LIMINE_MP_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x95a67b819a1b857e, 0xa0b61b723b6a73e0 }
struct limine_mp_info;
typedef void (*limine_goto_address)(struct limine_mp_info *);
#if defined (__x86_64__) || defined (__i386__)
#define LIMINE_MP_RESPONSE_X86_64_X2APIC (1 << 0)
struct limine_mp_info {
uint32_t processor_id;
uint32_t lapic_id;
uint64_t reserved;
LIMINE_PTR(limine_goto_address) goto_address;
uint64_t extra_argument;
};
struct limine_mp_response {
uint64_t revision;
uint32_t flags;
uint32_t bsp_lapic_id;
uint64_t cpu_count;
LIMINE_PTR(struct limine_mp_info **) cpus;
};
#elif defined (__aarch64__)
struct limine_mp_info {
uint32_t processor_id;
uint32_t reserved1;
uint64_t mpidr;
uint64_t reserved;
LIMINE_PTR(limine_goto_address) goto_address;
uint64_t extra_argument;
};
struct limine_mp_response {
uint64_t revision;
uint64_t flags;
uint64_t bsp_mpidr;
uint64_t cpu_count;
LIMINE_PTR(struct limine_mp_info **) cpus;
};
#elif defined (__riscv) && (__riscv_xlen == 64)
struct limine_mp_info {
uint64_t processor_id;
uint64_t hartid;
uint64_t reserved;
LIMINE_PTR(limine_goto_address) goto_address;
uint64_t extra_argument;
};
struct limine_mp_response {
uint64_t revision;
uint64_t flags;
uint64_t bsp_hartid;
uint64_t cpu_count;
LIMINE_PTR(struct limine_mp_info **) cpus;
};
#elif defined (__loongarch__) && (__loongarch_grlen == 64)
struct limine_mp_info {
uint64_t reserved;
};
struct limine_mp_response {
uint64_t cpu_count;
LIMINE_PTR(struct limine_mp_info **) cpus;
};
#else
#error Unknown architecture
#endif
#define LIMINE_MP_REQUEST_X86_64_X2APIC (1 << 0)
struct limine_mp_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_mp_response *) response;
uint64_t flags;
};
/* Memory map */
#define LIMINE_MEMMAP_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x67cf3d9d378a806f, 0xe304acdfc50c3c62 }
#define LIMINE_MEMMAP_USABLE 0
#define LIMINE_MEMMAP_RESERVED 1
#define LIMINE_MEMMAP_ACPI_RECLAIMABLE 2
#define LIMINE_MEMMAP_ACPI_NVS 3
#define LIMINE_MEMMAP_BAD_MEMORY 4
#define LIMINE_MEMMAP_BOOTLOADER_RECLAIMABLE 5
#define LIMINE_MEMMAP_EXECUTABLE_AND_MODULES 6
#define LIMINE_MEMMAP_FRAMEBUFFER 7
#define LIMINE_MEMMAP_RESERVED_MAPPED 8
struct limine_memmap_entry {
uint64_t base;
uint64_t length;
uint64_t type;
};
struct limine_memmap_response {
uint64_t revision;
uint64_t entry_count;
LIMINE_PTR(struct limine_memmap_entry **) entries;
};
struct limine_memmap_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_memmap_response *) response;
};
/* Entry point */
#define LIMINE_ENTRY_POINT_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x13d86c035a1cd3e1, 0x2b0caa89d8f3026a }
typedef void (*limine_entry_point)(void);
struct limine_entry_point_response {
uint64_t revision;
};
struct limine_entry_point_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_entry_point_response *) response;
LIMINE_PTR(limine_entry_point) entry;
};
/* Executable File */
#define LIMINE_EXECUTABLE_FILE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0xad97e90e83f1ed67, 0x31eb5d1c5ff23b69 }
struct limine_executable_file_response {
uint64_t revision;
LIMINE_PTR(struct limine_file *) executable_file;
};
struct limine_executable_file_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_executable_file_response *) response;
};
/* Module */
#define LIMINE_MODULE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x3e7e279702be32af, 0xca1c4f3bd1280cee }
#define LIMINE_INTERNAL_MODULE_REQUIRED (1 << 0)
#define LIMINE_INTERNAL_MODULE_COMPRESSED (1 << 1)
struct limine_internal_module {
LIMINE_PTR(const char *) path;
LIMINE_PTR(const char *) string;
uint64_t flags;
};
struct limine_module_response {
uint64_t revision;
uint64_t module_count;
LIMINE_PTR(struct limine_file **) modules;
};
struct limine_module_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_module_response *) response;
/* Request revision 1 */
uint64_t internal_module_count;
LIMINE_PTR(struct limine_internal_module **) internal_modules;
};
/* RSDP */
#define LIMINE_RSDP_REQUEST_ID { LIMINE_COMMON_MAGIC, 0xc5e77b6b397e7b43, 0x27637845accdcf3c }
struct limine_rsdp_response {
uint64_t revision;
LIMINE_PTR(void *) address;
};
struct limine_rsdp_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_rsdp_response *) response;
};
/* SMBIOS */
#define LIMINE_SMBIOS_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x9e9046f11e095391, 0xaa4a520fefbde5ee }
struct limine_smbios_response {
uint64_t revision;
LIMINE_PTR(void *) entry_32;
LIMINE_PTR(void *) entry_64;
};
struct limine_smbios_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_smbios_response *) response;
};
/* EFI system table */
#define LIMINE_EFI_SYSTEM_TABLE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x5ceba5163eaaf6d6, 0x0a6981610cf65fcc }
struct limine_efi_system_table_response {
uint64_t revision;
LIMINE_PTR(void *) address;
};
struct limine_efi_system_table_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_efi_system_table_response *) response;
};
/* EFI memory map */
#define LIMINE_EFI_MEMMAP_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x7df62a431d6872d5, 0xa4fcdfb3e57306c8 }
struct limine_efi_memmap_response {
uint64_t revision;
LIMINE_PTR(void *) memmap;
uint64_t memmap_size;
uint64_t desc_size;
uint64_t desc_version;
};
struct limine_efi_memmap_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_efi_memmap_response *) response;
};
/* Date at boot */
#define LIMINE_DATE_AT_BOOT_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x502746e184c088aa, 0xfbc5ec83e6327893 }
struct limine_date_at_boot_response {
uint64_t revision;
int64_t timestamp;
};
struct limine_date_at_boot_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_date_at_boot_response *) response;
};
/* Executable address */
#define LIMINE_EXECUTABLE_ADDRESS_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x71ba76863cc55f63, 0xb2644a48c516a487 }
struct limine_executable_address_response {
uint64_t revision;
uint64_t physical_base;
uint64_t virtual_base;
};
struct limine_executable_address_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_executable_address_response *) response;
};
/* Device Tree Blob */
#define LIMINE_DTB_REQUEST_ID { LIMINE_COMMON_MAGIC, 0xb40ddb48fb54bac7, 0x545081493f81ffb7 }
struct limine_dtb_response {
uint64_t revision;
LIMINE_PTR(void *) dtb_ptr;
};
struct limine_dtb_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_dtb_response *) response;
};
/* RISC-V Boot Hart ID */
#define LIMINE_RISCV_BSP_HARTID_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x1369359f025525f9, 0x2ff2a56178391bb6 }
struct limine_riscv_bsp_hartid_response {
uint64_t revision;
uint64_t bsp_hartid;
};
struct limine_riscv_bsp_hartid_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_riscv_bsp_hartid_response *) response;
};
/* Bootloader Performance */
#define LIMINE_BOOTLOADER_PERFORMANCE_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x6b50ad9bf36d13ad, 0xdc4c7e88fc759e17 }
struct limine_bootloader_performance_response {
uint64_t revision;
uint64_t reset_usec;
uint64_t init_usec;
uint64_t exec_usec;
};
struct limine_bootloader_performance_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_bootloader_performance_response *) response;
};
#define LIMINE_X86_64_KEEP_IOMMU_REQUEST_ID { LIMINE_COMMON_MAGIC, 0x8ebaabe51f490179, 0x2aa86a59ffb4ab0f }
struct limine_x86_64_keep_iommu_response {
uint64_t revision;
};
struct limine_x86_64_keep_iommu_request {
uint64_t id[4];
uint64_t revision;
LIMINE_PTR(struct limine_x86_64_keep_iommu_response *) response;
};
#ifdef __cplusplus
}
#endif
#endif
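The header's request/response pattern works by the kernel declaring a request struct as a global; Limine scans the executable for the magic `id[]` words and fills in `response` before jumping to the entry point. A trimmed-down sketch of declaring the framebuffer request (only the fields needed for the pattern are reproduced):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Trimmed copies of the header's definitions, enough to show the pattern. */
#define LIMINE_COMMON_MAGIC 0xc7b1dd30df4c8b88, 0x0a82e883a194f07b
#define LIMINE_FRAMEBUFFER_REQUEST_ID \
    { LIMINE_COMMON_MAGIC, 0x9d5827dcd881dd75, 0xa3148604f6fab11b }

struct limine_framebuffer_response;

struct limine_framebuffer_request {
    uint64_t id[4];
    uint64_t revision;
    struct limine_framebuffer_response *response;
};

/* Declared volatile so the compiler cannot assume response stays NULL
 * after the bootloader writes it behind the program's back. */
static volatile struct limine_framebuffer_request framebuffer_request = {
    .id = LIMINE_FRAMEBUFFER_REQUEST_ID,
    .revision = 0,
    .response = NULL,
};
```

At boot the kernel checks `framebuffer_request.response != NULL` before touching any framebuffer.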

23
include/lock.h Normal file
View file

@ -0,0 +1,23 @@
#ifndef SPINLOCK_H
#define SPINLOCK_H
#include <error.h>
#include <stdatomic.h>
#include <stdbool.h>
struct mutex {
atomic_flag lock;
bool locked;
struct thread *holder;
};
void acquire_spinlock(atomic_flag *lock);
void free_spinlock(atomic_flag *lock);
struct mutex *init_mutex();
kstatus acquire_mutex(struct mutex *mut);
void free_mutex(struct mutex *mut);
kstatus try_mutex(struct mutex *mut);
#endif
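A minimal sketch of the `atomic_flag` spinlock pair, assuming the usual C11 test-and-set implementation (the kernel's real version may additionally pause the CPU or mask interrupts while spinning):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Spin until the flag was previously clear; acquire ordering makes the
 * critical section's reads happen after the lock is taken. */
static void acquire_spinlock(atomic_flag *lock) {
    while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire)) {
        /* busy-wait until the holder clears the flag */
    }
}

/* Release ordering publishes the critical section's writes before unlock. */
static void free_spinlock(atomic_flag *lock) {
    atomic_flag_clear_explicit(lock, memory_order_release);
}
```

`try_mutex` can be built on the same primitive: a single `atomic_flag_test_and_set` that returns `KERNEL_MUTEX_LOCKED` instead of spinning when the flag was already set.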

9
include/mm/kmalloc.h Normal file
View file

@ -0,0 +1,9 @@
#include <error.h>
#include <stddef.h>
#include <stdint.h>
void _kmalloc_init(void);
void *kmalloc(size_t size);
void *kzalloc(size_t size);
kstatus kfree(void *addr);

10
include/mm/page.h Normal file
View file

@ -0,0 +1,10 @@
#include "slab.h"
typedef struct page {
struct ma_bufctl *bufctls; // The bufctls associated with the slab stored on this page; NULL if the page isn't associated with a slab
struct ma_slab *slab;
}page;
struct page *get_page(void *addr);
void init_page_array();

13
include/mm/pmm.h Normal file
View file

@ -0,0 +1,13 @@
#include <stdbool.h>
#include <stdint.h>
#define BLOCK_SIZE 4096
typedef struct free_page_t {
struct free_page_t *next;
uint8_t _padding[4088];
} __attribute__((packed)) free_page_t;
void pmm_init(void);
uint64_t *pmm_alloc();
void pmm_free(uint64_t *addr);
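The `free_page_t` layout makes each free 4 KiB page double as its own list node: the first 8 bytes hold the `next` pointer and the padding fills the rest of the block. A host-side sketch of the allocator's core (the `_sketch` names are hypothetical; the real functions operate on HHDM-mapped physical pages):

```c
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 4096

typedef struct free_page_t {
    struct free_page_t *next;
    uint8_t _padding[BLOCK_SIZE - sizeof(struct free_page_t *)];
} free_page_t;

static free_page_t *free_list = NULL;

/* Pop the head of the intrusive free list; NULL means out of memory. */
static void *pmm_alloc_sketch(void) {
    free_page_t *page = free_list;
    if (page != NULL)
        free_list = page->next;
    return page;
}

/* Push a page back, making it the new head (LIFO). */
static void pmm_free_sketch(void *addr) {
    free_page_t *page = addr;
    page->next = free_list;
    free_list = page;
}
```

Because the list is threaded through the free pages themselves, the allocator needs no separate bookkeeping memory.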

60
include/mm/slab.h Normal file
View file

@ -0,0 +1,60 @@
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>
#include <error.h>
#include <stdbool.h>
#pragma once
#define KCACHE_NAME_LEN 16
struct ma_bufctl {
struct ma_bufctl *next;
size_t *startaddr;
};
// ADD COLORING
struct ma_slab {
struct ma_cache *cache;
struct ma_slab *next;
struct ma_slab *prev;
uint32_t refcount; // The number of active (non-free) objects in the slab
atomic_flag lock;
struct ma_bufctl *free; // Linked list of free buffers in the slab; NULL once no free objects remain
};
/* objrefs make it quick to find which slab and cache an object belongs to. objrefs belonging to the same slab are kept on one page; there is no mixing. */
struct ma_objref {
struct ma_objref *next;
struct ma_objref *prev;
void *addr; // Addr of the object
struct ma_slab *slab; // The slab which the obj belongs to
struct ma_cache *kcache; // The cache which the obj belongs to
};
struct ma_cache {
struct ma_cache *next;
struct ma_cache *prev;
uint32_t objsize; // Size of the object which the cache stores
uint16_t flags; // Not useful yet
uint32_t num; // Number of objects per slab
uint32_t slabsize; // How many pages a single slab takes up. Useful for objects > PAGE_SIZE
struct ma_slab *slabs_free;
struct ma_slab *slabs_partial;
struct ma_slab *slabs_used;
atomic_flag lock;
char name[KCACHE_NAME_LEN];
};
void *ma_cache_alloc(struct ma_cache *kcache, uint32_t flags);
kstatus ma_cache_dealloc(void *object);
struct ma_cache *ma_cache_create(char *name, size_t size, uint32_t flags, void (*constructor)(void *, size_t), void (*destructor)(void *, size_t));
void cache_info(struct ma_cache *cache);
void create_base_caches();
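The fast path of `ma_cache_alloc`/`ma_cache_dealloc` is just pushing and popping the slab's `free` bufctl list while `refcount` tracks live objects. A minimal sketch under that assumption (`slab_sketch`, `slab_alloc` and `slab_dealloc` are hypothetical names; the real code also moves slabs between the free/partial/used lists and takes the lock):

```c
#include <stddef.h>
#include <stdint.h>

struct ma_bufctl {
    struct ma_bufctl *next;
    size_t *startaddr;
};

struct slab_sketch {
    struct ma_bufctl *free; /* NULL once every object is handed out */
    uint32_t refcount;      /* live (allocated) objects */
};

/* Pop a free bufctl and hand out the buffer it points at. */
static void *slab_alloc(struct slab_sketch *s) {
    struct ma_bufctl *b = s->free;
    if (b == NULL)
        return NULL; /* caller would move on to another slab */
    s->free = b->next;
    s->refcount++;
    return b->startaddr;
}

/* Push the object's bufctl back onto the free list. */
static void slab_dealloc(struct slab_sketch *s, struct ma_bufctl *b) {
    b->next = s->free;
    s->free = b;
    s->refcount--;
}
```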

25
include/mm/vmm.h Normal file
View file

@ -0,0 +1,25 @@
#include <stdint.h>
#define PTE_BIT_PRESENT 0x1 // Present bit
#define PTE_BIT_RW 0x2 // Read/write bit
#define PTE_BIT_US 0x4 // User and Supervisor bit
#define PTE_BIT_NX 0x8000000000000000 // Non-executable bit (bit 63)
#define PTE_BIT_UNCACHABLE (1 << 4)
#define PAGE_SIZE 4096
void tlb_flush(void);
void vmm_map_page(uint64_t *page_map, uint64_t virt_address, uint64_t phys_address, uint64_t flags);
int vmm_map_contigious_pages(uint64_t *page_map, uint64_t virt_addr, uint64_t phys_addr, uint64_t size, uint64_t flags);
void vmm_free_page(uint64_t *page_map, uint64_t virt_addr);
void vmm_init();
void vmm_set_ctx(uint64_t *page_map);
uint64_t vmm_get_phys_addr(uint64_t *page_map, uint64_t virt_addr);
uint64_t kget_phys_addr(uint64_t *virt_addr);
void *va_alloc_contigious_pages(uint64_t size);
void kmap_pages(void *phys_addr, uint64_t size, uint64_t flags);
void kunmap_pages(void *addr, uint64_t size);
typedef char link_symbol_ptr[];
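In 4-level x86-64 paging, a page-table entry keeps the physical frame address in bits 12-51, flags in the low bits, and NX in bit 63, so composing and decomposing entries is pure masking. A sketch of what `vmm_map_page` and `vmm_get_phys_addr` do per entry (`make_pte`/`pte_phys` are hypothetical helper names):

```c
#include <stdint.h>

#define PTE_BIT_PRESENT 0x1
#define PTE_BIT_RW      0x2
#define PTE_BIT_NX      0x8000000000000000ULL /* bit 63 */

/* Physical-frame field of a 4 KiB PTE: bits 12-51. */
#define PTE_ADDR_MASK 0x000FFFFFFFFFF000ULL

/* Compose an entry from a page-aligned physical address plus flags. */
static uint64_t make_pte(uint64_t phys, uint64_t flags) {
    return (phys & PTE_ADDR_MASK) | flags;
}

/* Strip the flag bits to recover the physical frame address. */
static uint64_t pte_phys(uint64_t pte) {
    return pte & PTE_ADDR_MASK;
}
```

`vmm_get_phys_addr` would walk the four table levels, applying `pte_phys` at each step and adding the page offset at the end.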

32
include/neobbo.h Normal file
View file

@ -0,0 +1,32 @@
#pragma once
#include <stdint.h>
typedef struct kernel_info {
char *cmdline; // kernel commandline options (maybe split into char**'s?)
uint64_t hhdmoffset; // HHDM offset
uint64_t cpu_count; // number of cpus
uint64_t usable_memory; // amount of usable memory the system has
uint64_t bsp_id; // id of the bsp cpu
int64_t boot_timestamp; // timestamp at boot
} kernel_info;
typedef char link_symbol_ptr[];
#define ALIGN_UP_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define ALIGN_UP(x, val) ALIGN_UP_MASK(x, (typeof(x))(val) - 1)
#define ALIGN_DOWN_MASK(x, mask) ((x) & ~(mask))
#define ALIGN_DOWN(x, val) ALIGN_DOWN_MASK(x, (typeof(x))(val) - 1)
#define IS_ALIGNED_MASK(x, mask) (((x) & (mask)) == 0)
#define IS_ALIGNED(x, val) IS_ALIGNED_MASK(x, (typeof(x))(val) - 1)
#define PAGE_ROUND_UP(size) ALIGN_UP(size, PAGE_SIZE)
#define PAGE_ROUND_DOWN(size) ALIGN_DOWN(size, PAGE_SIZE)
#define SIZE_IN_PAGES(size) ((size) / PAGE_SIZE)
struct kernel_info *get_kinfo();
void initialize_kinfo();
void kkill(void); // phase this out in favor of assert
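The ALIGN_* macros above rely on `val` being a power of two: adding `val - 1` and masking off the low bits rounds up, masking alone rounds down. A standalone sketch of the same trick, rewritten with explicit `uint64_t` casts instead of `typeof()` so it compiles outside the kernel tree:

```c
#include <assert.h>
#include <stdint.h>

/* Same mask arithmetic as the kernel's ALIGN_UP/ALIGN_DOWN macros,
 * specialized to uint64_t. Only valid for power-of-two `val`. */
#define ALIGN_UP_U64(x, val)   (((uint64_t)(x) + ((uint64_t)(val) - 1)) & ~((uint64_t)(val) - 1))
#define ALIGN_DOWN_U64(x, val) ((uint64_t)(x) & ~((uint64_t)(val) - 1))

#define TOY_PAGE_SIZE 4096
```

Note that PAGE_ROUND_UP and PAGE_ROUND_DOWN are just these macros fixed to PAGE_SIZE, so the same power-of-two restriction applies to them.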

33
include/scheduler/sched.h Normal file
View file

@ -0,0 +1,33 @@
#include <stdint.h>
#pragma once
typedef enum proc_state {
ZOMBIE = 4,
RUNNING = 3,
READY = 2,
SLEEPING = 1,
UNUSED = 0
}proc_state;
struct context {
uint64_t r15, r14, r13, r12, rbp, rbx, rip;
};
struct thread {
struct thread *next;
struct thread *prev;
uint64_t *mem;
uint64_t *kstack;
proc_state state;
uint16_t pid;
struct context *context;
char name[8];
};
void scheduler_init();
[[noreturn]] void sched();
void yield();
#define PROC_MAX 1024 // Max number of processes per cpu

28
include/smp.h Normal file
View file

@ -0,0 +1,28 @@
#include <stdbool.h>
#include <stdint.h>
#include <scheduler/sched.h>
#pragma once
#define GSBASE 0xC0000101
#define KERNELGSBASE 0xC0000102
typedef struct cpu_state {
uint32_t id;
uint64_t lapic_timer_ticks;
struct thread *head;
struct thread *base;
struct thread *current_process;
uint16_t process_count;
struct context *scheduler_context;
uint64_t *scheduler_stack;
bool scheduler_initialized;
}cpu_state;
void smp_init();
cpu_state *get_current_cpu_state();
cpu_state *get_cpu_state(int);
uint64_t get_cpu_count();
void bsp_early_init();
bool get_cpu_struct_initialized();

18
include/string.h Normal file
View file

@ -0,0 +1,18 @@
#ifndef STRING_H
#define STRING_H
#include <stdint.h>
void *memset(void *addr, int c, uint64_t n);
void *memcpy(void *dest, void *src, uint64_t n);
void *memmove(void *dest, const void *src, uint64_t n);
int memcmp(const void *s1, const void *s2, uint64_t n);
uint64_t strlen(const char* str);
void itoa(char *str, int number);
#endif
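The itoa declared above is nonstandard and takes the destination first, then the number. One plausible implementation matching that signature — the kernel's actual version may differ, and `toy_itoa`/`tmp` are names introduced here for illustration:

```c
#include <assert.h>
#include <string.h>

/* Possible shape of the itoa(char *str, int number) declared above:
 * emit digits least-significant first into a scratch buffer, then
 * reverse them into the destination. */
static void toy_itoa(char *str, int number)
{
    char tmp[12]; /* enough for all 32-bit values */
    int i = 0, j = 0;
    /* Negate via long so INT_MIN does not overflow. */
    unsigned int n = number < 0 ? (unsigned int)(-(long)number)
                                : (unsigned int)number;

    if (number < 0)
        str[j++] = '-';
    do {
        tmp[i++] = '0' + (n % 10);
        n /= 10;
    } while (n);
    while (i > 0) /* digits were produced in reverse */
        str[j++] = tmp[--i];
    str[j] = '\0';
}
```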

196
include/sys/acpi.h Normal file
View file

@ -0,0 +1,196 @@
#include <stdint.h>
#include <stdbool.h>
typedef struct rsdp_t {
uint64_t signature;
uint8_t checksum;
uint8_t oemid[6];
uint8_t revision;
uint32_t rsdt_address;
uint32_t length;
uint64_t xsdt_address;
uint8_t ext_checksum;
uint8_t reserved[3];
} __attribute((packed)) rsdp_t;
typedef struct desc_header_t {
uint8_t signature[4];
uint32_t length;
uint8_t revision;
uint8_t checksum;
uint8_t oemid[6];
uint8_t oem_tableid[8];
uint32_t oem_revision;
uint32_t creator_id;
uint32_t creator_revision;
} __attribute((packed)) desc_header_t;
typedef struct rsdt_t {
desc_header_t header;
uint32_t entries_base[];
} __attribute((packed)) rsdt_t;
typedef struct xsdt_t {
desc_header_t header;
uint64_t entries_base[];
} __attribute((packed)) xsdt_t;
typedef struct ics_t {
uint8_t type;
uint8_t length;
}__attribute((packed)) ics_t;
typedef struct madt_t {
desc_header_t header;
uint32_t lic_address;
uint32_t flags;
ics_t ics[];
} __attribute((packed)) madt_t;
typedef struct lapic_ao_t {
ics_t ics;
uint16_t reserved;
uint64_t lapic_address;
}__attribute((packed)) lapic_ao_t;
typedef struct gas_t {
uint8_t address_space_id;
uint8_t reg_bit_width;
uint8_t reg_bit_offset;
uint8_t access_size;
uint64_t address;
}__attribute((packed)) gas_t;
typedef struct hpet_t {
desc_header_t header;
uint32_t event_timer_blkid;
gas_t base_address;
uint8_t hpet_number;
uint16_t minimum_clk_tick;
uint8_t oem_attribute;
}__attribute((packed)) hpet_t;
typedef struct ioapic_t{
ics_t ics;
uint8_t ioapic_id;
uint8_t reserved;
uint32_t ioapic_address;
uint32_t gsi_base;
}__attribute((packed)) ioapic_t;
typedef struct iso_t{
ics_t ics;
uint8_t bus;
uint8_t source;
uint32_t gsi;
uint16_t flags;
}__attribute((packed)) iso_t;
/* Copied from OSDEV wiki */
typedef struct fadt_t{
desc_header_t header;
uint32_t FirmwareCtrl;
uint32_t Dsdt;
// field used in ACPI 1.0; no longer in use, for compatibility only
uint8_t Reserved;
uint8_t PreferredPowerManagementProfile;
uint16_t SCI_Interrupt;
uint32_t SMI_CommandPort;
uint8_t AcpiEnable;
uint8_t AcpiDisable;
uint8_t S4BIOS_REQ;
uint8_t PSTATE_Control;
uint32_t PM1aEventBlock;
uint32_t PM1bEventBlock;
uint32_t PM1aControlBlock;
uint32_t PM1bControlBlock;
uint32_t PM2ControlBlock;
uint32_t PMTimerBlock;
uint32_t GPE0Block;
uint32_t GPE1Block;
uint8_t PM1EventLength;
uint8_t PM1ControlLength;
uint8_t PM2ControlLength;
uint8_t PMTimerLength;
uint8_t GPE0Length;
uint8_t GPE1Length;
uint8_t GPE1Base;
uint8_t CStateControl;
uint16_t WorstC2Latency;
uint16_t WorstC3Latency;
uint16_t FlushSize;
uint16_t FlushStride;
uint8_t DutyOffset;
uint8_t DutyWidth;
uint8_t DayAlarm;
uint8_t MonthAlarm;
uint8_t Century;
// reserved in ACPI 1.0; used since ACPI 2.0+
uint16_t BootArchitectureFlags;
uint8_t Reserved2;
uint32_t Flags;
// 12 byte structure; see below for details
gas_t ResetReg;
uint8_t ResetValue;
uint8_t Reserved3[3];
// 64bit pointers - Available on ACPI 2.0+
uint64_t X_FirmwareControl;
uint64_t X_Dsdt;
gas_t X_PM1aEventBlock;
gas_t X_PM1bEventBlock;
gas_t X_PM1aControlBlock;
gas_t X_PM1bControlBlock;
gas_t X_PM2ControlBlock;
gas_t X_PMTimerBlock;
gas_t X_GPE0Block;
gas_t X_GPE1Block;
gas_t sleep_ctrl_reg;
gas_t sleep_status_reg;
uint64_t hypervisor_vendor_id;
uint8_t wbinvd;
uint8_t wbinvd_flush;
uint8_t proc_c1;
uint8_t p_lvl2_up;
uint8_t pwr_button;
uint8_t slp_button;
uint8_t fix_rtc;
uint8_t rtc_s4;
uint8_t tmr_val_ext;
uint8_t dck_cap;
}__attribute((packed)) fadt_t;
typedef struct conf_space_t {
uint64_t base_ecm;
uint16_t pci_seg_group;
uint8_t start_pci_num;
uint8_t end_pci_num;
uint32_t reserved;
}__attribute((packed)) conf_space_t;
typedef struct mcfg_t {
desc_header_t header;
uint64_t reserved;
conf_space_t conf_spaces[];
}__attribute((packed)) mcfg_t;
void acpi_init(void);
uint64_t *find_acpi_table(char *signature);
uint64_t *find_ics(uint64_t type);
uint32_t find_iso(uint8_t legacy);

102
include/sys/pci.h Normal file
View file

@ -0,0 +1,102 @@
#include <stdbool.h>
#include <stdint.h>
void pci_init();
typedef struct pci_header_t {
uint16_t vendor_id;
uint16_t device_id;
uint16_t command;
uint16_t status;
uint8_t revision_id;
uint8_t prog_if;
uint8_t subclass;
uint8_t class_code;
uint8_t cache_line_size;
uint8_t latency_timer;
uint8_t header_type;
uint8_t bist;
}__attribute((packed)) pci_header_t;
typedef struct pci_header_0_t {
pci_header_t header;
uint32_t bar0;
uint32_t bar1;
uint32_t bar2;
uint32_t bar3;
uint32_t bar4;
uint32_t bar5;
uint32_t cardbus_cis_ptr;
uint16_t subsytem_vendor_id;
uint16_t subsystem_id;
uint32_t expansion_rom_base;
uint8_t capabilities_ptr;
uint8_t reserved1;
uint16_t reserved2;
uint32_t reserved3;
uint8_t interrupt_line;
uint8_t interrupt_pin;
uint8_t min_grant;
uint8_t max_latency;
}__attribute((packed)) pci_header_0_t;
typedef struct pci_header_1_t {
pci_header_t header;
uint32_t bar0;
uint32_t bar1;
uint8_t primary_bus_number;
uint8_t secondary_bus_number;
uint8_t subordinate_bus_number;
uint8_t secondary_latency_timer;
uint8_t io_base;
uint8_t io_limit;
uint16_t secondary_status;
uint16_t memory_base;
uint16_t memory_limit;
uint16_t prefetch_base_;
uint16_t prefetch_limit;
uint32_t prefetch_base_upper;
uint32_t prefetch_limit_upper;
uint16_t io_base_upper;
uint16_t io_limit_upper;
uint8_t capability_ptr;
uint8_t reserved1;
uint16_t reserved2;
uint32_t expansion_rom_base;
uint8_t interrupt_line;
uint8_t interrupt_pin;
uint16_t bridge_control;
}__attribute((packed)) pci_header_1_t;
typedef struct pci_header_ahci_t {
pci_header_t header;
uint32_t bar[4];
uint32_t ahci_bar;
uint16_t subsystem_id;
uint16_t subsytem_vendor_id;
uint32_t expansion_rom_base;
uint8_t capabilities_ptr;
uint16_t interrupt_info;
uint8_t min_grant;
uint8_t max_latency;
}__attribute((packed)) pci_header_ahci_t;
/* For internal use */
typedef struct l84_pci_function_return {
bool multi; // Set to 1 if the device has multiple functions, else 0; when 0, function indices 1-7 are ignored
uint64_t func_addr[8];
} l84_pci_function_return;
typedef struct pci_structure {
uint16_t segment;
uint8_t bus;
uint8_t device;
uint64_t func_addr[8];
} pci_structure;
l84_pci_function_return check_device(uint64_t bus, uint64_t device);
uint64_t get_header(uint64_t bus, uint64_t device, uint64_t function);
pci_header_t *pci_find_device(uint64_t class, int subclass);
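The conf_space_t entries from the MCFG give everything needed to locate a function's config space via PCIe ECAM: each function owns a 4 KiB window at `base + (bus << 20 | device << 15 | function << 12)`, with the bus number taken relative to the entry's starting bus. A sketch of that address computation (the function name is ours, not this header's):

```c
#include <assert.h>
#include <stdint.h>

/* PCIe ECAM layout: bus in bits 27:20, device in 19:15, function in
 * 14:12, all relative to the MCFG entry's base_ecm and start bus. */
static uint64_t ecam_address(uint64_t base, uint8_t start_bus,
                             uint8_t bus, uint8_t device, uint8_t function)
{
    return base + (((uint64_t)(bus - start_bus) << 20) |
                   ((uint64_t)device << 15) |
                   ((uint64_t)function << 12));
}
```

With the address in hand, kmap_pages-style mapping of that 4 KiB window yields a pci_header_t the check_device/get_header routines can walk.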

3
include/sys/rand.h Normal file
View file

@ -0,0 +1,3 @@
#include <stddef.h>
void krand_init();
size_t rand(void);

3
include/sys/time.h Normal file
View file

@ -0,0 +1,3 @@
#include <stdint.h>
uint64_t get_timestamp_us();
void sleep(int ms);

1574
include/uacpi/acpi.h Normal file

File diff suppressed because it is too large

53
include/uacpi/context.h Normal file
View file

@ -0,0 +1,53 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/log.h>
#ifdef __cplusplus
extern "C" {
#endif
/*
* Set the minimum log level to be accepted by the logging facilities. Any logs
* below this level are discarded and not passed to uacpi_kernel_log, etc.
*
* 0 is treated as a special value that resets the setting to the default value.
*
* E.g. for a log level of UACPI_LOG_INFO:
* UACPI_LOG_DEBUG -> discarded
* UACPI_LOG_TRACE -> discarded
* UACPI_LOG_INFO -> allowed
* UACPI_LOG_WARN -> allowed
* UACPI_LOG_ERROR -> allowed
*/
void uacpi_context_set_log_level(uacpi_log_level);
/*
* Enables table checksum validation at installation time instead of first use.
* Note that this makes uACPI map the entire table at once, which not all
* hosts are able to handle at early init.
*/
void uacpi_context_set_proactive_table_checksum(uacpi_bool);
#ifndef UACPI_BAREBONES_MODE
/*
* Set the maximum number of seconds a While loop is allowed to run for before
* getting timed out.
*
 * 0 is treated as a special value that resets the setting to the default value.
*/
void uacpi_context_set_loop_timeout(uacpi_u32 seconds);
/*
* Set the maximum call stack depth AML can reach before getting aborted.
*
* 0 is treated as a special value that resets the setting to the default value.
*/
void uacpi_context_set_max_call_stack_depth(uacpi_u32 depth);
uacpi_u32 uacpi_context_get_loop_timeout(void);
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

286
include/uacpi/event.h Normal file
View file

@ -0,0 +1,286 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/uacpi.h>
#include <uacpi/acpi.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef UACPI_BAREBONES_MODE
typedef enum uacpi_fixed_event {
UACPI_FIXED_EVENT_TIMER_STATUS = 1,
UACPI_FIXED_EVENT_POWER_BUTTON,
UACPI_FIXED_EVENT_SLEEP_BUTTON,
UACPI_FIXED_EVENT_RTC,
UACPI_FIXED_EVENT_MAX = UACPI_FIXED_EVENT_RTC,
} uacpi_fixed_event;
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_fixed_event_handler(
uacpi_fixed_event event, uacpi_interrupt_handler handler, uacpi_handle user
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_uninstall_fixed_event_handler(
uacpi_fixed_event event
))
/*
* Enable/disable a fixed event. Note that the event is automatically enabled
* upon installing a handler to it.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_fixed_event(uacpi_fixed_event event)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_fixed_event(uacpi_fixed_event event)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_clear_fixed_event(uacpi_fixed_event event)
)
typedef enum uacpi_event_info {
// Event is enabled in software
UACPI_EVENT_INFO_ENABLED = (1 << 0),
// Event is enabled in software (only for wake)
UACPI_EVENT_INFO_ENABLED_FOR_WAKE = (1 << 1),
// Event is masked
UACPI_EVENT_INFO_MASKED = (1 << 2),
// Event has a handler attached
UACPI_EVENT_INFO_HAS_HANDLER = (1 << 3),
// Hardware enable bit is set
UACPI_EVENT_INFO_HW_ENABLED = (1 << 4),
// Hardware status bit is set
UACPI_EVENT_INFO_HW_STATUS = (1 << 5),
} uacpi_event_info;
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_fixed_event_info(
uacpi_fixed_event event, uacpi_event_info *out_info
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_gpe_info(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_event_info *out_info
))
// Set if the handler wishes to reenable the GPE it just handled
#define UACPI_GPE_REENABLE (1 << 7)
typedef uacpi_interrupt_ret (*uacpi_gpe_handler)(
uacpi_handle ctx, uacpi_namespace_node *gpe_device, uacpi_u16 idx
);
typedef enum uacpi_gpe_triggering {
UACPI_GPE_TRIGGERING_LEVEL = 0,
UACPI_GPE_TRIGGERING_EDGE = 1,
UACPI_GPE_TRIGGERING_MAX = UACPI_GPE_TRIGGERING_EDGE,
} uacpi_gpe_triggering;
const uacpi_char *uacpi_gpe_triggering_to_string(
uacpi_gpe_triggering triggering
);
/*
* Installs a handler to the provided GPE at 'idx' controlled by device
* 'gpe_device'. The GPE is automatically disabled & cleared according to the
* configured triggering upon invoking the handler. The event is optionally
* re-enabled (by returning UACPI_GPE_REENABLE from the handler)
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_gpe_handler(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_gpe_triggering triggering, uacpi_gpe_handler handler, uacpi_handle ctx
))
/*
* Installs a raw handler to the provided GPE at 'idx' controlled by device
* 'gpe_device'. The handler is dispatched immediately after the event is
* received, status & enable bits are untouched.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_gpe_handler_raw(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_gpe_triggering triggering, uacpi_gpe_handler handler, uacpi_handle ctx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_uninstall_gpe_handler(
uacpi_namespace_node *gpe_device, uacpi_u16 idx, uacpi_gpe_handler handler
))
/*
* Marks the GPE 'idx' managed by 'gpe_device' as wake-capable. 'wake_device' is
* optional and configures the GPE to generate an implicit notification whenever
* an event occurs.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_setup_gpe_for_wake(
uacpi_namespace_node *gpe_device, uacpi_u16 idx,
uacpi_namespace_node *wake_device
))
/*
* Mark a GPE managed by 'gpe_device' as enabled/disabled for wake. The GPE must
* have previously been marked by calling uacpi_gpe_setup_for_wake. This
* function only affects the GPE enable register state following the call to
* uacpi_gpe_enable_all_for_wake.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_gpe_for_wake(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_gpe_for_wake(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Finalize GPE initialization by enabling all GPEs not configured for wake and
* having a matching AML handler detected.
*
 * This should be called after the kernel power management subsystem has
* enumerated all of the devices, executing their _PRW methods etc., and
* marking those it wishes to use for wake by calling uacpi_setup_gpe_for_wake
* or uacpi_mark_gpe_for_wake.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_finalize_gpe_initialization(void)
)
/*
* Enable/disable a general purpose event managed by 'gpe_device'. Internally
* this uses reference counting to make sure a GPE is not disabled until all
* possible users of it do so. GPEs not marked for wake are enabled
* automatically so this API is only needed for wake events or those that don't
* have a corresponding AML handler.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Clear the status bit of the event 'idx' managed by 'gpe_device'.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_clear_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Suspend/resume a general purpose event managed by 'gpe_device'. This bypasses
* the reference counting mechanism and unconditionally clears/sets the
* corresponding bit in the enable registers. This is used for switching the GPE
* to poll mode.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_suspend_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_resume_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Finish handling the GPE managed by 'gpe_device' at 'idx'. This clears the
* status registers if it hasn't been cleared yet and re-enables the event if
* it was enabled before.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_finish_handling_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
 * Hard mask/unmask a general purpose event at 'idx' managed by 'gpe_device'.
* This is used to permanently silence an event so that further calls to
* enable/disable as well as suspend/resume get ignored. This might be necessary
* for GPEs that cause an event storm due to the kernel's inability to properly
* handle them. The only way to enable a masked event is by a call to unmask.
*
* NOTE: 'gpe_device' may be null for GPEs managed by \_GPE
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_mask_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_unmask_gpe(
uacpi_namespace_node *gpe_device, uacpi_u16 idx
))
/*
* Disable all GPEs currently set up on the system.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_disable_all_gpes(void)
)
/*
* Enable all GPEs not marked as wake. This is only needed after the system
* wakes from a shallow sleep state and is called automatically by wake code.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_all_runtime_gpes(void)
)
/*
* Enable all GPEs marked as wake. This is only needed before the system goes
 * to sleep and is called automatically by sleep code.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_enable_all_wake_gpes(void)
)
/*
* Install/uninstall a new GPE block, usually defined by a device in the
* namespace with a _HID of ACPI0006.
*/
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_install_gpe_block(
uacpi_namespace_node *gpe_device, uacpi_u64 address,
uacpi_address_space address_space, uacpi_u16 num_registers,
uacpi_u32 irq
))
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_uninstall_gpe_block(
uacpi_namespace_node *gpe_device
))
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

12
include/uacpi/helpers.h Normal file
View file

@ -0,0 +1,12 @@
#pragma once
#include <uacpi/platform/compiler.h>
#define UACPI_BUILD_BUG_ON_WITH_MSG(expr, msg) UACPI_STATIC_ASSERT(!(expr), msg)
#define UACPI_BUILD_BUG_ON(expr) \
UACPI_BUILD_BUG_ON_WITH_MSG(expr, "BUILD BUG: " #expr " evaluated to true")
#define UACPI_EXPECT_SIZEOF(type, size) \
UACPI_BUILD_BUG_ON_WITH_MSG(sizeof(type) != size, \
"BUILD BUG: invalid type size")

View file

@ -0,0 +1,3 @@
#pragma once
#include <uacpi/platform/compiler.h>

View file

@ -0,0 +1,155 @@
#pragma once
#include <uacpi/acpi.h>
#include <uacpi/types.h>
#include <uacpi/uacpi.h>
#include <uacpi/internal/dynamic_array.h>
#include <uacpi/internal/shareable.h>
#include <uacpi/context.h>
struct uacpi_runtime_context {
/*
* A local copy of FADT that has been verified & converted to most optimal
* format for faster access to the registers.
*/
struct acpi_fadt fadt;
uacpi_u64 flags;
#ifndef UACPI_BAREBONES_MODE
/*
* A cached pointer to FACS so that we don't have to look it up in interrupt
* contexts as we can't take mutexes.
*/
struct acpi_facs *facs;
/*
* pm1{a,b}_evt_blk split into two registers for convenience
*/
struct acpi_gas pm1a_status_blk;
struct acpi_gas pm1b_status_blk;
struct acpi_gas pm1a_enable_blk;
struct acpi_gas pm1b_enable_blk;
#define UACPI_SLEEP_TYP_INVALID 0xFF
uacpi_u8 last_sleep_typ_a;
uacpi_u8 last_sleep_typ_b;
uacpi_u8 s0_sleep_typ_a;
uacpi_u8 s0_sleep_typ_b;
uacpi_bool global_lock_acquired;
#ifndef UACPI_REDUCED_HARDWARE
uacpi_bool was_in_legacy_mode;
uacpi_bool has_global_lock;
uacpi_bool sci_handle_valid;
uacpi_handle sci_handle;
#endif
uacpi_u64 opcodes_executed;
uacpi_u32 loop_timeout_seconds;
uacpi_u32 max_call_stack_depth;
uacpi_u32 global_lock_seq_num;
/*
* These are stored here to protect against stuff like:
* - CopyObject(JUNK, \)
* - CopyObject(JUNK, \_GL)
*/
uacpi_mutex *global_lock_mutex;
uacpi_object *root_object;
#ifndef UACPI_REDUCED_HARDWARE
uacpi_handle *global_lock_event;
uacpi_handle *global_lock_spinlock;
uacpi_bool global_lock_pending;
#endif
uacpi_bool bad_timesource;
uacpi_u8 init_level;
#endif // !UACPI_BAREBONES_MODE
#ifndef UACPI_REDUCED_HARDWARE
uacpi_bool is_hardware_reduced;
#endif
/*
* This is a per-table value but we mimic the NT implementation:
* treat all other definition blocks as if they were the same revision
* as DSDT.
*/
uacpi_bool is_rev1;
uacpi_u8 log_level;
};
extern struct uacpi_runtime_context g_uacpi_rt_ctx;
static inline uacpi_bool uacpi_check_flag(uacpi_u64 flag)
{
return (g_uacpi_rt_ctx.flags & flag) == flag;
}
static inline uacpi_bool uacpi_should_log(enum uacpi_log_level lvl)
{
return lvl <= g_uacpi_rt_ctx.log_level;
}
static inline uacpi_bool uacpi_is_hardware_reduced(void)
{
#ifndef UACPI_REDUCED_HARDWARE
return g_uacpi_rt_ctx.is_hardware_reduced;
#else
return UACPI_TRUE;
#endif
}
#ifndef UACPI_BAREBONES_MODE
static inline const uacpi_char *uacpi_init_level_to_string(uacpi_u8 lvl)
{
switch (lvl) {
case UACPI_INIT_LEVEL_EARLY:
return "early";
case UACPI_INIT_LEVEL_SUBSYSTEM_INITIALIZED:
return "subsystem initialized";
case UACPI_INIT_LEVEL_NAMESPACE_LOADED:
return "namespace loaded";
case UACPI_INIT_LEVEL_NAMESPACE_INITIALIZED:
return "namespace initialized";
default:
return "<invalid>";
}
}
#define UACPI_ENSURE_INIT_LEVEL_AT_LEAST(lvl) \
do { \
if (uacpi_unlikely(g_uacpi_rt_ctx.init_level < lvl)) { \
uacpi_error( \
"while evaluating %s: init level %d (%s) is too low, " \
"expected at least %d (%s)\n", __FUNCTION__, \
g_uacpi_rt_ctx.init_level, \
uacpi_init_level_to_string(g_uacpi_rt_ctx.init_level), lvl, \
uacpi_init_level_to_string(lvl) \
); \
return UACPI_STATUS_INIT_LEVEL_MISMATCH; \
} \
} while (0)
#define UACPI_ENSURE_INIT_LEVEL_IS(lvl) \
do { \
if (uacpi_unlikely(g_uacpi_rt_ctx.init_level != lvl)) { \
uacpi_error( \
"while evaluating %s: invalid init level %d (%s), " \
"expected %d (%s)\n", __FUNCTION__, \
g_uacpi_rt_ctx.init_level, \
uacpi_init_level_to_string(g_uacpi_rt_ctx.init_level), lvl, \
uacpi_init_level_to_string(lvl) \
); \
return UACPI_STATUS_INIT_LEVEL_MISMATCH; \
} \
} while (0)
#endif // !UACPI_BAREBONES_MODE

View file

@ -0,0 +1,185 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/internal/stdlib.h>
#include <uacpi/kernel_api.h>
#define DYNAMIC_ARRAY_WITH_INLINE_STORAGE(name, type, inline_capacity) \
struct name { \
type inline_storage[inline_capacity]; \
type *dynamic_storage; \
uacpi_size dynamic_capacity; \
uacpi_size size_including_inline; \
}; \
#define DYNAMIC_ARRAY_SIZE(arr) ((arr)->size_including_inline)
#define DYNAMIC_ARRAY_WITH_INLINE_STORAGE_EXPORTS(name, type, prefix) \
prefix uacpi_size name##_inline_capacity(struct name *arr); \
prefix type *name##_at(struct name *arr, uacpi_size idx); \
prefix type *name##_alloc(struct name *arr); \
prefix type *name##_calloc(struct name *arr); \
prefix void name##_pop(struct name *arr); \
prefix uacpi_size name##_size(struct name *arr); \
prefix type *name##_last(struct name *arr); \
prefix void name##_clear(struct name *arr);
#ifndef UACPI_BAREBONES_MODE
#define DYNAMIC_ARRAY_ALLOC_FN(name, type, prefix) \
UACPI_MAYBE_UNUSED \
prefix type *name##_alloc(struct name *arr) \
{ \
uacpi_size inline_cap; \
type *out_ptr; \
\
inline_cap = name##_inline_capacity(arr); \
\
if (arr->size_including_inline >= inline_cap) { \
uacpi_size dynamic_size; \
\
dynamic_size = arr->size_including_inline - inline_cap; \
if (dynamic_size == arr->dynamic_capacity) { \
uacpi_size bytes, type_size; \
void *new_buf; \
\
type_size = sizeof(*arr->dynamic_storage); \
\
if (arr->dynamic_capacity == 0) { \
bytes = type_size * inline_cap; \
} else { \
bytes = (arr->dynamic_capacity / 2) * type_size; \
if (bytes == 0) \
bytes += type_size; \
\
bytes += arr->dynamic_capacity * type_size; \
} \
\
new_buf = uacpi_kernel_alloc(bytes); \
if (uacpi_unlikely(new_buf == UACPI_NULL)) \
return UACPI_NULL; \
\
arr->dynamic_capacity = bytes / type_size; \
\
if (arr->dynamic_storage) { \
uacpi_memcpy(new_buf, arr->dynamic_storage, \
dynamic_size * type_size); \
} \
uacpi_free(arr->dynamic_storage, dynamic_size * type_size); \
arr->dynamic_storage = new_buf; \
} \
\
out_ptr = &arr->dynamic_storage[dynamic_size]; \
goto ret; \
} \
out_ptr = &arr->inline_storage[arr->size_including_inline]; \
ret: \
arr->size_including_inline++; \
return out_ptr; \
}
#define DYNAMIC_ARRAY_CLEAR_FN(name, type, prefix) \
prefix void name##_clear(struct name *arr) \
{ \
uacpi_free( \
arr->dynamic_storage, \
arr->dynamic_capacity * sizeof(*arr->dynamic_storage) \
); \
arr->size_including_inline = 0; \
arr->dynamic_capacity = 0; \
arr->dynamic_storage = UACPI_NULL; \
}
#else
#define DYNAMIC_ARRAY_ALLOC_FN(name, type, prefix) \
UACPI_MAYBE_UNUSED \
prefix type *name##_alloc(struct name *arr) \
{ \
uacpi_size inline_cap; \
type *out_ptr; \
\
inline_cap = name##_inline_capacity(arr); \
\
if (arr->size_including_inline >= inline_cap) { \
uacpi_size dynamic_size; \
\
dynamic_size = arr->size_including_inline - inline_cap; \
if (uacpi_unlikely(dynamic_size == arr->dynamic_capacity)) \
return UACPI_NULL; \
\
out_ptr = &arr->dynamic_storage[dynamic_size]; \
goto ret; \
} \
out_ptr = &arr->inline_storage[arr->size_including_inline]; \
ret: \
arr->size_including_inline++; \
return out_ptr; \
}
#define DYNAMIC_ARRAY_CLEAR_FN(name, type, prefix) \
prefix void name##_clear(struct name *arr) \
{ \
arr->size_including_inline = 0; \
arr->dynamic_capacity = 0; \
arr->dynamic_storage = UACPI_NULL; \
}
#endif
#define DYNAMIC_ARRAY_WITH_INLINE_STORAGE_IMPL(name, type, prefix) \
UACPI_MAYBE_UNUSED \
prefix uacpi_size name##_inline_capacity(struct name *arr) \
{ \
return sizeof(arr->inline_storage) / sizeof(arr->inline_storage[0]); \
} \
\
UACPI_MAYBE_UNUSED \
prefix uacpi_size name##_capacity(struct name *arr) \
{ \
return name##_inline_capacity(arr) + arr->dynamic_capacity; \
} \
\
prefix type *name##_at(struct name *arr, uacpi_size idx) \
{ \
if (idx >= arr->size_including_inline) \
return UACPI_NULL; \
\
if (idx < name##_inline_capacity(arr)) \
return &arr->inline_storage[idx]; \
\
return &arr->dynamic_storage[idx - name##_inline_capacity(arr)]; \
} \
\
DYNAMIC_ARRAY_ALLOC_FN(name, type, prefix) \
\
UACPI_MAYBE_UNUSED \
prefix type *name##_calloc(struct name *arr) \
{ \
type *ret; \
\
ret = name##_alloc(arr); \
if (ret) \
uacpi_memzero(ret, sizeof(*ret)); \
\
return ret; \
} \
\
UACPI_MAYBE_UNUSED \
prefix void name##_pop(struct name *arr) \
{ \
if (arr->size_including_inline == 0) \
return; \
\
arr->size_including_inline--; \
} \
\
UACPI_MAYBE_UNUSED \
prefix uacpi_size name##_size(struct name *arr) \
{ \
return arr->size_including_inline; \
} \
\
UACPI_MAYBE_UNUSED \
prefix type *name##_last(struct name *arr) \
{ \
return name##_at(arr, arr->size_including_inline - 1); \
} \
\
DYNAMIC_ARRAY_CLEAR_FN(name, type, prefix)
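The byte arithmetic inside DYNAMIC_ARRAY_ALLOC_FN encodes a growth policy: the first dynamic allocation mirrors the inline capacity, and every later growth adds half of the current dynamic capacity (at least one element), i.e. roughly 1.5x growth. Extracted as a pure function (our naming, not uACPI's):

```c
#include <assert.h>
#include <stddef.h>

/* Capacity schedule implied by the alloc macro above: seed at the
 * inline capacity, then grow by current/2, never by less than one. */
static size_t dyn_array_next_capacity(size_t current, size_t inline_cap)
{
    if (current == 0)
        return inline_cap;

    size_t growth = current / 2;
    return current + (growth ? growth : 1);
}
```

The barebones variant below it never grows at all — it returns UACPI_NULL once the preallocated dynamic storage is exhausted, which is why uacpi_free disappears from its clear function.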

View file

@ -0,0 +1,25 @@
#pragma once
#include <uacpi/event.h>
// This fixed event is internal-only, and we don't expose it in the enum
#define UACPI_FIXED_EVENT_GLOBAL_LOCK 0
UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_initialize_events_early(void)
)
UACPI_ALWAYS_OK_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_initialize_events(void)
)
UACPI_STUB_IF_REDUCED_HARDWARE(
void uacpi_deinitialize_events(void)
)
UACPI_STUB_IF_REDUCED_HARDWARE(
void uacpi_events_match_post_dynamic_table_load(void)
)
UACPI_ALWAYS_ERROR_FOR_REDUCED_HARDWARE(
uacpi_status uacpi_clear_all_events(void)
)

View file

@ -0,0 +1,7 @@
#pragma once
#include <uacpi/helpers.h>
#define UACPI_ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
#define UACPI_UNUSED(x) (void)(x)

View file

@ -0,0 +1,24 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#include <uacpi/internal/namespace.h>
#ifndef UACPI_BAREBONES_MODE
enum uacpi_table_load_cause {
UACPI_TABLE_LOAD_CAUSE_LOAD_OP,
UACPI_TABLE_LOAD_CAUSE_LOAD_TABLE_OP,
UACPI_TABLE_LOAD_CAUSE_INIT,
UACPI_TABLE_LOAD_CAUSE_HOST,
};
uacpi_status uacpi_execute_table(void*, enum uacpi_table_load_cause cause);
uacpi_status uacpi_osi(uacpi_handle handle, uacpi_object *retval);
uacpi_status uacpi_execute_control_method(
uacpi_namespace_node *scope, uacpi_control_method *method,
const uacpi_object_array *args, uacpi_object **ret
);
#endif // !UACPI_BAREBONES_MODE

View file

@ -0,0 +1,77 @@
#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/acpi.h>
#include <uacpi/io.h>
#ifndef UACPI_BAREBONES_MODE
typedef struct uacpi_mapped_gas {
uacpi_handle mapping;
uacpi_u8 access_bit_width;
uacpi_u8 total_bit_width;
uacpi_u8 bit_offset;
uacpi_status (*read)(
uacpi_handle, uacpi_size offset, uacpi_u8 width, uacpi_u64 *out
);
uacpi_status (*write)(
uacpi_handle, uacpi_size offset, uacpi_u8 width, uacpi_u64 in
);
void (*unmap)(uacpi_handle, uacpi_size);
} uacpi_mapped_gas;
uacpi_status uacpi_map_gas_noalloc(
const struct acpi_gas *gas, uacpi_mapped_gas *out_mapped
);
void uacpi_unmap_gas_nofree(uacpi_mapped_gas *gas);
uacpi_size uacpi_round_up_bits_to_bytes(uacpi_size bit_length);
void uacpi_read_buffer_field(
const uacpi_buffer_field *field, void *dst
);
void uacpi_write_buffer_field(
uacpi_buffer_field *field, const void *src, uacpi_size size
);
uacpi_status uacpi_field_unit_get_read_type(
struct uacpi_field_unit *field, uacpi_object_type *out_type
);
uacpi_status uacpi_field_unit_get_bit_length(
struct uacpi_field_unit *field, uacpi_size *out_length
);
uacpi_status uacpi_read_field_unit(
uacpi_field_unit *field, void *dst, uacpi_size size,
uacpi_data_view *wtr_response
);
uacpi_status uacpi_write_field_unit(
uacpi_field_unit *field, const void *src, uacpi_size size,
uacpi_data_view *wtr_response
);
uacpi_status uacpi_system_memory_read(
void *ptr, uacpi_size offset, uacpi_u8 width, uacpi_u64 *out
);
uacpi_status uacpi_system_memory_write(
void *ptr, uacpi_size offset, uacpi_u8 width, uacpi_u64 in
);
uacpi_status uacpi_system_io_read(
uacpi_handle handle, uacpi_size offset, uacpi_u8 width, uacpi_u64 *out
);
uacpi_status uacpi_system_io_write(
uacpi_handle handle, uacpi_size offset, uacpi_u8 width, uacpi_u64 in
);
uacpi_status uacpi_pci_read(
uacpi_handle handle, uacpi_size offset, uacpi_u8 width, uacpi_u64 *out
);
uacpi_status uacpi_pci_write(
uacpi_handle handle, uacpi_size offset, uacpi_u8 width, uacpi_u64 in
);
#endif // !UACPI_BAREBONES_MODE

View file

@ -0,0 +1,23 @@
#pragma once
#include <uacpi/kernel_api.h>
#include <uacpi/internal/context.h>
#include <uacpi/log.h>
#ifdef UACPI_FORMATTED_LOGGING
#define uacpi_log uacpi_kernel_log
#else
UACPI_PRINTF_DECL(2, 3)
void uacpi_log(uacpi_log_level, const uacpi_char*, ...);
#endif
#define uacpi_log_lvl(lvl, ...) \
do { if (uacpi_should_log(lvl)) uacpi_log(lvl, __VA_ARGS__); } while (0)
#define uacpi_debug(...) uacpi_log_lvl(UACPI_LOG_DEBUG, __VA_ARGS__)
#define uacpi_trace(...) uacpi_log_lvl(UACPI_LOG_TRACE, __VA_ARGS__)
#define uacpi_info(...) uacpi_log_lvl(UACPI_LOG_INFO, __VA_ARGS__)
#define uacpi_warn(...) uacpi_log_lvl(UACPI_LOG_WARN, __VA_ARGS__)
#define uacpi_error(...) uacpi_log_lvl(UACPI_LOG_ERROR, __VA_ARGS__)
void uacpi_logger_initialize(void);

@@ -0,0 +1,82 @@
#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/kernel_api.h>
#ifndef UACPI_BAREBONES_MODE
uacpi_bool uacpi_this_thread_owns_aml_mutex(uacpi_mutex*);
uacpi_status uacpi_acquire_aml_mutex(uacpi_mutex*, uacpi_u16 timeout);
uacpi_status uacpi_release_aml_mutex(uacpi_mutex*);
static inline uacpi_status uacpi_acquire_native_mutex(uacpi_handle mtx)
{
if (uacpi_unlikely(mtx == UACPI_NULL))
return UACPI_STATUS_INVALID_ARGUMENT;
return uacpi_kernel_acquire_mutex(mtx, 0xFFFF);
}
uacpi_status uacpi_acquire_native_mutex_with_timeout(
uacpi_handle mtx, uacpi_u16 timeout
);
static inline uacpi_status uacpi_release_native_mutex(uacpi_handle mtx)
{
if (uacpi_unlikely(mtx == UACPI_NULL))
return UACPI_STATUS_INVALID_ARGUMENT;
uacpi_kernel_release_mutex(mtx);
return UACPI_STATUS_OK;
}
static inline uacpi_status uacpi_acquire_native_mutex_may_be_null(
uacpi_handle mtx
)
{
if (mtx == UACPI_NULL)
return UACPI_STATUS_OK;
return uacpi_kernel_acquire_mutex(mtx, 0xFFFF);
}
static inline uacpi_status uacpi_release_native_mutex_may_be_null(
uacpi_handle mtx
)
{
if (mtx == UACPI_NULL)
return UACPI_STATUS_OK;
uacpi_kernel_release_mutex(mtx);
return UACPI_STATUS_OK;
}
struct uacpi_recursive_lock {
uacpi_handle mutex;
uacpi_size depth;
uacpi_thread_id owner;
};
uacpi_status uacpi_recursive_lock_init(struct uacpi_recursive_lock *lock);
uacpi_status uacpi_recursive_lock_deinit(struct uacpi_recursive_lock *lock);
uacpi_status uacpi_recursive_lock_acquire(struct uacpi_recursive_lock *lock);
uacpi_status uacpi_recursive_lock_release(struct uacpi_recursive_lock *lock);
struct uacpi_rw_lock {
uacpi_handle read_mutex;
uacpi_handle write_mutex;
uacpi_size num_readers;
};
uacpi_status uacpi_rw_lock_init(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_lock_deinit(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_lock_read(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_unlock_read(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_lock_write(struct uacpi_rw_lock *lock);
uacpi_status uacpi_rw_unlock_write(struct uacpi_rw_lock *lock);
#endif // !UACPI_BAREBONES_MODE
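A minimal sketch of the bookkeeping implied by struct uacpi_recursive_lock: re-acquisition by the owning thread only bumps the depth counter, and the underlying mutex is released when the outermost acquire unwinds. Thread ids are plain integers and the "mutex" is a flag here, purely for illustration; this is not the uACPI implementation:

```c
#include <stddef.h>

struct recursive_lock {
    int mutex_held;
    size_t depth;
    unsigned long owner;
};

static void rlock_acquire(struct recursive_lock *l, unsigned long tid)
{
    if (l->mutex_held && l->owner == tid) {
        l->depth++; /* recursive re-entry, no real lock operation */
        return;
    }
    /* A real implementation would block on a kernel mutex here. */
    l->mutex_held = 1;
    l->owner = tid;
    l->depth = 1;
}

static void rlock_release(struct recursive_lock *l)
{
    if (--l->depth != 0)
        return; /* still held by an outer frame */
    l->mutex_held = 0;
}
```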

@@ -0,0 +1,123 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/internal/shareable.h>
#include <uacpi/status.h>
#include <uacpi/namespace.h>
#ifndef UACPI_BAREBONES_MODE
#define UACPI_NAMESPACE_NODE_FLAG_ALIAS (1 << 0)
/*
* This node has been uninstalled and has no object associated with it.
*
* This is used to handle edge cases where an object needs to reference
* a namespace node, where the node might end up going out of scope before
* the object lifetime ends.
*/
#define UACPI_NAMESPACE_NODE_FLAG_DANGLING (1u << 1)
/*
* This node is method-local and must not be exposed via public API as its
* lifetime is limited.
*/
#define UACPI_NAMESPACE_NODE_FLAG_TEMPORARY (1u << 2)
#define UACPI_NAMESPACE_NODE_PREDEFINED (1u << 31)
typedef struct uacpi_namespace_node {
struct uacpi_shareable shareable;
uacpi_object_name name;
uacpi_u32 flags;
uacpi_object *object;
struct uacpi_namespace_node *parent;
struct uacpi_namespace_node *child;
struct uacpi_namespace_node *next;
} uacpi_namespace_node;
uacpi_status uacpi_initialize_namespace(void);
void uacpi_deinitialize_namespace(void);
uacpi_namespace_node *uacpi_namespace_node_alloc(uacpi_object_name name);
void uacpi_namespace_node_unref(uacpi_namespace_node *node);
uacpi_status uacpi_namespace_node_type_unlocked(
const uacpi_namespace_node *node, uacpi_object_type *out_type
);
uacpi_status uacpi_namespace_node_is_one_of_unlocked(
const uacpi_namespace_node *node, uacpi_object_type_bits type_mask,
uacpi_bool *out
);
uacpi_object *uacpi_namespace_node_get_object(const uacpi_namespace_node *node);
uacpi_object *uacpi_namespace_node_get_object_typed(
const uacpi_namespace_node *node, uacpi_object_type_bits type_mask
);
uacpi_status uacpi_namespace_node_acquire_object(
const uacpi_namespace_node *node, uacpi_object **out_obj
);
uacpi_status uacpi_namespace_node_acquire_object_typed(
const uacpi_namespace_node *node, uacpi_object_type_bits,
uacpi_object **out_obj
);
uacpi_status uacpi_namespace_node_reacquire_object(
uacpi_object *obj
);
uacpi_status uacpi_namespace_node_release_object(
uacpi_object *obj
);
uacpi_status uacpi_namespace_node_install(
uacpi_namespace_node *parent, uacpi_namespace_node *node
);
uacpi_status uacpi_namespace_node_uninstall(uacpi_namespace_node *node);
uacpi_namespace_node *uacpi_namespace_node_find_sub_node(
uacpi_namespace_node *parent,
uacpi_object_name name
);
enum uacpi_may_search_above_parent {
UACPI_MAY_SEARCH_ABOVE_PARENT_NO,
UACPI_MAY_SEARCH_ABOVE_PARENT_YES,
};
enum uacpi_permanent_only {
UACPI_PERMANENT_ONLY_NO,
UACPI_PERMANENT_ONLY_YES,
};
enum uacpi_should_lock {
UACPI_SHOULD_LOCK_NO,
UACPI_SHOULD_LOCK_YES,
};
uacpi_status uacpi_namespace_node_resolve(
uacpi_namespace_node *scope, const uacpi_char *path, enum uacpi_should_lock,
enum uacpi_may_search_above_parent, enum uacpi_permanent_only,
uacpi_namespace_node **out_node
);
uacpi_status uacpi_namespace_do_for_each_child(
uacpi_namespace_node *parent, uacpi_iteration_callback descending_callback,
uacpi_iteration_callback ascending_callback,
uacpi_object_type_bits, uacpi_u32 max_depth, enum uacpi_should_lock,
enum uacpi_permanent_only, void *user
);
uacpi_bool uacpi_namespace_node_is_dangling(uacpi_namespace_node *node);
uacpi_bool uacpi_namespace_node_is_temporary(uacpi_namespace_node *node);
uacpi_bool uacpi_namespace_node_is_predefined(uacpi_namespace_node *node);
uacpi_status uacpi_namespace_read_lock(void);
uacpi_status uacpi_namespace_read_unlock(void);
uacpi_status uacpi_namespace_write_lock(void);
uacpi_status uacpi_namespace_write_unlock(void);
#endif // !UACPI_BAREBONES_MODE
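uacpi_namespace_node stores its children as a first-child/next-sibling list (the 'child' and 'next' pointers above). A standalone sketch of walking such a tree, e.g. to count descendants (illustrative struct, not the uACPI one):

```c
#include <stddef.h>

struct node {
    struct node *parent;
    struct node *child; /* first child */
    struct node *next;  /* next sibling */
};

/* Depth-first walk: iterate each sibling chain, recursing into subtrees. */
static size_t count_descendants(const struct node *root)
{
    size_t n = 0;
    const struct node *cur;

    for (cur = root->child; cur != NULL; cur = cur->next)
        n += 1 + count_descendants(cur);
    return n;
}
```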

@@ -0,0 +1,13 @@
#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/notify.h>
#ifndef UACPI_BAREBONES_MODE
uacpi_status uacpi_initialize_notify(void);
void uacpi_deinitialize_notify(void);
uacpi_status uacpi_notify_all(uacpi_namespace_node *node, uacpi_u64 value);
#endif // !UACPI_BAREBONES_MODE

File diff suppressed because it is too large
@@ -0,0 +1,49 @@
#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/opregion.h>
#ifndef UACPI_BAREBONES_MODE
uacpi_status uacpi_initialize_opregion(void);
void uacpi_deinitialize_opregion(void);
void uacpi_trace_region_error(
uacpi_namespace_node *node, uacpi_char *message, uacpi_status ret
);
uacpi_status uacpi_install_address_space_handler_with_flags(
uacpi_namespace_node *device_node, enum uacpi_address_space space,
uacpi_region_handler handler, uacpi_handle handler_context,
uacpi_u16 flags
);
void uacpi_opregion_uninstall_handler(uacpi_namespace_node *node);
uacpi_bool uacpi_address_space_handler_is_default(
uacpi_address_space_handler *handler
);
uacpi_address_space_handlers *uacpi_node_get_address_space_handlers(
uacpi_namespace_node *node
);
uacpi_status uacpi_initialize_opregion_node(uacpi_namespace_node *node);
uacpi_status uacpi_opregion_attach(uacpi_namespace_node *node);
void uacpi_install_default_address_space_handlers(void);
uacpi_bool uacpi_is_buffer_access_address_space(uacpi_address_space space);
union uacpi_opregion_io_data {
uacpi_u64 *integer;
uacpi_data_view buffer;
};
uacpi_status uacpi_dispatch_opregion_io(
uacpi_field_unit *field, uacpi_u32 offset,
uacpi_region_op op, union uacpi_opregion_io_data data
);
#endif // !UACPI_BAREBONES_MODE

@@ -0,0 +1,8 @@
#pragma once
#include <uacpi/osi.h>
uacpi_status uacpi_initialize_interfaces(void);
void uacpi_deinitialize_interfaces(void);
uacpi_status uacpi_handle_osi(const uacpi_char *string, uacpi_bool *out_value);

@@ -0,0 +1,7 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/registers.h>
uacpi_status uacpi_initialize_registers(void);
void uacpi_deinitialize_registers(void);

@@ -0,0 +1,327 @@
#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/resources.h>
#ifndef UACPI_BAREBONES_MODE
enum uacpi_aml_resource {
UACPI_AML_RESOURCE_TYPE_INVALID = 0,
// Small resources
UACPI_AML_RESOURCE_IRQ,
UACPI_AML_RESOURCE_DMA,
UACPI_AML_RESOURCE_START_DEPENDENT,
UACPI_AML_RESOURCE_END_DEPENDENT,
UACPI_AML_RESOURCE_IO,
UACPI_AML_RESOURCE_FIXED_IO,
UACPI_AML_RESOURCE_FIXED_DMA,
UACPI_AML_RESOURCE_VENDOR_TYPE0,
UACPI_AML_RESOURCE_END_TAG,
// Large resources
UACPI_AML_RESOURCE_MEMORY24,
UACPI_AML_RESOURCE_GENERIC_REGISTER,
UACPI_AML_RESOURCE_VENDOR_TYPE1,
UACPI_AML_RESOURCE_MEMORY32,
UACPI_AML_RESOURCE_FIXED_MEMORY32,
UACPI_AML_RESOURCE_ADDRESS32,
UACPI_AML_RESOURCE_ADDRESS16,
UACPI_AML_RESOURCE_EXTENDED_IRQ,
UACPI_AML_RESOURCE_ADDRESS64,
UACPI_AML_RESOURCE_ADDRESS64_EXTENDED,
UACPI_AML_RESOURCE_GPIO_CONNECTION,
UACPI_AML_RESOURCE_PIN_FUNCTION,
UACPI_AML_RESOURCE_SERIAL_CONNECTION,
UACPI_AML_RESOURCE_PIN_CONFIGURATION,
UACPI_AML_RESOURCE_PIN_GROUP,
UACPI_AML_RESOURCE_PIN_GROUP_FUNCTION,
UACPI_AML_RESOURCE_PIN_GROUP_CONFIGURATION,
UACPI_AML_RESOURCE_CLOCK_INPUT,
UACPI_AML_RESOURCE_MAX = UACPI_AML_RESOURCE_CLOCK_INPUT,
};
enum uacpi_aml_resource_size_kind {
UACPI_AML_RESOURCE_SIZE_KIND_FIXED,
UACPI_AML_RESOURCE_SIZE_KIND_FIXED_OR_ONE_LESS,
UACPI_AML_RESOURCE_SIZE_KIND_VARIABLE,
};
enum uacpi_aml_resource_kind {
UACPI_AML_RESOURCE_KIND_SMALL = 0,
UACPI_AML_RESOURCE_KIND_LARGE,
};
enum uacpi_resource_convert_opcode {
UACPI_RESOURCE_CONVERT_OPCODE_END = 0,
/*
* AML -> native:
* Take the mask at 'aml_offset' and convert to an array of uacpi_u8
* at 'native_offset' with the value corresponding to the bit index.
* The array size is written to the byte at offset 'arg2'.
*
* native -> AML:
* Walk each element of the array at 'native_offset' and set the
* corresponding bit in the mask at 'aml_offset' to 1. The array size is
* read from the byte at offset 'arg2'.
*/
UACPI_RESOURCE_CONVERT_OPCODE_PACKED_ARRAY_8,
UACPI_RESOURCE_CONVERT_OPCODE_PACKED_ARRAY_16,
/*
* AML -> native:
* Grab the bits in the byte at 'aml_offset' starting at 'bit_index', and copy
* their value into the byte at 'native_offset'.
*
* native -> AML:
* Grab first N bits at 'native_offset' and copy to 'aml_offset' starting
* at the 'bit_index'.
*
* NOTE:
* These must be contiguous in this order.
*/
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_1,
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_2,
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_3,
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_6 =
UACPI_RESOURCE_CONVERT_OPCODE_BIT_FIELD_3 + 3,
/*
* AML -> native:
* Copy N bytes at 'aml_offset' to 'native_offset'.
*
* native -> AML:
* Copy N bytes at 'native_offset' to 'aml_offset'.
*
* 'imm' is added to the accumulator.
*
* NOTE: These are affected by the current value in the accumulator. If it's
* set to 0 at the time of evaluation, this is executed once, N times
* otherwise. 0xFF is considered a special value, which resets the
* accumulator to 0 unconditionally.
*/
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_8,
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_16,
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_32,
UACPI_RESOURCE_CONVERT_OPCODE_FIELD_64,
/*
* If the length of the current resource is less than 'arg0', then skip
* 'imm' instructions.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SKIP_IF_AML_SIZE_LESS_THAN,
/*
* Skip 'imm' instructions if 'arg0' is not equal to the value in the
* accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SKIP_IF_NOT_EQUALS,
/*
* AML -> native:
* Set the byte at 'native_offset' to 'imm'.
*
* native -> AML:
* Set the byte at 'aml_offset' to 'imm'.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SET_TO_IMM,
/*
* AML -> native:
* Load the AML resource length into the accumulator as well as the field at
* 'native_offset' of width N.
*
* native -> AML:
* Load the resource length into the accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_AML_SIZE_32,
/*
* AML -> native:
* Load the 8 bit field at 'aml_offset' into the accumulator and store at
* 'native_offset'.
*
* native -> AML:
* Load the 8 bit field at 'native_offset' into the accumulator and store
* at 'aml_offset'.
*
* The accumulator is multiplied by 'imm' unless it's set to zero.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_8_STORE,
/*
* Load the N bit field at 'native_offset' into the accumulator
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_8_NATIVE,
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_16_NATIVE,
/*
* Load 'imm' into the accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_IMM,
/*
* AML -> native:
* Load the resource source at offset = aml size + accumulator into the
* uacpi_resource_source struct at 'native_offset'. The string bytes are
* written to the offset at resource size + accumulator. The presence is
* detected by comparing the length of the resource to the offset;
* 'arg2' optionally specifies the offset to the upper bound of the string.
*
* native -> AML:
* Load the resource source from the uacpi_resource_source struct at
* 'native_offset' to aml_size + accumulator. aml_size + accumulator is
* optionally written to 'aml_offset' if it's specified.
*/
UACPI_RESOURCE_CONVERT_OPCODE_RESOURCE_SOURCE,
UACPI_RESOURCE_CONVERT_OPCODE_RESOURCE_SOURCE_NO_INDEX,
UACPI_RESOURCE_CONVERT_OPCODE_RESOURCE_LABEL,
/*
* AML -> native:
* Load the pin table with upper bound specified at 'aml_offset'.
* The table length is calculated by subtracting the upper bound from
* aml_size and is written into the accumulator.
*
* native -> AML:
* Load the pin table length from 'native_offset' and multiply by 2, store
* the result in the accumulator.
*/
UACPI_RESOURCE_CONVERT_OPCODE_LOAD_PIN_TABLE_LENGTH,
/*
* AML -> native:
* Store the accumulator divided by 2 at 'native_offset'.
* The table is copied to the offset at resource size from offset at
* aml_size with the pointer written to the offset at 'arg2'.
*
* native -> AML:
* Read the pin table from resource size offset, write aml_size to
* 'aml_offset'. Copy accumulator bytes to the offset at aml_size.
*/
UACPI_RESOURCE_CONVERT_OPCODE_PIN_TABLE,
/*
* AML -> native:
* Load vendor data with offset stored at 'aml_offset'. The length is
* calculated as aml_size - aml_offset and is written to 'native_offset'.
* The data is written to offset - aml_size with the pointer written back
* to the offset at 'arg2'.
*
* native -> AML:
* Read vendor data from the pointer at offset 'arg2' and size at
* 'native_offset', the offset to write to is calculated as the difference
* between the data pointer and the native resource end pointer.
* offset + aml_size is written to 'aml_offset' and the data is copied
* there as well.
*/
UACPI_RESOURCE_CONVERT_OPCODE_VENDOR_DATA,
/*
* AML -> native:
* Read the serial type from the byte at 'aml_offset' and write it to the
* type field of the uacpi_resource_serial_bus_common structure. Convert
* the serial type to native and set the resource type to it. Copy the
* vendor data to the offset at native size, the length is calculated
* as type_data_length - extra-type-specific-size, and is written to
* vendor_data_length, as well as the accumulator. The data pointer is
* written to vendor_data.
*
* native -> AML:
* Set the serial type at 'aml_offset' to the value stored at
* 'native_offset'. Load the vendor data to the offset at aml_size,
* the length is read from 'vendor_data_length', and the data is copied from
* 'vendor_data'.
*/
UACPI_RESOURCE_CONVERT_OPCODE_SERIAL_TYPE_SPECIFIC,
/*
* Produces an error if encountered in the instruction stream.
* Used to trap invalid/unexpected code flow.
*/
UACPI_RESOURCE_CONVERT_OPCODE_UNREACHABLE,
};
struct uacpi_resource_convert_instruction {
uacpi_u8 code;
union {
uacpi_u8 aml_offset;
uacpi_u8 arg0;
} f1;
union {
uacpi_u8 native_offset;
uacpi_u8 arg1;
} f2;
union {
uacpi_u8 imm;
uacpi_u8 bit_index;
uacpi_u8 arg2;
} f3;
};
struct uacpi_resource_spec {
uacpi_u8 type : 5;
uacpi_u8 native_type : 5;
uacpi_u8 resource_kind : 1;
uacpi_u8 size_kind : 2;
/*
* Size of the resource as appears in the AML byte stream, for variable
* length resources this is the minimum.
*/
uacpi_u16 aml_size;
/*
* Size of the native human-readable uacpi resource, for variable length
* resources this is the minimum. The final length is this field plus the
* result of extra_size_for_native().
*/
uacpi_u16 native_size;
/*
* Calculate the number of extra bytes that must be allocated for a specific
* native resource given the AML counterpart. This being NULL means no extra
* bytes are needed, i.e. the native resource is always the same size.
*/
uacpi_size (*extra_size_for_native)(
const struct uacpi_resource_spec*, void*, uacpi_size
);
/*
* Calculate the number of bytes needed to represent a native resource as
* AML. The 'aml_size' field is used if this is NULL.
*/
uacpi_size (*size_for_aml)(
const struct uacpi_resource_spec*, uacpi_resource*
);
const struct uacpi_resource_convert_instruction *to_native;
const struct uacpi_resource_convert_instruction *to_aml;
};
typedef uacpi_iteration_decision (*uacpi_aml_resource_iteration_callback)(
void*, uacpi_u8 *data, uacpi_u16 resource_size,
const struct uacpi_resource_spec*
);
uacpi_status uacpi_for_each_aml_resource(
uacpi_data_view, uacpi_aml_resource_iteration_callback cb, void *user
);
uacpi_status uacpi_find_aml_resource_end_tag(
uacpi_data_view, uacpi_size *out_offset
);
uacpi_status uacpi_native_resources_from_aml(
uacpi_data_view, uacpi_resources **out_resources
);
uacpi_status uacpi_native_resources_to_aml(
uacpi_resources *resources, uacpi_object **out_template
);
#endif // !UACPI_BAREBONES_MODE

@@ -0,0 +1,21 @@
#pragma once
#include <uacpi/types.h>
struct uacpi_shareable {
uacpi_u32 reference_count;
};
void uacpi_shareable_init(uacpi_handle);
uacpi_bool uacpi_bugged_shareable(uacpi_handle);
void uacpi_make_shareable_bugged(uacpi_handle);
uacpi_u32 uacpi_shareable_ref(uacpi_handle);
uacpi_u32 uacpi_shareable_unref(uacpi_handle);
void uacpi_shareable_unref_and_delete_if_last(
uacpi_handle, void (*do_free)(uacpi_handle)
);
uacpi_u32 uacpi_shareable_refcount(uacpi_handle);
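A sketch of the intrusive reference-counting scheme suggested by struct uacpi_shareable: the counter lives at the start of each object, so one set of helpers can manage any of them. Return-value semantics and the free-callback shape below are assumptions for illustration, not the uACPI contract:

```c
struct shareable {
    unsigned int reference_count;
};

static void shareable_init(struct shareable *s)
{
    s->reference_count = 1; /* creator holds the first reference */
}

static unsigned int shareable_ref(struct shareable *s)
{
    return s->reference_count++;
}

static void shareable_unref_and_delete_if_last(
    struct shareable *s, void (*do_free)(struct shareable *)
)
{
    if (--s->reference_count == 0)
        do_free(s); /* last reference dropped, object dies */
}

/* Test hook: counts how many objects were freed. */
static int freed;
static void count_free(struct shareable *s) { (void)s; freed++; }
```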

@@ -0,0 +1,128 @@
#pragma once
#include <uacpi/internal/types.h>
#include <uacpi/internal/helpers.h>
#include <uacpi/platform/libc.h>
#include <uacpi/platform/config.h>
#include <uacpi/kernel_api.h>
#ifdef UACPI_USE_BUILTIN_STRING
#ifndef uacpi_memcpy
void *uacpi_memcpy(void *dest, const void *src, uacpi_size count);
#endif
#ifndef uacpi_memmove
void *uacpi_memmove(void *dest, const void *src, uacpi_size count);
#endif
#ifndef uacpi_memset
void *uacpi_memset(void *dest, uacpi_i32 ch, uacpi_size count);
#endif
#ifndef uacpi_memcmp
uacpi_i32 uacpi_memcmp(const void *lhs, const void *rhs, uacpi_size count);
#endif
#else
#ifndef uacpi_memcpy
#ifdef UACPI_COMPILER_HAS_BUILTIN_MEMCPY
#define uacpi_memcpy __builtin_memcpy
#else
extern void *memcpy(void *dest, const void *src, uacpi_size count);
#define uacpi_memcpy memcpy
#endif
#endif
#ifndef uacpi_memmove
#ifdef UACPI_COMPILER_HAS_BUILTIN_MEMMOVE
#define uacpi_memmove __builtin_memmove
#else
extern void *memmove(void *dest, const void *src, uacpi_size count);
#define uacpi_memmove memmove
#endif
#endif
#ifndef uacpi_memset
#ifdef UACPI_COMPILER_HAS_BUILTIN_MEMSET
#define uacpi_memset __builtin_memset
#else
extern void *memset(void *dest, int ch, uacpi_size count);
#define uacpi_memset memset
#endif
#endif
#ifndef uacpi_memcmp
#ifdef UACPI_COMPILER_HAS_BUILTIN_MEMCMP
#define uacpi_memcmp __builtin_memcmp
#else
extern int memcmp(const void *lhs, const void *rhs, uacpi_size count);
#define uacpi_memcmp memcmp
#endif
#endif
#endif
#ifndef uacpi_strlen
uacpi_size uacpi_strlen(const uacpi_char *str);
#endif
#ifndef uacpi_strnlen
uacpi_size uacpi_strnlen(const uacpi_char *str, uacpi_size max);
#endif
#ifndef uacpi_strcmp
uacpi_i32 uacpi_strcmp(const uacpi_char *lhs, const uacpi_char *rhs);
#endif
#ifndef uacpi_snprintf
UACPI_PRINTF_DECL(3, 4)
uacpi_i32 uacpi_snprintf(
uacpi_char *buffer, uacpi_size capacity, const uacpi_char *fmt, ...
);
#endif
#ifndef uacpi_vsnprintf
uacpi_i32 uacpi_vsnprintf(
uacpi_char *buffer, uacpi_size capacity, const uacpi_char *fmt,
uacpi_va_list vlist
);
#endif
#ifdef UACPI_SIZED_FREES
#define uacpi_free(mem, size) uacpi_kernel_free(mem, size)
#else
#define uacpi_free(mem, _) uacpi_kernel_free(mem)
#endif
#define uacpi_memzero(ptr, size) uacpi_memset(ptr, 0, size)
#define UACPI_COMPARE(x, y, op) ((x) op (y) ? (x) : (y))
#define UACPI_MIN(x, y) UACPI_COMPARE(x, y, <)
#define UACPI_MAX(x, y) UACPI_COMPARE(x, y, >)
#define UACPI_ALIGN_UP_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define UACPI_ALIGN_UP(x, val, type) UACPI_ALIGN_UP_MASK(x, (type)(val) - 1)
#define UACPI_ALIGN_DOWN_MASK(x, mask) ((x) & ~(mask))
#define UACPI_ALIGN_DOWN(x, val, type) UACPI_ALIGN_DOWN_MASK(x, (type)(val) - 1)
#define UACPI_IS_ALIGNED_MASK(x, mask) (((x) & (mask)) == 0)
#define UACPI_IS_ALIGNED(x, val, type) UACPI_IS_ALIGNED_MASK(x, (type)(val) - 1)
#define UACPI_IS_POWER_OF_TWO(x, type) UACPI_IS_ALIGNED(x, x, type)
void uacpi_memcpy_zerout(void *dst, const void *src,
uacpi_size dst_size, uacpi_size src_size);
// Returns the one-based bit location of LSb or 0
uacpi_u8 uacpi_bit_scan_forward(uacpi_u64);
// Returns the one-based bit location of MSb or 0
uacpi_u8 uacpi_bit_scan_backward(uacpi_u64);
#ifndef UACPI_NATIVE_ALLOC_ZEROED
void *uacpi_builtin_alloc_zeroed(uacpi_size size);
#define uacpi_kernel_alloc_zeroed uacpi_builtin_alloc_zeroed
#endif
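The alignment macros above are plain mask arithmetic, and the one-based bit-scan helpers have straightforward portable equivalents. A self-contained sketch (the loops mirror the documented contract; a real build would typically use compiler builtins instead):

```c
#include <stdint.h>

#define ALIGN_UP_MASK(x, mask)   (((x) + (mask)) & ~(mask))
#define ALIGN_UP(x, val, type)   ALIGN_UP_MASK(x, (type)(val) - 1)
#define ALIGN_DOWN_MASK(x, mask) ((x) & ~(mask))
#define ALIGN_DOWN(x, val, type) ALIGN_DOWN_MASK(x, (type)(val) - 1)

/* One-based index of the least significant set bit, 0 if no bits are set --
 * the contract documented for uacpi_bit_scan_forward. */
static uint8_t bit_scan_forward(uint64_t value)
{
    uint8_t i;

    for (i = 1; value != 0; i++, value >>= 1) {
        if (value & 1)
            return i;
    }
    return 0;
}

/* One-based index of the most significant set bit, 0 if no bits are set. */
static uint8_t bit_scan_backward(uint64_t value)
{
    uint8_t i = 0;

    for (; value != 0; value >>= 1)
        i++;
    return i;
}
```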

@@ -0,0 +1,70 @@
#pragma once
#include <uacpi/internal/context.h>
#include <uacpi/internal/interpreter.h>
#include <uacpi/types.h>
#include <uacpi/status.h>
#include <uacpi/tables.h>
enum uacpi_table_origin {
#ifndef UACPI_BAREBONES_MODE
UACPI_TABLE_ORIGIN_FIRMWARE_VIRTUAL = 0,
#endif
UACPI_TABLE_ORIGIN_FIRMWARE_PHYSICAL = 1,
UACPI_TABLE_ORIGIN_HOST_VIRTUAL,
UACPI_TABLE_ORIGIN_HOST_PHYSICAL,
};
struct uacpi_installed_table {
uacpi_phys_addr phys_addr;
struct acpi_sdt_hdr hdr;
void *ptr;
uacpi_u16 reference_count;
#define UACPI_TABLE_LOADED (1 << 0)
#define UACPI_TABLE_CSUM_VERIFIED (1 << 1)
#define UACPI_TABLE_INVALID (1 << 2)
uacpi_u8 flags;
uacpi_u8 origin;
};
uacpi_status uacpi_initialize_tables(void);
void uacpi_deinitialize_tables(void);
uacpi_bool uacpi_signatures_match(const void *const lhs, const void *const rhs);
uacpi_status uacpi_check_table_signature(void *table, const uacpi_char *expect);
uacpi_status uacpi_verify_table_checksum(void *table, uacpi_size size);
uacpi_status uacpi_table_install_physical_with_origin(
uacpi_phys_addr phys, enum uacpi_table_origin origin, uacpi_table *out_table
);
uacpi_status uacpi_table_install_with_origin(
void *virt, enum uacpi_table_origin origin, uacpi_table *out_table
);
#ifndef UACPI_BAREBONES_MODE
void uacpi_table_mark_as_loaded(uacpi_size idx);
uacpi_status uacpi_table_load_with_cause(
uacpi_size idx, enum uacpi_table_load_cause cause
);
#endif // !UACPI_BAREBONES_MODE
typedef uacpi_iteration_decision (*uacpi_table_iteration_callback)
(void *user, struct uacpi_installed_table *tbl, uacpi_size idx);
uacpi_status uacpi_for_each_table(
uacpi_size base_idx, uacpi_table_iteration_callback, void *user
);
typedef uacpi_bool (*uacpi_table_match_callback)
(struct uacpi_installed_table *tbl);
uacpi_status uacpi_table_match(
uacpi_size base_idx, uacpi_table_match_callback, uacpi_table *out_table
);
#define UACPI_PRI_TBL_HDR "'%.4s' (OEM ID '%.6s' OEM Table ID '%.8s')"
#define UACPI_FMT_TBL_HDR(hdr) (hdr)->signature, (hdr)->oemid, (hdr)->oem_table_id
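The UACPI_PRI_TBL_HDR/UACPI_FMT_TBL_HDR pair splits a format string from its argument list so callers can embed a table-header dump in any message. A standalone demonstration of the idiom (simplified header struct; the real acpi_sdt_hdr has more fields):

```c
#include <stdio.h>
#include <string.h> /* only for the checks below */

struct sdt_hdr {
    char signature[4];   /* not NUL-terminated, hence %.4s below */
    char oemid[6];
    char oem_table_id[8];
};

#define PRI_TBL_HDR "'%.4s' (OEM ID '%.6s' OEM Table ID '%.8s')"
#define FMT_TBL_HDR(hdr) (hdr)->signature, (hdr)->oemid, (hdr)->oem_table_id

/* Formats "found <header>" into buf; the precision specifiers keep printf
 * from reading past the fixed-size, unterminated char arrays. */
static void describe(const struct sdt_hdr *hdr, char *buf, size_t size)
{
    snprintf(buf, size, "found " PRI_TBL_HDR, FMT_TBL_HDR(hdr));
}
```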

@@ -0,0 +1,310 @@
#pragma once
#include <uacpi/status.h>
#include <uacpi/types.h>
#include <uacpi/internal/shareable.h>
#ifndef UACPI_BAREBONES_MODE
// object->flags field if object->type == UACPI_OBJECT_REFERENCE
enum uacpi_reference_kind {
UACPI_REFERENCE_KIND_REFOF = 0,
UACPI_REFERENCE_KIND_LOCAL = 1,
UACPI_REFERENCE_KIND_ARG = 2,
UACPI_REFERENCE_KIND_NAMED = 3,
UACPI_REFERENCE_KIND_PKG_INDEX = 4,
};
// object->flags field if object->type == UACPI_OBJECT_STRING
enum uacpi_string_kind {
UACPI_STRING_KIND_NORMAL = 0,
UACPI_STRING_KIND_PATH,
};
typedef struct uacpi_buffer {
struct uacpi_shareable shareable;
union {
void *data;
uacpi_u8 *byte_data;
uacpi_char *text;
};
uacpi_size size;
} uacpi_buffer;
typedef struct uacpi_package {
struct uacpi_shareable shareable;
uacpi_object **objects;
uacpi_size count;
} uacpi_package;
typedef struct uacpi_buffer_field {
uacpi_buffer *backing;
uacpi_size bit_index;
uacpi_u32 bit_length;
uacpi_bool force_buffer;
} uacpi_buffer_field;
typedef struct uacpi_buffer_index {
uacpi_size idx;
uacpi_buffer *buffer;
} uacpi_buffer_index;
typedef struct uacpi_mutex {
struct uacpi_shareable shareable;
uacpi_handle handle;
uacpi_thread_id owner;
uacpi_u16 depth;
uacpi_u8 sync_level;
} uacpi_mutex;
typedef struct uacpi_event {
struct uacpi_shareable shareable;
uacpi_handle handle;
} uacpi_event;
typedef struct uacpi_address_space_handler {
struct uacpi_shareable shareable;
uacpi_region_handler callback;
uacpi_handle user_context;
struct uacpi_address_space_handler *next;
struct uacpi_operation_region *regions;
uacpi_u16 space;
#define UACPI_ADDRESS_SPACE_HANDLER_DEFAULT (1 << 0)
uacpi_u16 flags;
} uacpi_address_space_handler;
/*
* NOTE: These are common object headers.
* Any changes to these structs must be propagated to all objects.
* ==============================================================
* Common for the following objects:
* - UACPI_OBJECT_OPERATION_REGION
* - UACPI_OBJECT_PROCESSOR
* - UACPI_OBJECT_DEVICE
* - UACPI_OBJECT_THERMAL_ZONE
*/
typedef struct uacpi_address_space_handlers {
struct uacpi_shareable shareable;
uacpi_address_space_handler *head;
} uacpi_address_space_handlers;
typedef struct uacpi_device_notify_handler {
uacpi_notify_handler callback;
uacpi_handle user_context;
struct uacpi_device_notify_handler *next;
} uacpi_device_notify_handler;
/*
* Common for the following objects:
* - UACPI_OBJECT_PROCESSOR
* - UACPI_OBJECT_DEVICE
* - UACPI_OBJECT_THERMAL_ZONE
*/
typedef struct uacpi_handlers {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_head;
uacpi_device_notify_handler *notify_head;
} uacpi_handlers;
// This region has a corresponding _REG method that was successfully executed
#define UACPI_OP_REGION_STATE_REG_EXECUTED (1 << 0)
// This region was successfully attached to a handler
#define UACPI_OP_REGION_STATE_ATTACHED (1 << 1)
typedef struct uacpi_operation_region {
struct uacpi_shareable shareable;
uacpi_address_space_handler *handler;
uacpi_handle user_context;
uacpi_u16 space;
uacpi_u8 state_flags;
uacpi_u64 offset;
uacpi_u64 length;
union {
// If space == TABLE_DATA
uacpi_u64 table_idx;
// If space == PCC
uacpi_u8 *internal_buffer;
};
// Used to link regions sharing the same handler
struct uacpi_operation_region *next;
} uacpi_operation_region;
typedef struct uacpi_device {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_handlers;
uacpi_device_notify_handler *notify_handlers;
} uacpi_device;
typedef struct uacpi_processor {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_handlers;
uacpi_device_notify_handler *notify_handlers;
uacpi_u8 id;
uacpi_u32 block_address;
uacpi_u8 block_length;
} uacpi_processor;
typedef struct uacpi_thermal_zone {
struct uacpi_shareable shareable;
uacpi_address_space_handler *address_space_handlers;
uacpi_device_notify_handler *notify_handlers;
} uacpi_thermal_zone;
typedef struct uacpi_power_resource {
uacpi_u8 system_level;
uacpi_u16 resource_order;
} uacpi_power_resource;
typedef uacpi_status (*uacpi_native_call_handler)(
uacpi_handle ctx, uacpi_object *retval
);
typedef struct uacpi_control_method {
struct uacpi_shareable shareable;
union {
uacpi_u8 *code;
uacpi_native_call_handler handler;
};
uacpi_mutex *mutex;
uacpi_u32 size;
uacpi_u8 sync_level : 4;
uacpi_u8 args : 3;
uacpi_u8 is_serialized : 1;
uacpi_u8 named_objects_persist: 1;
uacpi_u8 native_call : 1;
uacpi_u8 owns_code : 1;
} uacpi_control_method;
typedef enum uacpi_access_type {
UACPI_ACCESS_TYPE_ANY = 0,
UACPI_ACCESS_TYPE_BYTE = 1,
UACPI_ACCESS_TYPE_WORD = 2,
UACPI_ACCESS_TYPE_DWORD = 3,
UACPI_ACCESS_TYPE_QWORD = 4,
UACPI_ACCESS_TYPE_BUFFER = 5,
} uacpi_access_type;
typedef enum uacpi_lock_rule {
UACPI_LOCK_RULE_NO_LOCK = 0,
UACPI_LOCK_RULE_LOCK = 1,
} uacpi_lock_rule;
typedef enum uacpi_update_rule {
UACPI_UPDATE_RULE_PRESERVE = 0,
UACPI_UPDATE_RULE_WRITE_AS_ONES = 1,
UACPI_UPDATE_RULE_WRITE_AS_ZEROES = 2,
} uacpi_update_rule;
typedef enum uacpi_field_unit_kind {
UACPI_FIELD_UNIT_KIND_NORMAL = 0,
UACPI_FIELD_UNIT_KIND_INDEX = 1,
UACPI_FIELD_UNIT_KIND_BANK = 2,
} uacpi_field_unit_kind;
typedef struct uacpi_field_unit {
struct uacpi_shareable shareable;
union {
// UACPI_FIELD_UNIT_KIND_NORMAL
struct {
uacpi_namespace_node *region;
};
// UACPI_FIELD_UNIT_KIND_INDEX
struct {
struct uacpi_field_unit *index;
struct uacpi_field_unit *data;
};
// UACPI_FIELD_UNIT_KIND_BANK
struct {
uacpi_namespace_node *bank_region;
struct uacpi_field_unit *bank_selection;
uacpi_u64 bank_value;
};
};
uacpi_object *connection;
uacpi_u32 byte_offset;
uacpi_u32 bit_length;
uacpi_u32 pin_offset;
uacpi_u8 bit_offset_within_first_byte;
uacpi_u8 access_width_bytes;
uacpi_u8 access_length;
uacpi_u8 attributes : 4;
uacpi_u8 update_rule : 2;
uacpi_u8 kind : 2;
uacpi_u8 lock_rule : 1;
} uacpi_field_unit;
typedef struct uacpi_object {
struct uacpi_shareable shareable;
uacpi_u8 type;
uacpi_u8 flags;
union {
uacpi_u64 integer;
uacpi_package *package;
uacpi_buffer_field buffer_field;
uacpi_object *inner_object;
uacpi_control_method *method;
uacpi_buffer *buffer;
uacpi_mutex *mutex;
uacpi_event *event;
uacpi_buffer_index buffer_index;
uacpi_operation_region *op_region;
uacpi_device *device;
uacpi_processor *processor;
uacpi_thermal_zone *thermal_zone;
uacpi_address_space_handlers *address_space_handlers;
uacpi_handlers *handlers;
uacpi_power_resource power_resource;
uacpi_field_unit *field_unit;
};
} uacpi_object;
uacpi_object *uacpi_create_object(uacpi_object_type type);
enum uacpi_assign_behavior {
UACPI_ASSIGN_BEHAVIOR_DEEP_COPY,
UACPI_ASSIGN_BEHAVIOR_SHALLOW_COPY,
};
uacpi_status uacpi_object_assign(uacpi_object *dst, uacpi_object *src,
enum uacpi_assign_behavior);
void uacpi_object_attach_child(uacpi_object *parent, uacpi_object *child);
void uacpi_object_detach_child(uacpi_object *parent);
struct uacpi_object *uacpi_create_internal_reference(
enum uacpi_reference_kind kind, uacpi_object *child
);
uacpi_object *uacpi_unwrap_internal_reference(uacpi_object *object);
enum uacpi_prealloc_objects {
UACPI_PREALLOC_OBJECTS_NO,
UACPI_PREALLOC_OBJECTS_YES,
};
uacpi_bool uacpi_package_fill(
uacpi_package *pkg, uacpi_size num_elements,
enum uacpi_prealloc_objects prealloc_objects
);
uacpi_mutex *uacpi_create_mutex(void);
void uacpi_mutex_unref(uacpi_mutex*);
void uacpi_method_unref(uacpi_control_method*);
void uacpi_address_space_handler_unref(uacpi_address_space_handler *handler);
void uacpi_buffer_to_view(uacpi_buffer*, uacpi_data_view*);
#endif // !UACPI_BAREBONES_MODE
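A field unit above addresses an arbitrary bit range: 'byte_offset' plus 'bit_offset_within_first_byte' locate the first bit, and 'bit_length' bounds the access. A standalone sketch of extracting such a range from a byte buffer, LSb-first (illustrative only; real field-unit reads go through the opregion handler and honor the access width):

```c
#include <stdint.h>

static uint64_t extract_bits(const uint8_t *buf, uint32_t byte_offset,
                             uint8_t bit_offset, uint32_t bit_length)
{
    uint64_t value = 0;
    uint32_t i;

    for (i = 0; i < bit_length; i++) {
        /* absolute bit position within the buffer, LSb-first per byte */
        uint32_t abs_bit = byte_offset * 8u + bit_offset + i;
        uint64_t bit = (buf[abs_bit / 8] >> (abs_bit % 8)) & 1u;

        value |= bit << i;
    }
    return value;
}
```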

@@ -0,0 +1,45 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/utilities.h>
#include <uacpi/internal/log.h>
#include <uacpi/internal/stdlib.h>
static inline uacpi_phys_addr uacpi_truncate_phys_addr_with_warn(uacpi_u64 large_addr)
{
if (sizeof(uacpi_phys_addr) < 8 && large_addr > 0xFFFFFFFF) {
uacpi_warn(
"truncating a physical address 0x%"UACPI_PRIX64
" outside of address space\n", UACPI_FMT64(large_addr)
);
}
return (uacpi_phys_addr)large_addr;
}
#define UACPI_PTR_TO_VIRT_ADDR(ptr) ((uacpi_virt_addr)(ptr))
#define UACPI_VIRT_ADDR_TO_PTR(vaddr) ((void*)(vaddr))
#define UACPI_PTR_ADD(ptr, value) ((void*)(((uacpi_u8*)(ptr)) + value))
/*
* Target buffer must have a length of at least 8 bytes.
*/
void uacpi_eisa_id_to_string(uacpi_u32, uacpi_char *out_string);
enum uacpi_base {
UACPI_BASE_AUTO,
UACPI_BASE_OCT = 8,
UACPI_BASE_DEC = 10,
UACPI_BASE_HEX = 16,
};
uacpi_status uacpi_string_to_integer(
const uacpi_char *str, uacpi_size max_chars, enum uacpi_base base,
uacpi_u64 *out_value
);
uacpi_bool uacpi_is_valid_nameseg(uacpi_u8 *nameseg);
void uacpi_free_dynamic_string(const uacpi_char *str);
#define UACPI_NANOSECONDS_PER_SEC (1000ull * 1000ull * 1000ull)

include/uacpi/io.h
@@ -0,0 +1,36 @@
#pragma once
#include <uacpi/types.h>
#include <uacpi/acpi.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef UACPI_BAREBONES_MODE
uacpi_status uacpi_gas_read(const struct acpi_gas *gas, uacpi_u64 *value);
uacpi_status uacpi_gas_write(const struct acpi_gas *gas, uacpi_u64 value);
typedef struct uacpi_mapped_gas uacpi_mapped_gas;
/*
* Map a GAS for faster access in the future. The handle returned via
* 'out_mapped' must be freed & unmapped using uacpi_unmap_gas() when
* no longer needed.
*/
uacpi_status uacpi_map_gas(const struct acpi_gas *gas, uacpi_mapped_gas **out_mapped);
void uacpi_unmap_gas(uacpi_mapped_gas*);
/*
* Same as uacpi_gas_{read,write} but operates on a pre-mapped handle for faster
* access and/or ability to use in critical sections/irq contexts.
*/
uacpi_status uacpi_gas_read_mapped(const uacpi_mapped_gas *gas, uacpi_u64 *value);
uacpi_status uacpi_gas_write_mapped(const uacpi_mapped_gas *gas, uacpi_u64 value);
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

include/uacpi/kernel_api.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/platform/arch_helpers.h>
#ifdef __cplusplus
extern "C" {
#endif
// Returns the PHYSICAL address of the RSDP structure via *out_rsdp_address.
uacpi_status uacpi_kernel_get_rsdp(uacpi_phys_addr *out_rsdp_address);
/*
* Map a physical memory range starting at 'addr' with length 'len', and return
* a virtual address that can be used to access it.
*
 * NOTE: 'addr' may be misaligned; in this case the host is expected to round it
* down to the nearest page-aligned boundary and map that, while making
* sure that at least 'len' bytes are still mapped starting at 'addr'. The
* return value preserves the misaligned offset.
*
* Example for uacpi_kernel_map(0x1ABC, 0xF00):
* 1. Round down the 'addr' we got to the nearest page boundary.
* Considering a PAGE_SIZE of 4096 (or 0x1000), 0x1ABC rounded down
* is 0x1000, offset within the page is 0x1ABC - 0x1000 => 0xABC
* 2. Requested 'len' is 0xF00 bytes, but we just rounded the address
* down by 0xABC bytes, so add those on top. 0xF00 + 0xABC => 0x19BC
 * 3. Round up the final 'len' to the nearest PAGE_SIZE boundary: in
 * this case 0x19BC rounds up to 0x2000 (2 pages if PAGE_SIZE is 4096)
* 4. Call the VMM to map the aligned address 0x1000 (from step 1)
* with length 0x2000 (from step 3). Let's assume the returned
* virtual address for the mapping is 0xF000.
* 5. Add the original offset within page 0xABC (from step 1) to the
* resulting virtual address 0xF000 + 0xABC => 0xFABC. Return it
* to uACPI.
*/
void *uacpi_kernel_map(uacpi_phys_addr addr, uacpi_size len);
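The rounding steps from the example above reduce to a few lines of arithmetic. Below is a sketch (assuming 4 KiB pages; `demo_align_mapping` is a hypothetical name) that computes the page-aligned base, the in-page offset to add back to the returned virtual address, and the widened, page-rounded length:

```c
#include <stdint.h>

#define DEMO_PAGE_SIZE 0x1000u

/* Compute what to actually hand to the VMM for a possibly misaligned
 * physical mapping request, following the numbered steps above. */
static void demo_align_mapping(uint64_t addr, uint64_t len,
                               uint64_t *aligned_base, uint64_t *offset,
                               uint64_t *aligned_len)
{
    /* step 1: round 'addr' down to a page boundary, remember the offset */
    *aligned_base = addr & ~(uint64_t)(DEMO_PAGE_SIZE - 1);
    *offset = addr - *aligned_base;

    /* steps 2 & 3: widen 'len' by the offset, round up to a page multiple */
    *aligned_len = (len + *offset + DEMO_PAGE_SIZE - 1)
                   & ~(uint64_t)(DEMO_PAGE_SIZE - 1);
}
```

A host's `uacpi_kernel_map` would then map `*aligned_base` for `*aligned_len` bytes and return the mapping's virtual address plus `*offset` (steps 4 and 5).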
/*
* Unmap a virtual memory range at 'addr' with a length of 'len' bytes.
*
* NOTE: 'addr' may be misaligned, see the comment above 'uacpi_kernel_map'.
* Similar steps to uacpi_kernel_map can be taken to retrieve the
* virtual address originally returned by the VMM for this mapping
* as well as its true length.
*/
void uacpi_kernel_unmap(void *addr, uacpi_size len);
#ifndef UACPI_FORMATTED_LOGGING
void uacpi_kernel_log(uacpi_log_level, const uacpi_char*);
#else
UACPI_PRINTF_DECL(2, 3)
void uacpi_kernel_log(uacpi_log_level, const uacpi_char*, ...);
void uacpi_kernel_vlog(uacpi_log_level, const uacpi_char*, uacpi_va_list);
#endif
/*
* Only the above ^^^ API may be used by early table access and
* UACPI_BAREBONES_MODE.
*/
#ifndef UACPI_BAREBONES_MODE
/*
* Convenience initialization/deinitialization hooks that will be called by
* uACPI automatically when appropriate if compiled-in.
*/
#ifdef UACPI_KERNEL_INITIALIZATION
/*
* This API is invoked for each initialization level so that appropriate parts
* of the host kernel and/or glue code can be initialized at different stages.
*
* uACPI API that triggers calls to uacpi_kernel_initialize and the respective
* 'current_init_lvl' passed to the hook at that stage:
* 1. uacpi_initialize() -> UACPI_INIT_LEVEL_EARLY
* 2. uacpi_namespace_load() -> UACPI_INIT_LEVEL_SUBSYSTEM_INITIALIZED
* 3. (start of) uacpi_namespace_initialize() -> UACPI_INIT_LEVEL_NAMESPACE_LOADED
* 4. (end of) uacpi_namespace_initialize() -> UACPI_INIT_LEVEL_NAMESPACE_INITIALIZED
*/
uacpi_status uacpi_kernel_initialize(uacpi_init_level current_init_lvl);
void uacpi_kernel_deinitialize(void);
#endif
/*
* Open a PCI device at 'address' for reading & writing.
*
 * The device at 'address' might not actually exist on the system, in which
 * case this API is allowed to return UACPI_STATUS_NOT_FOUND. The error is
 * handled gracefully by internally creating a dummy device that always
 * returns 0xFF on reads and treats writes as no-ops. This supports a common
 * pattern in AML that probes for 0xFF reads to detect whether a device exists.
*
* The handle returned via 'out_handle' is used to perform IO on the
* configuration space of the device.
*/
uacpi_status uacpi_kernel_pci_device_open(
uacpi_pci_address address, uacpi_handle *out_handle
);
void uacpi_kernel_pci_device_close(uacpi_handle);
/*
* Read & write the configuration space of a previously open PCI device.
*/
uacpi_status uacpi_kernel_pci_read8(
uacpi_handle device, uacpi_size offset, uacpi_u8 *value
);
uacpi_status uacpi_kernel_pci_read16(
uacpi_handle device, uacpi_size offset, uacpi_u16 *value
);
uacpi_status uacpi_kernel_pci_read32(
uacpi_handle device, uacpi_size offset, uacpi_u32 *value
);
uacpi_status uacpi_kernel_pci_write8(
uacpi_handle device, uacpi_size offset, uacpi_u8 value
);
uacpi_status uacpi_kernel_pci_write16(
uacpi_handle device, uacpi_size offset, uacpi_u16 value
);
uacpi_status uacpi_kernel_pci_write32(
uacpi_handle device, uacpi_size offset, uacpi_u32 value
);
/*
* Map a SystemIO address at [base, base + len) and return a kernel-implemented
* handle that can be used for reading and writing the IO range.
*
* NOTE: The x86 architecture uses the in/out family of instructions
* to access the SystemIO address space.
*/
uacpi_status uacpi_kernel_io_map(
uacpi_io_addr base, uacpi_size len, uacpi_handle *out_handle
);
void uacpi_kernel_io_unmap(uacpi_handle handle);
/*
* Read/Write the IO range mapped via uacpi_kernel_io_map
* at a 0-based 'offset' within the range.
*
* NOTE:
* The x86 architecture uses the in/out family of instructions
* to access the SystemIO address space.
*
* You are NOT allowed to break e.g. a 4-byte access into four 1-byte accesses.
* Hardware ALWAYS expects accesses to be of the exact width.
*/
uacpi_status uacpi_kernel_io_read8(
uacpi_handle, uacpi_size offset, uacpi_u8 *out_value
);
uacpi_status uacpi_kernel_io_read16(
uacpi_handle, uacpi_size offset, uacpi_u16 *out_value
);
uacpi_status uacpi_kernel_io_read32(
uacpi_handle, uacpi_size offset, uacpi_u32 *out_value
);
uacpi_status uacpi_kernel_io_write8(
uacpi_handle, uacpi_size offset, uacpi_u8 in_value
);
uacpi_status uacpi_kernel_io_write16(
uacpi_handle, uacpi_size offset, uacpi_u16 in_value
);
uacpi_status uacpi_kernel_io_write32(
uacpi_handle, uacpi_size offset, uacpi_u32 in_value
);
/*
* Allocate a block of memory of 'size' bytes.
* The contents of the allocated memory are unspecified.
*/
void *uacpi_kernel_alloc(uacpi_size size);
#ifdef UACPI_NATIVE_ALLOC_ZEROED
/*
* Allocate a block of memory of 'size' bytes.
* The returned memory block is expected to be zero-filled.
*/
void *uacpi_kernel_alloc_zeroed(uacpi_size size);
#endif
/*
* Free a previously allocated memory block.
*
* 'mem' might be a NULL pointer. In this case, the call is assumed to be a
* no-op.
*
* An optionally enabled 'size_hint' parameter contains the size of the original
* allocation. Note that in some scenarios this incurs additional cost to
* calculate the object size.
*/
#ifndef UACPI_SIZED_FREES
void uacpi_kernel_free(void *mem);
#else
void uacpi_kernel_free(void *mem, uacpi_size size_hint);
#endif
/*
* Returns the number of nanosecond ticks elapsed since boot,
* strictly monotonic.
*/
uacpi_u64 uacpi_kernel_get_nanoseconds_since_boot(void);
/*
* Spin for N microseconds.
*/
void uacpi_kernel_stall(uacpi_u8 usec);
/*
* Sleep for N milliseconds.
*/
void uacpi_kernel_sleep(uacpi_u64 msec);
/*
* Create/free an opaque non-recursive kernel mutex object.
*/
uacpi_handle uacpi_kernel_create_mutex(void);
void uacpi_kernel_free_mutex(uacpi_handle);
/*
* Create/free an opaque kernel (semaphore-like) event object.
*/
uacpi_handle uacpi_kernel_create_event(void);
void uacpi_kernel_free_event(uacpi_handle);
/*
* Returns a unique identifier of the currently executing thread.
*
* The returned thread id cannot be UACPI_THREAD_ID_NONE.
*/
uacpi_thread_id uacpi_kernel_get_thread_id(void);
/*
 * Disable interrupts and return a kernel-defined value representing the
* "before" state. This value is used in the subsequent call to restore the
* prior state.
*
* Note that this is talking about ALL interrupts on the current CPU, not just
* those installed by uACPI. This is typically achieved by executing the 'cli'
* instruction on x86, 'msr daifset, #3' on aarch64 etc.
*/
uacpi_interrupt_state uacpi_kernel_disable_interrupts(void);
/*
* Restore the state of the interrupt flags to the kernel-defined value provided
* in 'state'.
*/
void uacpi_kernel_restore_interrupts(uacpi_interrupt_state state);
/*
* Try to acquire the mutex with a millisecond timeout.
*
* The timeout value has the following meanings:
* 0x0000 - Attempt to acquire the mutex once, in a non-blocking manner
* 0x0001...0xFFFE - Attempt to acquire the mutex for at least 'timeout'
* milliseconds
* 0xFFFF - Infinite wait, block until the mutex is acquired
*
* The following are possible return values:
* 1. UACPI_STATUS_OK - successful acquire operation
* 2. UACPI_STATUS_TIMEOUT - timeout reached while attempting to acquire (or the
* single attempt to acquire was not successful for
* calls with timeout=0)
* 3. Any other value - signifies a host internal error and is treated as such
*/
uacpi_status uacpi_kernel_acquire_mutex(uacpi_handle, uacpi_u16);
void uacpi_kernel_release_mutex(uacpi_handle);
/*
* Try to wait for an event (counter > 0) with a millisecond timeout.
* A timeout value of 0xFFFF implies infinite wait.
*
* The internal counter is decremented by 1 if wait was successful.
*
* A successful wait is indicated by returning UACPI_TRUE.
*/
uacpi_bool uacpi_kernel_wait_for_event(uacpi_handle, uacpi_u16);
/*
* Signal the event object by incrementing its internal counter by 1.
*
* This function may be used in interrupt contexts.
*/
void uacpi_kernel_signal_event(uacpi_handle);
/*
* Reset the event counter to 0.
*/
void uacpi_kernel_reset_event(uacpi_handle);
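The event contract above is that of a counting semaphore: signal increments, a successful wait decrements, reset zeroes. A deliberately single-threaded sketch of those semantics (hypothetical `demo_` names; a real host would guard the counter with a lock and block on a condition variable or semaphore for nonzero timeouts, only the timeout=0 non-blocking path is shown):

```c
#include <stdint.h>

typedef struct { uint32_t counter; } demo_event;

static void demo_event_signal(demo_event *e) { e->counter++; }
static void demo_event_reset(demo_event *e)  { e->counter = 0; }

/* Equivalent of wait with timeout=0: succeed only if counter > 0. */
static int demo_event_wait_nonblocking(demo_event *e)
{
    if (e->counter == 0)
        return 0; /* would have to block; report UACPI_FALSE */

    e->counter--; /* consume one signal */
    return 1;     /* UACPI_TRUE */
}
```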
/*
* Handle a firmware request.
*
 * Currently this is either the Breakpoint or the Fatal operator.
*/
uacpi_status uacpi_kernel_handle_firmware_request(uacpi_firmware_request*);
/*
* Install an interrupt handler at 'irq', 'ctx' is passed to the provided
* handler for every invocation.
*
* 'out_irq_handle' is set to a kernel-implemented value that can be used to
 * refer to this handler from other APIs.
*/
uacpi_status uacpi_kernel_install_interrupt_handler(
uacpi_u32 irq, uacpi_interrupt_handler, uacpi_handle ctx,
uacpi_handle *out_irq_handle
);
/*
* Uninstall an interrupt handler. 'irq_handle' is the value returned via
* 'out_irq_handle' during installation.
*/
uacpi_status uacpi_kernel_uninstall_interrupt_handler(
uacpi_interrupt_handler, uacpi_handle irq_handle
);
/*
* Create/free a kernel spinlock object.
*
* Unlike other types of locks, spinlocks may be used in interrupt contexts.
*/
uacpi_handle uacpi_kernel_create_spinlock(void);
void uacpi_kernel_free_spinlock(uacpi_handle);
/*
* Lock/unlock helpers for spinlocks.
*
 * These are expected to disable interrupts, returning the previous state of
 * CPU flags, which can be used to re-enable interrupts on unlock if they were
 * enabled before.
 *
 * Note that lock is infallible.
*/
uacpi_cpu_flags uacpi_kernel_lock_spinlock(uacpi_handle);
void uacpi_kernel_unlock_spinlock(uacpi_handle, uacpi_cpu_flags);
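A userspace sketch of the lock/unlock pair using a C11 `atomic_flag` test-and-set loop (hypothetical `demo_` names). In a real kernel, lock would first call the interrupt-disable hook and return the saved interrupt state as the "cpu flags" value; here a dummy 0 stands in for that state:

```c
#include <stdatomic.h>

typedef struct { atomic_flag locked; } demo_spinlock;

static unsigned long demo_lock_spinlock(demo_spinlock *lock)
{
    /* stand-in for uacpi_kernel_disable_interrupts() */
    unsigned long saved_flags = 0;

    /* spin until we observe the flag clear and set it atomically */
    while (atomic_flag_test_and_set_explicit(&lock->locked,
                                             memory_order_acquire))
        ; /* spin */

    return saved_flags;
}

static void demo_unlock_spinlock(demo_spinlock *lock,
                                 unsigned long saved_flags)
{
    (void)saved_flags; /* would go to restore_interrupts() */
    atomic_flag_clear_explicit(&lock->locked, memory_order_release);
}
```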
typedef enum uacpi_work_type {
/*
* Schedule a GPE handler method for execution.
* This should be scheduled to run on CPU0 to avoid potential SMI-related
* firmware bugs.
*/
UACPI_WORK_GPE_EXECUTION,
/*
* Schedule a Notify(device) firmware request for execution.
* This can run on any CPU.
*/
UACPI_WORK_NOTIFICATION,
} uacpi_work_type;
typedef void (*uacpi_work_handler)(uacpi_handle);
/*
* Schedules deferred work for execution.
* Might be invoked from an interrupt context.
*/
uacpi_status uacpi_kernel_schedule_work(
uacpi_work_type, uacpi_work_handler, uacpi_handle ctx
);
/*
* Waits for two types of work to finish:
* 1. All in-flight interrupts installed via uacpi_kernel_install_interrupt_handler
* 2. All work scheduled via uacpi_kernel_schedule_work
*
* Note that the waits must be done in this order specifically.
*/
uacpi_status uacpi_kernel_wait_for_work_completion(void);
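A deliberately simplistic, single-threaded sketch of the schedule/wait pair (hypothetical `demo_` names): items are queued and only executed when a flush is requested, which trivially satisfies "wait for all scheduled work". A real host would instead hand items to worker threads (pinning GPE work to CPU0 per the comment above) and block here until they drain:

```c
#include <stddef.h>

typedef void (*demo_work_handler)(void *ctx);

#define DEMO_MAX_WORK 16

static struct { demo_work_handler fn; void *ctx; } demo_queue[DEMO_MAX_WORK];
static size_t demo_queue_len;

static int demo_schedule_work(demo_work_handler fn, void *ctx)
{
    if (demo_queue_len == DEMO_MAX_WORK)
        return 1; /* out of space; a real host would grow or block */

    demo_queue[demo_queue_len].fn = fn;
    demo_queue[demo_queue_len].ctx = ctx;
    demo_queue_len++;
    return 0;
}

static void demo_wait_for_work_completion(void)
{
    size_t i;

    /* run every pending item in scheduling order, then empty the queue */
    for (i = 0; i < demo_queue_len; ++i)
        demo_queue[i].fn(demo_queue[i].ctx);
    demo_queue_len = 0;
}

/* tiny handler used by the usage example below */
static void demo_increment(void *ctx) { ++*(int *)ctx; }
```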
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

include/uacpi/log.h
#pragma once
#ifdef __cplusplus
extern "C" {
#endif
typedef enum uacpi_log_level {
/*
* Super verbose logging, every op & uop being processed is logged.
* Mostly useful for tracking down hangs/lockups.
*/
UACPI_LOG_DEBUG = 5,
/*
* A little verbose, every operation region access is traced with a bit of
* extra information on top.
*/
UACPI_LOG_TRACE = 4,
/*
* Only logs the bare minimum information about state changes and/or
* initialization progress.
*/
UACPI_LOG_INFO = 3,
/*
* Logs recoverable errors and/or non-important aborts.
*/
UACPI_LOG_WARN = 2,
/*
* Logs only critical errors that might affect the ability to initialize or
* prevent stable runtime.
*/
UACPI_LOG_ERROR = 1,
} uacpi_log_level;
#ifdef __cplusplus
}
#endif

include/uacpi/namespace.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef UACPI_BAREBONES_MODE
typedef struct uacpi_namespace_node uacpi_namespace_node;
uacpi_namespace_node *uacpi_namespace_root(void);
typedef enum uacpi_predefined_namespace {
UACPI_PREDEFINED_NAMESPACE_ROOT = 0,
UACPI_PREDEFINED_NAMESPACE_GPE,
UACPI_PREDEFINED_NAMESPACE_PR,
UACPI_PREDEFINED_NAMESPACE_SB,
UACPI_PREDEFINED_NAMESPACE_SI,
UACPI_PREDEFINED_NAMESPACE_TZ,
UACPI_PREDEFINED_NAMESPACE_GL,
UACPI_PREDEFINED_NAMESPACE_OS,
UACPI_PREDEFINED_NAMESPACE_OSI,
UACPI_PREDEFINED_NAMESPACE_REV,
UACPI_PREDEFINED_NAMESPACE_MAX = UACPI_PREDEFINED_NAMESPACE_REV,
} uacpi_predefined_namespace;
uacpi_namespace_node *uacpi_namespace_get_predefined(
uacpi_predefined_namespace
);
/*
* Returns UACPI_TRUE if the provided 'node' is an alias.
*/
uacpi_bool uacpi_namespace_node_is_alias(uacpi_namespace_node *node);
uacpi_object_name uacpi_namespace_node_name(const uacpi_namespace_node *node);
/*
* Returns the type of object stored at the namespace node.
*
 * NOTE: due to the existence of the CopyObject operator in AML, the
* return value of this function is subject to TOCTOU bugs.
*/
uacpi_status uacpi_namespace_node_type(
const uacpi_namespace_node *node, uacpi_object_type *out_type
);
/*
* Returns UACPI_TRUE via 'out' if the type of the object stored at the
* namespace node matches the provided value, UACPI_FALSE otherwise.
*
 * NOTE: due to the existence of the CopyObject operator in AML, the
* return value of this function is subject to TOCTOU bugs.
*/
uacpi_status uacpi_namespace_node_is(
const uacpi_namespace_node *node, uacpi_object_type type, uacpi_bool *out
);
/*
* Returns UACPI_TRUE via 'out' if the type of the object stored at the
* namespace node matches any of the type bits in the provided value,
* UACPI_FALSE otherwise.
*
 * NOTE: due to the existence of the CopyObject operator in AML, the
* return value of this function is subject to TOCTOU bugs.
*/
uacpi_status uacpi_namespace_node_is_one_of(
const uacpi_namespace_node *node, uacpi_object_type_bits type_mask,
uacpi_bool *out
);
uacpi_size uacpi_namespace_node_depth(const uacpi_namespace_node *node);
uacpi_namespace_node *uacpi_namespace_node_parent(
uacpi_namespace_node *node
);
uacpi_status uacpi_namespace_node_find(
uacpi_namespace_node *parent,
const uacpi_char *path,
uacpi_namespace_node **out_node
);
/*
* Same as uacpi_namespace_node_find, except the search recurses upwards when
* the namepath consists of only a single nameseg. Usually, this behavior is
 * only desired when resolving a namepath specified in an AML-provided object,
* such as a package element.
*/
uacpi_status uacpi_namespace_node_resolve_from_aml_namepath(
uacpi_namespace_node *scope,
const uacpi_char *path,
uacpi_namespace_node **out_node
);
typedef uacpi_iteration_decision (*uacpi_iteration_callback) (
void *user, uacpi_namespace_node *node, uacpi_u32 node_depth
);
#define UACPI_MAX_DEPTH_ANY 0xFFFFFFFF
/*
* Depth-first iterate the namespace starting at the first child of 'parent'.
*/
uacpi_status uacpi_namespace_for_each_child_simple(
uacpi_namespace_node *parent, uacpi_iteration_callback callback, void *user
);
/*
* Depth-first iterate the namespace starting at the first child of 'parent'.
*
* 'descending_callback' is invoked the first time a node is visited when
* walking down. 'ascending_callback' is invoked the second time a node is
* visited after we reach the leaf node without children and start walking up.
* Either of the callbacks may be NULL, but not both at the same time.
*
* Only nodes matching 'type_mask' are passed to the callbacks.
*
* 'max_depth' is used to limit the maximum reachable depth from 'parent',
* where 1 is only direct children of 'parent', 2 is children of first-level
* children etc. Use UACPI_MAX_DEPTH_ANY or -1 to specify infinite depth.
*/
uacpi_status uacpi_namespace_for_each_child(
uacpi_namespace_node *parent, uacpi_iteration_callback descending_callback,
uacpi_iteration_callback ascending_callback,
uacpi_object_type_bits type_mask, uacpi_u32 max_depth, void *user
);
/*
* Retrieve the next peer namespace node of '*iter', or, if '*iter' is
* UACPI_NULL, retrieve the first child of 'parent' instead. The resulting
* namespace node is stored at '*iter'.
*
* This API can be used to implement an "iterator" version of the
* for_each_child helpers.
*
* Example usage:
* void recurse(uacpi_namespace_node *parent) {
* uacpi_namespace_node *iter = UACPI_NULL;
*
* while (uacpi_namespace_node_next(parent, &iter) == UACPI_STATUS_OK) {
* // Do something with iter...
* descending_callback(iter);
*
* // Recurse down to walk over the children of iter
* recurse(iter);
* }
* }
*
* Prefer the for_each_child family of helpers if possible instead of this API
* as they avoid recursion and/or the need to use dynamic data structures
* entirely.
*/
uacpi_status uacpi_namespace_node_next(
uacpi_namespace_node *parent, uacpi_namespace_node **iter
);
/*
* Retrieve the next peer namespace node of '*iter', or, if '*iter' is
* UACPI_NULL, retrieve the first child of 'parent' instead. The resulting
 * namespace node is stored at '*iter'. Only nodes whose type matches one
* of the types set in 'type_mask' are returned.
*
* See comment above 'uacpi_namespace_node_next' for usage examples.
*
* Prefer the for_each_child family of helpers if possible instead of this API
* as they avoid recursion and/or the need to use dynamic data structures
* entirely.
*/
uacpi_status uacpi_namespace_node_next_typed(
uacpi_namespace_node *parent, uacpi_namespace_node **iter,
uacpi_object_type_bits type_mask
);
const uacpi_char *uacpi_namespace_node_generate_absolute_path(
const uacpi_namespace_node *node
);
void uacpi_free_absolute_path(const uacpi_char *path);
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

include/uacpi/notify.h
#pragma once
#include <uacpi/types.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef UACPI_BAREBONES_MODE
/*
* Install a Notify() handler to a device node.
* A handler installed to the root node will receive all notifications, even if
* a device already has a dedicated Notify handler.
* 'handler_context' is passed to the handler on every invocation.
*/
uacpi_status uacpi_install_notify_handler(
uacpi_namespace_node *node, uacpi_notify_handler handler,
uacpi_handle handler_context
);
uacpi_status uacpi_uninstall_notify_handler(
uacpi_namespace_node *node, uacpi_notify_handler handler
);
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

include/uacpi/opregion.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef UACPI_BAREBONES_MODE
/*
* Install an address space handler to a device node.
* The handler is recursively connected to all of the operation regions of
* type 'space' underneath 'device_node'. Note that this recursion stops as
* soon as another device node that already has an address space handler of
* this type installed is encountered.
*/
uacpi_status uacpi_install_address_space_handler(
uacpi_namespace_node *device_node, enum uacpi_address_space space,
uacpi_region_handler handler, uacpi_handle handler_context
);
/*
* Uninstall the handler of type 'space' from a given device node.
*/
uacpi_status uacpi_uninstall_address_space_handler(
uacpi_namespace_node *device_node,
enum uacpi_address_space space
);
/*
* Execute _REG(space, ACPI_REG_CONNECT) for all of the opregions with this
* address space underneath this device. This should only be called manually
* if you want to register an early handler that must be available before the
* call to uacpi_namespace_initialize().
*/
uacpi_status uacpi_reg_all_opregions(
uacpi_namespace_node *device_node,
enum uacpi_address_space space
);
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

include/uacpi/osi.h
#pragma once
#include <uacpi/types.h>
#include <uacpi/status.h>
#ifdef __cplusplus
extern "C" {
#endif
#ifndef UACPI_BAREBONES_MODE
typedef enum uacpi_vendor_interface {
UACPI_VENDOR_INTERFACE_NONE = 0,
UACPI_VENDOR_INTERFACE_WINDOWS_2000,
UACPI_VENDOR_INTERFACE_WINDOWS_XP,
UACPI_VENDOR_INTERFACE_WINDOWS_XP_SP1,
UACPI_VENDOR_INTERFACE_WINDOWS_SERVER_2003,
UACPI_VENDOR_INTERFACE_WINDOWS_XP_SP2,
UACPI_VENDOR_INTERFACE_WINDOWS_SERVER_2003_SP1,
UACPI_VENDOR_INTERFACE_WINDOWS_VISTA,
UACPI_VENDOR_INTERFACE_WINDOWS_SERVER_2008,
UACPI_VENDOR_INTERFACE_WINDOWS_VISTA_SP1,
UACPI_VENDOR_INTERFACE_WINDOWS_VISTA_SP2,
UACPI_VENDOR_INTERFACE_WINDOWS_7,
UACPI_VENDOR_INTERFACE_WINDOWS_8,
UACPI_VENDOR_INTERFACE_WINDOWS_8_1,
UACPI_VENDOR_INTERFACE_WINDOWS_10,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS1,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS2,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS3,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS4,
UACPI_VENDOR_INTERFACE_WINDOWS_10_RS5,
UACPI_VENDOR_INTERFACE_WINDOWS_10_19H1,
UACPI_VENDOR_INTERFACE_WINDOWS_10_20H1,
UACPI_VENDOR_INTERFACE_WINDOWS_11,
UACPI_VENDOR_INTERFACE_WINDOWS_11_22H2,
} uacpi_vendor_interface;
/*
* Returns the "latest" AML-queried _OSI vendor interface.
*
* E.g. for the following AML code:
* _OSI("Windows 2021")
* _OSI("Windows 2000")
*
* This function will return UACPI_VENDOR_INTERFACE_WINDOWS_11, since this is
* the latest version of the interface the code queried, even though the
* "Windows 2000" query came after "Windows 2021".
*/
uacpi_vendor_interface uacpi_latest_queried_vendor_interface(void);
typedef enum uacpi_interface_kind {
UACPI_INTERFACE_KIND_VENDOR = (1 << 0),
UACPI_INTERFACE_KIND_FEATURE = (1 << 1),
UACPI_INTERFACE_KIND_ALL = UACPI_INTERFACE_KIND_VENDOR |
UACPI_INTERFACE_KIND_FEATURE,
} uacpi_interface_kind;
/*
* Install or uninstall an interface.
*
* The interface kind is used for matching during interface enumeration in
* uacpi_bulk_configure_interfaces().
*
* After installing an interface, all _OSI queries report it as supported.
*/
uacpi_status uacpi_install_interface(
const uacpi_char *name, uacpi_interface_kind
);
uacpi_status uacpi_uninstall_interface(const uacpi_char *name);
typedef enum uacpi_host_interface {
UACPI_HOST_INTERFACE_MODULE_DEVICE = 1,
UACPI_HOST_INTERFACE_PROCESSOR_DEVICE,
UACPI_HOST_INTERFACE_3_0_THERMAL_MODEL,
UACPI_HOST_INTERFACE_3_0_SCP_EXTENSIONS,
UACPI_HOST_INTERFACE_PROCESSOR_AGGREGATOR_DEVICE,
} uacpi_host_interface;
/*
* Same as install/uninstall interface, but comes with an enum of known
* interfaces defined by the ACPI specification. These are disabled by default
 * as they depend on host kernel support.
*/
uacpi_status uacpi_enable_host_interface(uacpi_host_interface);
uacpi_status uacpi_disable_host_interface(uacpi_host_interface);
typedef uacpi_bool (*uacpi_interface_handler)
(const uacpi_char *name, uacpi_bool supported);
/*
* Set a custom interface query (_OSI) handler.
*
* This callback will be invoked for each _OSI query with the value
* passed in the _OSI, as well as whether the interface was detected as
* supported. The callback is able to override the return value dynamically
* or leave it untouched if desired (e.g. if it simply wants to log something or
* do internal bookkeeping of some kind).
*/
uacpi_status uacpi_set_interface_query_handler(uacpi_interface_handler);
typedef enum uacpi_interface_action {
UACPI_INTERFACE_ACTION_DISABLE = 0,
UACPI_INTERFACE_ACTION_ENABLE,
} uacpi_interface_action;
/*
* Bulk interface configuration, used to disable or enable all interfaces that
* match 'kind'.
*
* This is generally only needed to work around buggy hardware, for example if
* requested from the kernel command line.
*
* By default, all vendor strings (like "Windows 2000") are enabled, and all
* host features (like "3.0 Thermal Model") are disabled.
*/
uacpi_status uacpi_bulk_configure_interfaces(
uacpi_interface_action action, uacpi_interface_kind kind
);
#endif // !UACPI_BAREBONES_MODE
#ifdef __cplusplus
}
#endif

#pragma once
#ifdef UACPI_OVERRIDE_ARCH_HELPERS
#include "uacpi_arch_helpers.h"
#else
#include <uacpi/platform/atomic.h>
#ifndef UACPI_ARCH_FLUSH_CPU_CACHE
#define UACPI_ARCH_FLUSH_CPU_CACHE() do {} while (0)
#endif
typedef unsigned long uacpi_cpu_flags;
typedef unsigned long uacpi_interrupt_state;
typedef void *uacpi_thread_id;
/*
 * Replace as needed depending on your platform's way of representing thread
 * ids. uACPI also offers helpers like uacpi_atomic_{load,store}{8,16,32,64,ptr}
 * (or you can provide your own).
*/
#ifndef UACPI_ATOMIC_LOAD_THREAD_ID
#define UACPI_ATOMIC_LOAD_THREAD_ID(ptr) ((uacpi_thread_id)uacpi_atomic_load_ptr(ptr))
#endif
#ifndef UACPI_ATOMIC_STORE_THREAD_ID
#define UACPI_ATOMIC_STORE_THREAD_ID(ptr, value) uacpi_atomic_store_ptr(ptr, value)
#endif
/*
 * A sentinel value that the kernel promises to NEVER return from
 * uacpi_kernel_get_thread_id, as thread-id tracking would break otherwise.
*/
#ifndef UACPI_THREAD_ID_NONE
#define UACPI_THREAD_ID_NONE ((uacpi_thread_id)-1)
#endif
#endif

#pragma once
/*
* Most of this header is a giant workaround for MSVC to make atomics into a
* somewhat unified interface with how GCC and Clang handle them.
*
* We don't use the absolutely disgusting C11 stdatomic.h header because it is
* unable to operate on non _Atomic types, which enforce implicit sequential
* consistency and alter the behavior of the standard C binary/unary operators.
*
* The strictness of the atomic helpers defined here is assumed to be at least
* acquire for loads and release for stores. Cmpxchg uses the standard acq/rel
* for success, acq for failure, and is assumed to be strong.
*/
#ifdef UACPI_OVERRIDE_ATOMIC
#include "uacpi_atomic.h"
#else
#include <uacpi/platform/compiler.h>
#if defined(_MSC_VER) && !defined(__clang__)
#include <intrin.h>
// mimic __atomic_compare_exchange_n that doesn't exist on MSVC
#define UACPI_MAKE_MSVC_CMPXCHG(width, type, suffix) \
static inline int uacpi_do_atomic_cmpxchg##width( \
type volatile *ptr, type volatile *expected, type desired \
) \
{ \
type current; \
\
current = _InterlockedCompareExchange##suffix(ptr, desired, *expected); \
if (current != *expected) { \
*expected = current; \
return 0; \
} \
return 1; \
}
#define UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, width, type) \
uacpi_do_atomic_cmpxchg##width( \
(type volatile*)ptr, (type volatile*)expected, desired \
)
#define UACPI_MSVC_ATOMIC_STORE(ptr, value, type, width) \
_InterlockedExchange##width((type volatile*)(ptr), (type)(value))
#define UACPI_MSVC_ATOMIC_LOAD(ptr, type, width) \
_InterlockedOr##width((type volatile*)(ptr), 0)
#define UACPI_MSVC_ATOMIC_INC(ptr, type, width) \
_InterlockedIncrement##width((type volatile*)(ptr))
#define UACPI_MSVC_ATOMIC_DEC(ptr, type, width) \
_InterlockedDecrement##width((type volatile*)(ptr))
UACPI_MAKE_MSVC_CMPXCHG(64, __int64, 64)
UACPI_MAKE_MSVC_CMPXCHG(32, long,)
UACPI_MAKE_MSVC_CMPXCHG(16, short, 16)
#define uacpi_atomic_cmpxchg16(ptr, expected, desired) \
UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, 16, short)
#define uacpi_atomic_cmpxchg32(ptr, expected, desired) \
UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, 32, long)
#define uacpi_atomic_cmpxchg64(ptr, expected, desired) \
UACPI_MSVC_CMPXCHG_INVOKE(ptr, expected, desired, 64, __int64)
#define uacpi_atomic_load8(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, char, 8)
#define uacpi_atomic_load16(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, short, 16)
#define uacpi_atomic_load32(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, long,)
#define uacpi_atomic_load64(ptr) UACPI_MSVC_ATOMIC_LOAD(ptr, __int64, 64)
#define uacpi_atomic_store8(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, char, 8)
#define uacpi_atomic_store16(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, short, 16)
#define uacpi_atomic_store32(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, long,)
#define uacpi_atomic_store64(ptr, value) UACPI_MSVC_ATOMIC_STORE(ptr, value, __int64, 64)
#define uacpi_atomic_inc16(ptr) UACPI_MSVC_ATOMIC_INC(ptr, short, 16)
#define uacpi_atomic_inc32(ptr) UACPI_MSVC_ATOMIC_INC(ptr, long,)
#define uacpi_atomic_inc64(ptr) UACPI_MSVC_ATOMIC_INC(ptr, __int64, 64)
#define uacpi_atomic_dec16(ptr) UACPI_MSVC_ATOMIC_DEC(ptr, short, 16)
#define uacpi_atomic_dec32(ptr) UACPI_MSVC_ATOMIC_DEC(ptr, long,)
#define uacpi_atomic_dec64(ptr) UACPI_MSVC_ATOMIC_DEC(ptr, __int64, 64)
#elif defined(__WATCOMC__)
#include <stdint.h>
static int uacpi_do_atomic_cmpxchg16(volatile uint16_t *ptr, volatile uint16_t *expected, uint16_t desired);
#pragma aux uacpi_do_atomic_cmpxchg16 = \
".486" \
"mov ax, [esi]" \
"lock cmpxchg [edi], bx" \
"mov [esi], ax" \
"setz al" \
"movzx eax, al" \
parm [ edi ] [ esi ] [ ebx ] \
value [ eax ]
static int uacpi_do_atomic_cmpxchg32(volatile uint32_t *ptr, volatile uint32_t *expected, uint32_t desired);
#pragma aux uacpi_do_atomic_cmpxchg32 = \
".486" \
"mov eax, [esi]" \
"lock cmpxchg [edi], ebx" \
"mov [esi], eax" \
"setz al" \
"movzx eax, al" \
parm [ edi ] [ esi ] [ ebx ] \
value [ eax ]
static int uacpi_do_atomic_cmpxchg64_asm(volatile uint64_t *ptr, volatile uint64_t *expected, uint32_t low, uint32_t high);
#pragma aux uacpi_do_atomic_cmpxchg64_asm = \
".586" \
"mov eax, [esi]" \
"mov edx, [esi + 4]" \
"lock cmpxchg8b [edi]" \
"mov [esi], eax" \
"mov [esi + 4], edx" \
"setz al" \
"movzx eax, al" \
modify [ edx ] \
parm [ edi ] [ esi ] [ ebx ] [ ecx ] \
value [ eax ]
static inline int uacpi_do_atomic_cmpxchg64(volatile uint64_t *ptr, volatile uint64_t *expected, uint64_t desired) {
    return uacpi_do_atomic_cmpxchg64_asm(ptr, expected, (uint32_t)desired, (uint32_t)(desired >> 32));
}
#define uacpi_atomic_cmpxchg16(ptr, expected, desired) \
uacpi_do_atomic_cmpxchg16((volatile uint16_t*)ptr, (volatile uint16_t*)expected, (uint16_t)desired)
#define uacpi_atomic_cmpxchg32(ptr, expected, desired) \
uacpi_do_atomic_cmpxchg32((volatile uint32_t*)ptr, (volatile uint32_t*)expected, (uint32_t)desired)
#define uacpi_atomic_cmpxchg64(ptr, expected, desired) \
uacpi_do_atomic_cmpxchg64((volatile uint64_t*)ptr, (volatile uint64_t*)expected, (uint64_t)desired)
static uint8_t uacpi_do_atomic_load8(volatile uint8_t *ptr);
#pragma aux uacpi_do_atomic_load8 = \
"mov al, [esi]" \
parm [ esi ] \
value [ al ]
static uint16_t uacpi_do_atomic_load16(volatile uint16_t *ptr);
#pragma aux uacpi_do_atomic_load16 = \
"mov ax, [esi]" \
parm [ esi ] \
value [ ax ]
static uint32_t uacpi_do_atomic_load32(volatile uint32_t *ptr);
#pragma aux uacpi_do_atomic_load32 = \
"mov eax, [esi]" \
parm [ esi ] \
value [ eax ]
static void uacpi_do_atomic_load64_asm(volatile uint64_t *ptr, uint64_t *out);
#pragma aux uacpi_do_atomic_load64_asm = \
".586" \
"xor eax, eax" \
"xor ebx, ebx" \
"xor ecx, ecx" \
"xor edx, edx" \
"lock cmpxchg8b [esi]" \
"mov [edi], eax" \
"mov [edi + 4], edx" \
modify [ eax ebx ecx edx ] \
parm [ esi ] [ edi ]
static inline uint64_t uacpi_do_atomic_load64(volatile uint64_t *ptr) {
uint64_t value;
uacpi_do_atomic_load64_asm(ptr, &value);
return value;
}
#define uacpi_atomic_load8(ptr) uacpi_do_atomic_load8((volatile uint8_t*)(ptr))
#define uacpi_atomic_load16(ptr) uacpi_do_atomic_load16((volatile uint16_t*)(ptr))
#define uacpi_atomic_load32(ptr) uacpi_do_atomic_load32((volatile uint32_t*)(ptr))
#define uacpi_atomic_load64(ptr) uacpi_do_atomic_load64((volatile uint64_t*)(ptr))
static void uacpi_do_atomic_store8(volatile uint8_t *ptr, uint8_t value);
#pragma aux uacpi_do_atomic_store8 = \
"mov [edi], al" \
parm [ edi ] [ eax ]
static void uacpi_do_atomic_store16(volatile uint16_t *ptr, uint16_t value);
#pragma aux uacpi_do_atomic_store16 = \
"mov [edi], ax" \
parm [ edi ] [ eax ]
static void uacpi_do_atomic_store32(volatile uint32_t *ptr, uint32_t value);
#pragma aux uacpi_do_atomic_store32 = \
"mov [edi], eax" \
parm [ edi ] [ eax ]
static void uacpi_do_atomic_store64_asm(volatile uint64_t *ptr, uint32_t low, uint32_t high);
#pragma aux uacpi_do_atomic_store64_asm = \
".586" \
"xor eax, eax" \
"xor edx, edx" \
"retry: lock cmpxchg8b [edi]" \
"jnz retry" \
modify [ eax edx ] \
parm [ edi ] [ ebx ] [ ecx ]
static inline void uacpi_do_atomic_store64(volatile uint64_t *ptr, uint64_t value) {
uacpi_do_atomic_store64_asm(ptr, value, value >> 32);
}
#define uacpi_atomic_store8(ptr, value) uacpi_do_atomic_store8((volatile uint8_t*)(ptr), (uint8_t)(value))
#define uacpi_atomic_store16(ptr, value) uacpi_do_atomic_store16((volatile uint16_t*)(ptr), (uint16_t)(value))
#define uacpi_atomic_store32(ptr, value) uacpi_do_atomic_store32((volatile uint32_t*)(ptr), (uint32_t)(value))
#define uacpi_atomic_store64(ptr, value) uacpi_do_atomic_store64((volatile uint64_t*)(ptr), (uint64_t)(value))
static uint16_t uacpi_do_atomic_inc16(volatile uint16_t *ptr);
#pragma aux uacpi_do_atomic_inc16 = \
".486" \
"mov ax, 1" \
"lock xadd [edi], ax" \
"add ax, 1" \
parm [ edi ] \
value [ ax ]
static uint32_t uacpi_do_atomic_inc32(volatile uint32_t *ptr);
#pragma aux uacpi_do_atomic_inc32 = \
".486" \
"mov eax, 1" \
"lock xadd [edi], eax" \
"add eax, 1" \
parm [ edi ] \
value [ eax ]
static void uacpi_do_atomic_inc64_asm(volatile uint64_t *ptr, uint64_t *out);
/* 64-bit fetch-increment via a cmpxchg8b retry loop (i586 has no 64-bit xadd).
 * The initial two movs may produce a torn read, but in that case the first
 * cmpxchg8b fails and atomically refreshes edx:eax before the retry. */
#pragma aux uacpi_do_atomic_inc64_asm = \
".586" \
"mov eax, [esi]" \
"mov edx, [esi + 4]" \
"retry: mov ebx, eax" \
"mov ecx, edx" \
"add ebx, 1" \
"adc ecx, 0" \
"lock cmpxchg8b [esi]" \
"jnz retry" \
"mov [edi], ebx" \
"mov [edi + 4], ecx" \
modify [ eax ebx ecx edx ] \
parm [ esi ] [ edi ]
static inline uint64_t uacpi_do_atomic_inc64(volatile uint64_t *ptr) {
uint64_t value;
uacpi_do_atomic_inc64_asm(ptr, &value);
return value;
}
#define uacpi_atomic_inc16(ptr) uacpi_do_atomic_inc16((volatile uint16_t*)(ptr))
#define uacpi_atomic_inc32(ptr) uacpi_do_atomic_inc32((volatile uint32_t*)(ptr))
#define uacpi_atomic_inc64(ptr) uacpi_do_atomic_inc64((volatile uint64_t*)(ptr))
static uint16_t uacpi_do_atomic_dec16(volatile uint16_t *ptr);
#pragma aux uacpi_do_atomic_dec16 = \
".486" \
"mov ax, -1" \
"lock xadd [edi], ax" \
"add ax, -1" \
parm [ edi ] \
value [ ax ]
static uint32_t uacpi_do_atomic_dec32(volatile uint32_t *ptr);
#pragma aux uacpi_do_atomic_dec32 = \
".486" \
"mov eax, -1" \
"lock xadd [edi], eax" \
"add eax, -1" \
parm [ edi ] \
value [ eax ]
static void uacpi_do_atomic_dec64_asm(volatile uint64_t *ptr, uint64_t *out);
/* 64-bit fetch-decrement: same cmpxchg8b retry loop as inc64, with the
 * borrow propagated into the high dword via sbb. */
#pragma aux uacpi_do_atomic_dec64_asm = \
".586" \
"mov eax, [esi]" \
"mov edx, [esi + 4]" \
"retry: mov ebx, eax" \
"mov ecx, edx" \
"sub ebx, 1" \
"sbb ecx, 0" \
"lock cmpxchg8b [esi]" \
"jnz retry" \
"mov [edi], ebx" \
"mov [edi + 4], ecx" \
modify [ eax ebx ecx edx ] \
parm [ esi ] [ edi ]
static inline uint64_t uacpi_do_atomic_dec64(volatile uint64_t *ptr) {
uint64_t value;
uacpi_do_atomic_dec64_asm(ptr, &value);
return value;
}
#define uacpi_atomic_dec16(ptr) uacpi_do_atomic_dec16((volatile uint16_t*)(ptr))
#define uacpi_atomic_dec32(ptr) uacpi_do_atomic_dec32((volatile uint32_t*)(ptr))
#define uacpi_atomic_dec64(ptr) uacpi_do_atomic_dec64((volatile uint64_t*)(ptr))
#else
#define UACPI_DO_CMPXCHG(ptr, expected, desired) \
__atomic_compare_exchange_n(ptr, expected, desired, 0, \
__ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)
#define uacpi_atomic_cmpxchg16(ptr, expected, desired) \
UACPI_DO_CMPXCHG(ptr, expected, desired)
#define uacpi_atomic_cmpxchg32(ptr, expected, desired) \
UACPI_DO_CMPXCHG(ptr, expected, desired)
#define uacpi_atomic_cmpxchg64(ptr, expected, desired) \
UACPI_DO_CMPXCHG(ptr, expected, desired)
#define uacpi_atomic_load8(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_load16(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_load32(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_load64(ptr) __atomic_load_n(ptr, __ATOMIC_ACQUIRE)
#define uacpi_atomic_store8(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_store16(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_store32(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_store64(ptr, value) __atomic_store_n(ptr, value, __ATOMIC_RELEASE)
#define uacpi_atomic_inc16(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_inc32(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_inc64(ptr) __atomic_add_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_dec16(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_dec32(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#define uacpi_atomic_dec64(ptr) __atomic_sub_fetch(ptr, 1, __ATOMIC_ACQ_REL)
#endif
#if UACPI_POINTER_SIZE == 4
#define uacpi_atomic_load_ptr(ptr_to_ptr) uacpi_atomic_load32(ptr_to_ptr)
#define uacpi_atomic_store_ptr(ptr_to_ptr, value) uacpi_atomic_store32(ptr_to_ptr, value)
#else
#define uacpi_atomic_load_ptr(ptr_to_ptr) uacpi_atomic_load64(ptr_to_ptr)
#define uacpi_atomic_store_ptr(ptr_to_ptr, value) uacpi_atomic_store64(ptr_to_ptr, value)
#endif
#endif

@@ -0,0 +1,125 @@
#pragma once
/*
* Compiler-specific attributes/macros go here. This is the default placeholder
* that should work for MSVC/GCC/clang/OpenWatcom.
*/
#ifdef UACPI_OVERRIDE_COMPILER
#include "uacpi_compiler.h"
#else
#ifdef _MSC_VER
#define UACPI_ALIGN(x) __declspec(align(x))
#elif defined(__GNUC__) || defined(__clang__)
#define UACPI_ALIGN(x) __attribute__((aligned(x)))
#else
#define UACPI_ALIGN(x)
#endif
#if defined(__WATCOMC__)
#define UACPI_STATIC_ASSERT(expr, msg)
#elif defined(__cplusplus)
#define UACPI_STATIC_ASSERT static_assert
#else
#define UACPI_STATIC_ASSERT _Static_assert
#endif
#ifdef _MSC_VER
#include <intrin.h>
#define UACPI_ALWAYS_INLINE __forceinline
#define UACPI_PACKED(decl) \
__pragma(pack(push, 1)) \
decl; \
__pragma(pack(pop))
#elif defined(__WATCOMC__)
#define UACPI_ALWAYS_INLINE inline
#define UACPI_PACKED(decl) _Packed decl;
#else
#define UACPI_ALWAYS_INLINE inline __attribute__((always_inline))
#define UACPI_PACKED(decl) decl __attribute__((packed));
#endif
#if defined(__GNUC__) || defined(__clang__)
#define uacpi_unlikely(expr) __builtin_expect(!!(expr), 0)
#define uacpi_likely(expr) __builtin_expect(!!(expr), 1)
#ifdef __has_attribute
#if __has_attribute(__fallthrough__)
#define UACPI_FALLTHROUGH __attribute__((__fallthrough__))
#endif
#endif
#define UACPI_MAYBE_UNUSED __attribute__ ((unused))
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_BEGIN \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wunused-parameter\"")
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_END \
_Pragma("GCC diagnostic pop")
#ifdef __clang__
#define UACPI_PRINTF_DECL(fmt_idx, args_idx) \
__attribute__((format(printf, fmt_idx, args_idx)))
#else
#define UACPI_PRINTF_DECL(fmt_idx, args_idx) \
__attribute__((format(gnu_printf, fmt_idx, args_idx)))
#endif
#define UACPI_COMPILER_HAS_BUILTIN_MEMCPY
#define UACPI_COMPILER_HAS_BUILTIN_MEMMOVE
#define UACPI_COMPILER_HAS_BUILTIN_MEMSET
#define UACPI_COMPILER_HAS_BUILTIN_MEMCMP
#elif defined(__WATCOMC__)
#define uacpi_unlikely(expr) (expr)
#define uacpi_likely(expr) (expr)
/*
* The OpenWatcom documentation suggests this should be done using
* _Pragma("off (unreferenced)") and _Pragma("pop (unreferenced)"),
* but these pragmas appear to be no-ops. Use inline as the next best thing.
* Note that OpenWatcom accepts redundant modifiers without a warning,
* so UACPI_MAYBE_UNUSED inline still works.
*/
#define UACPI_MAYBE_UNUSED inline
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_BEGIN
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_END
#define UACPI_PRINTF_DECL(fmt_idx, args_idx)
#else
#define uacpi_unlikely(expr) (expr)
#define uacpi_likely(expr) (expr)
#define UACPI_MAYBE_UNUSED
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_BEGIN
#define UACPI_NO_UNUSED_PARAMETER_WARNINGS_END
#define UACPI_PRINTF_DECL(fmt_idx, args_idx)
#endif
#ifndef UACPI_FALLTHROUGH
#define UACPI_FALLTHROUGH do {} while (0)
#endif
#ifndef UACPI_POINTER_SIZE
#ifdef _WIN32
#ifdef _WIN64
#define UACPI_POINTER_SIZE 8
#else
#define UACPI_POINTER_SIZE 4
#endif
#elif defined(__GNUC__)
#define UACPI_POINTER_SIZE __SIZEOF_POINTER__
#elif defined(__WATCOMC__)
#ifdef __386__
#define UACPI_POINTER_SIZE 4
#elif defined(__I86__)
#error uACPI does not support 16-bit mode compilation
#else
#error Unknown target architecture
#endif
#else
#error Failed to detect pointer size
#endif
#endif
#endif

@@ -0,0 +1,162 @@
#pragma once
#ifdef UACPI_OVERRIDE_CONFIG
#include "uacpi_config.h"
#else
#include <uacpi/helpers.h>
#include <uacpi/log.h>
/*
* =======================
* Context-related options
* =======================
*/
#ifndef UACPI_DEFAULT_LOG_LEVEL
#define UACPI_DEFAULT_LOG_LEVEL UACPI_LOG_INFO
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_DEFAULT_LOG_LEVEL < UACPI_LOG_ERROR ||
UACPI_DEFAULT_LOG_LEVEL > UACPI_LOG_DEBUG,
"configured default log level is invalid"
);
#ifndef UACPI_DEFAULT_LOOP_TIMEOUT_SECONDS
#define UACPI_DEFAULT_LOOP_TIMEOUT_SECONDS 30
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_DEFAULT_LOOP_TIMEOUT_SECONDS < 1,
"configured default loop timeout is invalid (expecting at least 1 second)"
);
#ifndef UACPI_DEFAULT_MAX_CALL_STACK_DEPTH
#define UACPI_DEFAULT_MAX_CALL_STACK_DEPTH 256
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_DEFAULT_MAX_CALL_STACK_DEPTH < 4,
"configured default max call stack depth is invalid "
"(expecting at least 4 frames)"
);
/*
* ===================
* Kernel-api options
* ===================
*/
/*
* Convenience initialization/deinitialization hooks that will be called by
* uACPI automatically when appropriate if compiled-in.
*/
// #define UACPI_KERNEL_INITIALIZATION
/*
* Makes kernel api logging callbacks work with unformatted printf-style
* strings and va_args instead of a pre-formatted string. Can be useful if
* your native logging is implemented in terms of this format as well.
*/
// #define UACPI_FORMATTED_LOGGING
/*
* Makes uacpi_kernel_free take in an additional 'size_hint' parameter, which
* contains the size of the original allocation. Note that this comes with a
* performance penalty in some cases.
*/
// #define UACPI_SIZED_FREES
/*
* Makes uacpi_kernel_alloc_zeroed mandatory to implement by the host, uACPI
* will not provide a default implementation if this is enabled.
*/
// #define UACPI_NATIVE_ALLOC_ZEROED
/*
* =========================
* Platform-specific options
* =========================
*/
/*
* Makes uACPI use the internal versions of mem{cpy,move,set,cmp} instead of
* relying on the host to provide them. Note that compilers like clang and GCC
* rely on these being available by default, even in freestanding mode, so
* compiling uACPI may theoretically generate implicit dependencies on them
* even if this option is defined.
*/
// #define UACPI_USE_BUILTIN_STRING
/*
* Turns uacpi_phys_addr and uacpi_io_addr into a 32-bit type, and adds extra
* code for address truncation. Needed for e.g. i686 platforms without PAE
* support.
*/
// #define UACPI_PHYS_ADDR_IS_32BITS
/*
* Switches uACPI into reduced-hardware-only mode. Strips all full-hardware
* ACPI support code at compile-time, including the event subsystem, the global
* lock, and other full-hardware features.
*/
// #define UACPI_REDUCED_HARDWARE
/*
* Switches uACPI into tables-subsystem-only mode and strips all other code.
* This means only the table API will be usable, no other subsystems are
* compiled in. In this mode, uACPI only depends on the following kernel APIs:
* - uacpi_kernel_get_rsdp
* - uacpi_kernel_{map,unmap}
* - uacpi_kernel_log
*
* Use uacpi_setup_early_table_access to initialize, uacpi_state_reset to
* deinitialize.
*
* This mode is primarily designed for these three use-cases:
* - Bootloader/pre-kernel environments that need to parse ACPI tables, but
* don't actually need a fully-featured AML interpreter, and everything else
* that a full ACPI implementation entails.
* - A micro-kernel that has the full AML interpreter running in userspace, but
* still needs to parse ACPI tables to bootstrap allocators, timers, SMP etc.
* - A WIP kernel that needs to parse ACPI tables for bootstrapping SMP/timers,
* ECAM, etc., but doesn't yet have enough subsystems implemented in order
* to run a fully-featured AML interpreter.
*/
// #define UACPI_BAREBONES_MODE
/*
* =============
* Misc. options
* =============
*/
/*
* If UACPI_FORMATTED_LOGGING is not enabled, this is the maximum length of the
* pre-formatted message that is passed to the logging callback.
*/
#ifndef UACPI_PLAIN_LOG_BUFFER_SIZE
#define UACPI_PLAIN_LOG_BUFFER_SIZE 128
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_PLAIN_LOG_BUFFER_SIZE < 16,
"configured log buffer size is too small (expecting at least 16 bytes)"
);
/*
* The size of the table descriptor inline storage. All table descriptors past
* this length will be stored in a dynamically allocated heap array. The size
* of one table descriptor is approximately 56 bytes.
*/
#ifndef UACPI_STATIC_TABLE_ARRAY_LEN
#define UACPI_STATIC_TABLE_ARRAY_LEN 16
#endif
UACPI_BUILD_BUG_ON_WITH_MSG(
UACPI_STATIC_TABLE_ARRAY_LEN < 1,
"configured static table array length is too small (expecting at least 1)"
);
#endif

@@ -0,0 +1,28 @@
#pragma once
#ifdef UACPI_OVERRIDE_LIBC
#include "uacpi_libc.h"
#else
/*
* The following libc functions are used internally by uACPI and have a default
* (sub-optimal) implementation:
* - strcmp
* - strnlen
* - strlen
* - snprintf
* - vsnprintf
*
* The following use a builtin implementation only if UACPI_USE_BUILTIN_STRING
* is defined (more information can be found in the config.h header):
* - memcpy
* - memmove
* - memset
* - memcmp
*
* If your platform happens to implement optimized versions of the helpers
* above, you can make uACPI use those instead by overriding them like so:
*
* #define uacpi_memcpy my_fast_memcpy
* #define uacpi_snprintf my_fast_snprintf
*/
#endif

@@ -0,0 +1,64 @@
#pragma once
/*
* Platform-specific types go here. This is the default placeholder using
* types from the standard headers.
*/
#ifdef UACPI_OVERRIDE_TYPES
#include "uacpi_types.h"
#else
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include <stdarg.h>
#include <uacpi/helpers.h>
typedef uint8_t uacpi_u8;
typedef uint16_t uacpi_u16;
typedef uint32_t uacpi_u32;
typedef uint64_t uacpi_u64;
typedef int8_t uacpi_i8;
typedef int16_t uacpi_i16;
typedef int32_t uacpi_i32;
typedef int64_t uacpi_i64;
#define UACPI_TRUE true
#define UACPI_FALSE false
typedef bool uacpi_bool;
#define UACPI_NULL NULL
typedef uintptr_t uacpi_uintptr;
typedef uacpi_uintptr uacpi_virt_addr;
typedef size_t uacpi_size;
typedef va_list uacpi_va_list;
#define uacpi_va_start va_start
#define uacpi_va_end va_end
#define uacpi_va_arg va_arg
typedef char uacpi_char;
#define uacpi_offsetof offsetof
/*
 * We use unsigned long long for 64-bit number formatting because 64-bit types
 * don't have a standard way to be formatted in freestanding code: the
 * inttypes.h header is not freestanding, so it's not practical to force the
 * user to define the corresponding PRI macros. Moreover, unsigned long long
 * is required to be at least 64 bits wide as per C99.
 */
UACPI_BUILD_BUG_ON_WITH_MSG(
sizeof(unsigned long long) < 8,
"unsigned long long must be at least 64 bits large as per C99"
);
#define UACPI_PRIu64 "llu"
#define UACPI_PRIx64 "llx"
#define UACPI_PRIX64 "llX"
#define UACPI_FMT64(val) ((unsigned long long)(val))
#endif
